glance-16.0.0/releasenotes/notes/scrubber-refactor-73ddbd61ebbf1e86.yaml
---
other:
  - |
    The ``glance-scrubber`` utility, which is used to perform offline
    deletion of images when the Glance ``delayed_delete`` option is
    enabled, has been refactored so that it no longer uses the Glance
    Registry API (and hence no longer has a dependency on the Registry
    v1 Client). Configuration options associated with connecting to the
    Glance registry are no longer required, and operators may remove
    them from the glance-scrubber.conf file.

glance-16.0.0/releasenotes/notes/soft_delete-tasks-43ea983695faa565.yaml
---
prelude: >
  - Expired tasks are now deleted.
other:
  - Expired tasks are now deleted in Glance. As with other Glance
    resources, this is a "soft" deletion; that is, a deleted task is
    marked as ``deleted`` in the database so that the task will not
    appear in API responses, but the information associated with the
    task persists in the database.

glance-16.0.0/releasenotes/notes/pike-rc-2-acc173005045e16a.yaml
---
features:
  - |
    A new policy, ``tasks_api_access``, has been introduced so that
    ordinary user credentials may be used by Glance to manage the tasks
    that accomplish the interoperable image import process without
    requiring that operators expose the Tasks API to end users.
upgrade:
  - |
    If you wish to enable the EXPERIMENTAL version 2.6 API that
    contains the new interoperable image import functionality, set the
    configuration option ``enable_image_import`` to True in the
    glance-api.conf file. The default value for this option is False.

    The interoperable image import functionality uses the Glance tasks
    engine. This is transparent to end users, as they do *not* use the
    Tasks API for the interoperable image import workflow. The
    operator, however, must make sure that the following configuration
    options are set correctly.

    - ``enable_image_import``
    - ``node_staging_uri``
    - the options in the ``[task]`` group
    - the options in the ``[taskflow_executor]`` group

    See the documentation in the sample glance-api.conf file for more
    information.

    Additionally, you will need to verify that the task-related
    policies in the Glance policy.json file are set correctly. These
    settings are described below.
  - |
    A new policy, ``tasks_api_access``, has been introduced so that
    ordinary user credentials may be used by Glance to manage the tasks
    that accomplish the interoperable image import process without
    requiring that operators expose the Tasks API to end users.

    The `Tasks API`_ was made admin-only by default in Mitaka by
    restricting the following policy targets to **role:admin**:
    **get_task**, **get_tasks**, **add_task**, and **modify_task**.
    The new ``tasks_api_access`` policy target directly controls
    access to the Tasks API, whereas the targets just mentioned
    indirectly affect what can be manipulated via the API by
    controlling what operations can be performed on Glance's internal
    task objects.

    The key point is that if you want to expose the new interoperable
    image import process to end users while keeping the Tasks API
    admin-only, you can accomplish this by using the following
    settings:

    .. code-block:: none

       "get_task": "",
       "get_tasks": "",
       "add_task": "",
       "modify_task": "",
       "tasks_api_access": "role:admin",

    To summarize: end users do **not** need access to the Tasks API in
    order to use the new interoperable image import process. They do,
    however, need permission to access internal Glance task objects.
    We recommend that all operators adopt the policy settings just
    described independently of the decision whether to expose the
    EXPERIMENTAL version 2.6 API.

    .. _`Tasks API`: https://developer.openstack.org/api-ref/image/v2/index.html#tasks
security:
  - |
    A new policy, ``tasks_api_access``, has been introduced so that
    ordinary user credentials may be used by Glance to manage the tasks
    that accomplish the interoperable image import process without
    requiring that operators expose the Tasks API to end users.

    This is a good time to review your Glance ``policy.json`` file to
    make sure that if it contains a ``default`` target, the rule is
    fairly restrictive ("role:admin" or "!" are good choices). The
    ``default`` target is used when the policy engine cannot find the
    target it's looking for. This can happen when a new policy is
    introduced but the policy file in use is from a prior release.
other:
  - |
    The Image Service API Reference has been updated with a section on
    the `Interoperable image import`_ process (also known as "image
    import refactored") and the API calls that are exposed to
    implement it in the EXPERIMENTAL v2.6 of the API.

    .. _`Interoperable image import`: https://developer.openstack.org/api-ref/image/v2/index.html#interoperable-image-import

glance-16.0.0/releasenotes/notes/restrict_location_updates-05454bb765a8c92c.yaml
---
prelude: >
  Location updates for images are now restricted to images in
  ``active`` or ``queued`` status. Please refer to the "Bug Fixes"
  section for more information.
fixes:
  - |
    Image location updates to an image that is not in ``active`` or
    ``queued`` status can introduce race conditions and security
    issues, and hence a bad experience for users and operators. As a
    result, we have restricted image location updates in this release.
    Users will now observe the following:

    * HTTP response code 409 (Conflict) will be returned in response
      to an attempt to remove an image location when the image status
      is not ``active``
    * HTTP response code 409 (Conflict) will be returned in response
      to an attempt to replace an image location when the image status
      is not ``active`` or ``queued``

glance-16.0.0/releasenotes/notes/oslo-log-use-stderr-changes-07f5daf3e6abdcd6.yaml
---
upgrade:
  - A recent change to oslo.log (>= 3.17.0) set the default value of
    ``[DEFAULT]/use_stderr`` to ``False`` in order to prevent
    duplication of logs (as reported in bug \#1588051). Since this
    would change the current behaviour of certain glance commands
    (e.g., glance-replicator, glance-cache-manage, etc.), we chose to
    override the default value of ``use_stderr`` to ``True`` in those
    commands. We also chose not to override that value in any Glance
    service (e.g., glance-api, glance-registry) so that duplicate logs
    are not created by those services. Operators who have a use case
    that relies on logs being reported on standard error may set
    ``[DEFAULT]/use_stderr = True`` in the appropriate service's
    configuration file upon deployment.

glance-16.0.0/releasenotes/notes/new_image_filters-c888361e6ecf495c.yaml
---
features:
  - Implemented the ability to filter images by the properties `id`,
    `name`, `status`, `container_format`, and `disk_format` using the
    'in' operator between the values.

    Following the pattern of existing filters, the new filters are
    specified as query parameters, using the field to filter as the
    key and the filter criteria as the value in the parameter.
    Filtering requires full compliance with the template; for example,
    'name=in:deb' does not match 'debian'. The changes apply
    exclusively to the API v2 Image entity listings.

    An example of a request using the 'in' operator on the name field:
    ``?name=in:name1,name2,name3``.

    These filters were added using syntax that conforms to the latest
    guidelines from the OpenStack API Working Group.

glance-16.0.0/releasenotes/notes/wsgi-containerization-369880238a5e793d.yaml
---
features:
  - |
    Glance is now packaged with a WSGI script entrypoint, enabling it
    to be run as a WSGI application hosted by a performant web server.
    See `Running Glance in HTTPD`_ in the Glance documentation for
    details. There are some limitations with this method of deploying
    Glance, and we do not recommend its use in production environments
    at this time. See the `Known Issues`_ section of this document for
    more information.
issues:
  - |
    Although support has been added for Glance to be run as a WSGI
    application hosted by a web server, the atypical nature of the
    Images APIs provided by Glance, which enable transfer of copious
    amounts of image data, makes it difficult for this approach to
    work without careful configuration. Glance relies on the use of
    chunked transfer encoding for image uploads, and support for
    chunked transfer encoding is not required by the
    `WSGI specification`_.

    The Glance documentation section `Running Glance in HTTPD`_
    outlines some approaches to use (and not to use) Glance with the
    Apache httpd server. This is the way Glance is configured as a
    WSGI application in devstack, so it's the method with which we've
    had the most experience.

    If you try deploying Glance using a different web server, please
    consider contributing your findings to the Glance documentation.

    Currently, we are experiencing some problems in the gate when
    Glance is configured to run in devstack following the guidelines
    recommended in the documentation. You can follow `Bug 1703856`_ to
    learn more. As far as the Glance team can determine, the
    difficulties running Glance as a WSGI application are caused by
    issues external to Glance. Thus the Glance team recommends that
    Glance be run in its normal standalone configuration, particularly
    in production environments. If you choose to run Glance as a WSGI
    application in a web server, be sure to test your installation
    carefully with realistic usage scenarios.

    .. _`WSGI specification`: https://www.python.org/dev/peps/pep-0333/
    .. _`Running Glance in HTTPD`: https://docs.openstack.org/glance/latest/admin/apache-httpd.html
    .. _`Bug 1703856`: https://bugs.launchpad.net/glance/+bug/1703856

glance-16.0.0/releasenotes/notes/Prevent-removing-last-image-location-d5ee3e00efe14f34.yaml
---
security:
  - Fixes bug 1525915: an image could be transitioned from active to
    queued status by a regular user removing the last location of the
    image (or replacing its locations with an empty list). This
    allowed the user to re-upload data to the image, breaking Glance's
    promise of image data immutability. From now on, the last location
    cannot be removed and locations cannot be replaced with an empty
    list.

glance-16.0.0/releasenotes/notes/improved-config-options-221c58a8c37602ba.yaml
---
prelude: >
  - Improved configuration option descriptions and handling.
other:
  - |
    The glance configuration options have been improved with detailed
    help texts, defaults for sample configuration files, explicit
    choices of values for operators to choose from, and a strict range
    defined with ``min`` and ``max`` boundaries.

    * It must be noted that the configuration options that take
      integer values now have a strict range defined with ``min``
      and/or ``max`` boundaries where appropriate.
    * This renders the configuration options incapable of taking
      certain values that may have been accepted before but were
      actually invalid.
    * For example, configuration options specifying counts, where a
      negative value was undefined, would have still accepted the
      supplied negative value. Such options will no longer accept
      negative values.
    * Options where a negative value was previously defined (for
      example, -1 to mean unlimited) will remain unaffected by this
      change.
    * Values which do not comply with the new restrictions will
      prevent the service from starting. The logs will contain a
      message indicating the problematic configuration option and the
      reason why the supplied value has been rejected.

glance-16.0.0/releasenotes/notes/remove-db-downgrade-0d1cc45b97605775.yaml
---
prelude: >
  - Database downgrades have been removed from the Glance source tree.
upgrade:
  - The ``db_downgrade`` command has been removed from the
    ``glance-manage`` utility and all database downgrade scripts have
    been removed. In accordance with OpenStack policy, Glance can no
    longer be downgraded. Operators are advised to make a full
    database backup of their production data before attempting any
    upgrade.
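The note above advises a full database backup before any upgrade, since downgrades are no longer possible. As a minimal sketch for a MySQL-backed deployment (the database name, user, and backup filename are illustrative assumptions, not values prescribed by Glance):

.. code-block:: console

   $ mysqldump --single-transaction -u glance -p glance > glance-backup.sql
   $ glance-manage db sync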
glance-16.0.0/releasenotes/notes/make-task-api-admin-only-by-default-7def996262e18f7a.yaml
---
deprecations:
  - The task API was added to allow users to upload images
    asynchronously and to give deployers more control over the upload
    process. Unfortunately, this API has not worked the way it was
    expected to. Therefore, the task API has entered a deprecation
    period and is meant to be replaced by the new import API. This
    change makes the task API admin-only by default so that it is not
    accidentally deployed as a public API.
upgrade:
  - The task API is being deprecated and has been made admin-only. If
    deployers of Glance would like to have this API be public, it is
    necessary to change the `policy.json` file and remove `role:admin`
    from every `task`-related field.

glance-16.0.0/releasenotes/notes/add-cpu-thread-pinning-metadata-09b1866b875c4647.yaml
---
upgrade:
  - Added additional metadata for CPU thread pinning policies to
    'compute-cpu-pinning.json'. Use the ``glance-manage`` tool to
    upgrade.

glance-16.0.0/releasenotes/notes/api-2-6-current-9eeb83b7ecc0a562.yaml
---
prelude: >
  - The CURRENT version of the Images API v2 is bumped to **2.6**. The
    2.6 API was available in the previous (Pike) release as an
    experimental API to introduce the calls necessary for the
    `interoperable image import functionality`_.
  - A new interoperable image import method, ``web-download``, is
    introduced.
features:
  - |
    A new interoperable image import method, ``web-download``, is
    introduced. This method allows an end user to import an image from
    a remote URL. The image data is retrieved from the URL and stored
    in the Glance backend. (In other words, this is a **copy-from**
    operation.) This feature is enabled by default, but it is
    optional. Whether it is offered at your installation depends on
    the value of the ``enabled_import_methods`` configuration option
    in the ``glance-api.conf`` file (assuming, of course, that you
    have not disabled image import at your site).
upgrade:
  - |
    The **CURRENT** version of the Images API supplied by Glance is
    introduced as **2.6**. It includes the new API calls introduced on
    an experimental basis in the Pike release.

    While the 2.6 API is CURRENT, whether the interoperable image
    import functionality it makes available is exposed to end users is
    controlled by a configuration option, ``enable_image_import``.
    Although this option existed in the previous release, its effect
    is slightly different in Queens.

    * ``enable_image_import`` is **True** by default (in Pike it was
      False)
    * When ``enable_image_import`` is **True**, a new import-method,
      ``web-download``, is available. (In Pike, only
      ``glance-direct`` was offered.) Which import-methods you offer
      can be configured using the ``enabled_import_methods`` option in
      the ``glance-api.conf`` file.
    * If ``enable_image_import`` is set **False**, requests to the v2
      endpoint for URIs defined only in v2.6 will return 404 (Not
      Found) with a message in the response body stating "Image import
      is not supported at this site." Additionally, the image-create
      response will not contain the "OpenStack-image-import-methods"
      header.

    The ``enable_image_import`` configuration option was introduced as
    DEPRECATED in Pike and will be removed in Rocky.

    The discovery calls defined in the `refactored image import spec`_
    remain in an abbreviated form in this release.

    Finally, there are no changes to the version 2.5 API in this
    release. All version 2.5 calls will work whether the new import
    functionality is enabled or not.

    .. _`interoperable image import functionality`: https://developer.openstack.org/api-ref/image/v2/#interoperable-image-import
    .. _`refactored image import spec`: https://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html

glance-16.0.0/releasenotes/notes/range-header-request-83cf11eebf865fb1.yaml
---
fixes:
  - |
    Glance had been accepting the Content-Range header for GET
    v2/images/{image_id}/file requests, contrary to RFC 7233.
    Following RFC 7233, Glance will now:

    * Accept the Range header in requests to serve partial images.
    * Include a ``Content-Range`` header upon successful delivery of
      the requested partial content.

    Please note that not all Glance storage backends support partial
    downloads. A Range request to a Glance server with such a backend
    will result in the entire image content being delivered despite
    the 206 response code.

glance-16.0.0/releasenotes/notes/use-cursive-c6b15d94845232da.yaml
---
other:
  - |
    Glance and Nova contain nearly identical digital signature
    modules. In order to better maintain and evolve this code and to
    eliminate the possibility that the modules diverge, we have
    replaced the digital signature module in Glance with the new
    ``cursive`` library.

    * The ``cursive`` library is an OpenStack project which implements
      OpenStack-specific verification of digital signatures.
    * In Newton, the majority of the signature verification code was
      removed from Glance. ``cursive`` has been added to Glance as a
      dependency and will be installed by default.
    * Glance uses the ``cursive`` library's functionality to verify
      digital signatures.

    To familiarize yourself with this new dependency and see the list
    of transitive dependencies, visit
    http://git.openstack.org/cgit/openstack/cursive

glance-16.0.0/releasenotes/notes/reordered-store-config-opts-newton-3a6575b5908c0e0f.yaml
---
prelude: >
  - The sample configuration file shipped with Glance source now has
    reordered store drivers configuration options for consistent
    ordering in the future.
other:
  - |
    The sample configuration files autogenerated using the
    oslo-config-generator tool now give consistent ordering of the
    store drivers configurations.

    * Some operators have reported issues with reordering observed in
      the sample configurations shipped with Glance release tarballs.
      This reordering may result in an incorrect "diff" of the
      configurations used downstream vs. newly introduced upstream.
    * The latest release of the ``glance_store`` library (used in the
      **Newton** release of Glance) will include a fix for
      ``glance_store`` bug 1619487.
    * Until now, every run of the oslo-config-generator resulted in
      random ordering of the store drivers configuration. After the
      **Newton** release this order will remain consistent.
    * The store drivers configuration order in the sample or
      autogenerated files should be expected to be alphabetical:
      ``cinder``, ``filesystem``, ``http``, ``rbd``, ``sheepdog``,
      ``swift``, ``vmware``.
    * Note the code name for the "ceph" driver is ``rbd``.
    * Note the ordering of the options within a store is not
      alphabetical.

glance-16.0.0/releasenotes/notes/location-add-status-checks-b70db66100bc96b7.yaml
---
prelude: >
  - Adding locations to a non-active or non-queued image is no longer
    allowed.
critical:
  - |
    Attempting to set image locations on an image that is *not* in
    ``active`` or ``queued`` status will now result in an HTTP
    Conflict (HTTP status code 409) being returned to the user.

    * Until now, no image status checks were in place when **adding**
      a location to an image. In some circumstances, this may result
      in a bad user experience. It may also cause problems for a
      security team evaluating the condition of an image in
      ``deactivated`` status.
    * **Adding** locations is disallowed for the following image
      statuses: ``saving``, ``deactivated``, ``deleted``,
      ``pending_delete``, ``killed``.
    * Note that there are race conditions associated with adding a
      location to an image in the ``active``, ``queued``, ``saving``,
      or ``deactivated`` status. Because these are non-terminal image
      statuses, it is possible that when a user attempts to add a
      location, a status transition could occur that might block the
      **add** (or might appear to allow an add that should not be
      allowed).
    * For example, a user is not allowed to add a location to an image
      in ``saving`` status. Suppose a user decides to add a location
      anyway. It is possible that before the user's request is
      processed, the transmission of data being saved is completed and
      the image transitions into ``active`` status, in which case the
      user's add location request will succeed. To the user, however,
      this success will appear anomalous because in most cases, an
      attempt to add a location to an image in ``saving`` status will
      fail.
    * We mention this so that you can be aware of this situation in
      your own testing.

glance-16.0.0/releasenotes/notes/implement-lite-spec-db-sync-check-3e2e147aec0ae82b.yaml
---
features:
  - |
    Added a new command, ``glance-manage db check``, which allows a
    user to check the status of upgrades in the database.
upgrade:
  - |
    Using db check

    In order to check the current state of your database upgrades, you
    may run the command ``glance-manage db check``. This will inform
    you of any outstanding actions you have left to take. Here is a
    list of possible return codes:

    - A return code of ``0`` means you are currently up to date with
      the latest migration script version and all ``db`` upgrades are
      complete.
    - A return code of ``3`` means that an upgrade from your current
      database version is available and your first step is to run
      ``glance-manage db expand``.
    - A return code of ``4`` means that the expansion stage is
      complete, and the next step is to run
      ``glance-manage db migrate``.
    - A return code of ``5`` means that the expansion and data
      migration stages are complete, and the next step is to run
      ``glance-manage db contract``.

glance-16.0.0/releasenotes/notes/deprecate-show-multiple-location-9890a1e961def2f6.yaml
---
prelude: >
  - Deprecate the ``show_multiple_locations`` configuration option in
    favor of the existing Role Based Access Control (RBAC) for image
    locations, which uses the ``policy.json`` file to define the
    appropriate rules.
upgrade:
  - |
    Some additional points about the ``show_multiple_locations``
    configuration option deprecation:

    * Maintaining two different ways to configure, enable and/or
      disable a feature is painful for developers and operators, so
      the less granular means of controlling this feature will be
      eliminated in the **Ocata** release.
    * For the Newton release, this option will still be honored.
      However, it is important to update the ``policy.json`` file for
      glance-api nodes. In particular, please consider updating the
      policies ``delete_image_location``, ``get_image_location`` and
      ``set_image_location`` as per your requirements. As this is an
      advanced option and prone to expose some risks, please check the
      policies to ensure the security and privacy of your cloud.
    * Future releases will ignore this option and just follow the
      policy rules. It is recommended that this option is disabled for
      public endpoints and is used only internally for
      service-to-service communication.
    * As mentioned above, the same recommendation applies to the
      policy-based configuration for exposing multiple image
      locations.

glance-16.0.0/releasenotes/notes/queens-uwsgi-issues-4cee9e4fdf62c646.yaml
---
issues:
  - |
    The Pike release notes pointed out that although support had been
    added to run Glance as a WSGI application hosted by a web server,
    the Glance team recommended that Glance be run in its normal
    standalone configuration, particularly in production environments.
    We renew that recommendation for the Queens release.

    In particular, Glance tasks (which are required for the
    interoperable image import functionality) do not execute when
    Glance is run under uWSGI (which is the OpenStack recommended way
    to run WSGI applications hosted by a web server). This is in
    addition to the chunked transfer encoding problems addressed by
    `Bug 1703856`_ and will be more difficult to fix. (Additionally,
    as far as we are aware, the fix for `Bug 1703856`_ has never been
    tested at scale.) Briefly, Glance tasks are run by the API service
    and would have to be split out into a different service so that
    the API alone would run under uWSGI. The Glance project team did
    not have sufficient testing and development resources during the
    Queens cycle to attempt this (or even to discuss whether this is
    in fact a good idea).

    The Glance project team is committed to the stability of Glance.
    As part of OpenStack, we are committed to `The Four Opens`_. If
    the ability to run Glance under uWSGI is important to you, feel
    free to participate in the Glance community to help coordinate and
    drive such an effort. (We gently remind you that "participation"
    includes providing testing and development resources.)

    .. _`Bug 1703856`: https://bugs.launchpad.net/glance/+bug/1703856
    .. _`The Four Opens`: https://governance.openstack.org/tc/reference/opens.html

glance-16.0.0/releasenotes/notes/lock_path_config_option-2771feaa649e4563.yaml
---
upgrade:
  - The ``lock_path`` config option from oslo.concurrency is now
    required for using the sql image_cache driver. If one is not
    specified, it will default to the image_cache_dir and emit a
    warning.

glance-16.0.0/releasenotes/notes/queens-metadefs-changes-daf02bef18d049f4.yaml
---
upgrade:
  - |
    The following metadata definitions have been modified in the
    Queens release:

    * The property img_linked_clone_ has been added to the namespace
      ``OS::Compute::VMware``.
    * An enumeration of values was added for the `vmware:hw_version`_
      property in the ``OS::Compute::VMwareFlavor`` namespace.
    * Additional values were added to the enumeration for the
      `hw_disk_bus`_ property in the ``OS::Compute::LibvirtImage``
      namespace.

    You may upgrade these definitions using:
    ``glance-manage db load_metadefs [--path <path>] [--merge]
    [--prefer_new]``

    .. _img_linked_clone: https://git.openstack.org/cgit/openstack/glance/commit/?id=5704ba6305b8aec380f90c3a35cbc4031f54f112
    .. _`vmware:hw_version`: https://git.openstack.org/cgit/openstack/glance/commit/?id=c1a845d5532ae43248dd4b9714ffa0a403737cf7
    .. _`hw_disk_bus`: https://git.openstack.org/cgit/openstack/glance/commit/?id=f8a5a4022441617aaa508e8e59f542d047ba5ba2

glance-16.0.0/releasenotes/notes/exp-emc-mig-fix-a7e28d547ac38f9e.yaml
---
fixes:
  - |
    There was a bug in the **experimental** zero-downtime database
    upgrade path introduced in the Ocata release that prevented the
    **experimental** upgrade from working. This has been fixed in the
    Pike release. The bug did not affect the normal database upgrade
    operation.
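The experimental zero-downtime path referenced in the note above uses the expand/migrate/contract stages described in the ``glance-manage db check`` note earlier in these release notes. A sketch of the full sequence (command names are taken from those notes; the final check step's return codes indicate any remaining stages):

.. code-block:: console

   $ glance-manage db expand     # expand the schema
   $ glance-manage db migrate    # migrate the data
   $ glance-manage db contract   # contract the schema
   $ glance-manage db check      # returns 0 once all stages are complete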
glance-16.0.0/releasenotes/notes/remove-s3-driver-639c60b71761eb6f.yaml
---
prelude: >
  - The ``s3`` store driver has been removed.
upgrade:
  - The latest release of the glance_store library does not have
    support for the ``s3`` driver. All code references to it have been
    removed from the library. As this release of Glance uses the
    updated glance_store library, you will find ``s3`` driver support
    removed from Glance too. For example, the Glance image location
    strategy modules no longer offer ``s3`` driver support.

glance-16.0.0/releasenotes/notes/alembic-migrations-902b31edae7a5d7d.yaml
---
prelude: >
  - **Experimental** zero-downtime database upgrade using an
    expand-migrate-contract series of operations is available.
upgrade:
  - |
    The database migration engine used by Glance for database upgrades
    has been changed from *SQLAlchemy Migrate* to *Alembic* in this
    release.

    * This has necessitated a change in the location and naming
      convention for migration scripts. Developers, operators, and
      DevOps are strongly encouraged to read through the `Database
      Management`_ section of the Glance documentation for details of
      the changes introduced in the Ocata release. Here's a brief
      summary of the changes:

      - All the ``glance manage db`` commands are changed
        appropriately to use Alembic to perform operations such as
        ``version``, ``upgrade``, ``sync`` and ``version_control``.
        Hence, the "old-style" migration scripts will no longer work
        with the Ocata glance manage db commands.
      - Database versions are no longer numerical. Instead, they are
        the *revision ID* of the last migration applied on the
        database.

        * For example, the Liberty migration, which was version ``42``
          under the old system, will now appear as ``liberty``. The
          Mitaka migrations ``43`` and ``44`` appear as ``mitaka01``
          and ``mitaka02``, respectively.

    * The change in migration engine has been undertaken in order to
      enable zero-downtime database upgrades, which are part of the
      effort to implement rolling upgrades for Glance (scheduled for
      the Pike release).

      - A preview of zero-downtime database upgrades is available in
        this release, but it is **experimental** and **not supported
        for production systems**. Please consult the `Database
        Management`_ section of the Glance documentation for details.

    .. _`Database Management`: http://docs.openstack.org/developer/glance/db.html

glance-16.0.0/releasenotes/notes/add-vhdx-format-2be99354ad320cca.yaml
---
prelude: >
  - Added ``vhdx`` to the list of supported disk formats.
features:
  - The identifier ``vhdx`` has been added to the list of supported
    disk formats in Glance. The respective configuration option has
    been updated, and the default list shows ``vhdx`` as a supported
    format.
upgrade:
  - The ``disk_format`` config option enables ``vhdx`` as supported by
    default.

glance-16.0.0/releasenotes/notes/consistent-store-names-57374b9505d530d0.yaml
---
upgrade:
  - |
    Some backend store names were inconsistent between glance and
    glance_store. This meant that operators of the VMware datastore or
    file system store were required to use store names in
    ``glance-api.conf`` that did not correspond to any valid
    identifier in glance_store. As this situation encouraged
    misconfiguration and operator unhappiness, we have made the store
    names consistent in the Newton release. What this means for you:

    * This change applies only to operators who are using multiple
      image locations
    * This change applies only to operators using the VMware datastore
      or file system stores
    * This change applies only to the ``store_type_preference`` option
    * *VMware datastore operators*: The old name, now **DEPRECATED**,
      was ``vmware_datastore``. The **new** name, used in both glance
      and glance_store, is ``vmware``
    * *File system store operators*: The old name, now
      **DEPRECATED**, was ``filesystem``. The **new** name, used in
      both glance and glance_store, is ``file``
    * This change is backward compatible, that is, the old names will
      be recognized by the code during the deprecation period. Support
      for the deprecated names will be removed in the **Pike** release
    * We strongly encourage operators to modify their
      ``glance-api.conf`` files immediately to use the **new** names

glance-16.0.0/releasenotes/notes/update-show_multiple_locations-helptext-7fa692642b6b6d52.yaml
---
other:
  - |
    The deprecation path for the configuration option
    ``show_multiple_locations`` has been changed because the
    mitigation instructions for `OSSN-0065`_ refer to this option. It
    is now subject to removal on or after the **Pike** release. The
    help text for this option has been updated accordingly.

    .. _`OSSN-0065`: https://wiki.openstack.org/wiki/OSSN/OSSN-0065

glance-16.0.0/releasenotes/notes/remove-osprofiler-paste-ini-options-c620dedc8f9728ff.yaml
---
deprecations:
  - OSprofiler support requires passing trace information between
    various OpenStack services. This information is signed by one of
    the HMAC keys, which we historically defined in the
    glance-api-paste.ini and glance-registry-paste.ini files (together
    with the ``enabled`` option, which in fact was duplicated in the
    corresponding configuration files). OSprofiler 0.3.1 and higher
    supports passing this information via configuration files, so it
    is recommended to modify the ``[filter:osprofiler]`` section in
    \*-paste.ini to look like
    ``paste.filter_factory = osprofiler.web:WsgiMiddleware.factory``
    and to set the ``hmac_keys`` option in the glance-\*.conf files.
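As a sketch of the OSprofiler change recommended above, the relevant fragments might look like the following (the ``[profiler]`` section name and the sample key value are illustrative assumptions; only the ``paste.filter_factory`` line and the ``hmac_keys`` option name are taken from the note itself):

.. code-block:: ini

   # glance-api-paste.ini
   [filter:osprofiler]
   paste.filter_factory = osprofiler.web:WsgiMiddleware.factory

   # glance-api.conf -- section name is an assumption
   [profiler]
   enabled = True
   hmac_keys = SECRET_KEY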
glance-16.0.0/releasenotes/notes/api-minor-ver-bump-2-6-aa3591fc58f08055.yaml
---
prelude: >
    - The *minor* version of the Images API v2 is bumped to **2.6** to
      introduce an EXPERIMENTAL version of the API that includes the new
      calls introduced for the Minimal Viable Product delivery of the
      `refactored image import`_ functionality. Version **2.5** remains the
      CURRENT version of the Images API.
upgrade:
  - |
    An **EXPERIMENTAL** version of the Images API supplied by Glance is
    introduced as **2.6**. It includes the new API calls introduced for the
    `refactored image import`_ functionality. This functionality is **not**
    enabled by default, so the CURRENT version of the Images API remains at
    2.5. There are no changes to the version 2.5 API in this release, so
    all version 2.5 calls will work whether or not the new import
    functionality is enabled.

    The version 2.6 API is being introduced as EXPERIMENTAL because it is a
    Minimal Viable Product delivery of the functionality described in the
    `refactored image import`_ specification. As an MVP, the responses
    described in that specification are abbreviated in version 2.6. It is
    expected that version 2.6 will be completed in Queens, but at this time
    we encourage operators to try out the new functionality while keeping
    in mind its EXPERIMENTAL nature.

    .. _`refactored image import`: https://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html
glance-16.0.0/releasenotes/notes/pike-metadefs-changes-95b54e0bf8bbefd6.yaml
---
upgrade:
  - |
    The following metadata definitions have been modified in the Pike
    release:

    * The property ``img_hide_hypervisor_id`` has been added to the
      namespace ``OS::Compute::LibvirtImage``.
    * Several `new values`_ were added for the ``vmware_ostype`` property
      in the ``OS::Compute::VMware`` namespace.

    You may upgrade these definitions using:
    ``glance-manage db load_metadefs [--path ] [--merge] [--prefer_new]``

    .. _`new values`: https://git.openstack.org/cgit/openstack/glance/commit/?id=b505ede170837c50db41a71b46075d4b211c8a48
glance-16.0.0/releasenotes/notes/newton-1-release-065334d464f78fc5.yaml
---
prelude: >
    - Glance no longer returns a 500 when 4 byte unicode characters are
      passed to the metadefs API.
    - Deprecated the "sign-the-hash" approach for image signing. Old
      run_tests and related scripts have been removed.
upgrade:
  - The image signature verification feature has been updated to follow the
    "sign-the-data" approach, which uses a signature of the image data
    directly. The prior deprecated "sign-the-hash" approach, which uses a
    signature of an MD5 hash of the image data, has been removed.
security:
  - The initial implementation of the image signature verification feature
    in Glance was insecure, because it relied on an MD5 hash of the image
    data. More details can be found in bug 1516031. This "sign-the-hash"
    approach was deprecated in Mitaka, and has been removed in Newton.
    Related CVE-2015-8234.
glance-16.0.0/releasenotes/notes/bug-1593177-8ef35458d29ec93c.yaml
---
upgrade:
  - The ``default`` policy in ``policy.json`` now uses the admin role
    rather than any role. This is to make the policy file restrictive
    rather than permissive and to tighten security.
glance-16.0.0/releasenotes/notes/bump-api-2-4-efa266aef0928e04.yaml
---
prelude: >
    - Glance API ``minor`` version bumped to 2.4.
upgrade:
  - |
    The Glance API **CURRENT** ``minor`` version is now ``2.4``.

    * To partially fix an important image locations bug 1587985, an
      API-impacting change has been merged into Glance.
    * This will result in a non-backward-compatible experience before and
      after the **Newton** release for users using the ``add`` feature on
      image locations.
glance-16.0.0/releasenotes/notes/scrubber-exit-e5d77f6f1a38ffb7.yaml
---
fixes:
  - |
    Please note a change in the Scrubber's behavior in case of job fetching
    errors:

    * If configured to work in daemon mode, the Scrubber will log an error
      message at level critical, but will not exit the process.
    * If configured to work in non-daemon mode, the Scrubber will log an
      error message at level critical and exit with status one.
glance-16.0.0/releasenotes/notes/add-ploop-format-fdd583849504ab15.yaml
---
prelude: >
    - Add ``ploop`` to the list of supported disk formats.
features:
  - The identifier ``ploop`` has been added to the list of supported disk
    formats in Glance. The respective configuration option has been updated
    and the default list shows ``ploop`` as a supported format.
upgrade:
  - The ``disk_format`` config option enables ``ploop`` as supported by
    default.
glance-16.0.0/releasenotes/notes/deprecate-registry-ff286df90df793f0.yaml
---
deprecations:
  - |
    The Glance Registry Service and its APIs are officially DEPRECATED in
    this release and are subject to removal at the beginning of the 'S'
    development cycle, following the `OpenStack standard deprecation
    policy `_. For more information, see the Glance specification document
    `Actually Deprecate the Glance Registry `_.
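The Scrubber error-handling rules in the scrubber-exit note above can be
sketched as follows. This is a hypothetical helper that only illustrates the
documented policy; it is not Glance's actual implementation, and the
function name and signature are invented for the example.

```python
import logging
import sys

LOG = logging.getLogger(__name__)


def handle_fetch_error(exc, daemon=False):
    """Illustrative only: apply the documented Scrubber exit policy."""
    # Both modes log the failure at level critical.
    LOG.critical('Failed to fetch jobs to scrub: %s', exc)
    if not daemon:
        # Non-daemon mode: exit the process with status one.
        sys.exit(1)
    # Daemon mode: keep the process alive so scrubbing can be retried.
```

In daemon mode the process survives transient registry/database errors and
retries on the next wakeup; in one-shot (cron-style) mode the non-zero exit
status lets the scheduler detect the failure.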
glance-16.0.0/releasenotes/notes/glare-ectomy-72a1f80f306f2e3b.yaml
---
upgrade:
  - |
    Code for the OpenStack Artifacts Service (`Glare`_) and its
    EXPERIMENTAL API has been removed from the Glance codebase, as it was
    relocated into an independent `Glare`_ project repository during a
    previous release cycle. The database upgrade for the Glance Pike
    release drops the Glare tables (named 'artifacts' and 'artifact_*')
    from the Glance database.

    OpenStack deployments, packagers, and deployment projects which
    provided Glare should have begun to consume Glare from its own
    `Glare`_ repository during the Newton and Ocata releases. With the
    Pike release, it is no longer possible to consume Glare code from the
    Glance repository.

    .. _`Glare`: https://git.openstack.org/cgit/openstack/glare
other:
  - |
    Code for the OpenStack Artifacts Service (Glare) and its EXPERIMENTAL
    API has been `removed`_ from the Glance codebase.

    The Artifacts API was an EXPERIMENTAL API that ran on the Glance
    service endpoint as ``/v3`` in the Liberty release. In the Mitaka
    release, the Glance ``/v3`` EXPERIMENTAL API was deprecated and the
    Artifacts Service ran on its own endpoint (completely independent from
    the Glance service endpoint) as an EXPERIMENTAL API, versioned as
    ``v0.1``. In both the Liberty and Mitaka releases, Glare ran on code
    stored in the Glance code repository and used its own tables in the
    Glance database.

    In the Newton release, the Glare code was relocated into its own
    `Glare`_ project repository. Also in the Newton release, Glare ran an
    EXPERIMENTAL Artifacts API versioned as ``v1.0`` on its own endpoint
    and used its own database.

    For the Pike release, the legacy Glare code has been removed from the
    Glance code repository and the legacy 'artifacts' and 'artifact_*'
    database tables are dropped from the Glance database. As the Artifacts
    service API was an EXPERIMENTAL API in Glance and has not used the
    Glance database since Mitaka, no provision is made for migrating data
    from the Glance database to the Glare database.

    .. _`removed`: http://specs.openstack.org/openstack/glance-specs/specs/mitaka/implemented/deprecate-v3-api.html
glance-16.0.0/releasenotes/notes/deprecate-glance-api-opts-23bdbd1ad7625999.yaml
---
deprecations:
  - The use_user_token, admin_user, admin_password, admin_tenant_name,
    auth_url, auth_strategy and auth_region options in the [DEFAULT]
    configuration section in glance-api.conf are deprecated, and will be
    removed in the O release. See
    https://wiki.openstack.org/wiki/OSSN/OSSN-0060
glance-16.0.0/releasenotes/notes/bug-1537903-54b2822eac6cfc09.yaml
---
upgrade:
  - Metadata definitions previously associated with OS::Nova::Instance
    have been changed to be associated with OS::Nova::Server in order to
    align with Heat and Searchlight. You may upgrade them either by using
    glance-manage db load_metadefs [path] [merge] [prefer_new] or by using
    glance-manage db upgrade 44.
fixes:
  - Metadata definitions previously associated with OS::Nova::Instance
    have been changed to be associated with OS::Nova::Server in order to
    align with Heat and Searchlight.
glance-16.0.0/releasenotes/notes/virtuozzo-hypervisor-fada477b64ae829d.yaml
---
upgrade:
  - |
    The metadata definition for ``hypervisor_type`` in the
    ``OS::Compute::Hypervisor`` namespace has been extended to include the
    Virtuozzo hypervisor, designated as ``vz``. You may upgrade the
    definition using:
    ``glance-manage db load_metadefs [--path ] [--merge] [--prefer_new]``
glance-16.0.0/releasenotes/notes/bug-1719252-name-validation-443a2e2a36be2cec.yaml
---
other:
  - |
    The metadefs schemas for 'property', 'properties', 'tag', 'tags',
    'object', and 'objects' previously specified a 'name' element of
    maximum 255 characters. Any attempt to add a name of greater than 80
    characters in length, however, resulted in a 500 response. The schemas
    have been corrected to specify a maximum length of 80 characters for
    the 'name' field.
glance-16.0.0/releasenotes/notes/add-processlimits-to-qemu-img-c215f5d90f741d8a.yaml
---
security:
  - All ``qemu-img info`` calls are now run under resource limitations
    that limit the CPU time and address space usage of the process running
    the command to 2 seconds and 1 GB respectively. This addresses the bug
    https://bugs.launchpad.net/glance/+bug/1449062

    Current usage of "qemu-img" is limited to Glance tasks, which by
    default (since the Mitaka release) are only available to admin users.
    We continue to recommend that tasks only be exposed to trusted users.
glance-16.0.0/releasenotes/notes/clean-up-acceptable-values-store_type_preference-39081e4045894731.yaml
---
upgrade:
  - |
    Deprecated values are no longer recognized for the configuration
    option ``store_type_preference``. The two non-standard values
    'filesystem' and 'vmware_datastore' were DEPRECATED in Newton and are
    no longer operable. The correct values for those stores are 'file' and
    'vmware'. See the Newton release notes for more information at
    https://docs.openstack.org/releasenotes/glance/newton.html#upgrade-notes
glance-16.0.0/releasenotes/notes/image-visibility-changes-fa5aa18dc67244c4.yaml
---
prelude: >
    - The *Community Images* feature has been introduced in the Images API
      v2. This enables a user to make an image available for consumption
      by all other users. In association with this change, the
      'visibility' values for an image have been expanded to include
      'community' and 'shared'.
features:
  - |
    Image 'visibility' changes.

    * Prior to Ocata, an image with 'private' visibility could become
      shared by adding members to it, though its visibility remained
      'private'. In order to make the visibility of images more clear, in
      Ocata the following changes are introduced:

      - A new value for visibility, 'shared', is introduced. Images that
        have or can accept members will no longer be displayed as having
        'private' visibility, reducing confusion among end users.
      - An image must have 'shared' visibility in order to accept members.
        This provides a safeguard against 'private' images being shared
        inadvertently.
      - In order to preserve backward compatibility with the current
        sharing workflow, the default visibility of an image in Ocata is
        'shared'. Consistent with pre-Ocata behavior, this will allow the
        image to accept member operations without first updating the
        visibility of the image. (Keep in mind that an image with
        visibility 'shared' but having no members is not actually
        accessible to anyone other than the image owner, so this is not in
        itself a security problem.)
  - |
    Image visibility may be specified at the time of image creation.

    * As mentioned above, the default visibility of an image is 'shared'.
      If a user wants an image to be private and not accept any members, a
      visibility of 'private' can be explicitly assigned at the time of
      creation.
      - Such an image will require its visibility to be updated to
        'shared' before it will accept members.
  - |
    Image visibility is changed using the image update (PATCH) call.

    * Note: This is not a change. It is simply mentioned for completeness.
  - |
    A new value for the Image 'visibility' field, 'community', is
    introduced.

    * An image with 'community' visibility is available for consumption by
      any user.
    * In order to prevent users spamming other users' image-list response,
      community images are not included in the image-list response unless
      specifically requested by a user.

      - For example, ``GET v2/images?visibility=community``
      - As is standard behavior for the image-list call, other filters may
        be applied to the request. For example, to see the community
        images supplied by user ``931efe8a-0ad7-4610-9116-c199f8807cda``,
        the following call would be made:
        ``GET v2/images?visibility=community&owner=931efe8a-0ad7-4610-9116-c199f8807cda``
upgrade:
  - |
    A new value for the Image 'visibility' field, 'community', is
    introduced.

    * The ability to update an image to have 'community' visibility is
      governed by a policy target named 'communitize_image'. The default
      is empty, that is, any user may communitize an image.
  - |
    Visibility migration of current images

    * Prior to Ocata, the Glance database did not have a 'visibility'
      column, but instead used a boolean 'is_public' column, which was
      translated into 'public' or 'private' visibility in the Images API
      v2 image response. As part of the upgrade to Ocata, a 'visibility'
      column is introduced into the images table. It will be populated as
      follows:

      - All images currently with 'public' visibility (that is, images
        for which 'is_public' is True in the database) will have their
        visibility set to 'public'.
      - Images currently with 'private' visibility (that is, images for
        which 'is_public' is False in the database) **and** that have
        image members, will have their visibility set to 'shared'.
      - Those images currently with 'private' visibility (that is, images
        for which 'is_public' is False in the database) and that have
        **no** image members, will have their visibility set to
        'private'.

    * Note that such images will have to have their visibility updated to
      'shared' before they will accept members.
  - |
    Impact of the Ocata visibility changes on end users of the Images API
    v2

    * We have tried to minimize the impact upon end users, but want to
      point out some issues to be aware of.

      - The migration of image visibility assigns sensible values to
        images, namely, 'private' to images to which end users have *not*
        assigned members, and 'shared' to those images that have members
        at the time of the upgrade. Previously, if an end user wanted to
        share a private image, a member could be added directly. After the
        upgrade, the image will have to have its visibility changed to
        'shared' before a member can be assigned.
      - The default value of 'shared' may seem odd, but it preserves the
        pre-upgrade workflow of: (1) create an image with default
        visibility, (2) add members to that image. Further, an image with
        a visibility of 'shared' that has no members is not accessible to
        other users, so it is functionally a private image.
      - The image-create operation allows a visibility to be set at the
        time of image creation. This option was probably not used much
        given that previously there were only two visibility values
        available, one of which ('public') is by default unassignable by
        end users. Operators may wish to update their documentation or
        tooling to specify a visibility value when end users create
        images.

      To summarize:

      * 'public' - reserved by default for images supplied by the
        operator for the use of all users
      * 'private' - the image is accessible only to its owner
      * 'community' - the image is available for consumption by all users
      * 'shared' - the image is completely accessible to the owner and
        available for consumption by any image members
  - |
    Impact of the Ocata visibility changes on the Images API v1

    * The DEPRECATED Images API v1 does not have a concept of
      "visibility", and in a "pure" v1 deployment, you would not notice
      that anything had changed. Since, however, we hope that there aren't
      many of those around anymore, here's what you can expect to see if
      you use the Images API v1 in a "mixed" deployment.

      - In the v1 API, images have an ``is_public`` field (but no
        ``visibility`` field). Images for which ``is_public`` is True are
        the equivalent of images with 'public' visibility in the v2 API.
        Images for which ``is_public`` is False are the equivalent of v2
        'shared' images if they have members, or the equivalent of v2
        'private' images if they have no members.
      - An image that has 'community' visibility in the v2 API will have
        ``is_public`` == False in the v1 API. It will behave like a
        private image, that is, only the owner (or an admin) will have
        access to the image, and only the owner (or an admin) will see
        the image in the image-list response.
      - Since the default value for 'visibility' upon image creation is
        'shared', an image freshly created using the v1 API can have
        members added to it, just as it did pre-Ocata.
      - If an image has a visibility of 'private' when viewed in the v2
        API, then that image will not accept members in the v1 API. If a
        user wants to share such an image, the user can:

        * Use the v2 API to change the visibility of the image to
          'shared'. Then it will accept members in either the v1 or v2
          API.
        * Use the v1 API to update the image so that ``is_public`` is
          False. This will reset the image's visibility to 'shared', and
          it will now accept member operations.
        * Note that in either case, when dealing with an image that has
          'private' visibility in the v2 API, there is a safeguard
          against a user unintentionally adding a member to an image and
          exposing data. The safeguard is that you must perform an
          additional image update operation in either the v1 or v2 API
          before you can expose it to other users.
glance-16.0.0/releasenotes/notes/.placeholder
glance-16.0.0/releasenotes/notes/deprecate-v1-api-6c7dbefb90fd8772.yaml
---
prelude: >
    - The Images (Glance) version 1 API has been DEPRECATED. Please see the
      deprecations section for more information.
deprecations:
  - With the deprecation of the Images (Glance) version 1 API in the
    Newton release, it is subject to removal on or after the Pike release.
    The configuration options specific to the Images (Glance) v1 API have
    also been deprecated and are subject to removal. An indirectly related
    configuration option, enable_v2_api, has been deprecated too, as it
    becomes redundant once the Images (Glance) v1 API is removed.
    Appropriate warning messages have been set up for the deprecated
    configuration options and for when the Images (Glance) v1 API is
    enabled (being used). Operators are advised to deploy the Images
    (Glance) v2 API. The standard OpenStack deprecation policy will be
    followed for the removals.
glance-16.0.0/releasenotes/notes/newton-bugs-06ed3727b973c271.yaml
---
fixes:
  - |
    Here is a list of other important bugs that have been fixed (or
    partially fixed) along with their descriptions.
    * bug 1617258: Image signature base64 needs to wrap lines
    * bug 1612341: Add cpu thread pinning flavor metadef
    * bug 1609571: Version negotiation API middleware was not up to date
      to include v2.3
    * bug 1602081: Glance needs to use oslo.context's policy dict
    * bug 1599169: glance-replicator size raises an "object of type
      'NoneType' has no len()" exception when no args provided
    * bug 1599192: glance-replicator needs to display human-readable size
    * bug 1585917: member-create will raise 500 error if member-id is
      greater than 255 characters
    * bug 1598985: glance-replicator compare output should show image name
      in addition to image id for missing images
    * bug 1533949: Glance tasks missing configuration item
      "conversion_format"
    * bug 1593177: The default policy needs to be admin for safer default
      deployment scenarios
    * bug 1584076: Swift ACLs disappear on v1 Glance images
    * bug 1591004: Unable to download image with no checksum when cache is
      enabled
    * bug 1584415: Listing images with the created_at and updated_at
      filters fails if an operator is not specified
    * bug 1590608: Services should use http_proxy_to_wsgi middleware from
      oslo.middleware library
    * bug 1584350: etc/glance-registry.conf sample file has redundant
      store section
    * bug 1543937: db-purge fails for very large numbers
    * bug 1580848: There's no exception when an import task is created
      without properties
    * bug 1585584: Glare v0.1 is unable to create public artifact draft
    * bug 1582304: Allow tests to run when http proxy is set
    * bug 1570789: Metadefs API returns 500 error when a 4 byte unicode
      character is passed
    * bug 1532243: glance fails silently if a task flow can not be loaded
    * bug 1568894: glance_store options missing in glance-scrubber.conf
      and glance-cache.conf sample files
    * bug 1568723: secure_proxy_ssl_header not in sample configuration
      files
    * bug 1535231: md-meta with case-insensitive string has problems
      during creation
    * bug 1555275: Tags set changes on delete
    * bug 1558683: Versions endpoint does not support X-Forwarded-Proto
    * bug 1557495: Possible race conditions during status change
glance-16.0.0/releasenotes/notes/queens-release-b6a9f9882c794c24.yaml
---
prelude: >
    - A plugin framework for customizing the processing of imported images
      before they become active is introduced in this release, along with
      a new plugin that injects image metadata properties into imported
      images.
fixes:
  - |
    The following are some highlights of the bug fixes included in this
    release.

    * Bug 1714240_: Avoid restarting a child when terminating
    * Bug 1719252_: Metadefs: Fix 500 for name with more than 80 chars
    * Bug 1720354_: Correctly send auth request to oslo.policy
    * Bug 1733813_: Fix 500 from image-import on queued images
    * Bug 1688189_: Fix member create to handle unicode characters
    * Bug 1737952_: Fix 500 if custom property name is greater than 255
    * Bug 1744824_: Fix py27 eventlet issue <0.22.0
    * Bug 1748916_: Glance default workers total overkill for modern
      servers
    * Bug 1749297_: Fix 500 from list-tasks call with postgresql

    .. _1749297: https://code.launchpad.net/bugs/1749297
    .. _1748916: https://code.launchpad.net/bugs/1748916
    .. _1744824: https://code.launchpad.net/bugs/1744824
    .. _1737952: https://code.launchpad.net/bugs/1737952
    .. _1688189: https://code.launchpad.net/bugs/1688189
    .. _1733813: https://code.launchpad.net/bugs/1733813
    .. _1720354: https://code.launchpad.net/bugs/1720354
    .. _1719252: https://code.launchpad.net/bugs/1719252
    .. _1714240: https://code.launchpad.net/bugs/1714240
upgrade:
  - |
    The default value for the API configuration option ``workers`` was
    previously the number of CPUs available. It has been changed to be the
    min of {number of CPUs, 8}. Any value set for that option, of course,
    is honored. See Bug 1748916_ for details.
  - |
    Some configuration is required in order to make the Interoperable
    Image Import functionality work correctly.
    In particular, the ``node_staging_uri`` value in the glance-api.conf
    file must be set. See the section on Interoperable Image Import in the
    `Glance Administration Guide`_ for more information.
other:
  - |
    The Interoperable Image Import section of the `Image Service API v2
    Reference Guide`_ was updated to include the new ``web-download``
    import method.
  - |
    The section on Interoperable Image Import in the `Glance
    Administration Guide`_ has been updated. Please see that section of
    the Guide for information about the configuration required to make the
    import functionality work correctly.
  - |
    The Database Management sections of the `Glance Administration
    Guide`_ have been revised and updated. This includes information about
    the current experimental status of rolling upgrades and zero-downtime
    database upgrades.

    .. _`Image Service API v2 Reference Guide`: https://developer.openstack.org/api-ref/image/v2/
    .. _`Glance Administration Guide`: https://docs.openstack.org/glance/queens/admin/index.html
security:
  - |
    The ``web-download`` import method, intended to be a replacement for
    the popular Image Service API v1 "copy-from" functionality, is
    configurable so that you can avoid the vulnerability described in
    `OSSN-0078`_. See the Interoperable Image Import section of the
    `Glance Administration Guide`_ for details.

    .. _`OSSN-0078`: https://wiki.openstack.org/wiki/OSSN/OSSN-0078
deprecations:
  - |
    With the introduction of the ``web-download`` import method, we
    consider the Image Service v2 API to have reached feature parity with
    the DEPRECATED v1 API in all important respects. Support for the Image
    Service API v1 ends with the Queens release. The `v1 API was
    deprecated in Newton`_ and will be removed from the codebase at the
    beginning of the Rocky development cycle. Please plan appropriately.

    .. _`v1 API was deprecated in Newton`: http://git.openstack.org/cgit/openstack/glance/commit/?id=63e6dbb1eb006758fbcf7cae83e1d2eacf46b4ab
glance-16.0.0/releasenotes/notes/trust-support-registry-cfd17a6a9ab21d70.yaml
---
features:
  - Implemented re-authentication with trusts when updating image status
    in the registry after image upload. When a long-running image upload
    takes a lot of time (more than the token expiration time), glance uses
    trusts to receive a new token and update the image status in the
    registry. This allows users to upload large images without increasing
    the token expiration time.
glance-16.0.0/releasenotes/notes/pike-rc-1-a5d3f6e8877b52c6.yaml
---
features:
  - |
    The image-list call to the Images v2 API now recognizes a
    ``protected`` query-string parameter. This parameter accepts only two
    values: either ``true`` or ``false``. The filter is case-sensitive.
    Any other value will result in a 400 response to the request. See the
    `protected filter specification`_ document for details.

    .. _`protected filter specification`: https://specs.openstack.org/openstack/glance-specs/specs/pike/implemented/glance/add-protected-filter.html
upgrade:
  - |
    You may set the ``timeout`` option in the ``keystone_authtoken`` group
    in the **glance-api.conf** file.
fixes:
  - |
    The following are some highlights of the bug fixes included in this
    release.
    * Bug 1655727_: Invoke monkey_patching early enough for eventlet
      0.20.1
    * Bug 1657459_: Fix incompatibilities with WebOb 1.7
    * Bug 1554412_: Provide user friendly message for FK failure
    * Bug 1664709_: Do not serve partial image download requests from
      cache
    * Bug 1482129_: Remove duplicate key from dictionary
    * Bug 1229823_: Handle file delete races in image cache
    * Bug 1686488_: Fix glance image-download error
    * Bug 1516706_: Prevent v1_api from making requests to v2_registry
    * Bug 1701346_: Fix trust auth mechanism

    .. _1655727: https://code.launchpad.net/bugs/1655727
    .. _1657459: https://code.launchpad.net/bugs/1657459
    .. _1554412: https://code.launchpad.net/bugs/1554412
    .. _1664709: https://code.launchpad.net/bugs/1664709
    .. _1482129: https://code.launchpad.net/bugs/1482129
    .. _1229823: https://code.launchpad.net/bugs/1229823
    .. _1686488: https://code.launchpad.net/bugs/1686488
    .. _1516706: https://code.launchpad.net/bugs/1516706
    .. _1701346: https://code.launchpad.net/bugs/1701346
other:
  - |
    The `documentation was reorganized`_ in accord with the new standard
    layout for OpenStack projects.
  - |
    Glance now uses the `python 'cryptography' module`_ instead of the
    'pycrypto' module.
  - |
    In accord with current OpenStack policy, Glance log messages are `no
    longer translated`_.

    .. _`documentation was reorganized`: http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html
    .. _`python 'cryptography' module`: https://git.openstack.org/cgit/openstack/glance/commit/?id=5ebde9079b34544cc6642a73b40ec865bcef8580
    .. _`no longer translated`: https://git.openstack.org/cgit/openstack/glance/commit/?id=87a56ce5c78952c5cccf8c6c280ec1e9a60b0b6c
glance-16.0.0/releasenotes/notes/api-minor-version-bump-bbd69dc457fc731c.yaml
---
prelude: >
    - The *minor* version of the Images API v2 is bumped to **2.5**.
upgrade:
  - |
    The **CURRENT** version of the version 2 Images API supplied by Glance
    is now **2.5**. Changes include:

    * The 'visibility' enumeration has been increased from two values
      (``public``, ``private``) to four values (``public``, ``private``,
      ``shared``, and ``community``).
    * Formerly, it was possible to add members to an image whose
      visibility was ``private``, thereby creating a "shared" image. In
      this release, an image must have a visibility of ``shared`` in
      order to accept member operations. Attempting to add a member to an
      image with a visibility of ``private`` will result in a `4xx
      response`_ containing an informative message.

    .. _`4xx response`: https://developer.openstack.org/api-ref/image/v2/?expanded=create-image-member-detail#create-image-member
glance-16.0.0/releasenotes/notes/bp-inject-image-metadata-0a08af539bcce7f2.yaml
---
features:
  - |
    Added a plugin to inject image metadata properties to non-admin images
    created via the interoperable image import process.
upgrade:
  - |
    Added a plugin to inject image metadata properties to non-admin images
    created via the interoperable image import process. This plugin
    implements the spec `Inject metadata properties automatically to
    non-admin images`_. See the spec for a discussion of the use case
    addressed by this plugin. Use of the plugin requires configuration as
    described in `The Image Property Injection Plugin`_ section of the
    Glance Admin Guide.

    Note that the plugin applies *only* to images imported via the
    `interoperable image import process`_. Thus images whose data is set
    using the `image data upload`_ call will *not* be processed by the
    plugin and hence will not have properties injected. You can force end
    users to use the interoperable image import process by restricting the
    data upload call, which is governed by the ``upload_image`` policy in
    the Glance ``policy.json`` file. See the documentation for more
    information.

    .. _`Inject metadata properties automatically to non-admin images`: https://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/inject-automatic-metadata.html
    .. _`interoperable image import process`: https://developer.openstack.org/api-ref/image/v2/#interoperable-image-import
    .. _`The Image Property Injection Plugin`: https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html#the-image-property-injection-plugin
    .. _`image data upload`: https://developer.openstack.org/api-ref/image/v2/#upload-binary-image-data
glance-16.0.0/releasenotes/source/newton.rst
===================================
Newton Series Release Notes
===================================

.. release-notes::
   :branch: origin/stable/newton
   :earliest-version: 13.0.0
glance-16.0.0/releasenotes/source/_static/.placeholder
glance-16.0.0/releasenotes/source/liberty.rst
==============================
Liberty Series Release Notes
==============================

.. release-notes::
   :branch: origin/stable/liberty
glance-16.0.0/releasenotes/source/pike.rst
===================================
Pike Series Release Notes
===================================

.. release-notes::
   :branch: stable/pike
glance-16.0.0/releasenotes/source/conf.py
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Glance Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

import openstackdocstheme

# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'reno.sphinxext',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
# source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Glance Release Notes' copyright = u'2015, Glance Developers' # Release notes are version independent, no need to set version and release release = '' version = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. 
# html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] html_theme_path = [openstackdocstheme.get_html_theme_path()] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. html_use_index = False # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. 
# html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'GlanceReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'GlanceReleaseNotes.tex', u'Glance Release Notes Documentation', u'Glance Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). 
man_pages = [ ('index', 'glancereleasenotes', u'Glance Release Notes Documentation', [u'Glance Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'GlanceReleaseNotes', u'Glance Release Notes Documentation', u'Glance Developers', 'GlanceReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] glance-16.0.0/releasenotes/source/unreleased.rst0000666000175100017510000000016013245511421021722 0ustar zuulzuul00000000000000============================== Current Series Release Notes ============================== .. release-notes:: glance-16.0.0/releasenotes/source/index.rst0000666000175100017510000000024113245511426020707 0ustar zuulzuul00000000000000====================== Glance Release Notes ====================== .. toctree:: :maxdepth: 1 unreleased pike ocata newton mitaka liberty glance-16.0.0/releasenotes/source/mitaka.rst0000666000175100017510000000023213245511421021041 0ustar zuulzuul00000000000000=================================== Mitaka Series Release Notes =================================== .. 
release-notes:: :branch: origin/stable/mitaka glance-16.0.0/releasenotes/source/ocata.rst0000666000175100017510000000023013245511421020660 0ustar zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata glance-16.0.0/releasenotes/source/_templates/0000775000175100017510000000000013245511661021205 5ustar zuulzuul00000000000000glance-16.0.0/releasenotes/source/_templates/.placeholder0000666000175100017510000000000013245511421023452 0ustar zuulzuul00000000000000glance-16.0.0/bandit.yaml0000666000175100017510000002553313245511421015210 0ustar zuulzuul00000000000000# optional: after how many files to update progress #show_progress_every: 100 # optional: plugins directory name #plugins_dir: 'plugins' # optional: plugins discovery name pattern plugin_name_pattern: '*.py' # optional: terminal escape sequences to display colors #output_colors: # DEFAULT: '\033[0m' # HEADER: '\033[95m' # LOW: '\033[94m' # MEDIUM: '\033[93m' # HIGH: '\033[91m' # optional: log format string #log_format: "[%(module)s]\t%(levelname)s\t%(message)s" # globs of files which should be analyzed include: - '*.py' - '*.pyw' # a list of strings, which if found in the path will cause files to be excluded # for example /tests/ - to remove all files in the tests directory exclude_dirs: - '/tests/' profiles: gate: include: - any_other_function_with_shell_equals_true - assert_used - blacklist_calls - blacklist_import_func # One of the blacklisted imports is the subprocess module. Keystone # has to import the subprocess module in a single module for # eventlet support so in most cases bandit won't be able to detect # that subprocess is even being imported. Also, Bandit's # recommendation is just to check that the use is safe without any # documentation on what safe or unsafe usage is. So this test is # skipped.
# - blacklist_imports - exec_used - execute_with_run_as_root_equals_true # - hardcoded_bind_all_interfaces # TODO: enable this test # Not working because wordlist/default-passwords file not bundled, # see https://bugs.launchpad.net/bandit/+bug/1451575 : # - hardcoded_password # Not used because it's prone to false positives: # - hardcoded_sql_expressions # - hardcoded_tmp_directory # TODO: enable this test - jinja2_autoescape_false - linux_commands_wildcard_injection - paramiko_calls - password_config_option_not_marked_secret - request_with_no_cert_validation - set_bad_file_permissions - subprocess_popen_with_shell_equals_true # - subprocess_without_shell_equals_true # TODO: enable this test - start_process_with_a_shell # - start_process_with_no_shell # TODO: enable this test - start_process_with_partial_path - ssl_with_bad_defaults - ssl_with_bad_version - ssl_with_no_version # - try_except_pass # TODO: enable this test - use_of_mako_templates blacklist_calls: bad_name_sets: # - pickle: # qualnames: [pickle.loads, pickle.load, pickle.Unpickler, # cPickle.loads, cPickle.load, cPickle.Unpickler] # message: "Pickle library appears to be in use, possible security issue." # TODO: enable this test - marshal: qualnames: [marshal.load, marshal.loads] message: "Deserialization with the marshal module is possibly dangerous." # - md5: # qualnames: [hashlib.md5, Crypto.Hash.MD2.new, Crypto.Hash.MD4.new, Crypto.Hash.MD5.new, cryptography.hazmat.primitives.hashes.MD5] # message: "Use of insecure MD2, MD4, or MD5 hash function." # TODO: enable this test - mktemp_q: qualnames: [tempfile.mktemp] message: "Use of insecure and deprecated function (mktemp)." - eval: qualnames: [eval] message: "Use of possibly insecure function - consider using safer ast.literal_eval." - mark_safe: names: [mark_safe] message: "Use of mark_safe() may expose cross-site scripting vulnerabilities and should be reviewed." 
- httpsconnection: qualnames: [httplib.HTTPSConnection] message: "Use of HTTPSConnection does not provide security, see https://wiki.openstack.org/wiki/OSSN/OSSN-0033" - yaml_load: qualnames: [yaml.load] message: "Use of unsafe yaml load. Allows instantiation of arbitrary objects. Consider yaml.safe_load()." - urllib_urlopen: qualnames: [urllib.urlopen, urllib.urlretrieve, urllib.URLopener, urllib.FancyURLopener, urllib2.urlopen, urllib2.Request] message: "Audit url open for permitted schemes. Allowing use of file:/ or custom schemes is often unexpected." - random: qualnames: [random.random, random.randrange, random.randint, random.choice, random.uniform, random.triangular] message: "Standard pseudo-random generators are not suitable for security/cryptographic purposes." level: "LOW" # Most of this is based off of Christian Heimes' work on defusedxml: # https://pypi.python.org/pypi/defusedxml/#defusedxml-sax # TODO(jaegerandi): Enable once defusedxml is in global requirements. #- xml_bad_cElementTree: # qualnames: [xml.etree.cElementTree.parse, # xml.etree.cElementTree.iterparse, # xml.etree.cElementTree.fromstring, # xml.etree.cElementTree.XMLParser] # message: "Using {func} to parse untrusted XML data is known to be vulnerable to XML attacks. Replace {func} with its defusedxml equivalent function." #- xml_bad_ElementTree: # qualnames: [xml.etree.ElementTree.parse, # xml.etree.ElementTree.iterparse, # xml.etree.ElementTree.fromstring, # xml.etree.ElementTree.XMLParser] # message: "Using {func} to parse untrusted XML data is known to be vulnerable to XML attacks. Replace {func} with its defusedxml equivalent function." - xml_bad_expatreader: qualnames: [xml.sax.expatreader.create_parser] message: "Using {func} to parse untrusted XML data is known to be vulnerable to XML attacks. Replace {func} with its defusedxml equivalent function."
- xml_bad_expatbuilder: qualnames: [xml.dom.expatbuilder.parse, xml.dom.expatbuilder.parseString] message: "Using {func} to parse untrusted XML data is known to be vulnerable to XML attacks. Replace {func} with its defusedxml equivalent function." - xml_bad_sax: qualnames: [xml.sax.parse, xml.sax.parseString, xml.sax.make_parser] message: "Using {func} to parse untrusted XML data is known to be vulnerable to XML attacks. Replace {func} with its defusedxml equivalent function." - xml_bad_minidom: qualnames: [xml.dom.minidom.parse, xml.dom.minidom.parseString] message: "Using {func} to parse untrusted XML data is known to be vulnerable to XML attacks. Replace {func} with its defusedxml equivalent function." - xml_bad_pulldom: qualnames: [xml.dom.pulldom.parse, xml.dom.pulldom.parseString] message: "Using {func} to parse untrusted XML data is known to be vulnerable to XML attacks. Replace {func} with its defusedxml equivalent function." - xml_bad_etree: qualnames: [lxml.etree.parse, lxml.etree.fromstring, lxml.etree.RestrictedElement, lxml.etree.GlobalParserTLS, lxml.etree.getDefaultParser, lxml.etree.check_docinfo] message: "Using {func} to parse untrusted XML data is known to be vulnerable to XML attacks. Replace {func} with its defusedxml equivalent function." shell_injection: # Start a process using the subprocess module, or one of its wrappers. subprocess: [subprocess.Popen, subprocess.call, subprocess.check_call, subprocess.check_output, utils.execute, utils.execute_with_timeout] # Start a process with a function vulnerable to shell injection. shell: [os.system, os.popen, os.popen2, os.popen3, os.popen4, popen2.popen2, popen2.popen3, popen2.popen4, popen2.Popen3, popen2.Popen4, commands.getoutput, commands.getstatusoutput] # Start a process with a function that is not vulnerable to shell injection.
no_shell: [os.execl, os.execle, os.execlp, os.execlpe, os.execv, os.execve, os.execvp, os.execvpe, os.spawnl, os.spawnle, os.spawnlp, os.spawnlpe, os.spawnv, os.spawnve, os.spawnvp, os.spawnvpe, os.startfile] blacklist_imports: bad_import_sets: - telnet: imports: [telnetlib] level: HIGH message: "Telnet is considered insecure. Use SSH or some other encrypted protocol." - info_libs: imports: [pickle, cPickle, subprocess, Crypto] level: LOW message: "Consider possible security implications associated with {module} module." # Most of this is based off of Christian Heimes' work on defusedxml: # https://pypi.python.org/pypi/defusedxml/#defusedxml-sax - xml_libs: imports: [xml.etree.cElementTree, xml.etree.ElementTree, xml.sax.expatreader, xml.sax, xml.dom.expatbuilder, xml.dom.minidom, xml.dom.pulldom, lxml.etree, lxml] message: "Using {module} to parse untrusted XML data is known to be vulnerable to XML attacks. Replace {module} with the equivalent defusedxml package." level: LOW - xml_libs_high: imports: [xmlrpclib] message: "Using {module} to parse untrusted XML data is known to be vulnerable to XML attacks. Use the defusedxml.xmlrpc.monkey_patch() function to monkey-patch xmlrpclib and mitigate XML vulnerabilities."
level: HIGH hardcoded_tmp_directory: tmp_dirs: ['/tmp', '/var/tmp', '/dev/shm'] hardcoded_password: # Support for full path, relative path and special "%(site_data_dir)s" # substitution (/usr/{local}/share) word_list: "%(site_data_dir)s/wordlist/default-passwords" ssl_with_bad_version: bad_protocol_versions: - 'PROTOCOL_SSLv2' - 'SSLv2_METHOD' - 'SSLv23_METHOD' - 'PROTOCOL_SSLv3' # strict option - 'PROTOCOL_TLSv1' # strict option - 'SSLv3_METHOD' # strict option - 'TLSv1_METHOD' # strict option password_config_option_not_marked_secret: function_names: - oslo.config.cfg.StrOpt - oslo_config.cfg.StrOpt execute_with_run_as_root_equals_true: function_names: - ceilometer.utils.execute - cinder.utils.execute - neutron.agent.linux.utils.execute - nova.utils.execute - nova.utils.trycmd try_except_pass: check_typed_exception: True glance-16.0.0/doc/0000775000175100017510000000000013245511661013624 5ustar zuulzuul00000000000000glance-16.0.0/doc/source/0000775000175100017510000000000013245511661015124 5ustar zuulzuul00000000000000glance-16.0.0/doc/source/deprecate-registry.inc0000666000175100017510000000104413245511421021414 0ustar zuulzuul00000000000000.. note:: The Glance Registry Service and its APIs have been DEPRECATED in the Queens release and are subject to removal at the beginning of the 'S' development cycle, following the `OpenStack standard deprecation policy `_. For more information, see the Glance specification document `Actually Deprecate the Glance Registry `_. glance-16.0.0/doc/source/configuration/0000775000175100017510000000000013245511661017773 5ustar zuulzuul00000000000000glance-16.0.0/doc/source/configuration/glance_api.rst0000666000175100017510000000022413245511421022601 0ustar zuulzuul00000000000000.. _glance-api.conf: --------------- glance-api.conf --------------- .. 
show-options:: :config-file: etc/oslo-config-generator/glance-api.conf glance-16.0.0/doc/source/configuration/glance_manage.rst0000666000175100017510000000024313245511421023261 0ustar zuulzuul00000000000000.. _glance-manage.conf: ------------------ glance-manage.conf ------------------ .. show-options:: :config-file: etc/oslo-config-generator/glance-manage.conf glance-16.0.0/doc/source/configuration/glance_registry.rst0000666000175100017510000000054213245511421023703 0ustar zuulzuul00000000000000.. _glance-registry.conf: -------------------- glance-registry.conf -------------------- .. include:: ../deprecate-registry.inc This configuration file controls how the register server operates. More information can be found in :ref:`configuring-the-glance-registry`. .. show-options:: :config-file: etc/oslo-config-generator/glance-registry.conf glance-16.0.0/doc/source/configuration/index.rst0000666000175100017510000000063613245511421021635 0ustar zuulzuul00000000000000.. _configuring: ============================ Glance Configuration Options ============================ This section provides a list of all possible options for each configuration file. Refer to :ref:`basic-configuration` for a detailed guide in getting started with various option settings. Glance uses the following configuration files for its various services. .. toctree:: :glob: :maxdepth: 1 * glance-16.0.0/doc/source/configuration/glance_cache.rst0000666000175100017510000000023613245511421023076 0ustar zuulzuul00000000000000.. _glance-cache.conf: ----------------- glance-cache.conf ----------------- .. show-options:: :config-file: etc/oslo-config-generator/glance-cache.conf glance-16.0.0/doc/source/configuration/sample-configuration.rst0000666000175100017510000000304213245511421024646 0ustar zuulzuul00000000000000.. _sample-configuration: =========================== Glance Sample Configuration =========================== The following are sample configuration files for all Glance services and utilities. 
These are generated from code and reflect the current state of code in the Glance repository. Sample configuration for Glance API ----------------------------------- This sample configuration can also be viewed in `glance-api.conf.sample <../_static/glance-api.conf.sample>`_. .. literalinclude:: ../_static/glance-api.conf.sample Sample configuration for Glance Registry ---------------------------------------- This sample configuration can also be viewed in `glance-registry.conf.sample <../_static/glance-registry.conf.sample>`_. .. literalinclude:: ../_static/glance-registry.conf.sample Sample configuration for Glance Scrubber ---------------------------------------- This sample configuration can also be viewed in `glance-scrubber.conf.sample <../_static/glance-scrubber.conf.sample>`_. .. literalinclude:: ../_static/glance-scrubber.conf.sample Sample configuration for Glance Manage -------------------------------------- This sample configuration can also be viewed in `glance-manage.conf.sample <../_static/glance-manage.conf.sample>`_. .. literalinclude:: ../_static/glance-manage.conf.sample Sample configuration for Glance Cache ------------------------------------- This sample configuration can also be viewed in `glance-cache.conf.sample <../_static/glance-cache.conf.sample>`_. .. literalinclude:: ../_static/glance-cache.conf.sample glance-16.0.0/doc/source/configuration/glance_scrubber.rst0000666000175100017510000000025513245511421023643 0ustar zuulzuul00000000000000.. _glance-scrubber.conf: -------------------- glance-scrubber.conf -------------------- .. show-options:: :config-file: etc/oslo-config-generator/glance-scrubber.conf glance-16.0.0/doc/source/configuration/configuring.rst0000666000175100017510000015567613245511421023057 0ustar zuulzuul00000000000000.. Copyright 2011 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _basic-configuration: Basic Configuration =================== Glance has a number of options that you can use to configure the Glance API server, the Glance Registry server, and the various storage backends that Glance can use to store images. .. include:: ../deprecate-registry.inc Most configuration is done via configuration files, with the Glance API server and Glance Registry server using separate configuration files. When starting up a Glance server, you can specify the configuration file to use (see :ref:`the documentation on controlling Glance servers `). If you do **not** specify a configuration file, Glance will look in the following directories for a configuration file, in order: * ``~/.glance`` * ``~/`` * ``/etc/glance`` * ``/etc`` The Glance API server configuration file should be named ``glance-api.conf``. Similarly, the Glance Registry server configuration file should be named ``glance-registry.conf``. There are many other configuration files as well, since Glance maintains a configuration file for each of its services. If you installed Glance via your operating system's package management system, it is likely that you will have sample configuration files installed in ``/etc/glance``. In addition, sample configuration files for each server application with detailed comments are available in the :ref:`Glance Sample Configuration ` section. The PasteDeploy configuration (controlling the deployment of the WSGI application for each component) may be found by default in ``<component>-paste.ini`` alongside the main configuration file, ``<component>.conf``.
For example, ``glance-api-paste.ini`` corresponds to ``glance-api.conf``. This pathname for the paste config is configurable, as follows:: [paste_deploy] config_file = /path/to/paste/config Common Configuration Options in Glance -------------------------------------- Glance has a few command-line options that are common to all Glance programs: ``--verbose`` Optional. Default: ``False`` Can be specified on the command line and in configuration files. Turns on the INFO level in logging and prints more verbose command-line interface printouts. ``--debug`` Optional. Default: ``False`` Can be specified on the command line and in configuration files. Turns on the DEBUG level in logging. ``--config-file=PATH`` Optional. Default: See below for default search order. Specified on the command line only. Takes a path to a configuration file to use when running the program. If this CLI option is not specified, then we check to see if the first argument is a file. If it is, then we try to use that as the configuration file. If there is no file or there were no arguments, we search for a configuration file in the following order: * ``~/.glance`` * ``~/`` * ``/etc/glance`` * ``/etc`` The filename that is searched for depends on the server application name. So, if you are starting up the API server, ``glance-api.conf`` is searched for, otherwise ``glance-registry.conf``. ``--config-dir=DIR`` Optional. Default: ``None`` Specified on the command line only. Takes a path to a configuration directory from which all \*.conf fragments are loaded. This provides an alternative to multiple ``--config-file`` options when it is inconvenient to explicitly enumerate all the configuration files, for example when an unknown number of config fragments are being generated by a deployment framework. If ``--config-dir`` is set, then ``--config-file`` is ignored. 
An example usage would be:: $ glance-api --config-dir=/etc/glance/glance-api.d $ ls /etc/glance/glance-api.d 00-core.conf 01-swift.conf 02-ssl.conf ... etc. The numeric prefixes in the example above are only necessary if a specific parse ordering is required (i.e. if an individual config option set in an earlier fragment is overridden in a later fragment). Note that ``glance-manage`` currently loads configuration from three files: * ``glance-registry.conf`` * ``glance-api.conf`` * ``glance-manage.conf`` By default ``glance-manage.conf`` only specifies a custom logging file but other configuration options for ``glance-manage`` should be migrated in there. **Warning**: Options set in ``glance-manage.conf`` will override options of the same section and name set in the other two. Similarly, options in ``glance-api.conf`` will override options set in ``glance-registry.conf``. This tool is planning to stop loading ``glance-registry.conf`` and ``glance-api.conf`` in a future cycle. Configuring Server Startup Options ---------------------------------- You can put the following options in the ``glance-api.conf`` and ``glance-registry.conf`` files, under the ``[DEFAULT]`` section. They enable startup and binding behaviour for the API and registry servers, respectively. .. include:: ../deprecate-registry.inc ``bind_host=ADDRESS`` The address of the host to bind to. Optional. Default: ``0.0.0.0`` ``bind_port=PORT`` The port the server should bind to. Optional. Default: ``9191`` for the registry server, ``9292`` for the API server ``backlog=REQUESTS`` Number of backlog requests to configure the socket with. Optional. Default: ``4096`` ``tcp_keepidle=SECONDS`` Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. Optional. Default: ``600`` ``client_socket_timeout=SECONDS`` Timeout for client connections' socket operations. If an incoming connection is idle for this period it will be closed. A value of `0` means wait forever. Optional. 
Default: ``900`` ``workers=PROCESSES`` Number of Glance API or Registry worker processes to start. Each worker process will listen on the same port. Increasing this value may increase performance (especially if using SSL with compression enabled). Typically it is recommended to have one worker process per CPU. The value `0` will prevent any new worker processes from being created. When ``data_api`` is set to ``glance.db.simple.api``, ``workers`` MUST be set to either ``0`` or ``1``. Optional. Default: The number of CPUs available will be used by default. ``max_request_id_length=LENGTH`` Limits the maximum size of the x-openstack-request-id header which is logged. This has an effect only if the context middleware is configured in the pipeline. Optional. Default: ``64`` (Limited by max_header_line default: 16384) Configuring SSL Support ~~~~~~~~~~~~~~~~~~~~~~~ ``cert_file=PATH`` Path to the certificate file the server should use when binding to an SSL-wrapped socket. Optional. Default: not enabled. ``key_file=PATH`` Path to the private key file the server should use when binding to an SSL-wrapped socket. Optional. Default: not enabled. ``ca_file=PATH`` Path to the CA certificate file the server should use to validate client certificates provided during an SSL handshake. This is ignored if ``cert_file`` and ``key_file`` are not set. Optional. Default: not enabled. Configuring Registry Access ~~~~~~~~~~~~~~~~~~~~~~~~~~~ There are a number of configuration options in Glance that control how the API server accesses the registry server. .. include:: ../deprecate-registry.inc ``registry_client_protocol=PROTOCOL`` If you run a secure Registry server, you need to set this value to ``https`` and also set ``registry_client_key_file`` and optionally ``registry_client_cert_file``. Optional. Default: http ``registry_client_key_file=PATH`` The path to the key file to use in SSL connections to the registry server, if any.
Alternately, you may set the ``GLANCE_CLIENT_KEY_FILE`` environment variable to a filepath of the key file. Optional. Default: Not set. ``registry_client_cert_file=PATH`` Optional. Default: Not set. The path to the cert file to use in SSL connections to the registry server, if any. Alternately, you may set the ``GLANCE_CLIENT_CERT_FILE`` environment variable to a filepath of the cert file. ``registry_client_ca_file=PATH`` Optional. Default: Not set. The path to a Certifying Authority's cert file to use in SSL connections to the registry server, if any. Alternately, you may set the ``GLANCE_CLIENT_CA_FILE`` environment variable to a filepath of the CA cert file. ``registry_client_insecure=False`` Optional. Default: False. When using SSL in connections to the registry server, do not require validation via a certifying authority. This is the registry's equivalent of specifying --insecure on the command line using glanceclient for the API. ``registry_client_timeout=SECONDS`` Optional. Default: ``600``. The period of time, in seconds, that the API server will wait for a registry request to complete. A value of '0' implies no timeout. .. note:: ``use_user_token``, ``admin_user``, ``admin_password``, ``admin_tenant_name``, ``auth_url``, ``auth_strategy`` and ``auth_region`` options were considered harmful and have been deprecated in M release. They will be removed in O release. For more information read `OSSN-0060 `_. Related functionality with uploading big images has been implemented with Keystone trusts support. ``use_user_token=True`` Optional. Default: True DEPRECATED. This option will be removed in O release. Pass the user token through for API requests to the registry. If 'use_user_token' is not in effect then admin credentials can be specified (see below). If admin credentials are specified then they are used to generate a token; this token rather than the original user's token is used for requests to the registry. ``admin_user=USER`` DEPRECATED.
This option will be removed in O release. If 'use_user_token' is not in effect then admin credentials can be specified. Use this parameter to specify the username. Optional. Default: None ``admin_password=PASSWORD`` DEPRECATED. This option will be removed in O release. If 'use_user_token' is not in effect then admin credentials can be specified. Use this parameter to specify the password. Optional. Default: None ``admin_tenant_name=TENANTNAME`` DEPRECATED. This option will be removed in O release. If 'use_user_token' is not in effect then admin credentials can be specified. Use this parameter to specify the tenant name. Optional. Default: None ``auth_url=URL`` DEPRECATED. This option will be removed in O release. If 'use_user_token' is not in effect then admin credentials can be specified. Use this parameter to specify the Keystone endpoint. Optional. Default: None ``auth_strategy=STRATEGY`` DEPRECATED. This option will be removed in O release. If 'use_user_token' is not in effect then admin credentials can be specified. Use this parameter to specify the auth strategy. Optional. Default: noauth ``auth_region=REGION`` DEPRECATED. This option will be removed in O release. If 'use_user_token' is not in effect then admin credentials can be specified. Use this parameter to specify the region. Optional. Default: None Configuring Logging in Glance ----------------------------- There are a number of configuration options in Glance that control how Glance servers log messages. ``--log-config=PATH`` Optional. Default: ``None`` Specified on the command line only. Takes a path to a configuration file to use for configuring logging. Logging Options Available Only in Configuration Files ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You will want to place the different logging options in the **[DEFAULT]** section in your application configuration file. 
As an example, you might do the following for the API server, in a configuration file called ``etc/glance-api.conf``::

  [DEFAULT]
  log_file = /var/log/glance/api.log

``log_file`` The filepath of the file to use for logging messages from Glance's servers. If missing, the default is to output messages to ``stdout``, so if you are running Glance servers in daemon mode (using ``glance-control``) you should make sure that the ``log_file`` option is set appropriately.

``log_dir`` The filepath of the directory to use for log files. If not specified (the default) the ``log_file`` is used as an absolute filepath.

``log_date_format`` The format string for timestamps in the log output. Defaults to ``%Y-%m-%d %H:%M:%S``. See the Python ``logging`` module documentation for more information on setting this format string.

``log_use_syslog`` Use syslog logging functionality. Defaults to False.

Configuring Glance Storage Backends
-----------------------------------

There are a number of configuration options in Glance that control how Glance stores disk images. These configuration options are specified in the ``glance-api.conf`` configuration file in the section ``[glance_store]``.

``default_store=STORE`` Optional. Default: ``file`` Can only be specified in configuration files. Sets the storage backend to use by default when storing images in Glance. Available options are ``file``, ``swift``, ``rbd``, ``sheepdog``, ``cinder`` and ``vsphere``. In order to select a default store it must also be listed in the ``stores`` list described below.

``stores=STORES`` Optional. Default: ``file, http`` A comma separated list of enabled glance stores. Available options include ``filesystem``, ``http``, ``rbd``, ``swift``, ``sheepdog``, ``cinder`` and ``vmware``.

Configuring the Filesystem Storage Backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``filesystem_store_datadir=PATH`` Optional. Default: ``/var/lib/glance/images/`` Can only be specified in configuration files.
`This option is specific to the filesystem storage backend.` Sets the path where the filesystem storage backend writes disk images. Note that the filesystem storage backend will attempt to create this directory if it does not exist. Ensure that the user that ``glance-api`` runs under has write permissions to this directory.

``filesystem_store_file_perm=PERM_MODE`` Optional. Default: ``0`` Can only be specified in configuration files. `This option is specific to the filesystem storage backend.` The required permission value, in octal representation, for the created image file. You can use this value to specify the user of the consuming service (such as Nova) as the only member of the group that owns the created files. To keep the default value, assign a permission value that is less than or equal to 0. Note that the file owner must maintain read permission; if this value removes that permission an error message will be logged and the BadStoreConfiguration exception will be raised. If the Glance service has insufficient privileges to change file access permissions, a file will still be saved, but a warning message will appear in the Glance log.

Configuring the Filesystem Storage Backend with multiple stores
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``filesystem_store_datadirs=PATH:PRIORITY`` Optional. Default: ``/var/lib/glance/images/:1``

Example::

  filesystem_store_datadirs = /var/glance/store
  filesystem_store_datadirs = /var/glance/store1:100
  filesystem_store_datadirs = /var/glance/store2:200

This option can only be specified in the configuration file and is specific to the filesystem storage backend only. The filesystem_store_datadirs option allows administrators to configure multiple store directories to save glance images in the filesystem storage backend. Each directory can be coupled with its priority.

**NOTE**:

* This option can be specified multiple times to specify multiple stores.
* Either the filesystem_store_datadir or the filesystem_store_datadirs option must be specified in glance-api.conf
* A store with priority 200 has precedence over a store with priority 100.
* If no priority is specified, the default priority '0' is associated with it.
* If two filesystem stores have the same priority, the store with the maximum free space will be chosen to store the image.
* If the same store is specified multiple times then a BadStoreConfiguration exception will be raised.

Configuring the Swift Storage Backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``swift_store_auth_address=URL`` Required when using the Swift storage backend. Can only be specified in configuration files. Deprecated. Use ``auth_address`` in the Swift back-end configuration file instead. `This option is specific to the Swift storage backend.` Sets the authentication URL supplied to Swift when making calls to its storage system. For more information about the Swift authentication system, please see the Swift auth documentation. **IMPORTANT NOTE**: Swift authentication addresses use HTTPS by default. This means that if you are running Swift with authentication over HTTP, you need to set your ``swift_store_auth_address`` to the full URL, including the ``http://``.

``swift_store_user=USER`` Required when using the Swift storage backend. Can only be specified in configuration files. Deprecated. Use ``user`` in the Swift back-end configuration file instead. `This option is specific to the Swift storage backend.` Sets the user to authenticate against the ``swift_store_auth_address`` with.

``swift_store_key=KEY`` Required when using the Swift storage backend. Can only be specified in configuration files. Deprecated. Use ``key`` in the Swift back-end configuration file instead. `This option is specific to the Swift storage backend.` Sets the authentication key to authenticate against the ``swift_store_auth_address`` with for the user ``swift_store_user``.

``swift_store_container=CONTAINER`` Optional.
Default: ``glance`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` Sets the name of the container to use for Glance images in Swift.

``swift_store_create_container_on_put`` Optional. Default: ``False`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` If true, Glance will attempt to create the container ``swift_store_container`` if it does not exist.

``swift_store_large_object_size=SIZE_IN_MB`` Optional. Default: ``5120`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` What size, in MB, should Glance start chunking image files and do a large object manifest in Swift? By default, this is the maximum object size in Swift, which is 5GB.

``swift_store_large_object_chunk_size=SIZE_IN_MB`` Optional. Default: ``200`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` When doing a large object manifest, what size, in MB, should Glance write chunks to Swift? The default is 200MB.

``swift_store_multi_tenant=False`` Optional. Default: ``False`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` If set to True, enables multi-tenant storage mode, which causes Glance images to be stored in tenant-specific Swift accounts. When set to False, Glance stores all images in a single Swift account.

``swift_store_multiple_containers_seed`` Optional. Default: ``0`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` When set to 0, a single-tenant store will only use one container to store all images. When set to an integer value between 1 and 32, a single-tenant store will use multiple containers to store images, and this value will determine how many characters from an image UUID are checked when determining what container to place the image in.
The maximum number of containers that will be created is approximately equal to 16^N. This setting is used only when swift_store_multi_tenant is disabled. Example: if this config option is set to 3 and swift_store_container = 'glance', then an image with UUID 'fdae39a1-bac5-4238-aba4-69bcc726e848' would be placed in the container 'glance_fda'. All dashes in the UUID are included when creating the container name but do not count toward the character limit, so in this example with N=10 the container name would be 'glance_fdae39a1-ba'.

When choosing the value for swift_store_multiple_containers_seed, deployers should discuss a suitable value with their swift operations team. The authors of this option recommend that large scale deployments use a value of '2', which will create a maximum of ~256 containers. Choosing a higher number than this, even in extremely large scale deployments, may not have any positive impact on performance and could lead to a large number of empty, unused containers. The largest of deployments could notice an increase in performance if swift rate limits are throttling on a single container.

Note: If dynamic container creation is turned off, any value for this configuration option higher than '1' may be unreasonable as the deployer would have to manually create each container.

``swift_store_admin_tenants`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` Optional. Default: Not set. A list of swift ACL strings that will be applied as both read and write ACLs to the containers created by Glance in multi-tenant mode. This grants the specified tenants/users read and write access to all newly created image objects. The standard swift ACL string formats are allowed, including::

  <project_id>:<user_name>
  <project_id>:*
  *:<user_name>

Multiple ACLs can be combined using a comma separated list, for example::

  swift_store_admin_tenants = service:glance,*:admin

``swift_store_auth_version`` Can only be specified in configuration files. Deprecated.
Use ``auth_version`` in the Swift back-end configuration file instead. `This option is specific to the Swift storage backend.` Optional. Default: ``2`` A string indicating which version of Swift OpenStack authentication to use. See the project `python-swiftclient `_ for more details. ``swift_store_service_type`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` Optional. Default: ``object-store`` A string giving the service type of the swift service to use. This setting is only used if swift_store_auth_version is ``2``. ``swift_store_region`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` Optional. Default: Not set. A string giving the region of the swift service endpoint to use. This setting is only used if swift_store_auth_version is ``2``. This setting is especially useful for disambiguation if multiple swift services might appear in a service catalog during authentication. ``swift_store_endpoint_type`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` Optional. Default: ``publicURL`` A string giving the endpoint type of the swift service endpoint to use. This setting is only used if swift_store_auth_version is ``2``. ``swift_store_ssl_compression`` Can only be specified in configuration files. `This option is specific to the Swift storage backend.` Optional. Default: True. If set to False, disables SSL layer compression of https swift requests. Setting to 'False' may improve performance for images which are already in a compressed format, e.g. qcow2. If set to True then compression will be enabled (provided it is supported by the swift proxy). ``swift_store_cacert`` Can only be specified in configuration files. Optional. Default: ``None`` A string giving the path to a CA certificate bundle that will allow Glance's services to perform SSL verification when communicating with Swift. 
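Putting several of the Swift options described above together, a minimal ``[glance_store]`` section for a single-tenant Swift deployment might look like the following sketch (the values shown simply restate the defaults documented above; adjust them for your deployment):

.. code-block:: ini

  [glance_store]
  # Use the Swift backend by default; it must also appear in ``stores``.
  default_store = swift
  stores = swift, http
  # Container that will hold Glance images; create it on first upload.
  swift_store_container = glance
  swift_store_create_container_on_put = True
  # Chunk images larger than 5120 MB into 200 MB segments.
  swift_store_large_object_size = 5120
  swift_store_large_object_chunk_size = 200

Account credentials are best kept out of this file by using the ``swift_store_config_file`` reference mechanism described in the next section.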
``swift_store_retry_get_count`` The number of times a Swift download will be retried before the request fails. Optional. Default: ``0``

Configuring Multiple Swift Accounts/Stores
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order not to store Swift account credentials in the database, and to have support for multiple accounts (or multiple Swift backing stores), a reference is stored in the database and the corresponding configuration (credentials/parameters) details are stored in the configuration file. Optional. Default: not enabled. The location for this file is specified using the ``swift_store_config_file`` configuration option in the section ``[DEFAULT]``. **If an incorrect value is specified, the Glance API Swift store service will not be configured.**

``swift_store_config_file=PATH`` `This option is specific to the Swift storage backend.`

``default_swift_reference=DEFAULT_REFERENCE`` Required when multiple Swift accounts/backing stores are configured. Can only be specified in configuration files. `This option is specific to the Swift storage backend.` It is the default swift reference that is used to add any new images.

``swift_store_auth_insecure`` If True, bypass SSL certificate verification for Swift. Can only be specified in configuration files. `This option is specific to the Swift storage backend.` Optional. Default: ``False``

Configuring Swift configuration file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If ``swift_store_config_file`` is set, Glance will use information from the file specified under this parameter.

.. note:: The ``swift_store_config_file`` is currently used only for single-tenant Swift store configurations. If you configure a multi-tenant Swift store back end (``swift_store_multi_tenant=True``), ensure that both ``swift_store_config_file`` and ``default_swift_reference`` are *not* set.

The file contains a set of references like:
.. code-block:: ini

  [ref1]
  user = tenant:user1
  key = key1
  auth_version = 2
  auth_address = http://localhost:5000/v2.0

  [ref2]
  user = project_name:user_name2
  key = key2
  user_domain_id = default
  project_domain_id = default
  auth_version = 3
  auth_address = http://localhost:5000/v3

A default reference must be configured. Its parameters will be used when creating new images. For example, to specify ``ref2`` as the default reference, add the following value to the [glance_store] section of the :file:`glance-api.conf` file:

.. code-block:: ini

  default_swift_reference = ref2

In the reference, a user can specify the following parameters:

``user`` A *project_name user_name* pair in the ``project_name:user_name`` format to authenticate against the Swift authentication service.

``key`` An authentication key for a user authenticating against the Swift authentication service.

``auth_address`` An address where the Swift authentication service is located.

``auth_version`` A version of the authentication service to use. Valid versions are ``2`` and ``3`` for Keystone and ``1`` (deprecated) for Swauth and Rackspace. Optional. Default: ``2``

``project_domain_id`` A domain ID of the project which is the requested project-level authorization scope. Optional. Default: ``None`` `This option can be specified if ``auth_version`` is ``3``.`

``project_domain_name`` A domain name of the project which is the requested project-level authorization scope. Optional. Default: ``None`` `This option can be specified if ``auth_version`` is ``3``.`

``user_domain_id`` A domain ID of the user which is the requested domain-level authorization scope. Optional. Default: ``None`` `This option can be specified if ``auth_version`` is ``3``.`

``user_domain_name`` A domain name of the user which is the requested domain-level authorization scope. Optional. Default: ``None`` `This option can be specified if ``auth_version`` is ``3``.`
Configuring the RBD Storage Backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Note**: the RBD storage backend requires the python bindings for librados and librbd. These are in the python-ceph package on Debian-based distributions.

``rbd_store_pool=POOL`` Optional. Default: ``rbd`` Can only be specified in configuration files. `This option is specific to the RBD storage backend.` Sets the RADOS pool in which images are stored.

``rbd_store_chunk_size=CHUNK_SIZE_MB`` Optional. Default: ``4`` Can only be specified in configuration files. `This option is specific to the RBD storage backend.` Images will be chunked into objects of this size (in megabytes). For best performance, this should be a power of two.

``rados_connect_timeout`` Optional. Default: ``0`` Can only be specified in configuration files. `This option is specific to the RBD storage backend.` Prevents glance-api hangups during the connection to RBD. Sets the time to wait (in seconds) for glance-api before closing the connection. Setting ``rados_connect_timeout<=0`` means no timeout.

``rbd_store_ceph_conf=PATH`` Optional. Default: ``/etc/ceph/ceph.conf``, ``~/.ceph/config``, and ``./ceph.conf`` Can only be specified in configuration files. `This option is specific to the RBD storage backend.` Sets the Ceph configuration file to use.

``rbd_store_user=NAME`` Optional. Default: ``admin`` Can only be specified in configuration files. `This option is specific to the RBD storage backend.` Sets the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled. A keyring must be set for this user in the Ceph configuration file, e.g.
with a user ``glance``::

  [client.glance]
  keyring=/etc/glance/rbd.keyring

To set up a user named ``glance`` with minimal permissions, using a pool called ``images``, run::

  rados mkpool images
  ceph-authtool --create-keyring /etc/glance/rbd.keyring
  ceph-authtool --gen-key --name client.glance --cap mon 'allow r' --cap osd 'allow rwx pool=images' /etc/glance/rbd.keyring
  ceph auth add client.glance -i /etc/glance/rbd.keyring

Configuring the Sheepdog Storage Backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``sheepdog_store_address=ADDR`` Optional. Default: ``localhost`` Can only be specified in configuration files. `This option is specific to the Sheepdog storage backend.` Sets the IP address of the sheep daemon.

``sheepdog_store_port=PORT`` Optional. Default: ``7000`` Can only be specified in configuration files. `This option is specific to the Sheepdog storage backend.` Sets the IP port of the sheep daemon.

``sheepdog_store_chunk_size=SIZE_IN_MB`` Optional. Default: ``64`` Can only be specified in configuration files. `This option is specific to the Sheepdog storage backend.` Images will be chunked into objects of this size (in megabytes). For best performance, this should be a power of two.

Configuring the Cinder Storage Backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Note**: Currently the Cinder store is experimental. Deployers should be aware that using it in production right now may be risky. It is expected to work well with most iSCSI Cinder backends such as LVM iSCSI, but will not work with some backends, especially if they don't support host-attach.

**Note**: To create a Cinder volume from an image in this store quickly, additional settings are required. Please see the Volume-backed image documentation for more information.

``cinder_catalog_info=<service_type>:<service_name>:<endpoint_type>`` Optional. Default: ``volumev2::publicURL`` Can only be specified in configuration files.
`This option is specific to the Cinder storage backend.` Sets the info to match when looking for cinder in the service catalog. Format is ``:`` separated values of the form ``<service_type>:<service_name>:<endpoint_type>``.

``cinder_endpoint_template=http://ADDR:PORT/VERSION/%(tenant)s`` Optional. Default: ``None`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Override service catalog lookup with a template for the cinder endpoint. ``%(...)s`` parts are replaced by the value in the request context, e.g. http://localhost:8776/v2/%(tenant)s

``os_region_name=REGION_NAME`` Optional. Default: ``None`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Region name of this node. Deprecated. Use ``cinder_os_region_name`` instead.

``cinder_os_region_name=REGION_NAME`` Optional. Default: ``None`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Region name of this node. If specified, it is used to locate cinder from the service catalog.

``cinder_ca_certificates_file=CA_FILE_PATH`` Optional. Default: ``None`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Location of the CA certificates file to use for cinder client requests.

``cinder_http_retries=TIMES`` Optional. Default: ``3`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Number of cinderclient retries on failed http calls.

``cinder_state_transition_timeout`` Optional. Default: ``300`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Time period, in seconds, to wait for a cinder volume transition to complete.

``cinder_api_insecure=ON_OFF`` Optional. Default: ``False`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Allow to perform insecure SSL requests to cinder.

``cinder_store_user_name=NAME`` Optional.
Default: ``None`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` User name to authenticate against Cinder. If ``None``, the user of the current context is used. **NOTE**: This option is applied only if all of ``cinder_store_user_name``, ``cinder_store_password``, ``cinder_store_project_name`` and ``cinder_store_auth_address`` are set. These options are useful to put image volumes into the internal service project in order to hide the volume from users, and to make the image sharable among projects.

``cinder_store_password=PASSWORD`` Optional. Default: ``None`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Password for the user authenticating against Cinder. If ``None``, the current context auth token is used.

``cinder_store_project_name=NAME`` Optional. Default: ``None`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Project name where the image is stored in Cinder. If ``None``, the project in the current context is used.

``cinder_store_auth_address=URL`` Optional. Default: ``None`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` The address where the Cinder authentication service is listening. If ``None``, the cinder endpoint in the service catalog is used.

``rootwrap_config=NAME`` Optional. Default: ``/etc/glance/rootwrap.conf`` Can only be specified in configuration files. `This option is specific to the Cinder storage backend.` Path to the rootwrap configuration file to use for running commands as root.

Configuring the VMware Storage Backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``vmware_server_host=ADDRESS`` Required when using the VMware storage backend. Can only be specified in configuration files. Sets the address of the ESX/ESXi or vCenter Server target system.
The address can contain an IP (``127.0.0.1``), an IP and port (``127.0.0.1:443``), a DNS name (``www.my-domain.com``) or DNS and port. `This option is specific to the VMware storage backend.`

``vmware_server_username=USERNAME`` Required when using the VMware storage backend. Can only be specified in configuration files. Username for authenticating with VMware ESX/ESXi or vCenter Server.

``vmware_server_password=PASSWORD`` Required when using the VMware storage backend. Can only be specified in configuration files. Password for authenticating with VMware ESX/ESXi or vCenter Server.

``vmware_datastores`` Required when using the VMware storage backend. This option can only be specified in the configuration file and is specific to the VMware storage backend. vmware_datastores allows administrators to configure multiple datastores to save glance images in the VMware store backend. The required format for the option is ``<datacenter_path>:<datastore_name>:<optional_weight>``, where datacenter_path is the inventory path to the datacenter where the datastore is located. An optional weight can be given to specify the priority.

Example::

  vmware_datastores = datacenter1:datastore1
  vmware_datastores = dc_folder/datacenter2:datastore2:100
  vmware_datastores = datacenter1:datastore3:200

**NOTE**:

* This option can be specified multiple times to specify multiple datastores.
* Either the vmware_datastore_name or the vmware_datastores option must be specified in glance-api.conf
* A datastore with weight 200 has precedence over a datastore with weight 100.
* If no weight is specified, the default weight '0' is associated with it.
* If two datastores have the same weight, the datastore with the maximum free space will be chosen to store the image.
* If the datacenter path or datastore name contains a colon (:) symbol, it must be escaped with a backslash.

``vmware_api_retry_count=TIMES`` Optional. Default: ``10`` Can only be specified in configuration files. The number of times the VMware ESX/VC server API must be retried upon connection related issues.
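Taken together, a sketch of the VMware-related settings in ``glance-api.conf`` might look like this (the host name, credentials and datastore paths are illustrative placeholders, not defaults):

.. code-block:: ini

  [glance_store]
  # Select the VMware backend; ``vsphere`` must also be the default store.
  default_store = vsphere
  stores = vsphere, http
  # vCenter or ESX/ESXi target system (placeholder address).
  vmware_server_host = vcenter.example.com
  vmware_server_username = glance_user
  vmware_server_password = secret
  # Two datastores; the one with weight 200 is preferred.
  vmware_datastores = datacenter1:datastore1:200
  vmware_datastores = datacenter1:datastore2:100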
``vmware_task_poll_interval=SECONDS`` Optional. Default: ``5`` Can only be specified in configuration files. The interval used for polling remote tasks invoked on VMware ESX/VC server. ``vmware_store_image_dir`` Optional. Default: ``/openstack_glance`` Can only be specified in configuration files. The path to access the folder where the images will be stored in the datastore. ``vmware_api_insecure=ON_OFF`` Optional. Default: ``False`` Can only be specified in configuration files. Allow to perform insecure SSL requests to ESX/VC server. Configuring the Storage Endpoint ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``swift_store_endpoint=URL`` Optional. Default: ``None`` Can only be specified in configuration files. Overrides the storage URL returned by auth. The URL should include the path up to and excluding the container. The location of an object is obtained by appending the container and object to the configured URL. e.g. ``https://www.my-domain.com/v1/path_up_to_container`` Configuring Glance Image Size Limit ----------------------------------- The following configuration option is specified in the ``glance-api.conf`` configuration file in the section ``[DEFAULT]``. ``image_size_cap=SIZE`` Optional. Default: ``1099511627776`` (1 TB) Maximum image size, in bytes, which can be uploaded through the Glance API server. **IMPORTANT NOTE**: this value should only be increased after careful consideration and must be set to a value under 8 EB (9223372036854775808). Configuring Glance User Storage Quota ------------------------------------- The following configuration option is specified in the ``glance-api.conf`` configuration file in the section ``[DEFAULT]``. ``user_storage_quota`` Optional. Default: 0 (Unlimited). This value specifies the maximum amount of storage that each user can use across all storage systems. Optionally unit can be specified for the value. 
Values are accepted in B, KB, MB, GB or TB, which stand for Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. The default unit is Bytes. An example value would be ``user_storage_quota=20GB``.

Configuring the Image Cache
---------------------------

Glance API servers can be configured to have a local image cache. Caching of image files is transparent and happens using a piece of middleware that can optionally be placed in the server application pipeline. This pipeline is configured in the PasteDeploy configuration file, ``<component>-paste.ini``. You should not generally have to edit this file directly, as it ships with ready-made pipelines for all common deployment flavors.

Enabling the Image Cache Middleware
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable the image cache middleware, the cache middleware must occur in the application pipeline **after** the appropriate context middleware. The cache middleware should be in your ``glance-api-paste.ini`` in a section titled ``[filter:cache]``. It should look like this::

  [filter:cache]
  paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory

A ready-made application pipeline including this filter is defined in the ``glance-api-paste.ini`` file, looking like so::

  [pipeline:glance-api-caching]
  pipeline = versionnegotiation context cache apiv1app

To enable the above application pipeline, in your main ``glance-api.conf`` configuration file, select the appropriate deployment flavor like so::

  [paste_deploy]
  flavor = caching

Enabling the Image Cache Management Middleware
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There is an optional ``cachemanage`` middleware that allows you to directly interact with cached images. Use this flavor in place of the ``cache`` flavor in your API configuration file.
There are three types you can choose: ``cachemanagement``, ``keystone+cachemanagement`` and ``trusted-auth+cachemanagement``::

  [paste_deploy]
  flavor = keystone+cachemanagement

Configuration Options Affecting the Image Cache
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note:: These configuration options must be set in both the glance-cache and glance-api configuration files.

One main configuration file option affects the image cache.

``image_cache_dir=PATH`` Required when image cache middleware is enabled. Default: ``/var/lib/glance/image-cache`` This is the base directory the image cache can write files to. Make sure the directory is writable by the user running the ``glance-api`` server.

``image_cache_driver=DRIVER`` Optional. Choice of ``sqlite`` or ``xattr``. Default: ``sqlite`` The default ``sqlite`` cache driver has no special dependencies, other than the ``python-sqlite3`` library, which is installed on virtually all operating systems with modern versions of Python. It stores information about the cached files in a SQLite database. The ``xattr`` cache driver requires the ``python-xattr>=0.6.0`` library and requires that the filesystem containing ``image_cache_dir`` have access times tracked for all files (in other words, the noatime option CANNOT be set for that filesystem). In addition, ``user_xattr`` must be set on the filesystem's description line in fstab. Because of these requirements, the ``xattr`` cache driver is not available on Windows.

``image_cache_sqlite_db=DB_FILE`` Optional. Default: ``cache.db`` When using the ``sqlite`` cache driver, you can set the name of the database that will be used to store the cached images information. The database is always contained in the ``image_cache_dir``.

``image_cache_max_size=SIZE`` Optional. Default: ``10737418240`` (10 GB) Size, in bytes, that the image cache should be constrained to.
  Image files are cached automatically in the local image cache, even if
  the writing of that image file would put the total cache size over this
  size. The ``glance-cache-pruner`` executable is what prunes the image
  cache to be equal to or less than this value. The
  ``glance-cache-pruner`` executable is designed to be run via cron on a
  regular basis. See more about this executable in
  :ref:`Controlling the Growth of the Image Cache `

.. _configuring-the-glance-registry:

Configuring the Glance Registry
-------------------------------

There are a number of configuration options in Glance that control how
this registry server operates. These configuration options are specified
in the ``glance-registry.conf`` configuration file in the section
``[DEFAULT]``.

**IMPORTANT NOTE**: The glance-registry service is only used in
conjunction with the glance-api service when clients are using the v1
REST API. See `Configuring Glance APIs`_ for more info.

``sql_connection=CONNECTION_STRING`` (``--sql-connection`` when specified on command line)
  Optional. Default: ``None``

  Can be specified in configuration files. Can also be specified on the
  command-line for the ``glance-manage`` program.

  Sets the SQLAlchemy connection string to use when connecting to the
  registry database. Please see the documentation for
  `SQLAlchemy connection strings `_ online. You must urlencode any
  special characters in CONNECTION_STRING.

``sql_timeout=SECONDS``
  Optional. Default: ``3600``

  Can only be specified in configuration files.

  Sets the number of seconds after which SQLAlchemy should reconnect to
  the datastore if no activity has been made on the connection.

``enable_v1_registry=``
  Optional and DEPRECATED. Default: ``True``

``enable_v2_registry=``
  Optional and DEPRECATED. Default: ``True``

.. include:: ../deprecate-registry.inc

Defines which version(s) of the Registry API will be enabled.
If the Glance API server parameter ``enable_v1_api`` has been set to
``True``, then ``enable_v1_registry`` has to be ``True`` as well. If the
Glance API server parameter ``enable_v2_api`` has been set to ``True``
and the parameter ``data_api`` has been set to
``glance.db.registry.api``, then ``enable_v2_registry`` has to be set to
``True``.

Configuring Notifications
-------------------------

Glance can optionally generate notifications to be logged or sent to a
message queue. The configuration options are specified in the
``glance-api.conf`` configuration file.

``[oslo_messaging_notifications]/driver``
  Optional. Default: ``noop``

  Sets the notification driver used by oslo.messaging. Options include
  ``messaging``, ``messagingv2``, ``log`` and ``routing``.

  **NOTE**: In the Mitaka release, the ``[DEFAULT]/notification_driver``
  option was deprecated in favor of
  ``[oslo_messaging_notifications]/driver``.

  For more information see :ref:`Glance notifications ` and
  `oslo.messaging `_.

``[DEFAULT]/disabled_notifications``
  Optional. Default: ``[]``

  List of disabled notifications. A notification can be given either as
  a notification type to disable a single event, or as a notification
  group prefix to disable all events within a group.

  Example: if this config option is set to ``["image.create",
  "metadef_namespace"]``, then the ``image.create`` notification will
  not be sent after an image is created, and none of the notifications
  for metadefinition namespaces will be sent.

Configuring Glance Property Protections
---------------------------------------

Access to image meta properties may be configured using a
:ref:`Property Protections Configuration file `. The location for this
file can be specified in the ``glance-api.conf`` configuration file in
the section ``[DEFAULT]``. **If an incorrect value is specified, the
glance-api service will not start.**

``property_protection_file=PATH``
  Optional. Default: not enabled.
  If ``property_protection_file`` is set, the file may use either roles
  or policies to specify property protections.

``property_protection_rule_format=``
  Optional. Default: ``roles``.

Configuring Glance APIs
-----------------------

The glance-api service implements versions 1 and 2 of the OpenStack
Images API. Disable any version of the Images API using the following
options:

``enable_v1_api=``
  Optional. Default: ``True``

``enable_v2_api=``
  Optional. Default: ``True``

**IMPORTANT NOTE**: To use the v2 registry with the v2 API, you must set
``data_api`` to ``glance.db.registry.api`` in ``glance-api.conf``.

Configuring Glance Tasks
------------------------

Glance Tasks are implemented only for version 2 of the OpenStack Images
API.

The config value ``task_time_to_live`` is used to determine how long a
task remains visible to the user after transitioning to either the
``success`` or the ``failure`` state.

``task_time_to_live=``
  Optional. Default: ``48``

The config value ``task_executor`` is used to determine which executor
should be used by the Glance service to process the task. The currently
available implementation is ``taskflow``.

``task_executor=``
  Optional. Default: ``taskflow``

The ``taskflow`` engine has its own set of configuration options, under
the ``taskflow_executor`` section, that can be tuned to improve the task
execution process. Among the available options, you may find
``engine_mode`` and ``max_workers``. The former allows for selecting an
execution model; the available options are ``serial``, ``parallel`` and
``worker-based``. The ``max_workers`` option controls the number of
workers that will be instantiated per executor instance.

The default value for ``engine_mode`` is ``parallel``, and the default
number of ``max_workers`` is ``10``.
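Putting these task options together, a ``glance-api.conf`` fragment might
look like the following sketch. The group placement shown here is an
assumption; check the sample configuration file shipped with your release
for the exact option groups (values shown are the documented defaults)::

  [task]
  # Hours a task stays visible after reaching 'success' or 'failure'
  task_time_to_live = 48
  task_executor = taskflow

  [taskflow_executor]
  # One of: serial, parallel, worker-based
  engine_mode = parallel
  max_workers = 10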
Configuring Glance performance profiling
----------------------------------------

Glance supports using osprofiler to trace the performance of key
internal handling, including RESTful API calls, DB operations, etc.

``Please be aware that Glance performance profiling is currently a work
in progress feature.`` Although some trace points are available, e.g.
API execution profiling at the wsgi main entry and SQL execution
profiling in the DB module, more fine-grained trace points are being
worked on.

The config value ``enabled`` is used to determine whether to enable the
profiling feature for the glance-api and glance-registry services.

``enabled=``
  Optional. Default: ``False``

There is one more configuration option that needs to be defined to
enable Glance services profiling. The config value ``hmac_keys`` is used
for encrypting context data for performance profiling.

``hmac_keys=``
  Optional. Default: ``SECRET_KEY``

**IMPORTANT NOTE**: in order to make profiling work as designed, the
operator needs to make the HMAC key values consistent for all services
in their deployment. Without the HMAC key, profiling will not be
triggered even if the profiling feature is enabled.

**IMPORTANT NOTE**: previously, HMAC keys (as well as the ``enabled``
parameter) were placed in the `/etc/glance/api-paste.ini` and
`/etc/glance/registry-paste.ini` files for the Glance API and Glance
Registry services respectively. Starting with the osprofiler 0.3.1
release there is no need to set these arguments in the `*-paste.ini`
files. That functionality is still supported, although the config file
values take precedence.

The config value ``trace_sqlalchemy`` is used to determine whether to
enable SQLAlchemy-engine-based SQL execution profiling for the
glance-api and glance-registry services.

``trace_sqlalchemy=``
  Optional.
  Default: ``False``

Configuring Glance public endpoint
----------------------------------

This setting allows an operator to configure the endpoint URL that will
appear in the Glance "versions" response (that is, the response to
``GET /``\ ). This can be necessary when the Glance API service is run
behind a proxy because the default endpoint displayed in the versions
response is that of the host actually running the API service. If Glance
is being run behind a load balancer, for example, direct access to
individual hosts running the Glance API may not be allowed, hence the
load balancer URL would be used for this value.

``public_endpoint=``
  Optional. Default: ``None``

Configuring Glance digest algorithm
-----------------------------------

Digest algorithm that will be used for digital signature. The default is
sha256. Use the command::

  openssl list-message-digest-algorithms

to get the available algorithms supported by the version of OpenSSL on
the platform. Examples are "sha1", "sha256", "sha512", etc. If an
invalid digest algorithm is configured, all digital signature operations
will fail and return a ValueError exception with a "No such digest
method" error.

``digest_algorithm=``
  Optional. Default: ``sha256``

Configuring http_keepalive option
---------------------------------

``http_keepalive=``
  If False, the server will return the header "Connection: close". If
  True, the server will return "Connection: Keep-Alive" in its
  responses. In order to close the client socket connection explicitly
  after the response is sent and read successfully by the client, you
  simply have to set this option to False when you create a wsgi server.

Configuring the Health Check
----------------------------

This setting allows an operator to configure the endpoint URL that will
provide information to a load balancer about whether the API endpoint at
the node is available. Both the Glance API and Glance Registry servers
can be configured to expose a health check URL.
To enable the health check middleware, it must occur in the beginning of
the application pipeline.

The health check middleware should be placed in your
``glance-api-paste.ini`` / ``glance-registry-paste.ini`` in a section
titled ``[filter:healthcheck]``. It should look like this::

  [filter:healthcheck]
  paste.filter_factory = oslo_middleware:Healthcheck.factory
  backends = disable_by_file
  disable_by_file_path = /etc/glance/healthcheck_disable

A ready-made application pipeline including this filter is defined e.g.
in the ``glance-api-paste.ini`` file, looking like so::

  [pipeline:glance-api]
  pipeline = healthcheck versionnegotiation osprofiler unauthenticated-context rootapp

For more information see `oslo.middleware `_.

Configuring supported disk formats
----------------------------------

Each image in Glance has an associated disk format property. When
creating an image the user specifies a disk format. They must select a
format from the set that the Glance service supports. This supported set
can be seen by querying the ``/v2/schemas/images`` resource. An operator
can add or remove disk formats to the supported set. This is done by
setting the ``disk_formats`` parameter which is found in the
``[image_formats]`` section of ``glance-api.conf``.

``disk_formats=``
  Optional. Default: ``ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso,ploop``

.. Copyright 2010 OpenStack Foundation
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you
   may not use this file except in compliance with the License.
   You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
   implied. See the License for the specific language governing
   permissions and limitations under the License.

Using Glance's Image Public APIs
================================

Glance is the reference implementation of the OpenStack Images API. As
such, Glance fully implements versions 1 and 2 of the Images API.

.. include:: ../deprecation-note.inc

There used to be a sentence here saying, "The Images API specification is
developed alongside Glance, but is not considered part of the Glance
project." That's only partially true (or completely false, depending
upon how strict you are about these things). Conceptually, the OpenStack
Images API is an independent definition of a REST API. In practice,
however, the only way to participate in the evolution of the Images API
is to work with the Glance community to define the new functionality and
provide its reference implementation. Further, Glance falls under the
"designated sections" provision of the OpenStack DefCore Guidelines,
which basically means that in order to qualify as "OpenStack", a cloud
exposing an OpenStack Images API must include the Glance Images API
implementation code. Thus, although conceptually independent, the
OpenStack Images APIs are intimately associated with Glance.

**References**

* `Designated sections (definition) `_

* `2014-04-02 DefCore Designated Sections Guidelines `_

* `OpenStack Core Definition `_

* `DefCore Guidelines Repository `_

Glance and the Images APIs: Past, Present, and Future
-----------------------------------------------------

Here's a quick summary of the Images APIs that have been implemented by
Glance.
If you're interested in more details, you can consult the Release Notes
for all the OpenStack releases (beginning with "Bexar") to follow the
evolution of features in Glance and the Images APIs.

Images v1 API
*************

The v1 API was originally designed as a service API for use by Nova and
other OpenStack services. In the Kilo release, the v1.1 API was
downgraded from CURRENT to SUPPORTED. In the Newton release, the version
1 API is officially declared DEPRECATED.

During the deprecation period, the Images v1 API is closed to further
development. The Glance code implementing the v1 API accepts only
serious bugfixes.

Since Folsom, it has been possible to deploy OpenStack without exposing
the Images v1 API to end users. The Compute v2 API contains
image-related API calls allowing users to list images, list images
details, show image details for a specific image, delete images, and
manipulate image metadata. Nova acts as a proxy to Glance for these
image-related calls. It's important to note that the image-related calls
in the Compute v2 API are a proper subset of the calls available in the
Images APIs.

In the Newton release, Nova (and other OpenStack services that consume
images) have been modified to use the Images v2 API by default.

**Reference**

* `OpenStack Standard Deprecation Requirements `_

Images v2 API
*************

The v2 API is the CURRENT OpenStack Images API. It provides a more
friendly interface to consumers than did the v1 API, as it was
specifically designed to expose images-related functionality as a
public-facing endpoint. It's the version that's currently open to
development.

A common strategy is to deploy multiple Glance nodes: internal-facing
nodes providing the Images APIs for internal consumers like Nova, and
external-facing nodes providing the Images v2 API for public use.
The Future
**********

During the long and tumultuous design phase of what has since become an
independent service named "Glare" (the Glance Artifacts Repository), the
Glance community loosely spoke about the Artifacts API being "Glance
v3". This, however, was only a shorthand way of speaking of the
Artifacts effort. The Artifacts API can't be the Images v3 API since
Artifacts are not the same as Images. Conceptually, a virtual machine
image could be an Artifact, and the Glare code has been designed to be
compatible with the Images v2 API. But at this time, there are no plans
to implement an Images v3 API.

During the Newton development cycle, Glare became an independent
OpenStack project. While it's evident that there's a need for an
Artifact Repository in OpenStack, whether it will be as ubiquitous as
the need for an Images Repository isn't clear. On the other hand,
industry trends could go in the opposite direction where everyone needs
Artifacts and deployers view images as simply another type of digital
artifact. As Yogi Berra, an experienced manager, once said, "It's tough
to make predictions, especially about the future."

Authentication
--------------

Glance depends on Keystone and the OpenStack Identity API to handle
authentication of clients. You must obtain an authentication token from
Keystone and send it along with all API requests to Glance through the
``X-Auth-Token`` header. Glance will communicate back to Keystone to
verify the token validity and obtain your identity credentials. See
:ref:`authentication` for more information on integrating with Keystone.

Using v1.X
----------

.. include:: ../deprecation-note.inc

For the purpose of examples, assume there is a Glance API server running
at the URL ``http://glance.openstack.example.org`` on the default port
80.

List Available Images
*********************

We want to see a list of available images that the authenticated user
has access to.
This includes images owned by the user, images shared with the user and
public images.

We issue a ``GET`` request to
``http://glance.openstack.example.org/v1/images`` to retrieve this list
of available images. The data is returned as a JSON-encoded mapping in
the following format::

  {'images': [
    {'uri': 'http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9',
     'name': 'Ubuntu 10.04 Plain',
     'disk_format': 'vhd',
     'container_format': 'ovf',
     'size': '5368709120'}
    ...]}

List Available Images in More Detail
************************************

We want to see a more detailed list of available images that the
authenticated user has access to. This includes images owned by the
user, images shared with the user and public images.

We issue a ``GET`` request to
``http://glance.openstack.example.org/v1/images/detail`` to retrieve
this list of available images. The data is returned as a JSON-encoded
mapping in the following format::

  {'images': [
    {'uri': 'http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9',
     'name': 'Ubuntu 10.04 Plain 5GB',
     'disk_format': 'vhd',
     'container_format': 'ovf',
     'size': '5368709120',
     'checksum': 'c2e5db72bd7fd153f53ede5da5a06de3',
     'created_at': '2010-02-03 09:34:01',
     'updated_at': '2010-02-03 09:34:01',
     'deleted_at': '',
     'status': 'active',
     'is_public': true,
     'min_ram': 256,
     'min_disk': 5,
     'owner': null,
     'properties': {'distro': 'Ubuntu 10.04 LTS'}},
    ...]}

.. note::

  All timestamps returned are in UTC.

  The `updated_at` timestamp is the timestamp when an image's metadata
  was last updated, not its image data, as all image data is immutable
  once stored in Glance.

  The `properties` field is a mapping of free-form key/value pairs that
  have been saved with the image metadata.

  The `checksum` field is an MD5 checksum of the image file data.

  The `is_public` field is a boolean indicating whether the image is
  publicly available.
  The `min_ram` field is an integer specifying the minimum amount of RAM
  needed to run this image on an instance, in megabytes.

  The `min_disk` field is an integer specifying the minimum amount of
  disk space needed to run this image on an instance, in gigabytes.

  The `owner` field is a string which may either be null or which will
  indicate the owner of the image.

Filtering Images Lists
**********************

Both the ``GET /v1/images`` and ``GET /v1/images/detail`` requests take
query parameters that serve to filter the returned list of images. The
following list details these query parameters.

* ``name=NAME``

  Filters images having a ``name`` attribute matching ``NAME``.

* ``container_format=FORMAT``

  Filters images having a ``container_format`` attribute matching
  ``FORMAT``. For more information, see :ref:`formats`.

* ``disk_format=FORMAT``

  Filters images having a ``disk_format`` attribute matching ``FORMAT``.
  For more information, see :ref:`formats`.

* ``status=STATUS``

  Filters images having a ``status`` attribute matching ``STATUS``. For
  more information, see :ref:`image-statuses`.

* ``size_min=BYTES``

  Filters images having a ``size`` attribute greater than or equal to
  ``BYTES``.

* ``size_max=BYTES``

  Filters images having a ``size`` attribute less than or equal to
  ``BYTES``.

These two resources also accept additional query parameters:

* ``sort_key=KEY``

  Results will be ordered by the specified image attribute ``KEY``.
  Accepted values include ``id``, ``name``, ``status``, ``disk_format``,
  ``container_format``, ``size``, ``created_at`` (default) and
  ``updated_at``.

* ``sort_dir=DIR``

  Results will be sorted in the direction ``DIR``. Accepted values are
  ``asc`` for ascending or ``desc`` (default) for descending.

* ``marker=ID``

  An image identifier marker may be specified. When present, only images
  which occur after the identifier ``ID`` will be listed. (These are the
  images that have a `sort_key` later than that of the marker ``ID`` in
  the `sort_dir` direction.)
* ``limit=LIMIT``

  When present, the maximum number of results returned will not exceed
  ``LIMIT``.

  .. note::

    If the specified ``LIMIT`` exceeds the operator defined limit
    (api_limit_max) then the number of results returned may be less
    than ``LIMIT``.

* ``is_public=PUBLIC``

  An admin user may use the `is_public` parameter to control which
  results are returned.

  When the `is_public` parameter is absent or set to `True` the
  following images will be listed: Images whose `is_public` field is
  `True`, owned images and shared images.

  When the `is_public` parameter is set to `False` the following images
  will be listed: Images (owned, shared, or non-owned) whose `is_public`
  field is `False`.

  When the `is_public` parameter is set to `None` all images will be
  listed irrespective of owner, shared status or the `is_public` field.

  .. note::

    Use of the `is_public` parameter is restricted to admin users. For
    all other users it will be ignored.

Retrieve Image Metadata
***********************

We want to see detailed information for a specific virtual machine image
that the Glance server knows about.

We have queried the Glance server for a list of images and the data
returned includes the `uri` field for each available image. This `uri`
field value contains the exact location needed to get the metadata for a
specific image.

Continuing the example from above, in order to get metadata about the
first image returned, we can issue a ``HEAD`` request to the Glance
server for the image's URI.

We issue a ``HEAD`` request to
``http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9``
to retrieve complete metadata for that image. The metadata is returned
as a set of HTTP headers that begin with the prefix ``x-image-meta-``.
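As an illustrative sketch (hypothetical client code, not part of Glance
itself), such ``x-image-meta-`` headers can be split into standard image
fields and free-form properties:

```python
# Hypothetical client-side helper: split x-image-meta-* response headers
# into standard image fields and free-form x-image-meta-property-* pairs.
FIELD_PREFIX = "x-image-meta-"
PROPERTY_PREFIX = "x-image-meta-property-"

def parse_image_headers(headers):
    fields, properties = {}, {}
    for key, value in headers.items():
        key = key.lower()  # HTTP header names are case-insensitive
        if key.startswith(PROPERTY_PREFIX):
            properties[key[len(PROPERTY_PREFIX):]] = value
        elif key.startswith(FIELD_PREFIX):
            fields[key[len(FIELD_PREFIX):]] = value
    return fields, properties

fields, props = parse_image_headers({
    "x-image-meta-name": "Ubuntu 10.04 Plain 5GB",
    "X-Image-Meta-Size": "5368709120",
    "x-image-meta-property-distro": "Ubuntu 10.04 LTS",
})
# fields -> {'name': 'Ubuntu 10.04 Plain 5GB', 'size': '5368709120'}
# props  -> {'distro': 'Ubuntu 10.04 LTS'}
```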
The following shows an example of the HTTP headers returned from the
above ``HEAD`` request::

  x-image-meta-uri              http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9
  x-image-meta-name             Ubuntu 10.04 Plain 5GB
  x-image-meta-disk_format      vhd
  x-image-meta-container_format ovf
  x-image-meta-size             5368709120
  x-image-meta-checksum         c2e5db72bd7fd153f53ede5da5a06de3
  x-image-meta-created_at       2010-02-03 09:34:01
  x-image-meta-updated_at       2010-02-03 09:34:01
  x-image-meta-deleted_at
  x-image-meta-status           available
  x-image-meta-is_public        true
  x-image-meta-min_ram          256
  x-image-meta-min_disk         0
  x-image-meta-owner            null
  x-image-meta-property-distro  Ubuntu 10.04 LTS

.. note::

  All timestamps returned are in UTC.

  The `x-image-meta-updated_at` timestamp is the timestamp when an
  image's metadata was last updated, not its image data, as all image
  data is immutable once stored in Glance.

  There may be multiple headers that begin with the prefix
  `x-image-meta-property-`. These headers are free-form key/value pairs
  that have been saved with the image metadata. The key is the string
  after `x-image-meta-property-` and the value is the value of the
  header.

  The response's `ETag` header will always be equal to the
  `x-image-meta-checksum` value.

  The response's `x-image-meta-is_public` value is a boolean indicating
  whether the image is publicly available.

  The response's `x-image-meta-owner` value is a string which may either
  be null or which will indicate the owner of the image.

Retrieve Raw Image Data
***********************

We want to retrieve the actual raw data for a specific virtual machine
image that the Glance server knows about.

We have queried the Glance server for a list of images and the data
returned includes the `uri` field for each available image. This `uri`
field value contains the exact location needed to get the metadata for a
specific image.
Continuing the example from above, in order to get metadata about the
first image returned, we can issue a ``HEAD`` request to the Glance
server for the image's URI.

We issue a ``GET`` request to
``http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9``
to retrieve metadata for that image as well as the image itself encoded
into the response body. The metadata is returned as a set of HTTP
headers that begin with the prefix ``x-image-meta-``.

The following shows an example of the HTTP headers returned from the
above ``GET`` request::

  x-image-meta-uri              http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9
  x-image-meta-name             Ubuntu 10.04 Plain 5GB
  x-image-meta-disk_format      vhd
  x-image-meta-container_format ovf
  x-image-meta-size             5368709120
  x-image-meta-checksum         c2e5db72bd7fd153f53ede5da5a06de3
  x-image-meta-created_at       2010-02-03 09:34:01
  x-image-meta-updated_at       2010-02-03 09:34:01
  x-image-meta-deleted_at
  x-image-meta-status           available
  x-image-meta-is_public        true
  x-image-meta-min_ram          256
  x-image-meta-min_disk         5
  x-image-meta-owner            null
  x-image-meta-property-distro  Ubuntu 10.04 LTS

.. note::

  All timestamps returned are in UTC.

  The `x-image-meta-updated_at` timestamp is the timestamp when an
  image's metadata was last updated, not its image data, as all image
  data is immutable once stored in Glance.

  There may be multiple headers that begin with the prefix
  `x-image-meta-property-`. These headers are free-form key/value pairs
  that have been saved with the image metadata. The key is the string
  after `x-image-meta-property-` and the value is the value of the
  header.

  The response's `Content-Length` header shall be equal to the value of
  the `x-image-meta-size` header.

  The response's `ETag` header will always be equal to the
  `x-image-meta-checksum` value.

  The response's `x-image-meta-is_public` value is a boolean indicating
  whether the image is publicly available.
  The response's `x-image-meta-owner` value is a string which may either
  be null or which will indicate the owner of the image.

The image data itself will be the body of the HTTP response returned
from the request, which will have a content-type of
`application/octet-stream`.

Add a New Image
***************

We have created a new virtual machine image in some way (created a
"golden image" or snapshotted/backed up an existing image) and we wish
to do two things:

* Store the disk image data in Glance
* Store metadata about this image in Glance

We can do the above two activities in a single call to the Glance API.
Assuming, like in the examples above, that a Glance API server is
running at ``http://glance.openstack.example.org``, we issue a ``POST``
request to add an image to Glance::

  POST http://glance.openstack.example.org/v1/images

The metadata about the image is sent to Glance in HTTP headers. The body
of the HTTP request to the Glance API will be the MIME-encoded disk
image data.

Reserve a New Image
*******************

We can also perform the activities described in `Add a New Image`_ using
two separate calls to the Image API; the first to register the image
metadata, and the second to add the image disk data. This is known as
"reserving" an image.

The first call should be a ``POST`` to
``http://glance.openstack.example.org/v1/images``, which will result in
a new image id being registered with a status of ``queued``::

  {'image':
    {'status': 'queued',
     'id': '71c675ab-d94f-49cd-a114-e12490b328d9',
     ...}
   ...}

The image data can then be added using a ``PUT`` to
``http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9``.
The image status will then be set to ``active`` by Glance.

**Image Metadata in HTTP Headers**

Glance will view as image metadata any HTTP header that it receives in a
``POST`` request where the header key is prefixed with the strings
``x-image-meta-`` and ``x-image-meta-property-``.
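For instance, a client might assemble such metadata headers like this
hypothetical sketch (no request is sent; this only builds the header
mapping, and the helper name is an illustration, not a Glance API):

```python
# Hypothetical sketch: build the x-image-meta-* headers for a POST to
# /v1/images. Only the header mapping is constructed here.
def image_post_headers(name, disk_format, container_format,
                       is_public=False, properties=None):
    headers = {
        'x-image-meta-name': name,
        'x-image-meta-disk_format': disk_format,
        'x-image-meta-container_format': container_format,
        'x-image-meta-is_public': 'true' if is_public else 'false',
    }
    # Free-form key/value pairs become x-image-meta-property-* headers.
    for key, value in (properties or {}).items():
        headers['x-image-meta-property-' + key] = value
    return headers

hdrs = image_post_headers('Ubuntu 10.04 Plain', 'vhd', 'ovf',
                          is_public=True,
                          properties={'distro': 'Ubuntu 10.04 LTS'})
# hdrs['x-image-meta-property-distro'] == 'Ubuntu 10.04 LTS'
```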
The list of metadata headers that Glance accepts are listed below.

* ``x-image-meta-name``

  This header is required, unless reserving an image. Its value should
  be the name of the image.

  Note that the name of an image *is not unique to a Glance node*. It
  would be an unrealistic expectation of users to know all the unique
  names of all other user's images.

* ``x-image-meta-id``

  This header is optional.

  When present, Glance will use the supplied identifier for the image.
  If the identifier already exists in that Glance node, then a
  **409 Conflict** will be returned by Glance. The value of the header
  must be a uuid in hexadecimal string notation (for example,
  ``71c675ab-d94f-49cd-a114-e12490b328d9``).

  When this header is *not* present, Glance will generate an identifier
  for the image and return this identifier in the response (see below).

* ``x-image-meta-store``

  This header is optional. Valid values are one of ``file``, ``rbd``,
  ``swift``, ``cinder``, ``sheepdog`` or ``vsphere``.

  When present, Glance will attempt to store the disk image data in the
  backing store indicated by the value of the header. If the Glance node
  does not support the backing store, Glance will return a
  **400 Bad Request**.

  When not present, Glance will store the disk image data in the backing
  store that is marked as default. See the configuration option
  ``default_store`` for more information.

* ``x-image-meta-disk_format``

  This header is required, unless reserving an image. Valid values are
  one of ``aki``, ``ari``, ``ami``, ``raw``, ``iso``, ``vhd``, ``vhdx``,
  ``vdi``, ``qcow2``, ``vmdk`` or ``ploop``.

  For more information, see :ref:`formats`.

* ``x-image-meta-container_format``

  This header is required, unless reserving an image. Valid values are
  one of ``aki``, ``ari``, ``ami``, ``bare``, ``ova``, ``ovf``, or
  ``docker``.

  For more information, see :ref:`formats`.

* ``x-image-meta-size``

  This header is optional.
  When present, Glance assumes that the expected size of the request
  body will be the value of this header. If the length in bytes of the
  request body *does not match* the value of this header, Glance will
  return a **400 Bad Request**.

  When not present, Glance will calculate the image's size based on the
  size of the request body.

* ``x-image-meta-checksum``

  This header is optional. When present, it specifies the **MD5**
  checksum of the image file data.

  When present, Glance will verify the checksum generated from the
  back-end store while storing your image against this value and return
  a **400 Bad Request** if the values do not match.

* ``x-image-meta-is_public``

  This header is optional.

  When Glance finds the string "true" (case-insensitive), the image is
  marked as a public one, meaning that any user may view its metadata
  and may read the disk image from Glance.

  When not present, the image is assumed to be *not public* and owned by
  a user.

* ``x-image-meta-min_ram``

  This header is optional. When present, it specifies the minimum
  amount of RAM in megabytes required to run this image on a server.

  When not present, the image is assumed to have a minimum RAM
  requirement of 0.

* ``x-image-meta-min_disk``

  This header is optional. When present, it specifies the expected
  minimum disk space in gigabytes required to run this image on a
  server.

  When not present, the image is assumed to have a minimum disk space
  requirement of 0.

* ``x-image-meta-owner``

  This header is optional and only meaningful for admins.

  Glance normally sets the owner of an image to be the tenant or user
  (depending on the "owner_is_tenant" configuration option) of the
  authenticated user issuing the request. However, if the authenticated
  user has the Admin role, this default may be overridden by setting
  this header to null or to a string identifying the owner of the image.
* ``x-image-meta-property-*``

  When Glance receives any HTTP header whose key begins with the string
  prefix ``x-image-meta-property-``, Glance adds the key and value to a
  set of custom, free-form image properties stored with the image. The
  key is a lower-cased string following the prefix
  ``x-image-meta-property-`` with dashes and punctuation replaced with
  underscores.

  For example, if the following HTTP header were sent::

    x-image-meta-property-distro  Ubuntu 10.10

  then a key/value pair of "distro"/"Ubuntu 10.10" will be stored with
  the image in Glance.

  There is no limit on the number of free-form key/value attributes
  that can be attached to the image. However, keep in mind that the 8K
  limit on the size of all the HTTP headers sent in a request will
  effectively limit the number of image properties.

Update an Image
***************

Glance will consider any HTTP header that it receives in a ``PUT``
request as an instance of image metadata. In this case, the header key
should be prefixed with the strings ``x-image-meta-`` and
``x-image-meta-property-``.

If an image was previously reserved, and thus is in the ``queued``
state, then image data can be added by including it as the request body.
If the image already has data associated with it (for example, it is not
in the ``queued`` state), then including a request body will result in a
**409 Conflict** exception.

On success, the ``PUT`` request will return the image metadata encoded
as HTTP headers.

See more about image statuses here: :ref:`image-statuses`

List Image Memberships
**********************

We want to see a list of the other system tenants (or users, if
"owner_is_tenant" is False) that may access a given virtual machine
image that the Glance server knows about. We take the `uri` field of the
image data, append ``/members`` to it, and issue a ``GET`` request on
the resulting URL.
Continuing from the example above, in order to get the memberships for the first image returned, we can issue a ``GET`` request to the Glance server for ``http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9/members``. And we will get back JSON data such as the following:: {'members': [ {'member_id': 'tenant1', 'can_share': false} ...]} The `member_id` field identifies a tenant with which the image is shared. If that tenant is authorized to further share the image, the `can_share` field is `true`. List Shared Images ****************** We want to see a list of images which are shared with a given tenant. We issue a ``GET`` request to ``http://glance.openstack.example.org/v1/shared-images/tenant1``. We will get back JSON data such as the following:: {'shared_images': [ {'image_id': '71c675ab-d94f-49cd-a114-e12490b328d9', 'can_share': false} ...]} The `image_id` field identifies an image shared with the tenant named by *member_id*. If the tenant is authorized to further share the image, the `can_share` field is `true`. Add a Member to an Image ************************ We want to authorize a tenant to access a private image. We issue a ``PUT`` request to ``http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9/members/tenant1``. With no body, this will add the membership to the image, leaving existing memberships unmodified and defaulting new memberships to have `can_share` set to `false`. We may also optionally attach a body of the following form:: {'member': {'can_share': true} } If such a body is provided, both existing and new memberships will have `can_share` set to the provided value (either `true` or `false`). This query will return a 204 ("No Content") status code. Remove a Member from an Image ***************************** We want to revoke a tenant's right to access a private image. We issue a ``DELETE`` request to ``http://glance.openstack.example.org/v1/images/1/members/tenant1``. 
This query will return a 204 ("No Content") status code. Replace a Membership List for an Image ************************************** The full membership list for a given image may be replaced. We issue a ``PUT`` request to ``http://glance.openstack.example.org/v1/images/71c675ab-d94f-49cd-a114-e12490b328d9/members`` with a body of the following form:: {'memberships': [ {'member_id': 'tenant1', 'can_share': false} ...]} All existing memberships which are not named in the replacement body are removed, and those which are named have their `can_share` settings changed as specified. (The `can_share` setting may be omitted, which will cause that setting to remain unchanged in the existing memberships.) All new memberships will be created, with `can_share` defaulting to `false` unless it is specified otherwise. Image Membership Changes in Version 2.0 --------------------------------------- Version 2.0 of the Images API eliminates the ``can_share`` attribute of image membership. In the version 2.0 model, image sharing is not transitive. In version 2.0, image members have a ``status`` attribute that reflects how the image should be treated with respect to that image member's image-list. * The ``status`` attribute may have one of three values: ``pending``, ``accepted``, or ``rejected``. * By default, only those shared images with status ``accepted`` are included in an image member's image-list. * Only an image member may change his/her own membership status. * Only an image owner may create members on an image. The status of a newly created image member is ``pending``. The image owner cannot change the status of a member. Distinctions from Version 1.x API Calls *************************************** * The response to a request to list the members of an image has changed. call: ``GET`` on ``/v2/images/{imageId}/members`` response: see the JSON schema at ``/v2/schemas/members`` * The request body in the call to create an image member has changed. 
  call: ``POST`` to ``/v2/images/{imageId}/members``

  request body::

      {"member": "<memberId>"}

  where the {memberId} is the tenant ID of the image member.

  The member status of a newly created image member is ``pending``.

New API Calls
*************

* Change the status of an image member

  call: ``PUT`` on ``/v2/images/{imageId}/members/{memberId}``

  request body::

      {"status": "<status>"}

  where ``<status>`` is ``pending``, ``accepted``, or ``rejected``. The
  {memberId} is the tenant ID of the image member.

Images v2 Tasks API
-------------------

Version 2 of the OpenStack Images API introduces a Task resource that is used
to create and monitor long-running asynchronous image-related processes. See
the :ref:`tasks` section of the Glance documentation for more information.

The following Task calls are available:

Create a Task
*************

A user wants to initiate a task. The user issues a ``POST`` request to
``/v2/tasks``. The request body is of Content-type ``application/json`` and
must contain the following fields:

* ``type``: a string specified by the enumeration defined in the Task schema

* ``input``: a JSON object. The content is defined by the cloud provider who
  has exposed the endpoint being contacted.

The response is a Task entity as defined by the Task schema. It includes an
``id`` field that can be used in a subsequent call to poll the task for
status changes. A task is created in ``pending`` status.

Show a Task
***********

A user wants to see detailed information about a task the user owns. The user
issues a ``GET`` request to ``/v2/tasks/{taskId}``.

The response is in ``application/json`` format. The exact structure is given
by the task schema located at ``/v2/schemas/task``.

List Tasks
**********

A user wants to see what tasks have been created in his or her project. The
user issues a ``GET`` request to ``/v2/tasks``.

The response is in ``application/json`` format. The exact structure is given
by the task schema located at ``/v2/schemas/tasks``.
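Putting the task calls above together, a client might build a create-task request and then poll the returned id. The sketch below is illustrative only: the exact contents of ``input`` are defined by the cloud provider, and the task id shown is a made-up placeholder.

```python
import json

# Illustrative create-task body; "type" must be a value from the enumeration
# in the task schema, and the shape of "input" is provider-defined.
create_body = json.dumps({
    "type": "import",
    "input": {
        "import_from": "http://example.org/fedora.qcow2",
        "import_from_format": "qcow2",
        "image_properties": {"name": "my-imported-image"},
    },
})

# The create response includes an "id" field; a client would then poll
# GET /v2/tasks/{taskId} for status changes.
task_id = "1b2c3d4e-0000-0000-0000-000000000000"  # hypothetical id
poll_url = "/v2/tasks/{taskId}".replace("{taskId}", task_id)
print(poll_url)
```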
Note that, as indicated by the schema, the list of tasks is provided in a
sparse format. To see more information about a particular task in the list,
the user would use the show task call described above.

Filtering and Sorting the Tasks List
************************************

The ``GET /v2/tasks`` request takes query parameters that serve to filter the
returned list of tasks. The following list details these query parameters.

* ``status={status}``

  Filters the list to display only those tasks in the specified status. See
  the task schema or the :ref:`task-statuses` section of this documentation
  for the legal values to use for ``{status}``.

  For example, a request to ``GET /v2/tasks?status=pending`` would return
  only those tasks whose current status is ``pending``.

* ``type={type}``

  Filters the list to display only those tasks of the specified type. See the
  enumeration defined in the task schema for the legal values to use for
  ``{type}``.

  For example, a request to ``GET /v2/tasks?type=import`` would return only
  import tasks.

* ``sort_dir={direction}``

  Sorts the list of tasks according to ``updated_at`` datetime. Legal values
  are ``asc`` (ascending) and ``desc`` (descending). By default, the task
  list is sorted by ``created_at`` time in descending chronological order.

API Message Localization
------------------------

Glance supports HTTP message localization. For example, an HTTP client can
receive API messages in Chinese even if the locale language of the server is
English.

How to use it
*************

To receive localized API messages, the HTTP client needs to specify the
**Accept-Language** header to indicate the language that will translate the
message.
For more information about Accept-Language, please refer to
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

A typical curl API request will be like below::

   curl -i -X GET -H 'Accept-Language: zh' -H 'Content-Type: application/json'
   http://glance.openstack.example.org/v2/images/aaa

Then the response will be like the following::

   HTTP/1.1 404 Not Found
   Content-Length: 234
   Content-Type: text/html; charset=UTF-8
   X-Openstack-Request-Id: req-54d403a0-064e-4544-8faf-4aeef086f45a
   Date: Sat, 22 Feb 2014 06:26:26 GMT

   <html>
    <head>
     <title>404 Not Found</title>
    </head>
    <body>
     <h1>404 Not Found</h1>
     找不到任何具有标识 aaa 的映像<br /><br />
    </body>
   </html>
.. note::

   Make sure to have a language package under /usr/share/locale-langpack/ on
   the target Glance server.

.. Copyright 2010 OpenStack Foundation
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

Image Identifiers
=================

Images are uniquely identified by way of a URI that matches the following
signature::

    <Glance Server Location>/v1/images/<ID>

where ``<Glance Server Location>`` is the resource location of the Glance
service that knows about an image, and ``<ID>`` is the image's identifier.
Image identifiers in Glance are *uuids*, making them *globally unique*.

.. Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.
Metadata Definition Concepts ============================ The metadata definition service was added to Glance in the Juno release of OpenStack. It provides a common API for vendors, admins, services, and users to meaningfully **define** available key / value pair metadata that can be used on different types of resources (images, artifacts, volumes, flavors, aggregates, and other resources). A definition includes a property's key, its description, its constraints, and the resource types to which it can be associated. This catalog does not store the values for specific instance properties. For example, a definition of a virtual CPU topology property for the number of cores will include the base key to use (for example, cpu_cores), a description, and value constraints like requiring it to be an integer. So, a user, potentially through Horizon, would be able to search this catalog to list the available properties they can add to a flavor or image. They will see the virtual CPU topology property in the list and know that it must be an integer. When the user adds the property its key and value will be stored in the service that owns that resource (for example, Nova for flavors and in Glance for images). The catalog also includes any additional prefix required when the property is applied to different types of resources, such as "hw\_" for images and "hw:" for flavors. So, on an image, the user would know to set the property as "hw_cpu_cores=1". Terminology ----------- Background ~~~~~~~~~~ The term *metadata* can become very overloaded and confusing. This catalog is about the additional metadata that is passed as arbitrary key / value pairs or tags across various artifacts and OpenStack services. 
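The prefixing behavior described above (``hw_`` on images, ``hw:`` on flavors) can be sketched as follows. The prefix table here is illustrative only; the real prefixes come from each namespace's resource type associations in the catalog.

```python
# Illustrative mapping; actual prefixes are defined per resource type
# association in the metadata definitions catalog.
PREFIXES = {
    "OS::Glance::Image": "hw_",
    "OS::Nova::Flavor": "hw:",
}

def prefixed_key(base_key, resource_type):
    """Apply the resource-type-specific prefix to a base property key."""
    return PREFIXES.get(resource_type, "") + base_key

print(prefixed_key("cpu_cores", "OS::Glance::Image"))  # hw_cpu_cores
print(prefixed_key("cpu_cores", "OS::Nova::Flavor"))   # hw:cpu_cores
```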
Below are a few examples of the various terms used for metadata across OpenStack services today: +-------------------------+---------------------------+----------------------+ | Nova | Cinder | Glance | +=========================+===========================+======================+ | Flavor | Volume & Snapshot | Image & Snapshot | | + *extra specs* | + *image metadata* | + *properties* | | Host Aggregate | + *metadata* | + *tags* | | + *metadata* | VolumeType | | | Servers | + *extra specs* | | | + *metadata* | + *qos specs* | | | + *scheduler_hints* | | | | + *tags* | | | +-------------------------+---------------------------+----------------------+ Catalog Concepts ~~~~~~~~~~~~~~~~ The below figure illustrates the concept terminology used in the metadata definitions catalog:: A namespace is associated with 0 to many resource types, making it visible to the API / UI for applying to that type of resource. RBAC Permissions are managed at a namespace level. +----------------------------------------------+ | Namespace | | | | +-----------------------------------------+ | | | Object Definition | | | | | | +--------------------+ | | +-------------------------------------+ | | +--> | Resource Type: | | | | Property Definition A (key=integer) | | | | | e.g. Nova Flavor | | | +-------------------------------------+ | | | +--------------------+ | | | | | | | +-------------------------------------+ | | | | | | Property Definition B (key=string) | | | | +--------------------+ | | +-------------------------------------+ | +--+--> | Resource Type: | | | | | | | e.g. Glance Image | | +-----------------------------------------+ | | +--------------------+ | | | | +-------------------------------------+ | | | | Property Definition C (key=boolean) | | | +--------------------+ | +-------------------------------------+ | +--> | Resource Type: | | | | e.g. 
Cinder Volume | +----------------------------------------------+ +--------------------+ Properties may be defined standalone or within the context of an object. Catalog Terminology ~~~~~~~~~~~~~~~~~~~ The following terminology is used within the metadata definition catalog. **Namespaces** Metadata definitions are contained in namespaces. - Specify the access controls (CRUD) for everything defined in it. Allows for admin only, different projects, or the entire cloud to define and use the definitions in the namespace - Associates the contained definitions to different types of resources **Properties** A property describes a single property and its primitive constraints. Each property can ONLY be a primitive type: * string, integer, number, boolean, array Each primitive type is described using simple JSON schema notation. This means NO nested objects and no definition referencing. **Objects** An object describes a group of one to many properties and their primitive constraints. Each property in the group can ONLY be a primitive type: * string, integer, number, boolean, array Each primitive type is described using simple JSON schema notation. This means NO nested objects. The object may optionally define required properties under the semantic understanding that a user who uses the object should provide all required properties. **Resource Type Association** Resource type association specifies the relationship between resource types and the namespaces that are applicable to them. This information can be used to drive UI and CLI views. For example, the same namespace of objects, properties, and tags may be used for images, snapshots, volumes, and flavors. Or a namespace may only apply to images. Resource types should be aligned with Heat resource types whenever possible. https://docs.openstack.org/heat/latest/template_guide/openstack.html It is important to note that the same base property key can require different prefixes depending on the target resource type. 
The API provides a way to retrieve the correct property based on the target
resource type. Below are a few examples:

The desired virtual CPU topology can be set on both images and flavors via
metadata. The keys have different prefixes on images than on flavors. On
flavors keys are prefixed with ``hw:``, but on images the keys are prefixed
with ``hw_``. For more:
https://github.com/openstack/nova-specs/blob/master/specs/juno/implemented/virt-driver-vcpu-topology.rst

Another example is the AggregateInstanceExtraSpecsFilter and scoped
properties (e.g. properties with something:something=value). For scoped /
namespaced properties, the AggregateInstanceExtraSpecsFilter requires a
prefix of "aggregate_instance_extra_specs:" to be used on flavors but not on
the aggregate itself. Otherwise, the filter will not evaluate the property
during scheduling.

So, on a host aggregate, you may see:

companyx:fastio=true

But then when used on the flavor, the AggregateInstanceExtraSpecsFilter
needs:

aggregate_instance_extra_specs:companyx:fastio=true

In some cases, there may be multiple different filters that may use the same
property with different prefixes. In this case, the correct prefix needs to
be set based on which filter is enabled.

.. Copyright (c) 2014 Hewlett-Packard Development Company, L.P.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.
Using Glance's Metadata Definitions Catalog Public APIs
=======================================================

A common API hosted by the Glance service for vendors, admins, services, and
users to meaningfully define available key / value pair and tag metadata. The
intent is to enable better metadata collaboration across artifacts, services,
and projects for OpenStack users.

This is about the definition of the available metadata that can be used on
different types of resources (images, artifacts, volumes, flavors,
aggregates, etc). A definition includes a property's type, its key, its
description, and its constraints. This catalog will not store the values for
specific instance properties.

For example, a definition of a virtual CPU topology property for the number
of cores will include the key to use, a description, and value constraints
like requiring it to be an integer. So, a user, potentially through Horizon,
would be able to search this catalog to list the available properties they
can add to a flavor or image. They will see the virtual CPU topology property
in the list and know that it must be an integer.

In the Horizon example, when the user adds the property, its key and value
will be stored in the service that owns that resource (Nova for flavors and
in Glance for images).

Diagram: https://wiki.openstack.org/w/images/b/bb/Glance-Metadata-API.png

Glance Metadata Definitions Catalog implementation started with API version
v2.

Authentication
--------------

Glance depends on Keystone and the OpenStack Identity API to handle
authentication of clients. You must obtain an authentication token from
Keystone and send it along with all API requests to Glance through the
``X-Auth-Token`` header. Glance will communicate back to Keystone to verify
the token validity and obtain your identity credentials.

See :ref:`authentication` for more information on integrating with Keystone.
Using v2.X ---------- For the purpose of examples, assume there is a Glance API server running at the URL ``http://glance.openstack.example.org`` on the default port 80. List Available Namespaces ************************* We want to see a list of available namespaces that the authenticated user has access to. This includes namespaces owned by the user, namespaces shared with the user and public namespaces. We issue a ``GET`` request to ``http://glance.openstack.example.org/v2/metadefs/namespaces`` to retrieve this list of available namespaces. The data is returned as a JSON-encoded mapping in the following format:: { "namespaces": [ { "namespace": "MyNamespace", "display_name": "My User Friendly Namespace", "description": "My description", "visibility": "public", "protected": true, "owner": "The Test Owner", "self": "/v2/metadefs/namespaces/MyNamespace", "schema": "/v2/schemas/metadefs/namespace", "created_at": "2014-08-28T17:13:06Z", "updated_at": "2014-08-28T17:13:06Z", "resource_type_associations": [ { "name": "OS::Nova::Aggregate", "created_at": "2014-08-28T17:13:06Z", "updated_at": "2014-08-28T17:13:06Z" }, { "name": "OS::Nova::Flavor", "prefix": "aggregate_instance_extra_specs:", "created_at": "2014-08-28T17:13:06Z", "updated_at": "2014-08-28T17:13:06Z" } ] } ], "first": "/v2/metadefs/namespaces?sort_key=created_at&sort_dir=asc", "schema": "/v2/schemas/metadefs/namespaces" } .. note:: Listing namespaces will only show the summary of each namespace including counts and resource type associations. Detailed response including all its objects definitions, property definitions etc. will only be available on each individual GET namespace request. Filtering Namespaces Lists ************************** ``GET /v2/metadefs/namespaces`` requests take query parameters that serve to filter the returned list of namespaces. The following list details these query parameters. 
* ``resource_types=RESOURCE_TYPES``

  Filters namespaces having a ``resource_types`` within the list of comma
  separated ``RESOURCE_TYPES``.

GET resource also accepts additional query parameters:

* ``sort_key=KEY``

  Results will be ordered by the specified sort attribute ``KEY``. Accepted
  values include ``namespace``, ``created_at`` (default) and ``updated_at``.

* ``sort_dir=DIR``

  Results will be sorted in the direction ``DIR``. Accepted values are
  ``asc`` for ascending or ``desc`` (default) for descending.

* ``marker=NAMESPACE``

  A namespace identifier marker may be specified. When present, only
  namespaces which occur after the identifier ``NAMESPACE`` will be listed,
  i.e. the namespaces which have a `sort_key` later than that of the marker
  ``NAMESPACE`` in the `sort_dir` direction.

* ``limit=LIMIT``

  When present, the maximum number of results returned will not exceed
  ``LIMIT``.

  .. note::

     If the specified ``LIMIT`` exceeds the operator defined limit
     (api_limit_max) then the number of results returned may be less than
     ``LIMIT``.

* ``visibility=PUBLIC``

  An admin user may use the `visibility` parameter to control which results
  are returned (PRIVATE or PUBLIC).

Retrieve Namespace
******************

We want to see more detailed information about a namespace that the
authenticated user has access to. The detail includes the properties,
objects, and resource type associations.

We issue a ``GET`` request to
``http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}``
to retrieve the namespace details.
The data is returned as a JSON-encoded mapping in the following format:: { "namespace": "MyNamespace", "display_name": "My User Friendly Namespace", "description": "My description", "visibility": "public", "protected": true, "owner": "The Test Owner", "schema": "/v2/schemas/metadefs/namespace", "resource_type_associations": [ { "name": "OS::Glance::Image", "prefix": "hw_", "created_at": "2014-08-28T17:13:06Z", "updated_at": "2014-08-28T17:13:06Z" }, { "name": "OS::Cinder::Volume", "prefix": "hw_", "properties_target": "image", "created_at": "2014-08-28T17:13:06Z", "updated_at": "2014-08-28T17:13:06Z" }, { "name": "OS::Nova::Flavor", "prefix": "filter1:", "created_at": "2014-08-28T17:13:06Z", "updated_at": "2014-08-28T17:13:06Z" } ], "properties": { "nsprop1": { "title": "My namespace property1", "description": "More info here", "type": "boolean", "default": true }, "nsprop2": { "title": "My namespace property2", "description": "More info here", "type": "string", "default": "value1" } }, "objects": [ { "name": "object1", "description": "my-description", "self": "/v2/metadefs/namespaces/MyNamespace/objects/object1", "schema": "/v2/schemas/metadefs/object", "created_at": "2014-08-28T17:13:06Z", "updated_at": "2014-08-28T17:13:06Z", "required": [], "properties": { "prop1": { "title": "My object1 property1", "description": "More info here", "type": "array", "items": { "type": "string" } } } }, { "name": "object2", "description": "my-description", "self": "/v2/metadefs/namespaces/MyNamespace/objects/object2", "schema": "/v2/schemas/metadefs/object", "created_at": "2014-08-28T17:13:06Z", "updated_at": "2014-08-28T17:13:06Z", "properties": { "prop1": { "title": "My object2 property1", "description": "More info here", "type": "integer", "default": 20 } } } ] } Retrieve available Resource Types ********************************* We want to see the list of all resource types that are available in Glance We issue a ``GET`` request to 
``http://glance.openstack.example.org/v2/metadefs/resource_types`` to retrieve all resource types. The data is returned as a JSON-encoded mapping in the following format:: { "resource_types": [ { "created_at": "2014-08-28T17:13:04Z", "name": "OS::Glance::Image", "updated_at": "2014-08-28T17:13:04Z" }, { "created_at": "2014-08-28T17:13:04Z", "name": "OS::Cinder::Volume", "updated_at": "2014-08-28T17:13:04Z" }, { "created_at": "2014-08-28T17:13:04Z", "name": "OS::Nova::Flavor", "updated_at": "2014-08-28T17:13:04Z" }, { "created_at": "2014-08-28T17:13:04Z", "name": "OS::Nova::Aggregate", "updated_at": "2014-08-28T17:13:04Z" }, { "created_at": "2014-08-28T17:13:04Z", "name": "OS::Nova::Server", "updated_at": "2014-08-28T17:13:04Z" } ] } Retrieve Resource Types associated with a Namespace *************************************************** We want to see the list of resource types that are associated for a specific namespace We issue a ``GET`` request to ``http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/resource_types`` to retrieve resource types. The data is returned as a JSON-encoded mapping in the following format:: { "resource_type_associations" : [ { "name" : "OS::Glance::Image", "prefix" : "hw_", "created_at": "2014-08-28T17:13:04Z", "updated_at": "2014-08-28T17:13:04Z" }, { "name" :"OS::Cinder::Volume", "prefix" : "hw_", "properties_target" : "image", "created_at": "2014-08-28T17:13:04Z", "updated_at": "2014-08-28T17:13:04Z" }, { "name" : "OS::Nova::Flavor", "prefix" : "hw:", "created_at": "2014-08-28T17:13:04Z", "updated_at": "2014-08-28T17:13:04Z" } ] } Add Namespace ************* We want to create a new namespace that can contain the properties, objects, etc. 
We issue a ``POST`` request to add a namespace to Glance::

    POST http://glance.openstack.example.org/v2/metadefs/namespaces/

The input data is a JSON-encoded mapping in the following format::

    {
        "namespace": "MyNamespace",
        "display_name": "My User Friendly Namespace",
        "description": "My description",
        "visibility": "public",
        "protected": true
    }

.. note::

   Optionally properties, objects and resource type associations could be
   added in the same input. See GET Namespace output above (input will be
   similar).

Update Namespace
****************

We want to update an existing namespace.

We issue a ``PUT`` request to update a namespace in Glance::

    PUT http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}

The input data is similar to Add Namespace.

Delete Namespace
****************

We want to delete an existing namespace, including all its objects,
properties, etc.

We issue a ``DELETE`` request to delete a namespace in Glance::

    DELETE http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}

Associate Resource Type with Namespace
**************************************

We want to associate a resource type with an existing namespace.

We issue a ``POST`` request to associate a resource type in Glance::

    POST http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/resource_types

The input data is a JSON-encoded mapping in the following format::

    {
        "name": "OS::Cinder::Volume",
        "prefix": "hw_",
        "properties_target": "image",
        "created_at": "2014-08-28T17:13:04Z",
        "updated_at": "2014-08-28T17:13:04Z"
    }

Remove Resource Type associated with a Namespace
************************************************

We want to de-associate a namespace from a resource type.

We issue a ``DELETE`` request to de-associate a namespace resource type in
Glance::

    DELETE http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/resource_types/{resource_type}

List Objects in Namespace
*************************

We want to see the list of meta definition objects in a specific namespace.

We issue a ``GET`` request to
``http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/objects``
to retrieve objects.

The data is returned as a JSON-encoded mapping in the following format::

    {
        "objects": [
            {
                "name": "object1",
                "description": "my-description",
                "self": "/v2/metadefs/namespaces/MyNamespace/objects/object1",
                "schema": "/v2/schemas/metadefs/object",
                "created_at": "2014-08-28T17:13:06Z",
                "updated_at": "2014-08-28T17:13:06Z",
                "required": [],
                "properties": {
                    "prop1": {
                        "title": "My object1 property1",
                        "description": "More info here",
                        "type": "array",
                        "items": {
                            "type": "string"
                        }
                    }
                }
            },
            {
                "name": "object2",
                "description": "my-description",
                "self": "/v2/metadefs/namespaces/MyNamespace/objects/object2",
                "schema": "/v2/schemas/metadefs/object",
                "created_at": "2014-08-28T17:13:06Z",
                "updated_at": "2014-08-28T17:13:06Z",
                "properties": {
                    "prop1": {
                        "title": "My object2 property1",
                        "description": "More info here",
                        "type": "integer",
                        "default": 20
                    }
                }
            }
        ],
        "schema": "/v2/schemas/metadefs/objects"
    }

Add object in a specific namespace
**********************************

We want to create a new object which can group the properties.

We issue a ``POST`` request to add an object to a namespace in Glance::

    POST http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/objects

The input data is a JSON-encoded mapping in the following format::

    {
        "name": "StorageQOS",
        "description": "Our available storage QOS.",
        "required": [
            "minIOPS"
        ],
        "properties": {
            "minIOPS": {
                "type": "integer",
                "description": "The minimum IOPs required",
                "default": 100,
                "minimum": 100,
                "maximum": 30000369
            },
            "burstIOPS": {
                "type": "integer",
                "description": "The expected burst IOPs",
                "default": 1000,
                "minimum": 100,
                "maximum": 30000377
            }
        }
    }

Update Object in a specific namespace
*************************************

We want to update an existing object.

We issue a ``PUT`` request to update an object in Glance::

    PUT http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/objects/{object_name}

The input data is similar to Add Object.

Delete Object in a specific namespace
*************************************

We want to delete an existing object.

We issue a ``DELETE`` request to delete an object in a namespace in Glance::

    DELETE http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/objects/{object_name}

Add property definition in a specific namespace
***********************************************

We want to create a new property definition in a namespace.

We issue a ``POST`` request to add a property definition to a namespace in
Glance::

    POST http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/properties

The input data is a JSON-encoded mapping in the following format::

    {
        "name": "hypervisor_type",
        "title": "Hypervisor",
        "type": "array",
        "description": "The type of hypervisor required",
        "items": {
            "type": "string",
            "enum": [
                "hyperv",
                "qemu",
                "kvm"
            ]
        }
    }

Update property definition in a specific namespace
**************************************************

We want to update an existing property definition.

We issue a ``PUT`` request to update a property definition in a namespace in
Glance::

    PUT http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/properties/{property_name}

The input data is similar to Add property definition.

Delete property definition in a specific namespace
**************************************************

We want to delete an existing property definition.

We issue a ``DELETE`` request to delete a property definition in a namespace
in Glance::

    DELETE http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace}/properties/{property_name}

API Message Localization
------------------------

Glance supports HTTP message localization. For example, an HTTP client can
receive API messages in Chinese even if the locale language of the server is
English.
How to use it ************* To receive localized API messages, the HTTP client needs to specify the **Accept-Language** header to indicate the language to use to translate the message. For more info about Accept-Language, please refer http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html A typical curl API request will be like below:: curl -i -X GET -H 'Accept-Language: zh' -H 'Content-Type: application/json' http://glance.openstack.example.org/v2/metadefs/namespaces/{namespace} Then the response will be like the following:: HTTP/1.1 404 Not Found Content-Length: 234 Content-Type: text/html; charset=UTF-8 X-Openstack-Request-Id: req-54d403a0-064e-4544-8faf-4aeef086f45a Date: Sat, 22 Feb 2014 06:26:26 GMT 404 Not Found

404 Not Found

找不到任何具有标识 aaa 的映像

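The curl request above can also be issued programmatically. Below is a minimal sketch using only the Python standard library; it builds (but does not send) the same localized request, so you can see where the ``Accept-Language`` header goes. The endpoint is the example host used throughout this guide, with a hypothetical namespace name:

```python
import urllib.request

# Build (but do not send) a GET request for a metadata-definition
# namespace, asking Glance to translate any API messages into
# Chinese ("zh"). "MyNamespace" is a placeholder namespace name.
url = ("http://glance.openstack.example.org"
       "/v2/metadefs/namespaces/MyNamespace")
req = urllib.request.Request(url, headers={
    "Accept-Language": "zh",
    "Content-Type": "application/json",
})

# urllib stores header names in Capitalized-with-dashes form,
# so the stored key is "Accept-language".
print(req.get_method())                    # -> GET
print(req.get_header("Accept-language"))   # -> zh
```

Sending the request (for example with ``urllib.request.urlopen(req)``) against a real deployment returns the localized message shown above, provided the matching language pack is installed on the server.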
.. note:: Be sure there is the language package under /usr/share/locale-langpack/ on the target Glance server. glance-16.0.0/doc/source/user/signature.rst0000666000175100017510000001421713245511421020636 0ustar zuulzuul00000000000000.. Copyright 2016 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Image Signature Verification ============================= Glance has the ability to perform image validation using a digital signature and asymmetric cryptography. To trigger this, you must define specific image properties (described below), and have stored a certificate signed with your private key in a local Barbican installation. When the image properties exist on an image, Glance will validate the uploaded image data against these properties before storing it. If validation is unsuccessful, the upload will fail and the image will be deleted. Additionally, the image properties may be used by other services (for example, Nova) to perform data verification when the image is downloaded from Glance. Requirements ------------ Barbican key manager - See https://docs.openstack.org/barbican/latest/contributor/devstack.html Configuration ------------- The etc/glance-api.conf can be modified to change keystone endpoint of barbican. By default barbican will try to connect to keystone at http://localhost:5000/v3 but if keystone is on another host then this should be changed. 
In glance-api.conf find the following lines::

  [barbican]
  auth_endpoint = http://localhost:5000/v3

Then replace http://localhost:5000/v3 with the URL of keystone, keeping the
/v3 suffix at the end. For example, 'https://192.168.245.9:5000/v3'.

Another option that can be configured in etc/glance-api.conf is which key
manager to use. By default Glance will use the default key manager defined
by the Castellan key manager interface, which is currently the Barbican
key manager.

In glance-api.conf find the following lines::

  [key_manager]
  backend = barbican

Then replace the value with the desired key manager class.

.. note:: If those lines do not exist then simply add them to the end of the
   file.

Using the Signature Verification
--------------------------------

An image needs a few properties for signature verification to be enabled.
These are::

  img_signature
  img_signature_hash_method
  img_signature_key_type
  img_signature_certificate_uuid

Property img_signature
~~~~~~~~~~~~~~~~~~~~~~

This is the signature of your image.

.. note:: The maximum character limit is 255.

Property img_signature_hash_method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the hash method used when computing the signature. The currently
supported methods are:

* SHA-224
* SHA-256
* SHA-384
* SHA-512

Property img_signature_key_type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the key type used for your image. The currently supported key types
are:

* RSA-PSS
* DSA
* ECC-CURVES

  * SECT571K1
  * SECT409K1
  * SECT571R1
  * SECT409R1
  * SECP521R1
  * SECP384R1

.. note:: ECC curves - Only key sizes above 384 are included.
   Not all ECC curves may be supported by the back end.

Property img_signature_certificate_uuid
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the UUID of the certificate that you upload to Barbican. Therefore
the type passed to glance is:

* UUID

.. note:: The supported certificate types are:

   * X_509

Example Usage
-------------

Follow these instructions to create your keys::

  $ openssl genrsa -out private_key.pem 1024
  Generating RSA private key, 1024 bit long modulus
  ...............................................++++++
  ..++++++
  e is 65537 (0x10001)

  $ openssl rsa -pubout -in private_key.pem -out public_key.pem
  writing RSA key

  $ openssl req -new -key private_key.pem -out cert_request.csr
  You are about to be asked to enter information that will be incorporated
  into your certificate request.

  $ openssl x509 -req -days 14 -in cert_request.csr -signkey private_key.pem -out new_cert.crt
  Signature ok
  subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
  Getting Private key

Upload your certificate. This only has to be done once as you can use the
same ``Secret href`` for many images until it expires.

.. code-block:: console

  $ openstack secret store --name test --algorithm RSA --expiration 2016-06-29 --secret-type certificate --payload-content-type "application/octet-stream" --payload-content-encoding base64 --payload "$(base64 new_cert.crt)"
  +---------------+-----------------------------------------------------------------------+
  | Field         | Value                                                                 |
  +---------------+-----------------------------------------------------------------------+
  | Secret href   | http://127.0.0.1:9311/v1/secrets/cd7cc675-e573-419c-8fff-33a72734a243 |
  +---------------+-----------------------------------------------------------------------+

  $ cert_uuid=cd7cc675-e573-419c-8fff-33a72734a243

Get an image and create the signature::

  $ echo This is a dodgy image > myimage

  $ openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss -out myimage.signature myimage

  $ base64 -w 0 myimage.signature > myimage.signature.b64

  $ image_signature=$(cat myimage.signature.b64)

.. note:: Glance v1 requires '-w 0' because it does not support multiline
   image properties. Glance v2 does support multiline image properties, so
   '-w 0' is not required there, but it may still be used.
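The shell steps above produce a base64-encoded signature that is passed to Glance alongside the other three properties. The assembly can be sketched in Python; note that the signature bytes below are an illustrative placeholder, not a real RSA-PSS signature (in practice they come from the ``openssl dgst`` step):

```python
import base64

# Placeholder for the raw signature bytes. In a real workflow this is the
# content of myimage.signature produced by the openssl dgst RSA-PSS step.
raw_signature = b"\x00\x01\x02 not a real signature"

# Equivalent of `base64 -w 0 myimage.signature`: a single line, no wrapping.
image_signature = base64.b64encode(raw_signature).decode("ascii")

# The four properties Glance validates before accepting the image data.
properties = {
    "img_signature": image_signature,
    "img_signature_hash_method": "SHA-256",
    "img_signature_key_type": "RSA-PSS",
    "img_signature_certificate_uuid": "cd7cc675-e573-419c-8fff-33a72734a243",
}

# The encoded signature must fit within the 255-character property limit,
# and it round-trips back to the original bytes.
assert len(properties["img_signature"]) <= 255
assert base64.b64decode(image_signature) == raw_signature
```

The resulting mapping corresponds one-to-one to the ``--property`` flags in the ``glance image-create`` call shown next.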
Create the image:: $ glance image-create --name mySignedImage --container-format bare --disk-format qcow2 --property img_signature="$image_signature" --property img_signature_certificate_uuid="$cert_uuid" --property img_signature_hash_method='SHA-256' --property img_signature_key_type='RSA-PSS' < myimage .. note:: Creating the image can fail if validation does not succeed. This will cause the image to be deleted. Other Links ----------- * https://etherpad.openstack.org/p/mitaka-glance-image-signing-instructions * https://wiki.openstack.org/wiki/OpsGuide/User-Facing_Operations glance-16.0.0/doc/source/user/index.rst0000666000175100017510000000034013245511421017734 0ustar zuulzuul00000000000000============ User guide ============ .. toctree:: :maxdepth: 2 identifiers statuses formats common-image-properties metadefs-concepts glanceapi glanceclient glancemetadefcatalogapi signature glance-16.0.0/doc/source/user/glanceclient.rst0000666000175100017510000000210613245511421021257 0ustar zuulzuul00000000000000.. Copyright 2011-2012 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Using Glance's Client Tools =========================== The command-line tool and python library for Glance are both installed through the python-glanceclient project. 
Explore the following resources for more information: * `Official Docs `_ * `Pypi Page `_ * `GitHub Project `_ glance-16.0.0/doc/source/user/common-image-properties.rst0000666000175100017510000000357113245511421023400 0ustar zuulzuul00000000000000.. Copyright 2013 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Common Image Properties ======================= When adding an image to Glance, you may specify some common image properties that may prove useful to consumers of your image. This document explains the names of these properties and the expected values. The common image properties are also described in a JSON schema, found in /etc/glance/schema-image.json in the Glance source code. **architecture** ---------------- Operating system architecture as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html **instance_uuid** ----------------- Metadata which can be used to record which instance this image is associated with. (Informational only, does not create an instance snapshot.) **kernel_id** ------------- The ID of image stored in Glance that should be used as the kernel when booting an AMI-style image. **ramdisk_id** -------------- The ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image. 
**os_distro** ------------- The common name of the operating system distribution as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html **os_version** -------------- The operating system version as specified by the distributor. glance-16.0.0/doc/source/user/statuses.rst0000666000175100017510000001361513245511421020511 0ustar zuulzuul00000000000000.. Copyright 2010 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _image-statuses: Image Statuses ============== Images in Glance can be in one of the following statuses: * ``queued`` The image identifier has been reserved for an image in the Glance registry. No image data has been uploaded to Glance and the image size was not explicitly set to zero on creation. * ``saving`` Denotes that an image's raw data is currently being uploaded to Glance. When an image is registered with a call to `POST /images` and there is an `x-image-meta-location` header present, that image will never be in the `saving` status (as the image data is already available in some other location). * ``uploading`` Denotes that an import data-put call has been made. While in this status, a call to `PUT /file` is disallowed. (Note that a call to `PUT /file` on a queued image puts the image into saving status. Calls to `PUT /stage` are disallowed while an image is in saving status. Thus it’s not possible to use both upload methods on the same image.) 
* ``importing`` Denotes that an import call has been made but that the image is not yet ready for use. * ``active`` Denotes an image that is fully available in Glance. This occurs when the image data is uploaded, or the image size is explicitly set to zero on creation. * ``deactivated`` Denotes that access to image data is not allowed to any non-admin user. Prohibiting downloads of an image also prohibits operations like image export and image cloning that may require image data. * ``killed`` Denotes that an error occurred during the uploading of an image's data, and that the image is not readable. * ``deleted`` Glance has retained the information about the image, but it is no longer available to use. An image in this state will be removed automatically at a later date. * ``pending_delete`` This is similar to `deleted`, however, Glance has not yet removed the image data. An image in this state is not recoverable. .. figure:: ../images/image_status_transition.png :figwidth: 100% :align: center :alt: The states consist of: "queued", "saving", "active", "pending_delete", "deactivated", "uploading", "importing", "killed", and "deleted". The transitions consist of: An initial transition to the "queued" state called "create image". A transition from the "queued" state to the "active" state called "add location". A transition from the "queued" state to the "saving" state called "upload". A transition from the "queued" state to the "uploading" state called "stage upload". A transition from the "queued" state to the "deleted" state called "delete". A transition from the "saving" state to the "active" state called "upload succeeded". A transition from the "saving" state to the "deleted" state called "delete". A transition from the "saving" state to the "killed" state called "[v1] upload fail". A transition from the "saving" state to the "queued" state called "[v2] upload fail". A transition from the "uploading" state to the "importing" state called "import". 
A transition from the "uploading" state to the "queued" state called "stage upload fail". A transition from the "uploading" state to the "deleted" state called "delete". A transition from the "importing" state to the "active" state called "import succeed". A transition from the "importing" state to the "queued" state called "import fail". A transition from the "importing" state to the "deleted" state called "delete". A transition from the "active" state to the "deleted" state called "delete". A transition from the "active" state to the "pending_delete" state called "delayed delete". A transition from the "active" state to the "deactivated" state called "deactivate". A transition from the "killed" state to the "deleted" state called "deleted". A transition from the "pending_delete" state to the "deleted" state called "after scrub time". A transition from the "deactivated" state to the "deleted" state called "delete". A transition from the "deactivated" state to the "active" state called "reactivate". There are no transitions out of the "deleted" state. This is a representation of how the image move from one status to the next. * Add location from zero to more than one. .. _task-statuses: Task Statuses ============= Tasks in Glance can be in one of the following statuses: * ``pending`` The task identifier has been reserved for a task in the Glance. No processing has begun on it yet. * ``processing`` The task has been picked up by the underlying executor and is being run using the backend Glance execution logic for that task type. * ``success`` Denotes that the task has had a successful run within Glance. The ``result`` field of the task shows more details about the outcome. * ``failure`` Denotes that an error occurred during the execution of the task and it cannot continue processing. The ``message`` field of the task shows what the error was. glance-16.0.0/doc/source/user/formats.rst0000666000175100017510000000652713245511421020315 0ustar zuulzuul00000000000000.. 
Copyright 2011 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _formats: Disk and Container Formats ========================== When adding an image to Glance, you must specify what the virtual machine image's *disk format* and *container format* are. Disk and container formats are configurable on a per-deployment basis. This document intends to establish a global convention for what specific values of *disk_format* and *container_format* mean. Disk Format ----------- The disk format of a virtual machine image is the format of the underlying disk image. Virtual appliance vendors have different formats for laying out the information contained in a virtual machine disk image. You can set your image's disk format to one of the following: * **raw** This is an unstructured disk image format * **vhd** This is the VHD disk format, a common disk format used by virtual machine monitors from VMware, Xen, Microsoft, VirtualBox, and others * **vhdx** This is the VHDX disk format, an enhanced version of the vhd format which supports larger disk sizes among other features. * **vmdk** Another common disk format supported by many common virtual machine monitors * **vdi** A disk format supported by VirtualBox virtual machine monitor and the QEMU emulator * **iso** An archive format for the data contents of an optical disc (e.g. CDROM). 
* **ploop** A disk format supported and used by Virtuozzo to run OS Containers * **qcow2** A disk format supported by the QEMU emulator that can expand dynamically and supports Copy on Write * **aki** This indicates what is stored in Glance is an Amazon kernel image * **ari** This indicates what is stored in Glance is an Amazon ramdisk image * **ami** This indicates what is stored in Glance is an Amazon machine image Container Format ---------------- The container format refers to whether the virtual machine image is in a file format that also contains metadata about the actual virtual machine. Note that the container format string is not currently used by Glance or other OpenStack components, so it is safe to simply specify **bare** as the container format if you are unsure. You can set your image's container format to one of the following: * **bare** This indicates there is no container or metadata envelope for the image * **ovf** This is the OVF container format * **aki** This indicates what is stored in Glance is an Amazon kernel image * **ari** This indicates what is stored in Glance is an Amazon ramdisk image * **ami** This indicates what is stored in Glance is an Amazon machine image * **ova** This indicates what is stored in Glance is an OVA tar archive file * **docker** This indicates what is stored in Glance is a Docker tar archive of the container filesystem glance-16.0.0/doc/source/_static/0000775000175100017510000000000013245511661016552 5ustar zuulzuul00000000000000glance-16.0.0/doc/source/_static/.placeholder0000666000175100017510000000000013245511421021017 0ustar zuulzuul00000000000000glance-16.0.0/doc/source/conf.py0000666000175100017510000002346113245511421016425 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2010 OpenStack Foundation. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # # Glance documentation build configuration file, created by # sphinx-quickstart on Tue May 18 13:50:15 2010. # # This file is execfile()'d with the current directory set to its containing # dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import subprocess import sys import warnings import openstackdocstheme # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path = [ os.path.abspath('../..'), os.path.abspath('../../bin') ] + sys.path # -- General configuration --------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 
extensions = ['stevedore.sphinxext', 'sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'oslo_config.sphinxext', 'oslo_config.sphinxconfiggen', 'openstackdocstheme', ] # openstackdocstheme options repository_name = 'openstack/glance' bug_project = 'glance' bug_tag = '' html_last_updated_fmt = '%Y-%m-%d %H:%M' config_generator_config_file = [ ('../../etc/oslo-config-generator/glance-api.conf', '_static/glance-api'), ('../../etc/oslo-config-generator/glance-cache.conf', '_static/glance-cache'), ('../../etc/oslo-config-generator/glance-manage.conf', '_static/glance-manage'), ('../../etc/oslo-config-generator/glance-registry.conf', '_static/glance-registry'), ('../../etc/oslo-config-generator/glance-scrubber.conf', '_static/glance-scrubber'), ] # Add any paths that contain templates here, relative to this directory. # templates_path = [] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Glance' copyright = u'2010-present, OpenStack Foundation.' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. from glance.version import version_info as glance_version # The full version, including alpha/beta/rc tags. release = glance_version.version_string_with_vcs() # The short X.Y version. version = glance_version.canonical_version_string() # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. 
#unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. #exclude_trees = ['api'] exclude_patterns = [ # The man directory includes some snippet files that are included # in other documents during the build but that should not be # included in the toctree themselves, so tell Sphinx to ignore # them when scanning for input files. 'cli/footer.txt', 'cli/general_options.txt', 'cli/openstack_options.txt', ] # The reST default role (for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['glance.'] # -- Options for man page output -------------------------------------------- # Grouping the document tree for man pages. 
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual' man_pages = [ ('cli/glanceapi', 'glance-api', u'Glance API Server', [u'OpenStack'], 1), ('cli/glancecachecleaner', 'glance-cache-cleaner', u'Glance Cache Cleaner', [u'OpenStack'], 1), ('cli/glancecachemanage', 'glance-cache-manage', u'Glance Cache Manager', [u'OpenStack'], 1), ('cli/glancecacheprefetcher', 'glance-cache-prefetcher', u'Glance Cache Pre-fetcher', [u'OpenStack'], 1), ('cli/glancecachepruner', 'glance-cache-pruner', u'Glance Cache Pruner', [u'OpenStack'], 1), ('cli/glancecontrol', 'glance-control', u'Glance Daemon Control Helper ', [u'OpenStack'], 1), ('cli/glancemanage', 'glance-manage', u'Glance Management Utility', [u'OpenStack'], 1), ('cli/glanceregistry', 'glance-registry', u'Glance Registry Server', [u'OpenStack'], 1), ('cli/glancereplicator', 'glance-replicator', u'Glance Replicator', [u'OpenStack'], 1), ('cli/glancescrubber', 'glance-scrubber', u'Glance Scrubber Service', [u'OpenStack'], 1) ] # -- Options for HTML output ------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = ['_theme'] html_theme_path = [openstackdocstheme.get_html_theme_path()] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. 
#html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local", "-n1"] try: html_last_updated_fmt = subprocess.check_output(git_cmd).decode('utf-8') except Exception: warnings.warn('Cannot get last updated time from git repository. ' 'Not setting "html_last_updated_fmt".') # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. html_use_modindex = True # If false, no index is generated. html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. 
htmlhelp_basename = 'glancedoc' # -- Options for LaTeX output ------------------------------------------------ # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, # documentclass [howto/manual]). latex_documents = [ ('index', 'Glance.tex', u'Glance Documentation', u'Glance Team', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True glance-16.0.0/doc/source/contributor/0000775000175100017510000000000013245511661017476 5ustar zuulzuul00000000000000glance-16.0.0/doc/source/contributor/release-cpl.rst0000666000175100017510000002651413245511421022430 0ustar zuulzuul00000000000000================== Glance Release CPL ================== So you've volunteered to be the Glance Release Cross-Project Liaison (CPL) and now you're worried about what you've gotten yourself into. Well, here are some tips for you from former release CPLs. You will be doing vital and important work both for Glance and OpenStack. Releases have to be available at the scheduled milestones and RC dates because end users, other OpenStack projects, and packagers rely on releases being available so they can begin their work. Missing a date can have a cascading effect on all the people who are depending on the release being available at its scheduled time. Sounds scary, I know, but you'll also get a lot of satisfaction out of having a key role in keeping OpenStack running smoothly. 
Who You Have to Be ================== You do **not** have to be: - The PTL - A core reviewer - A stable-branch core reviewer/maintainer You **do** have to be: - A member of the Glance community - A person who has signed the OpenStack CLA (or whatever is in use at the time you are reading this) - Someone familiar with or willing to learn git, gerrit, etc. - Someone who will be comfortable saying "No" when colleagues want to sneak just one more thing in before a deadline. - Someone willing to work with the release team on a regular basis and attend their `weekly meeting`_. Just as the stable maintenance team is responsible for the stability and quality of the stable branches, the release CPL must take on responsibility for the stability and quality of every release artifact of Glance. If you are too lenient with your colleagues, you might be responsible for introducing a catastrophic or destabilizing release. Suppose someone, possibly even the PTL, shows up right before RC1 with a large but probably innocuous change. Even if this passes the gate, you should err on the side of caution and ask to not allow it to merge. (This has happened `before `_ ) A Release CPL has authority within the Glance project. They have authority through two measures: - Being the person who volunteered to do this hard work - Maintaining a healthy relationship with the PTL and their Glance colleagues. Use this authority to ensure that each Glance release is the best possible. The PTL's job is to lead technical direction, your job is to shepherd cats and help them focus on the priorities for each release. What This Does Not Grant You ============================ Volunteering to be Release CPL does not give you the right to be a Glance Core Reviewer. That is a separate role that is determined based on the quality of your reviews. You should be primarily motivated by wanting to help the team ship an excellent release. 
Get To Know The Release Team ============================ OpenStack has teams for most projects and efforts. In that vein, the release team works on tooling to make releasing projects easier as well as verifying releases. As CPL it is your job to work with this team. At the time of this writing, the team organizes in ``#openstack-release`` and has a `weekly meeting`_. Idling in their team channel and attending the meeting are two very strongly suggested (if not required) actions for the CPL. You should introduce yourself well in advance of the release deadlines. You should also take the time to research what actions you may need to take in advance of those deadlines as the release team becomes very busy around those deadlines. Familiarize Yourself with Community Goals ========================================= Community Goals **are** Glance Goals. They are documented and tracked in the `openstack/governance`_ repository. In Ocata, for example, the CPL assumed the responsibility of monitoring those goals and reporting back to the TC when we completed them. In my opinion, it makes sense for the Release CPL to perform this task because they are the ones who are keenly aware of the deadlines in the release schedule and can remind the assigned developers of those deadlines. It also is important for the Release CPL to coordinate with the PTL to ensure that there are project-specific deadlines for the goals. This will ensure the work is completed and reviewed in a timely fashion and hopefully early enough to catch any bugs that shake out of the work. Familiarize Yourself with the Release Tooling ============================================= The Release Team has worked to automate much of the release process over the last several development cycles. Much of the tooling is controlled by updating certain YAML files in the `openstack/releases`_ repository. 
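To make the preceding paragraph concrete: a milestone release request amounts
to a small patch adding an entry to the cycle's deliverable file. The sketch
below is hypothetical; the version number and commit hash are invented, and
the authoritative schema is documented in the ``openstack/releases``
repository itself:

```yaml
# deliverables/pike/glance.yaml (illustrative fragment only)
releases:
  - version: 15.0.0.0b2
    projects:
      - repo: openstack/glance
        hash: 0123456789abcdef0123456789abcdef01234567
```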
To release a Glance project, look in the ``deliverables`` directory for the
cycle's codename, e.g., ``pike``, and then look for the project inside of
that. Update that file using the appropriate syntax and, after the release
team has reviewed your request and approved it, the rest will be automated
for you.

For more information on the release management process and tooling, refer to
the `release management process guide`_ and `release management tooling
guide`_.

Familiarize Yourself with the Bug Tracker
=========================================

The `bug tracker`_ is the best way to determine what items are slated to get
in for each particular milestone or cycle release. Use it to the best of its
capabilities.

Release Stability and the Gate
==============================

As you may know at this point, OpenStack's Integrated Gate will begin to
experience longer queue times and more frequent unrelated failures around
milestones and release deadlines (as other projects attempt to sneak things in
at the last minute). You may help your colleagues (and yourself) if you
advocate for deadlines on features, etc., at least a week in advance of the
actual release deadline. This applies to all release deadlines (milestone,
release candidate, final). If you can stabilize your project prior to the
flurry of activity, you will ship a better product. You can then also focus on
bug-fixing reviews in the interim between your project priorities deadline and
the actual release deadline.

Checklist
=========

The release team will set dates for all the milestones for each release. The
release schedule can be found on this page:
https://releases.openstack.org/index.html

There are checklists to follow for various important release aspects:

Glance Specific Goals
---------------------

While the release team sets dates for community-wide releases, you should work
with the PTL to set Glance-specific deadlines and events, such as spec
proposal freeze, spec freeze, mid-cycle, bug squash, and review squash.
Also, you can set additional deadlines for Glance priorities to ensure work is on-track for a timely release. You are also responsible for ensuring PTL and other concerned individuals are aware and reminded of the events/deadlines to ensure timely release. Milestone Release ----------------- The release schedule for the current cycle will give you a range of dates for each milestone release. It is your job to propose the release for Glance sometime during that range and ensure the release is created. This means the following: - Showing up at meetings to announce the planned date weeks in advance. Your colleagues on the Glance team will need at least 4 weeks notice so they can plan and prioritize what work should be included in the milestone. - Reminding your colleagues what the stated priorities for that milestone were, their progress, etc. - Being inflexible in the release date. As soon as you pick your date, stick to it. If a feature slips a milestone to the next, it is not the end of the world. It is not ideal, but Glance *needs* to release its milestone as soon as possible. - Proposing the release in a timely and correct fashion on the day you stated. You may have colleagues try to argue their case to the release team. This is when your collaboration with the PTL will be necessary. The PTL needs to help affirm your decision to release the version of the project you can on the day you decide it. - Release ``glance_store`` and ``python-glanceclient`` at least once per milestone. - Write `release notes`_ Release Candidate Releases -------------------------- The release candidate release period is similarly scoped to a few days. It is even more important that Glance release during that period. To help your colleagues, try to schedule this release as close to the end of that range as possible. Once RC1 is released, only bugs introduced since the last milestone that are going to compromise the integrity of the release should be merged. 
Again, your duties include all of the Milestone Release duties plus the following: - When proposing the release, you need to appropriately configure the release tooling to create a stable branch. If you do not, then you have not appropriately created the release candidate. - Keeping a *very* watchful eye on what is proposed to and approved for master as well as your new stable branch. Again, automated updates from release tooling and *release critical* bugs are the only things that should be merged to either. - If release critical bugs are found and fixed, proposing a new release candidate from the SHA on the stable branch. - Write `release notes`_ - Announce that any non-release-critical changes won't be accepted from this point onwards until the final Glance release is made. Consider adding -2 on such reviews with good description to prevent further updates. This also helps in keeping the gate relatively free to process the release-critical changes. Final Releases -------------- The release team usually proposes all of the projects' final releases in one patch based off the final release candidate. After those are created, some things in Glance need to be updated immediately. - Right after cutting the stable branch, Glance release version (not the API version) must be bumped so that all further development is attributed to the next release version. This could be done by adding an empty commit with commit message containing the flag ``Sem-Ver: api-break`` to indicate a version. Here is a sample commit attempting to `bump the release version`_. - The migration tooling that Glance uses relies on some constants defined in `glance/db/migration.py`_. Post final release, those need *immediate* updating. Acknowledgements ---------------- This document was originally written by Ian Cordasco. It's maintained and revised by the Glance Release CPLs: - Ian Cordasco, Release CPL for Ocata - Hemanth Makkapati, Release CPL for Pike .. links .. 
_weekly meeting: http://eavesdrop.openstack.org/#Release_Team_Meeting
.. _openstack/governance: https://git.openstack.org/cgit/openstack/governance
.. _openstack/releases: https://git.openstack.org/cgit/openstack/releases
.. _StoryBoard: https://storyboard.openstack.org/
.. _glance/db/migration.py: https://github.com/openstack/glance/blob/master/glance/db/migration.py
.. _release management process guide: https://docs.openstack.org/project-team-guide/release-management.html
.. _release management tooling guide: http://git.openstack.org/cgit/openstack/releases/tree/README.rst
.. _bug tracker: https://bugs.launchpad.net/glance
.. _release notes: https://docs.openstack.org/project-team-guide/release-management.html#managing-release-notes
.. _bump the release version: https://review.openstack.org/#q,I21480e186a2aab6c54f7ea798c215660bddf9e4c,n,z

Blueprints and Specs
====================

The Glance team uses the `glance-specs `_ repository for its specification
reviews. Detailed information can be found `here `_. Please also find
additional information in the reviews.rst file.
For these cases, the work flow to get the spec into the next release is as follows: * Anyone can propose a patch to glance-specs which moves a spec from the previous release into the new release directory. .. NOTE: mention the `approved`, `implemented` dirs The specs which are moved in this way can be fast-tracked into the next release. Please note that it is required to re-propose the spec for the new release however and that it'll be evaluated based on the resources available and cycle priorities. Glance Spec Lite ---------------- In addition to the heavy-duty design documents described above, we've made a provision for lightweight design documents for developers who have an idea for a small, uncontroversial change. In such a case, you can propose a *spec lite*, which is a quick description of what you want to do. You propose a spec-lite in the same way you propose a full spec: copy the `spec-lite template `_ in the **approved** directory for the release cycle in which you're proposing the change, fill out the appropriate sections, and put up a patch in gerrit. Lite spec Submission Guidelines ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Before we dive into the guidelines for writing a good lite spec, it is worth mentioning that depending on your level of engagement with the Glance project and your role (user, developer, deployer, operator, etc.), you are more than welcome to have a preliminary discussion of a potential lite spec by reaching out to other people involved in the project. This usually happens by posting mails on the relevant mailing lists (e.g. `openstack-dev `_ - include [glance] in the subject) or on #openstack-glance IRC channel on Freenode. If current ongoing code reviews are related to your feature, posting comments/questions on gerrit may also be a way to engage. Some amount of interaction with Glance developers will give you an idea of the plausibility and form of your lite spec before you submit it. That said, this is not mandatory. 
glance-16.0.0/doc/source/contributor/database_architecture.rst0000666000175100017510000002461713245511421024544 0ustar zuulzuul00000000000000.. Copyright 2015 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============================ Glance database architecture ============================ Glance Database Public API ~~~~~~~~~~~~~~~~~~~~~~~~~~ The Glance Database API contains several methods for moving image metadata to and from persistent storage. You can find a list of public methods grouped by category below. Common parameters for image methods ----------------------------------- The following parameters can be applied to all of the image methods below: - ``context`` — corresponds to a glance.context.RequestContext object, which stores the information on how a user accesses the system, as well as additional request information. - ``image_id`` — a string corresponding to the image identifier. - ``memb_id`` — a string corresponding to the member identifier of the image. Image basic methods ------------------- **Image processing methods:** #. ``image_create(context, values)`` — creates a new image record with parameters listed in the *values* dictionary. Returns a dictionary representation of a newly created *glance.db.sqlalchemy.models.Image* object. #. ``image_update(context, image_id, values, purge_props=False, from_state=None)`` — updates the existing image with the identifier *image_id* with the values listed in the *values* dictionary. 
Returns a dictionary representation of the updated *Image* object.

   Optional parameters are:

   - ``purge_props`` — a flag indicating that all the existing properties not
     listed in *values['properties']* should be deleted;
   - ``from_state`` — a string filter indicating that the updated image must
     be in the specified state.

#. ``image_destroy(context, image_id)`` — deletes all database records of an
   image with the identifier *image_id* (like tags, properties, and members)
   and sets a 'deleted' status on all the image locations.

#. ``image_get(context, image_id, force_show_deleted=False)`` — gets an image
   with the identifier *image_id* and returns its dictionary representation.
   The parameter *force_show_deleted* is a flag indicating that image info
   should be shown even if the image is in 'deleted' or 'pending_delete'
   status.

#. ``image_get_all(context, filters=None, marker=None, limit=None,
   sort_key=None, sort_dir=None, member_status='accepted', is_public=None,
   admin_as_user=False, return_tag=False)`` — gets all the images that match
   zero or more filters.

   Optional parameters are:

   - ``filters`` — dictionary of filter keys and values. If a 'properties'
     key is present, it is treated as a dictionary of key/value filters in
     the attribute of the image properties.
   - ``marker`` — image id after which a page should start.
   - ``limit`` — maximum number of images to return.
   - ``sort_key`` — list of image attributes by which results should be
     sorted.
   - ``sort_dir`` — direction in which results should be sorted (asc, desc).
   - ``member_status`` — only returns shared images that have this membership
     status.
   - ``is_public`` — if true, returns only public images. If false, returns
     only private and shared images.
   - ``admin_as_user`` — for backwards compatibility. If true, an admin sees
     the same set of images that would be seen by a regular user.
   - ``return_tag`` — indicates whether an image entry in the result includes
     its relevant tag entries. This can improve upper-layer query performance
     and avoid using separate calls.

Image location methods
----------------------

**Image location processing methods:**

#. ``image_location_add(context, image_id, location)`` — adds a new location
   to an image with the identifier *image_id*. This location contains values
   listed in the dictionary *location*.

#. ``image_location_update(context, image_id, location)`` — updates an
   existing location with the identifier *location['id']* for an image with
   the identifier *image_id* with values listed in the dictionary *location*.

#. ``image_location_delete(context, image_id, location_id, status,
   delete_time=None)`` — sets a 'deleted' or 'pending_delete' *status* on an
   existing location record with the identifier *location_id* for an image
   with the identifier *image_id*.

Image property methods
----------------------

.. warning:: There is no public property update method. So if you want to
   modify it, you have to delete it first and then create a new one.

**Image property processing methods:**

#. ``image_property_create(context, values)`` — creates a property record
   with parameters listed in the *values* dictionary for an image with
   *values['id']*. Returns a dictionary representation of a newly created
   *ImageProperty* object.

#. ``image_property_delete(context, prop_ref, image_ref)`` — deletes an
   existing property record with a name *prop_ref* for an image with the
   identifier *image_ref*.

Image member methods
--------------------

**Methods to handle image memberships:**

#. ``image_member_create(context, values)`` — creates a member record with
   properties listed in the *values* dictionary for an image with
   *values['id']*. Returns a dictionary representation of a newly created
   *ImageMember* object.

#. ``image_member_update(context, memb_id, values)`` — updates an existing
   member record with properties listed in the *values* dictionary for an
   image with *values['id']*. Returns a dictionary representation of an
   updated member record.

#.
``image_member_delete(context, memb_id)`` — deletes an existing member record with *memb_id*. #. ``image_member_find(context, image_id=None, member=None, status=None)`` — returns all members for a given context with optional image identifier (*image_id*), member name (*member*), and member status (*status*) parameters. #. ``image_member_count(context, image_id)`` — returns a number of image members for an image with *image_id*. Image tag methods ----------------- **Methods to process images tags:** #. ``image_tag_set_all(context, image_id, tags)`` — changes all the existing tags for an image with *image_id* to the tags listed in the *tags* param. To remove all tags, a user just should provide an empty list. #. ``image_tag_create(context, image_id, value)`` — adds a *value* to tags for an image with *image_id*. Returns the value of a newly created tag. #. ``image_tag_delete(context, image_id, value)`` — removes a *value* from tags for an image with *image_id*. #. ``image_tag_get_all(context, image_id)`` — returns a list of tags for a specific image. Image info methods ------------------ The next two methods inform a user about his or her ability to modify and view an image. The *image* parameter here is a dictionary representation of an *Image* object. #. ``is_image_mutable(context, image)`` — informs a user about the possibility to modify an image with the given context. Returns True if the image is mutable in this context. #. ``is_image_visible(context, image, status=None)`` — informs about the possibility to see the image details with the given context and optionally with a status. Returns True if the image is visible in this context. **Glance database schema** .. figure:: ../images/glance_db.png :figwidth: 100% :align: center :alt: The glance database schema is depicted by 5 tables. 
The table named Images has the following columns: id: varchar(36); name: varchar(255), nullable; size: bigint(20), nullable; status: varchar(30); is_public: tinyint(1); created_at: datetime; updated_at: datetime, nullable; deleted_at: datetime, nullable; deleted: tinyint(1); disk_format: varchar(20), nullable; container_format: varchar(20), nullable; checksum: varchar(32), nullable; owner: varchar(255), nullable min_disk: int(11); min_ram: int(11); protected: tinyint(1); and virtual_size: bigint(20), nullable;. The table named image_locations has the following columns: id: int(11), primary; image_id: varchar(36), refers to column named id in table Images; value: text; created_at: datetime; updated_at: datetime, nullable; deleted_at: datetime, nullable; deleted: tinyint(1); meta_data: text, nullable; and status: varchar(30);. The table named image_members has the following columns: id: int(11), primary; image_id: varchar(36), refers to column named id in table Images; member: varchar(255); can_share: tinyint(1); created_at: datetime; updated_at: datetime, nullable; deleted_at: datetime, nullable; deleted: tinyint(1); and status: varchar(20;. The table named image_tags has the following columns: id: int(11), primary; image_id: varchar(36), refers to column named id in table Images; value: varchar(255); created_at: datetime; updated_at: datetime, nullable; deleted_at: datetime, nullable; and deleted: tinyint(1);. The table named image_properties has the following columns: id: int(11), primary; image_id: varchar(36), refers to column named id in table Images; name: varchar(255); value: text, nullable; created_at: datetime; updated_at: datetime, nullable; deleted_at: datetime, nullable; and deleted: tinyint(1);. .. centered:: Image 1. Glance images DB schema Glance Database Backends ~~~~~~~~~~~~~~~~~~~~~~~~ Migration Backends ------------------ .. list-plugins:: glance.database.migration_backend :detailed: Metadata Backends ----------------- .. 
list-plugins:: glance.database.metadata_backend
   :detailed:

Disallowed Minor Code Changes
=============================

There are a few types of code changes that have been proposed recently and
rejected by the Glance team, so we want to point them out and explain our
reasoning. If you feel an exception should be made for some particular change,
please put it on the agenda for the Glance weekly meeting so it can be
discussed.

Database migration scripts
--------------------------

Once a database migration script has been included in a release, spelling or
grammar corrections in its comments are forbidden unless you are fixing them
as part of a more serious bug in the migration script itself. Modifying
migration scripts confuses operators and administrators -- we only want them
to notice serious problems. Their preference must take precedence over fixing
spelling errors.

Typographical errors in comments
--------------------------------

Comments are not user-facing. Correcting minor misspellings or grammatical
errors only muddies the history of that part of the code, making ``git
blame`` arguably less useful. So such changes are likely to be rejected.
(This prohibition, of course, does not apply to corrections of misleading or
unclear comments, or, for example, an incorrect reference to a standards
document.)

Misspellings in code
--------------------

Misspellings in function names are unlikely to be corrected for the
"historical clarity" reasons outlined above for comments. Plus, if a function
is named ``mispelled()`` and a later developer tries to call
``misspelled()``, the latter will result in a NameError when it is called, so
the later developer will know to use the incorrectly spelled function name.
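The self-correcting nature of a misspelled function name can be seen in a
tiny sketch (the names here are invented for illustration):

```python
def mispelled():
    """A function whose name contains a typo."""
    return "scrubbed"

try:
    # A later developer naturally types the correct spelling...
    misspelled()
except NameError:
    # ...and is immediately told that no such function exists, which
    # points them at the (incorrectly spelled) real one.
    print("NameError raised; use mispelled() instead")

print(mispelled())
```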
Misspellings in variable names are more problematic, because if you have a variable named ``mispelled`` and a later developer puts up a patch where an updated value is assigned to ``misspelled``, Python won't complain. The "real" variable won't be updated, and the patch won't have its intended effect. Whether such a change is allowed will depend upon the age of the code, how widely used the variable is, whether it's spelled correctly in other functions, what the current test coverage is like, and so on. We tend to be very conservative about making changes that could cause regressions. So whether a patch that corrects the spelling of a variable name is accepted is a judgment (or is that "judgement"?) call by reviewers. In proposing your patch, however, be aware that your reviewers will have these concerns in mind. Tests ----- Occasionally someone proposes a patch that converts instances of ``assertEqual(True, whatever)`` to ``assertTrue(whatever)``, or instances of ``assertEqual(False, w)`` to ``assertFalse(w)`` in tests. Note that these are not type safe changes and they weaken the tests. (See the Python ``unittest`` docs for details.) We tend to be very conservative about our tests and don't like weakening changes. We're not saying that such changes can never be made, we're just saying that each change must be accompanied by an explanation of why the weaker test is adequate for what's being tested. Just to make this a bit clearer it can be shown using the following example, comment out the lines in the runTest method alternatively:: import unittest class MyTestCase(unittest.TestCase): def setUp(self): pass class Tests(MyTestCase): def runTest(self): self.assertTrue('True') self.assertTrue(True) self.assertEqual(True, 'True') To run this use:: python -m testtools.run test.py Also mentioned within the unittests documentation_. .. 
_documentation: https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTrue

LOG.warn to LOG.warning
-----------------------

Proposals regularly appear to change all ``{LOG,logging}.warn`` calls to
``{LOG,logging}.warning`` across the codebase because of the deprecation in
Python 3. While the deprecation is real, Glance uses ``oslo_log``, which
provides a ``warn`` alias and solves the issue in a single place for all
projects using it. These changes are not accepted because of the large
amount of refactoring they cause for no benefit.

.. Copyright 2015 OpenStack Foundation
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

============
Domain model
============

The main goal of a domain model is refactoring the logic around object
manipulation by splitting it into independent layers. Each subsequent layer
wraps the previous one, creating an "onion" structure, thus realizing a
design pattern called "Decorator." The main feature of the domain model is
the use of composition instead of inheritance or basic decoration while
building an architecture. This provides flexibility and transparency of the
internal organization for developers, because they do not need to know what
layers are used and can work with a domain model object as with a common
object.

Inner architecture
~~~~~~~~~~~~~~~~~~

Each layer defines its own operations' implementation through a special
``proxy`` class.
At first, operations are performed on the upper layer, then they successively
pass the control to the underlying layers. The nesting of layers can be
specified explicitly using a programmer interface Gateway or implicitly using
``helper`` classes. Nesting may also depend on various conditions, skipping
or adding additional layers during domain object creation.

Proxies
~~~~~~~

The layer behavior is described in special ``proxy`` classes that must
provide exactly the same interface as the original class does. In addition,
each ``proxy`` class has a field ``base`` indicating a lower layer object
that is an instance of another ``proxy`` or ``original`` class.

To access the rest of the fields, you can use special ``proxy`` properties or
the universal methods ``set_property`` and ``get_property``.

In addition, the ``proxy`` class must have an ``__init__`` method of the
following form::

    def __init__(self, base, helper_class=None, helper_kwargs=None, **kwargs)

where ``base`` corresponds to the underlying object layer, and the optional
``helper_class`` and ``helper_kwargs`` parameters are used to create a
``helper`` class.

Thus, to access a ``meth1`` method from the underlying layer, it is enough to
call it on the ``base`` object::

    def meth1(*args, **kwargs):
        …
        self.base.meth1(*args, **kwargs)
        …

To get access to a domain object field, it is recommended to use properties
that are created by an auxiliary function::

    def _create_property_proxy(attr):
        def get_attr(self):
            return getattr(self.base, attr)

        def set_attr(self, value):
            return setattr(self.base, attr, value)

        def del_attr(self):
            return delattr(self.base, attr)

        return property(get_attr, set_attr, del_attr)

So, the reference to the underlying layer field ``prop1`` looks like::

    class Proxy(object):
        …
        prop1 = _create_property_proxy('prop1')
        …

If the number of layers is big, it is reasonable to create a common parent
``proxy`` class that provides further control transfer.
This facilitates the writing of specific layers if they do not provide a
particular implementation of some operation.

Gateway
~~~~~~~

``gateway`` is a mechanism to explicitly specify a composition of the domain
model layers. It defines an interface to retrieve the domain model object
based on the ``proxy`` classes described above.

Example of the gateway implementation
-------------------------------------

This example defines three classes:

* ``Base`` is the main class that sets an interface for all the ``proxy``
  classes.
* The ``LoggerProxy`` class implements additional logic associated with the
  logging of messages from the ``print_msg`` method.
* The ``ValidatorProxy`` class implements an optional check that helps to
  determine whether all the parameters in the ``sum_numbers`` method are
  positive.

::

    class Base(object):
        """Base class in domain model."""

        msg = "Hello Domain"

        def print_msg(self):
            print(self.msg)

        def sum_numbers(self, *args):
            return sum(args)


    class LoggerProxy(object):
        """Class extends functionality by writing messages to a log."""

        def __init__(self, base, logg):
            self.base = base
            self.logg = logg

        # Proxy to provide implicit access to the inner layer.
        msg = _create_property_proxy('msg')

        def print_msg(self):
            # Write the message to the log, then pass control to the
            # inner layer.
            self.logg.write("Message %s has been written to the log\n"
                            % self.msg)
            self.base.print_msg()

        def sum_numbers(self, *args):
            # Nothing to do here. Just pass control to the next layer.
            return self.base.sum_numbers(*args)


    class ValidatorProxy(object):
        """Class validates that input parameters are correct."""

        def __init__(self, base):
            self.base = base

        msg = _create_property_proxy('msg')

        def print_msg(self):
            # There are no checks.
            self.base.print_msg()

        def sum_numbers(self, *args):
            # Validate input numbers and pass them further.
            for arg in args:
                if arg <= 0:
                    return "Only positive numbers are supported."
            return self.base.sum_numbers(*args)

Thus, the ``gateway`` method for the above example may look like:

::

    def gateway(logg, only_positive=True):
        base = Base()
        logger = LoggerProxy(base, logg)
        if only_positive:
            return ValidatorProxy(logger)
        return logger

    domain_object = gateway(sys.stdout, only_positive=True)

It is important to consider that the order of the layers matters. Even if
layers are logically independent from each other, rearranging them in a
different order may lead to another result.

Helpers
~~~~~~~

``Helper`` objects are used for an implicit nesting assignment that is based
on a specification described in an auxiliary method (similar to ``gateway``).
This approach may be helpful when using a *simple factory* for generating
objects. Such an approach is more flexible, as it allows specifying the
wrappers dynamically.

The ``helper`` class is the same for all the ``proxy`` classes and has the
following form:

::

    class Helper(object):

        def __init__(self, proxy_class=None, proxy_kwargs=None):
            self.proxy_class = proxy_class
            self.proxy_kwargs = proxy_kwargs or {}

        def proxy(self, obj):
            """Wrap an object."""
            if obj is None or self.proxy_class is None:
                return obj
            return self.proxy_class(obj, **self.proxy_kwargs)

        def unproxy(self, obj):
            """Return an object from the inner layer."""
            if obj is None or self.proxy_class is None:
                return obj
            return obj.base

Example of a simple factory implementation
------------------------------------------

Here is the code of a *simple factory* for generating objects from the
previous example.
It specifies a ``BaseFactory`` class with a ``generate`` method and related ``proxy`` classes:

::

    class BaseFactory(object):
        """Simple factory to generate an object."""
        def generate(self):
            return Base()

    class LoggerFactory(object):
        """Proxy class to add logging functionality."""
        def __init__(self, base, logg, proxy_class=None, proxy_kwargs=None):
            self.helper = Helper(proxy_class, proxy_kwargs)
            self.base = base
            self.logg = logg

        def generate(self):
            return self.helper.proxy(self.base.generate())

    class ValidatorFactory(object):
        """Proxy class to add validation."""
        def __init__(self, base, only_positive=True, proxy_class=None,
                     proxy_kwargs=None):
            self.helper = Helper(proxy_class, proxy_kwargs)
            self.base = base
            self.only_positive = only_positive

        def generate(self):
            if self.only_positive:
                # Wrap in ValidatorProxy if required.
                return self.helper.proxy(self.base.generate())
            return self.base.generate()

Further, ``BaseFactory`` and the related ``proxy`` classes are combined together:

::

    def create_factory(logg, only_positive=True):
        base_factory = BaseFactory()
        logger_factory = LoggerFactory(base_factory, logg,
                                       proxy_class=LoggerProxy,
                                       proxy_kwargs=dict(logg=logg))
        validator_factory = ValidatorFactory(logger_factory, only_positive,
                                             proxy_class=ValidatorProxy)
        return validator_factory

Ultimately, to generate a domain object, you create a factory and run its ``generate`` method, which implicitly creates a composite object based on the specifications set forth in the ``proxy`` classes.

::

    factory = create_factory(sys.stdout, only_positive=False)
    domain_object = factory.generate()

Why do you need a domain if you can use decorators?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the above examples, to implement the planned logic, it is quite possible to use standard Python language techniques such as decorators. However, to implement more complicated operations, the domain model is reasonable and justified.
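The examples above use ``_create_property_proxy`` without defining it. Below is a minimal sketch of what such a helper could look like, together with a tiny two-layer demonstration; this is an illustrative assumption about its behavior, not the actual Glance implementation.

```python
def _create_property_proxy(attr):
    """Create a property that forwards reads and writes to the inner layer."""
    def get_attr(self):
        return getattr(self.base, attr)

    def set_attr(self, value):
        setattr(self.base, attr, value)

    return property(get_attr, set_attr)


# A tiny demonstration with a two-layer composition.
class Base(object):
    msg = "Hello Domain"


class Proxy(object):
    def __init__(self, base):
        self.base = base

    # Reads and writes of ``msg`` go through to the wrapped object.
    msg = _create_property_proxy('msg')


layered = Proxy(Base())
print(layered.msg)       # prints "Hello Domain", read through the proxy
layered.msg = "Changed"
print(layered.base.msg)  # prints "Changed", the write reached the inner layer
```

With such a helper in place, an attribute assigned on an outer ``proxy`` layer is transparently stored on the wrapped ``base`` object, so every layer in the composition observes the same value.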
In general, the domain is useful when: * there are more than three layers. In such case, the domain model usage facilitates the understanding and supporting of the code; * wrapping must be implemented depending on some conditions, including dynamic wrapping; * there is a requirement to wrap objects implicitly by helpers. glance-16.0.0/doc/source/contributor/modules.rst0000666000175100017510000000123013245511421021670 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Module Reference ================ .. toctree:: :maxdepth: 1 api/autoindex glance-16.0.0/doc/source/contributor/database_migrations.rst0000666000175100017510000003165313245511421024234 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ====================================================== Writing Database Migrations for Zero-Downtime Upgrades ====================================================== Beginning in Ocata, OpenStack Glance uses Alembic, which replaced SQLAlchemy Migrate as the database migration engine. 
Moving to Alembic is particularly motivated by the zero-downtime upgrade work. Refer to [GSPEC1]_ and [GSPEC2]_ for more information on zero-downtime upgrades in Glance and why a move to Alembic was deemed necessary. Stop right now and go read [GSPEC1]_ and [GSPEC2]_ if you haven't done so already. Those documents explain the strategy Glance has approved for database migrations, and we expect you to be familiar with them in what follows. This document focuses on the "how", but unless you understand the "what" and "why", you'll be wasting your time reading this document. Prior to Ocata, database migrations were conceived as monoliths. Thus, they did not need to carefully distinguish and manage database schema expansions, data migrations, or database schema contractions. The modern database migrations are more sensitive to the characteristics of changes being attempted and thus we clearly identify three phases of a database migration: (1) expand, (2) migrate, and (3) contract. A developer modifying the Glance database must supply a script for each of these phases. Here's a quick reminder of what each phase entails. For more information, see [GSPEC1]_. Expand Expand migrations MUST be additive in nature. Expand migrations should be seen as the minimal set of schema changes required by the new services that can be applied while the old services are still running. Expand migrations should optionally include temporary database triggers that keep the old and new columns in sync. If a database change needs data to be migrated between columns, then temporary database triggers are required to keep the columns in sync while the data migrations are in-flight. .. note:: Sometimes there could be an exception to the additive-only change strategy for expand phase. It is described more elaborately in [GSPEC1]_. Again, consider this as a last reminder to read [GSPEC1]_, if you haven't already done so. 
Migrate
  Data migrations MUST NOT attempt any schema changes and only move existing
  data between old and new columns such that new services can start consuming
  the new tables and/or columns introduced by the expand migrations.

Contract
  Contract migrations usually include the remaining schema changes required
  by the new services that couldn't be applied during the expand phase due to
  their incompatible nature with the old services. Any temporary database
  triggers added during the expand migrations MUST be dropped with contract
  migrations.

Alembic Migrations
==================

As mentioned earlier, starting in Ocata Glance database migrations must be
written for Alembic. All existing Glance migrations have been ported to
Alembic. They can be found here [GMIGS1]_.

Schema Migrations (Expand/Contract)
-----------------------------------

* All Glance schema migrations must reside in the
  ``glance.db.sqlalchemy.alembic_migrations.versions`` package

* Every Glance schema migration must be a python module with the following
  structure

  .. code::

     """
     Revision ID: <revision>
     Revises: <down_revision>
     """

     revision = '<revision>'
     down_revision = '<down_revision>'
     depends_on = '<depends_on>'

     def upgrade():
         <upgrade logic>

  Identifiers ``revision``, ``down_revision`` and ``depends_on`` are
  elaborated below.

* The ``revision`` identifier is a unique revision id for every migration. It
  must conform to one of the following naming schemes. All monolith
  migrations must conform to:

  .. code::

     <release><number>

  And, all expand/contract migrations must conform to:

  .. code::

     <release>_[expand|contract]<number>

  Example:

  .. code::

     Monolith migration: ocata01
     Expand migration:   ocata_expand01
     Contract migration: ocata_contract01

  This name convention is devised with an intention to easily understand the
  migration sequence. While the ``<release>`` mentions the release a
  migration belongs to, the ``<number>`` helps identify the order of
  migrations within each release. For modern migrations, the
  ``[expand|contract]`` part of the revision id helps identify the revision
  branch a migration belongs to.
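The ``revision`` and ``down_revision`` identifiers chain the migration modules into a sequence. The following self-contained sketch (plain Python for illustration, not Alembic itself, and ignoring ``depends_on``) shows how a tool could recover the upgrade order by walking the ``down_revision`` links; the revision ids are the Glance ones named above.

```python
# Hypothetical in-memory representation of a set of migration modules;
# the revision ids follow the <release><number> naming described above.
MIGRATIONS = [
    {'revision': 'mitaka01', 'down_revision': 'liberty'},
    {'revision': 'liberty', 'down_revision': None},
    {'revision': 'ocata_expand01', 'down_revision': 'mitaka02'},
    {'revision': 'mitaka02', 'down_revision': 'mitaka01'},
]


def upgrade_order(migrations):
    """Order revisions by walking down_revision links from the root (None)."""
    by_parent = {m['down_revision']: m for m in migrations}
    order, parent = [], None
    while parent in by_parent:
        migration = by_parent[parent]
        order.append(migration['revision'])
        parent = migration['revision']
    return order


print(upgrade_order(MIGRATIONS))
# ['liberty', 'mitaka01', 'mitaka02', 'ocata_expand01']
```

Note that this simple walk assumes a branch-less sequence; as described in the next sections, the real Glance sequence branches into expand and contract chains, which is exactly what ``depends_on`` exists to coordinate.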
* The ``down_revision`` identifier MUST be specified for all Alembic
  migration scripts. It points to the previous migration (or ``revision`` in
  Alembic lingo) on which the current migration is based. This essentially
  establishes a migration sequence very much like a singly linked list would
  (except that we use a ``previous`` link here instead of the more
  traditional ``next`` link). The very first migration, ``liberty`` in our
  case, would have ``down_revision`` set to ``None``. All other migrations
  must point to the last migration in the sequence at the time of writing the
  migration. For example, Glance has two migrations in Mitaka, namely,
  ``mitaka01`` and ``mitaka02``. The migration sequence for Mitaka should
  look like:

  .. code::

     liberty
        ^
        |
     mitaka01
        ^
        |
     mitaka02

* The ``depends_on`` identifier helps establish dependencies between two
  migrations. If a migration ``X`` depends on running migration ``Y`` first,
  then ``X`` is said to depend on ``Y``. This could be specified in the
  migration as shown below:

  .. code::

     revision = 'X'
     down_revision = 'W'
     depends_on = 'Y'

  Naturally, every migration depends on the migrations preceding it in the
  migration sequence. Hence, in a typical branch-less migration sequence,
  ``depends_on`` is of limited use. However, this could be useful for
  migration sequences with branches. We'll see more about this in the next
  section.

* All schema migration scripts must adhere to the naming convention mentioned
  below:

  .. code::

     <revision>_<brief description>.py

  Example:

  .. code::

     Monolith migration: ocata01_add_visibility_remove_is_public.py
     Expand migration:   ocata_expand01_add_visibility.py
     Contract migration: ocata_contract01_remove_is_public.py

Dependency Between Contract and Expand Migrations
-------------------------------------------------

* To achieve zero-downtime upgrades, the Glance migration sequence has been
  branched into ``expand`` and ``contract`` branches.
  As the name suggests, the ``expand`` branch contains only the expand
  migrations and the ``contract`` branch contains only the contract
  migrations. As per the zero-downtime migration strategy, the expand
  migrations are run first, followed by the contract migrations. To establish
  this dependency, we make the contract migrations explicitly depend on their
  corresponding expand migrations. Thus, running contract migrations without
  running expansions is not possible.

  For example, the Community Images migration in Ocata includes the
  experimental E-M-C migrations. The expand migration is ``ocata_expand01``
  and the contract migration is ``ocata_contract01``. The dependency is
  established as below.

  .. code::

     revision = 'ocata_contract01'
     down_revision = 'mitaka02'
     depends_on = 'ocata_expand01'

  Every contract migration in Glance MUST depend on its corresponding expand
  migration. Thus, the current Glance migration sequence looks as shown
  below:

  .. code::

                liberty
                   ^
                   |
                mitaka01
                   ^
                   |
                mitaka02
                   ^
                   |
      +------------+------------+
      |                         |
      |                         |
     ocata_expand01 <------ ocata_contract01
           ^                    ^
           |                    |
           |                    |
     pike_expand01  <------ pike_contract01

Data Migrations
---------------

* All Glance data migrations must reside in the
  ``glance.db.sqlalchemy.alembic_migrations.data_migrations`` package.

* The data migrations themselves are not Alembic migration scripts and,
  hence, they don't require a unique revision id. However, they must adhere
  to a naming convention similar to the one discussed above. That is:

  .. code::

     <release>_migrate<number>_<brief description>.py

  Example:

  .. code::

     Data Migration: ocata_migrate01_community_images.py

* All data migration modules must adhere to the following structure:

  .. code::

     def has_migrations(engine):
         <check whether the data migration is needed>
         return <True or False>

     def migrate(engine):
         <migrate the data>
         return <number of rows migrated>

NOTES
-----

* In Ocata and Pike, Glance required every database migration to include
  both monolithic and Expand-Migrate-Contract (E-M-C) style migrations. In
  Queens, E-M-C migrations became the default and a monolithic migration
  script is no longer required.
  In Queens, the glance-manage tool was refactored so that the
  ``glance-manage db sync`` command runs the expand, migrate, and contract
  scripts "under the hood". From the viewpoint of the operator, there is no
  difference between having a single monolithic script and having three
  scripts. Since we are using the same scripts for offline and online
  (zero-downtime) database upgrades, as a developer you have to pay attention
  in your scripts to determine whether you need to add/remove triggers in the
  expand/contract scripts. See the changes to the ocata scripts in
  https://review.openstack.org/#/c/544792/ for an example of how to do this.

* Alembic is a database migration engine written for SQLAlchemy. So, any
  migration script written for SQLAlchemy Migrate should work with Alembic as
  well, provided the structural differences above (primarily adding
  ``revision``, ``down_revision`` and ``depends_on``) are taken care of.
  Moreover, it may be easier to do certain operations with Alembic. Refer to
  [ALMBC]_ for information on Alembic operations.

* A given database change may not require actions in each of the expand,
  migrate, contract phases, but nonetheless, we require a script for *each*
  phase for *every* change. In the case where an action is not required, a
  ``no-op`` script, described below, MUST be used.

  For instance, if a database migration is completely contractive in nature,
  say removing a column, there won't be a need for expand and migrate
  operations. But including no-op expand and migrate scripts makes this
  explicit and also preserves the one-to-one correspondence between expand,
  migrate and contract scripts.

  A no-op expand/contract Alembic migration:

  .. code::

     """An example empty Alembic migration script

     Revision ID: foo02
     Revises: foo01
     """

     revision = 'foo02'
     down_revision = 'foo01'

     def upgrade():
         pass

  A no-op migrate script:

  ..
code::

     """An example empty data migration script"""

     def has_migrations(engine):
         return False

     def migrate(engine):
         return 0

References
==========

.. [GSPEC1] `Database Strategy for Rolling Upgrades `_
.. [GSPEC2] `Glance Alembic Migrations Spec `_
.. [GMIGS1] `Glance Alembic Migrations Implementation `_
.. [ALMBC] `Alembic Operations `_
glance-16.0.0/doc/source/contributor/index.rst0000666000175100017510000000257513245511421021344 0ustar zuulzuul00000000000000Glance Contribution Guidelines
==============================

In the Contributions Guide, you will find documented policies for developing with Glance. This includes the processes we use for blueprints and specs, bugs, contributor onboarding, core reviewer memberships, and other procedural items.

Glance, as with all OpenStack projects, is written with the following design guidelines in mind:

* **Component based architecture**: Quickly add new behaviors
* **Highly available**: Scale to very serious workloads
* **Fault tolerant**: Isolated processes avoid cascading failures
* **Recoverable**: Failures should be easy to diagnose, debug, and rectify
* **Open standards**: Be a reference implementation for a community-driven API

This documentation is generated by the Sphinx toolkit and lives in the source tree. Additional documentation on Glance and other components of OpenStack can be found on the `OpenStack wiki `_.

Developer reference
-------------------

.. toctree::
   :maxdepth: 2

   architecture
   database_architecture
   database_migrations
   domain_model
   domain_implementation

.. toctree::
   :maxdepth: 1

   modules

Policies
--------

.. toctree::
   :maxdepth: 3

   blueprints
   documentation
   minor-code-changes
   refreshing-configs
   release-cpl

.. bugs
   contributor-onboarding
   core-reviewers
   gate-failure-triage
   code-reviews
glance-16.0.0/doc/source/contributor/documentation.rst0000666000175100017510000001212413245511421023075 0ustar zuulzuul00000000000000Documentation
=============

Tips for Doc Writers (and Developers, too!)
------------------------------------------- Here are some useful tips about questions that come up a lot but aren't always easy to find answers to. * Make example URLs consistent For consistency, example URLs for openstack components are in the form: .. code:: project.openstack.example.org So, for example, an example image-list call to Glance would use a URL written like this: .. code:: http://glance.openstack.example.org/v2/images * URLs for OpenStack project documentation Each project's documentation is published to the following URLs: - ``https://docs.openstack.org/$project-name/latest`` - built from master - ``https://docs.openstack.org/$project-name/$series`` - built from stable For example, the Glance documentation is published to: - ``https://docs.openstack.org/glance/latest`` - built from master - ``https://docs.openstack.org/glance/ocata`` - built from stable/ocata * URLs for OpenStack API Reference Guides Each project's API Reference Guide is published to: - ``https://developer.openstack.org/api-ref/$service-type`` For example, the Glance Image Service API Reference guide is published to: - ``https://developer.openstack.org/api-ref/image`` Where to Contribute ------------------- There are a few different kinds of documentation associated with Glance to which you may want to contribute: * Configuration As you read through the sample configuration files in the ``etc`` directory in the source tree, you may find typographical errors, or grammatical problems, or text that could use clarification. The Glance team welcomes your contributions, but please note that the sample configuration files are generated, not static text. Thus you must modify the source code where the particular option you're correcting is defined and then re-generate the conf file using ``tox -e genconfig``. * Glance's Documentation The Glance Documentation (what you're reading right now) lives in the source code tree under ``doc/source``. 
It consists of information for developers working on Glance, information for consumers of the OpenStack Images APIs implemented by Glance, and information for operators deploying Glance. Thus there's a wide range of documents to which you could contribute. Small improvements can simply be addressed by a patch, but it's probably a good idea to first file a bug for larger changes so they can be tracked more easily (especially if you plan to submit several different patches to address the shortcoming). * User Guides There are several user guides published by the OpenStack Documentation Team. Please see the README in their code repository for more information: https://github.com/openstack/openstack-manuals * OpenStack API Reference There's a "quick reference" guide to the APIs implemented by Glance: http://developer.openstack.org/api-ref/image/ The guide is generated from source files in the source code tree under ``api-ref/source``. Corrections in spelling or typographical errors may be addressed directly by a patch. If you note a divergence between the API reference and the actual behavior of Glance, please file a bug before submitting a patch. Additionally, now that the quick reference guides are being maintained by each project (rather than a central team), you may note divergences in format between the Glance guides and those of other teams. For example, some projects may have adopted an informative new way to display error codes. If you notice structural improvements that our API reference is missing, please file a bug. And, of course, we would also welcome your patch implementing the improvement! Release Notes ------------- Release notes are notes available for operators to get an idea what each project has included and changed during a cycle. They may also include various warnings and notices. Generating release notes is done with Reno. .. 
code-block:: bash

   $ tox -e venv -- reno new <slug>

This will generate a yaml file in ``releasenotes/notes`` that will contain instructions about how to fill in (or remove) the various sections of the document. Modify the yaml file as appropriate and include it as part of your commit.

Commit your note to git (required for reno to pick it up):

.. code-block:: bash

   $ git add releasenotes/notes/; git commit

Once the release notes have been committed you can build them by using:

.. code-block:: bash

   $ tox -e releasenotes

This will create the HTML files under ``releasenotes/build/html/``.

**NOTE**: The ``prelude`` section in the release notes is to highlight only the important changes of the release. Please word your note accordingly and be judicious when adding content there. We don't encourage extraneous notes and at the same time we don't want to miss out on important ones. In short, not every release note will need content in the ``prelude`` section. If what you're working on required a spec, then a prelude is appropriate. If you're submitting a bugfix, most likely not; a spec-lite is a judgement call.
glance-16.0.0/doc/source/contributor/architecture.rst0000666000175100017510000000745113245511421022713 0ustar zuulzuul00000000000000..
      Copyright 2015 OpenStack Foundation
      All Rights Reserved.

      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied. See the License for the specific language governing permissions
      and limitations under the License.
================== Basic architecture ================== OpenStack Glance has a client-server architecture that provides a REST API to the user through which requests to the server can be performed. A Glance Domain Controller manages the internal server operations that is divided into layers. Specific tasks are implemented by each layer. All the file (Image data) operations are performed using glance_store library, which is responsible for interaction with external storage back ends and (or) local filesystem(s). The glance_store library provides a uniform interface to access the backend stores. Glance uses a central database (Glance DB) that is shared amongst all the components in the system and is sql-based by default. Other types of database backends are somewhat supported and used by operators but are not extensively tested upstream. .. figure:: ../images/architecture.png :figwidth: 100% :align: center :alt: OpenStack Glance Architecture Diagram. Consists of 5 main blocks: "Client" "Glance" "Keystone" "Glance Store" and "Supported Storages". Glance block exposes a REST API. The REST API makes use of the AuthZ Middleware and a Glance Domain Controller, which contains Auth, Notifier, Policy, Quota, Location and DB. The Glance Domain Controller makes use of the Glance Store (which is external to the Glance block), and (still within the Glance block) it makes use of the Database Abstraction Layer, and (optionally) the Registry Layer. The Registry Layer makes use of the Database Abstraction Layer. The Database abstraction layer exclusively makes use of the Glance Database. The Client block makes use of the Rest API (which exists in the Glance block) and the Keystone block. The Glance Store block contains AuthN which makes use of the Keystone block, and it also contains Glance Store Drivers, which exclusively makes use of each of the storage systems in the Supported Storages block. 
Within the Supported Storages block, there exist the following storage systems, none of which make use of anything else: Filesystem, Swift, Ceph, "ellipses", Sheepdog. A complete list is given by the currently available drivers in glance_store/_drivers. .. centered:: Image 1. OpenStack Glance Architecture Following components are present in the Glance architecture: * **A client** - any application that makes use of a Glance server. * **REST API** - Glance functionalities are exposed via REST. * **Database Abstraction Layer (DAL)** - an application programming interface (API) that unifies the communication between Glance and databases. * **Glance Domain Controller** - middleware that implements the main Glance functionalities such as authorization, notifications, policies, database connections. * **Glance Store** - used to organize interactions between Glance and various data stores. * **Registry Layer** - optional layer that is used to organise secure communication between the domain and the DAL by using a separate service. .. include:: ../deprecate-registry.inc glance-16.0.0/doc/source/contributor/refreshing-configs.rst0000666000175100017510000000554613245511421024020 0ustar zuulzuul00000000000000.. Copyright 2016-present OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Guideline On Refreshing Configuration Files Under etc/ ====================================================== During a release cycle many configuration options are changed or updated. 
The sample configuration files provided in tree (under ``etc/*``) need to be updated using the autogeneration tool as these files are being used in different places. Some examples are devstack gates, downstream packagers shipping with the same (or using defaults from these files), etc. Hence, before we cut a release we need to refresh the configuration files shipped with tree to match the changes done in the source code during the release cycle. In an ideal world, every review that proposes an addition, removal or update to a configuration option(s) should use the tox tool to refresh only the configuration options(s) that were changed. However, many of the configuration options like those coming from oslo.messaging, oslo_middleware, etc. may have changed in the meantime. So, whenever someone uses the tool to autogenerate the configuration files based on the options in tree, there are more changes than those made just by the author. We do not recommend the authors to manually edit the autogenerated files so, a reasonable tradeoff is for the authors to include **only those files** that are affected by their change(s). .. code-block:: bash $ tox -e genconfig When To Refresh The Sample Configuration Files ============================================== * Every review that proposes an addition, removal or update to a configuration option(s) should use the tox tool to refresh only the configuration option(s) they have changed. * Ideally reviewers should request updates to sample configuration files for every change that attempts to add/delete/modify a configuration option(s) in the code. * In some situations however, there may be a bunch of similar changes that are affecting the configuration files. In this case, in order to make the developers' and reviewers' effort easier, we recommend an update to the configuration files in bulk right after all the update changes have been made/merged. 
**IMPORTANT NOTE**: All sample configuration files must be updated before milestone-3 (or the final release) of the project.
glance-16.0.0/doc/source/contributor/domain_implementation.rst0000666000175100017510000001164013245511421024602 0ustar zuulzuul00000000000000..
      Copyright 2016 OpenStack Foundation
      All Rights Reserved.

      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied. See the License for the specific language governing permissions
      and limitations under the License.

==================================
Glance domain model implementation
==================================

Gateway and basic layers
~~~~~~~~~~~~~~~~~~~~~~~~

The domain model contains the following layers:

#. :ref:`authorization`
#. :ref:`property`
#. :ref:`notifier`
#. :ref:`policy`
#. :ref:`quota`
#. :ref:`location`
#. :ref:`database`

The schema below shows a stack that contains the Image domain layers and their locations:

.. figure:: ../images/glance_layers.png
   :figwidth: 100%
   :align: center
   :alt: From top to bottom, the stack consists of the Router and REST API,
         which are above the domain implementation. The Auth, Property
         Protection (optional), Notifier, Policy, Quota, Location, and
         Database represent the domain implementation. The Registry
         (optional) and Data Access sit below the domain implementation.
         Further, the Client block calls the Router; the Location block calls
         the Glance Store, and the Data Access layer calls the DBMS.
Additional information conveyed in the image is the location in the Glance code of the various components: Router: api/v2/router.py REST API: api/v2/* Auth: api/authorization.py Property Protection: api/property_protections.py Notifier: notifier.py Policy: api/policy.py Quota: quota/__init__.py Location: location.py DB: db/__init__.py Registry: registry/v2/* Data Access: db/sqlalchemy/api.py .. _authorization: Authorization ------------- The first layer of the domain model provides a verification of whether an image itself or its property can be changed. An admin or image owner can apply the changes. The information about a user is taken from the request ``context`` and is compared with the image ``owner``. If the user cannot apply a change, a corresponding error message appears. .. _property: Property protection ------------------- The second layer of the domain model is optional. It becomes available if you set the ``property_protection_file`` parameter in the Glance configuration file. There are two types of image properties in Glance: * *Core properties*, as specified in the image schema * *Meta properties*, which are the arbitrary key/value pairs that can be added to an image The property protection layer manages access to the meta properties through Glance’s public API calls. You can restrict the access in the property protection configuration file. .. _notifier: Notifier -------- On the third layer of the domain model, the following items are added to the message queue: #. Notifications about all of the image changes #. All of the exceptions and warnings that occurred while using an image .. _policy: Policy ------ The fourth layer of the domain model is responsible for: #. Defining access rules to perform actions with an image. The rules are defined in the :file:`etc/policy.json` file. #. Monitoring of the rules implementation. .. 
_quota: Quota ----- On the fifth layer of the domain model, if a user has an admin-defined size quota for all of his uploaded images, there is a check that verifies whether this quota exceeds the limit during an image upload and save: * If the quota does not exceed the limit, then the action to add an image succeeds. * If the quota exceeds the limit, then the action does not succeed and a corresponding error message appears. .. _location: Location -------- The sixth layer of the domain model is used for interaction with the store via the ``glance_store`` library, like upload and download, and for managing an image location. On this layer, an image is validated before the upload. If the validation succeeds, an image is written to the ``glance_store`` library. This sixth layer of the domain model is responsible for: #. Checking whether a location URI is correct when a new location is added #. Removing image data from the store when an image location is changed #. Preventing image location duplicates .. _database: Database -------- On the seventh layer of the domain model: * The methods to interact with the database API are implemented. * Images are converted to the corresponding format to be recorded in the database. And the information received from the database is converted to an Image object. glance-16.0.0/doc/source/images/0000775000175100017510000000000013245511661016371 5ustar zuulzuul00000000000000glance-16.0.0/doc/source/images/instance-life-3.png0000666000175100017510000007274013245511421021766 0ustar zuulzuul00000000000000
[binary PNG image data omitted]
glance-16.0.0/doc/source/images/instance-life-2.png
[binary PNG image data omitted]
%¹â;§óäsox?êÖ›._°zÝf¾ûã;y÷£%T”[kcLÕ ð«³ùá óÈÓïó£KO`ô°!˜0v0RJ «ÂÄ—¹îSm·ÞÇÀóvÌ*Óv ¦ŽÍ¶…58 ‹Qw@›]8`Ñj‹„àÓÁ nYE¦«¥ºÞpr,)/H™P©4@¥@å÷ F—å2±²€ñƒó©êŸKI~Ðd7`g`å[Iš®qÄðX]6¶Öµqý§íSy%ö»U0µâà· ¢î’—¶µlŽn뀰©…%…·ÏMJ,+µ’7ÉW¸¤ +^‘¯ç1,4ŸÐ“îk Ò`Kl;¢ÕDe<É è+l”Ò:n‘/Ÿ ¹Ò,?p}¬‰ªiW@]5XL›ã¸nݬ° €úŸ]u—œ{Ò.+]hš¶Èš_j””¯]=ZÑ>ü¿×)%Ë¥4u¥ »¢…‘¶ÂÅ®tñ‘íöÛàqíþ Œ±k-ð û3‡‰è÷쯽ÏbÛÚ:å@[´eSî¸ømÙÀ^½ÿ»;šâÔ4XUâ•G`i*Y^I‘IB&Ë-—H‘aiæöëül•ÅYèšè+3LEMS„Mµâ†LëL+‹dVåg|E>¡ ±-Êe7?Ïæê:W^Uϼq¯Ë«ý±Á’ånØD]t'EÁ¾äê9(µÕ²n QÂ^¥'\¤ÓÒ½ •J¾n`eôc@°¿•ë%%Kê–c"Óg$_Ÿæ£“Wú~7ýo6D8µ`0îÕmK¹vìùèŠcm×¹Tq'ÊÓí×[´)A˜ð•Ä[ýŠblöHríªµá:¾9ý'¼±þCpXRÒlšÇƒ Ÿfeó CydBdèAO!^.´”Ίí‘Z^Ûô§Ìü?ÞXüªÃaßãºuôÂ`ÅLn<ò°ÑTU‰°;ÂÆ/ny!ààƒF „ŽÄAõÅÛåy7·œüíu[–gíãÿ=øä`‡°“(ÕñÑÙçã¹kY_½Ó4 ãÖ/ùûQø¤ŽVÒ°WĶØïð£yÝÌH^ÅIC€q‹WÖpÑiã¬8V@gGs¬ãŠSê°y Í­8á©Ñc?Ï èT Ì#;Ãßµaþ÷÷/òágë]yeJ)æ.ÚÈ“ÓçóÅæFòrC„2ƒ–‹ÏSG\K+¥;š"¼=g¿¹ÿ ^}s‘ÕBÇ–WÕ3oܧòjí¹~ p ­u7Ö¿ËYCO%¤…(ÔóØa4 »Ì”T¨ZŠóZ:f¸R”ø‹¨Ì(GÅʆÕóâÿB¤ 8‹õc,Ÿþ«_¼Ç«_¼g.«çTÎØ¾ ÎHN ‹V£•í lk«åé ³©Ùº2Vv ×­›Ó{Ëw=Æ4Œgý¿ùã?˜5wøí5kûöɶç~A%'Áì»1‹:¾Kð\·±–ÇžŸ£>·RôíÛwe$Ò[ªiW;Z3?[Ãi“‡“ÐÈÏòQß®ó}IDATïQ Î ­ÜrJˆçf0¤8ÓN‰Uë¹â7ÏÑÒéT^½ÿɼÿÉä÷ÉâˆñT–RVšKV(@k{œúæ0;v¶2kÁz¾X½5­¼ªžyã>—Wb¿]S+î®&#—Mß{/C™,i_IĈ DgîäeÄL!^ŒÈ¬ $PdYZJñʆ·¸âõ_;»Ô§%åEM­È¾•d9Ù¶ z2·qݺzïñc—¤‹ç_žÉƒ¼þý¯¯Ê<æ¨ {äGTV»bÎ3EûÉ5x_Jó¸ÎHÛ·×óØ ³yã½…dddD”â–p8|=ošØ;öð(›rÇ}ÀÕ9Ù¼ùðduL©ø¼º•pÜèBÔª´²K‘\ÁB ÁÐ’,Šs®5öƧk¹éO¯%É+o^TÙ”;öˆ¼ªžyã~#¯ögÀ*Áb eýüÈ˹aü­îÂ2Æâ¶Ï‰©.r;¡F;ÙT9Z#2+ù,ÖnÔŒñûù÷ó÷9ÿvvÝœÌuëVwq~A¬æuS°ØjN·Ô"ûºÖb5£Û ¼Lãºuë{oížÖߟÎáÆ2ñ1hº.W­©®ÿ¿ÛÿV˜•⦟ý€òAý¿`U^ûùgû `µ(%ãJ}¼@ÕÔÜÊS/Íâ…Wg+233óþæææßa‘zÇþX®¼ºü¢£øñB3K757;·â•‡-è” Z²‚Cûe“°\€1Cr÷ÓŸñÜËŸ%É«ê™7®îâüz,¯ªgÞ¸ßÉ«ý°œ&‘w ù>qÀ“gßÉÉŽCCÐnFX^C‹ÙÞ‰0Q+Ы±„´ *3ÊéãËwÿóš¶mœ:ãj¶}á|ñs浜Ë3µÛèèf–)ÏMRɧS+¬Â*×­“½·qF>Ðð³ŸÏ%çìÖêu[øÅ-å'Å•W\°PÓôñÿyëc¦>ð$'Ÿ8‰+¯¸ˆ¬Ðîõ08ôˆÚ峟*Ú®ÃJÇJi‰Dxî•Ïxò…U[{˜¼¼¼744üŒdrDïØOäÕÀþð¡ùnøÓÍçrÂ!e‘˜ä‹šVÚ#F'¶U¢ó¯wdø5†‡(Èò¹¢º¦¾ëo{•/ÖÚ¥‘”ü¼mÛ‚së—?¿[òªlÊÕ3o”ÿ-{_üfjçbͳ œ.ÆÇäð?}?DP‚¼yñÃŒïsÅU°=VÇšØâšÝ%–‚OèûúR($OÏuÿcCügÝ;üà­ß´RT~Ä3;®aq{Sš ÷N¼Ly.Ó, 3å³Þ±€¥i:јÉîz”Uë67ýõî_å•÷ ‰òÐc/ðÊëïsù¥ÿÃÙgžHffÏ€kàð#j—úäþXÓŒ7¼öÖÜ~?õžÜQ»SëÓ§Ï{õõõWaQÌ{Ç~*¯²UP0ìÌ¢DÁ£w~‹ƒ+ ]™TÛcÝöfJÌÔëÒæ(Ê 
’Ò]m˜Š·g¯æ7÷½·ëRÑv®xþšÈö¥Œ¼{áøzʦy&?Ýs‘´ Î/¬âˆÜ'ä xêÌ?râ€ÉhvUI¥í2B»ÙN³ÙŠZ€ ðÔ‚dëÙv•wÜýçïXÌiïÜ õv“¬ßÃC5oìBCélÒ½›¹‹çfЦÓ;vXv +üòëï‹ûz6㦟]ΉÇæ~±f[?ó*o½;›ÓN™Ì·/8Òâ>ݬa“k—}òø~X~ö9yôÍÈÆê­Ë~|Ò»<öš<üRò*ÔÙU9ý!rüA;~þMŽWæÊ ¥Ḣ=jÐb[\AŸ†ß§ôiädè‰þç>¬¥+6ò—ÇgÊEKWiyyy[ššš®^îŽP{T^åüæ!9Ž˜Š“Á/p e}»Ÿ=Ñ޳dÍvþþÜg,^^íÊ«XëöWë—?õx¼eû)¯ö`iö„ëÝÜ´nh1Ä‘¹ƒ9«ÏMµ2燿ȹ|£â†æ¦Ðß¿»£pLFi‰·±ª~ O,ŸÎô5*v‹¹–·åãæu=ÐPv¥t6ñÞ×Fš×Òó¾³õ–æcóÖZ~þÛ¿2fdŶ›oüA©>ŸÌ^Ì]÷>ÁáÇpÕ’““({gJÉ;ïÍáµ7>béçk8âð±œ|üáqøÁþ¤4üèÚ¥í;ÀÚP]Ë?žzW½ûñB‘ÕGn0Mó‘t ì±WäUÖÀIƒó‡žv“æ ºòêœ3åø‰• P@ŸÜ A]¸UÖ#qE[8ÆêMuL{{ïÏZôÈ+#Þº¶eíÌG[7Ï: å•øßóN¼wt¶|)ßÑzº8´|_¦ü~é·8‘&‡,·ŸUA²½.œ\û“zc+ÚæŠ—ë稄3Yv¡¡t×dîL#éj3Rž{‚óx Ä¿:u š’ú?=ðdþ²åë´?ýáZú—ZqÃ0yê¹<ý›œsæñ\xÎIäçå$4ñÉìÅÌ|ÿ3.þ‚Wž½›€´8¦vÉGìuÀªÛÙÂ?Ÿûé3f¢Bˆ?„Ãá?á^LùJäÜ^—WzF^fáØï~+Óÿ8„è ¯ú•äbHÉÎíIàäñšF¤aUûΕs›¿xmŽRæ/¯v°R'º³G}‹Ã×m'Ù4ìËé}N¤jZ)ë° _ ³¶33º»È®´Î^]¼—úÙ Xšîû8æ½æsÇÔÇùá÷ÏáN?Öõõ74¶ðì‹o1íÕ÷˜|ÄÁ\zÑé )ïŸæþWIñ€ò‘ÇÕ.þðá½XmmžzyÏNûP™¦”µ¶¶þØÙ‹+_ÉØçò*WÞ7è©'rË& Ýßµ¼RÒ0Új׆ëW-oÙôéJ3Rß+¯v°D7'¾'=Ñz:j<U¡Ff& eØ]æYŠ6óZ6©f3ždg¥÷ùvׇەÆÑÉþ2êëX?ùgó½‹OK,¡é˰*A°£¶ž;îy‚­5uüòúï1vÌP÷±xœÿ¼ù O<;ƒâ¢>œsæq}Ä823;/Ñ7xäqµ‹>üÇWXñ¸Á´7æóøsÈææV‘““3­¹¹ùz`c/¦|e²m?“WÂ,Y’Ñwø`]d(i*¥L…4•RRÉx[´më‚M2ÖÒ+¯ö`u6‰¾]Lòî,Œ®…¯ׯ,<›JÑPd7ü³]iÎsc7'´§Ÿ}--=/7wS4íwÛoþŸ8nòx°6 ¡•§î„œìäÀ÷àQÇ×.ú࡯ °¤R¼ýÁR~æCY³­VËÏÏÿ¸±±ñ*¬–½ã«µ¬zåÕ×L^u·ømº<„ÔGƒ‘¾NöÕ»ð§{ßßÉ"ðÛ›Ïó<`oA ÃÞBö–áÙÏ9¶ ™žš ä¢pWÝxÜSŸÉ”÷¿ŽCE£Ñ§r²sÎzeÆÇ…} r=ª!´E §*šýK 9ï›ÇSWßįo}ˆuõ TJvV& Ø¿ˆã9”o_p E…|69÷ýí9Î9óXtMsSPTÑ^³aÞWRüv΂5üöÏÓå´ÿ̺î[F/ŒD"¿Áª„Þ;¾ú˜U¯¼úšÉ«Ý±°|»¡™tWK鮦ÒÝÀ©—¹CÊÅ”»0™w× îî>é4¡x7¾óu¡‚‚ü׿üÒ³øÉÎß(„(ïê íá/¿öϼ0“‘ÃËùîŧ1¦ª¢[?6¤jJíÂ÷þºG-¬/ÖÔð÷§?–s®Ðrr²··´´^ÜÔ@cº ŒÛ[«$~ˆt±Å<ûÅ=¯cökï1½‘ú:uR%]&»ÃÜIÝç@xñH$òdnnîÀÙs—Œß´¹6ÿ˜#ÇÚVQúá÷ù8¨ª’‹Î™‚ß§ó×G^dÚ«à÷ë”÷é@e÷Ž‚âÊöšõsöˆ…µ¥¦ž{~GÝûÈDCSk›”ê§‘Hô[À’^üØg–V¯¼úÉ+±û÷Ä¿«ïâý®´Îúº›uNЦBš·«üïDt‡Ú™OYÒ3¦ÍF¸H;²²²~ÑÞÞ~Û¡ãF«©·^)ró²»ýÝÏWnàõ·>æƒOÓ¿_!'s(Ç}(ÅEɱ®ŠƒNª?ó/ea56¶ñäËsxyÆ¥iZ\Ó´»"‘ÈíXízǾ¬^yõ5’W_&ËO÷è Ý¥…v•ç.9/] 2oWt1ù©þÖ®ﺛ .Óh&»fít(=`‡Ïç»@ñtvv–øé•hgœzdñùÊ ¼óá<>šµ„'ÿþþDJLåØSjç¿}ßnV8ãù×òìËŸªh,®222þÙÖÖv½1ªý¸zåÕîÉ+ƒž±ͯz"¿ì÷»«qøº9É]MxêÖ™†’¦ésë†4æ{gŒú ¢'ZŽ÷y¼7Þ‘4*((x¦¡¡aôÄCF«_^{±¨<`¸â Skçͼ·G€e˜’×ßYÂ/Ì’ 
ÍZvvö«---×kz§j¿®^yµkyÕå]z¬§}"¯Ä>Vwëru¥¤>L¼ # T¤h)»úÿT´$}Ù“®2Ê;«zÜY-®Î€«¤ºX_~¿ÿ~¿ÿÞX,–ñÝ‹¿!®øöédd¾`ýFí¼·§v °”‚g¯âÑçfÉê-ÛµÜÜœ9ÍÍ-Ws{§ç¿Ú]øUÉ+ÑMyåÐVGщ,艼R»!¯R©ïf'®F¹¯ä•ØË ¢»&sgZI: ¥3°ÚÕÿ§v¡¹ôTƒéIaÉÞ‚·»? óóóïojjº¨_i‘üùÕjÇyðn¬òàÓjç¾µkÀZ´lÿû3¹bÕ-''{mKKëÕÀŒÞéøZ˜Ö å9u£›U: Ö¨nid]È(•F^¤'Oì¤/±~¯;UÅ.,ªÎ&¾'Ö•w¤[i+=/¯–Òk=íÙ19??ï©ÆÆ¦A‡z:ï¬£Ä “Ç£éZ2ôàÓkç¼ùçNk݆<úÂ<9{Þ -++«®­­ízà©^e〱Ž}ú:ʧΔçt@¥èÈR)nÀΡkvb:K×'KÒ9‹q¿ÎùûÙyt\»2©½‹"æ]û„;³´R5輆—âëŸÜ»¿ _ffæOÀ¯šššúõ•çœ1YûæiGR\XЭ wFíœ7îìXÛk[ø×Ë Ô[ï-`°=‹ýFJù¸wôì:Ñ5~Wž ®Ü‚)Ý=v¶ñß&¯Äùbéjñt¶Pv•©ÝUVwïØ×Ä)………¿¨¯¯?FÁqGÂyg%&ŒÑ¡à­w wf`5µFø÷kK˜6cže ¡MÅbšz/sïø dmW`Õ]ÐÚÕû_û‹Ø;zÇë(ËÎξ Ô•­­mÙåeýå‘“FkcG–3ntE‡<¬aãϪ=ãö¢xÌ`ÚÛËyvú<ÙŽˆ`0ø¯p8ü+`Ëÿoï܃´¬Êþû.»ðí.» ,rÁp"åV‰‰dÌ®JA¶ùGÃLÒIsìbŽe¥¢&8æ- m†¦& '£IhÅ”5(¨K”K8 B ¬_<Ïé}¾—w¿ýöƒØ}~3ßìû÷}Ï{¾óž=Ïy.ç¯RÇq§œT³ hx.—ë} h΃h›uÁ´üõW}*ÿøý7æwmß´ëÚyæú÷mòÕÕÕËÓ½úœdÍ¿ÿ7²®Ì8N÷³$Œ¾ØÐମ®ö­ À2™LÈ×ÔT¯>èUåßæÆÒ*M'þóNæ÷Z™ÛvœÖ2=k.p‹7ÎÑ ¸µÈùÀezvàëþ ºC9¹\îàc^Žr° Y-Ý +;™g9}K©cø¼*­›ÑÞLJçàkEÎøNŸÿk`²¿Çé–<\• ŽÆDv¬ƒ!Êù¼«¼™”FØÔ˜†t&0ÓH}+°úë¹Ùz«©À, ¾ÅÉš>ΤMFƾ?å¯Âqº%kÌ€t° X ¼hú »Vä6Äâ³…Èò3Ióy¸™hJL ø–æµh2ù´‹€úP!V#?Ú §‚Ÿ«ç'á1eËi·ßþ\<l”PÎ)šî”@#°Ò¯ú¹Ø— °.@l®K€_˜B ð=àà2M¿D_ÚBà&M{x\Ã½š–Æ·5wœîÊ^ V[´³Në ·)AƒÉë w°_ÓþÌÖã&sýÅÀU T"»N4ù|<¡<3¥æþŸèñJ`žÑüšLߘŸ7Â(œ£BÏÐóÅÊY ìñfRŸîÓãO¿5çv$,€~:òØ¡/àj#.–éñŸ÷›ûÎÐ‘Õ `0ð:Q”Ðz`ˆ¿Çévl5šÊ>`€þßgc¾™Xú~½íèóFk±s=Û´Ÿ±ù\gηè}Ò2-Ó>*…l]ScV Û—]—°Â§Êœ‰ µ¤r«ÓHÕ& é¬1Y;òÉ:’)Æ}ÀOou ¿÷M ^G©€ œªÂêànUÅÇF0Žãt/6U¼¤N‰ ½’Â0òÐØ-5*5óV„ …VµàôÖO…}mÀ]F³™®}Í"à Aû¼ Ä,7[ó­ÐãÍ ºBîÕCoËŸ5ýY/S~[Î4¬JZ/N 4"&@€÷!6Ø¡úù—¦¸]÷ ö^€7ôåZ k¬›?úÎQi%>ŸŠYˆ’ÈOšAwðaýñc-Õ´”‚ßPAs7…nŒEŽSZÎÅú›W¨Æʹ˜ÃƒRœvÈjŇÓËf}Y»5ím€ûµÒ—ªðJX“ud´‚(¨â}‰¿Õ´³€§ýUü* C€òrâÄŸæUQ*ßv| €~:ˆí)œÆá©‰À˪½9%òUà+*ñëÚýdìº`—틘ö2%æ…3Ú3ˆ:ÜÿK }]=‘´²‘h1àmF³=Q¸¬ G‹ t`S“pîlà¯H„ÕÑ2KëÛVù˜¢S§³H§ôÖβ^UîàF]/'}T=îÉôGL±Ûúgµã¿ =‘hÖOW°DÉÓ ƒ£ dÎuå8ŽsìxÙç뜮­@|ãŠh¸Côš3‰ü…)`,b–IZ¿-‹ÉLÕ¿I0uDþË@uLjF|õú©m§üãéxͶ%À;Èö#wvB` U½¶HÞU:ºVD`¥‘ù;“tPç8ŽÓ£¡åC%\; „ ¡±oÄ:ì>À?) 
ŸÝƒLhÜjÒÞ>lî;‰~²÷í šOØDxZ–«Fn–Íg‹9w °ÝœÛ| µ úÉ#òbkhìùmj9°þ ð]¢íÈC9ò*¬“¿K¸fwB}8Žãô(>¢bGkïMA|[ÍÈ*#EfÌï#šãR¯yÝŽL«ÚN¸£šÍ:$ê)ëü?£þåÀ š6ãÖZÄ™<‘(œy42—æ Ä9™§·‰öÎKŒÀ{Ù?ë”vV‰"ۉ̻iD–»i£p­ÊùzßÍš×x$‚ÍjXµÈ(Ϫ¶9xFµ½“¼É:ŽÓSù²v–ã:¸nâãª3izï½1õ¥Mä<“v…¦»ÆšÄ²À«ÚiwV`5'”¡jÖ´V,˜^‚ÀêD§®Ö²ÅÖÅúýºX«TÐá¼ ä ¡ŽÂï¿R¿7×LÖ´yÞdîDÖ«Àéa®[GûÏOÐÎÚîÒ»1g Ž ›%ö7i­ iq©V1·‹~çˆ9ÎFžß×($ا´,k…š[ê'и@®Þ6dUÇJ(ë!àA“–2euXNäuýû^Ä|×½ˆ–‰±ì¥ø„׃ú7“ÖQ[}[5›°ÜLÅŠ‘S¡4Ï ¾¢Rx 1ï݆Lň× u´×œÏv¢¬yd•»dÐjd2¼ã¸Àrz$@fùÏG¦´µsÝ+ˆïª5âoj$š€ÝÕLD6Ê öv:ž3òÑšo–ˆYía$XäH¹ ¸ñAÅëà"}F`&âû{ñ¡í ã9¿Z3…kk:N·#íUàt‚½Èvc}ÁƘsƒˆ¢ù"!äw Žÿ‘zü.ÑâÅGË4m¿)àsÈÊü˜óÏ"¾¶9´¿µùfĤ6B¿‡©м áöès†u²Œï"~¤øóŸB"!¯A¦T"Ñ…ç«Ûoêñ"$Rpx;ÏX¬Úà-:HÚépo®Žã8¶™ß˜ãòˆ+hí·ª¶1ë@VÒ$]ŒÕ´9&í\ BÐÁAÍ{~ÿ1…æÆ!HàCñ¥…òZ3_#\ño`—ž _«eÎÿÐg­*R'6è"ÎmÖ>Y“2o~Ãr¢Õ[PAö  ¾Cˆ_+”É\šð>vz3uº¾â¸s¤Ô!f³ Ú¡nBüZo™kÞƒÌ]ú¨` Î"sª^#2»U!!ñL‡[˜ûÖkg< ø•þ­B@Ö“¼rYe{´ …V$LÞƒ T-§ ¸5æÜ`dÂî-Ï .|JLøÕr¸ß Ä·6Íü†@­>{˜>ûy’w‹=UËÑOë÷e­#{m_Ä:J¯Y«¤ã8Žsœð¥‰§‡â>,Çqç„࿌—š„$e IEND®B`‚glance-16.0.0/doc/source/images/image_status_transition.png0000666000175100017510000075143313245511421024047 0ustar zuulzuul00000000000000‰PNG  IHDR9]åOòsRGB®Îé@IDATxìÝxTUÚðzï$¤´P„Л ]±+t]×¾¶UwW×ÕÝOÝ]ÙU±ìÚÛºv± "UŠô!BI é=ó½çâ sS ”;3ÿó<³¹÷νwÎùÝyÖy9ç¼ÇÉ$, (@ P€ ìCà%gûh[A P€ (@ Pฃ~(@ P€ (@»`cW“¡(@ P€ \I@ P€èlš†F,®@vIŠ*j´Wqe ŽVÖ¢¶¡ u¨­oD£L#uwq»«³üu†·»+‚}<êë‰_Døy!>Ô]|àìÔÙ­âçS€ @g 0Èé,y~.(@(­®Cêá¤*Áù›–[‚ÌÂ2ä•UÜÇU‚— \TÐäíWí¥WggTÕÕ¢NŸZ ŒªêP,AQ±CêÞæ{¸Ë5]ƒ}Ñ-Üý¢CÐ/&}£ƒÑ;*HîÁèÇ¿zl2(à`NÌ®æ`OœÍ¥(prŽVbùž\¬Þ—Õûóž{T D%x1IaþˆñC‚ôÀÄûI`ã~F5lh2¡ ¼YEå8P\®ýÍÈ?¦T»óŽI`Ô¨õü IÇȤHŒì1Ý£à#½A, (`W/1ȱ«çÉÆP€è\h¬Øs Òs0?-G‚šZæ‚Aq¡Ç‹¤ÙClÏy­h}c2òK±>«@ ¶TеG U·ÑÝ#1µO,¦õíŠ^‘çµ^ü0 P€è9ÂÊ›R€p 5OfYF.>߸ßl;  ë%æHà059V "<%˜0ZÉ—^ŸE;iÁ˜ú«†½õéŒ)‰¸.% ="ŒVeÖ‡ Ú'À §}N<‹ š 쓹4ï¬ÚÖìA~Yv ÅŒÁI¸vP"døÙ™é B‰$PólTòI8 æß¨y8 MM–Äj®Ž¯‡›6wG%ð“í3-ê3Õpº/6ebΖLä•V!%> wŒê…†$սϴN¼Ž ÎX€AÎÓñB P€( ‚o¥·æßËÒd®ÍDúà¶‘=pó°îè&skÚ[Ôð±]2OF%PótÔ<š½…ÇçÑ”TTËmäƒN³¸JÖµH©OR˜ºÉü5×G%è'/•„ ½Eµq…Ì#zoMæl΂‹$*¸^‚·Ç'k÷kï}x(@ tšƒœN£çS€°!•Åì}é±™½$Y2©ÿÒ¾q¸stOL‘áh.N§ÎV¦RC¯Ú—'½%ùX±7»” Qzeœ%0ñ €‹Ÿ?Ü|ýàî'/o¸zzÂE^®žp–'9ÏY2«ÉÿÀ$’IRJ›š$¥t}=jjåU£½ê**PW^†ù[_VŠÊ²rMÙÇÓ]ë™ÛMȼ 
áòjOÏÏ1ÉØöÑú½x}ÅNì”`l² Á{tR?LèmCOU¥(àp rÁ NC Z†Šýgy:f-؆J t~5ü<<¡ºKjæ“uʪ¦’|¿# Kµ Å/,á‘ð‘¿^Á!ð”ÇI/TëêQ]R‚ª’bT &?•¥eZ]†&F`zßXmîÐ…±!'­êWRmy~ÑvmþÑàøp<}YŠ\sÒëø&(@ tŠƒœNaç‡R€0¸€NöÆÏ»ð÷ù[Q^SûÆõÁ£ûi‹n¶Uuµhçü´l|&óZ¾K=ˆêÚzø‡†À+&11ð ÐzeÚºþ|¯¯ªFù‘#(ËÉFåáC¨©¬B´¤®¾I†¤]'IÉÜ¢“•ÍÙEøë÷›ñƒ´q„¤¢þÛƒqÑQ'»„ïQ€ Àù`s~½ùi Œ/ð}j6™³9%•¸glo<6eÂý<Û¬øÖœb¼½r>\¿UØDEÂ/1 ññp÷önó:#¼¡zhªŠŠp,3åY™Z/OBX îÓSëµ:Y»U:ê'¿Û„Å’™íŠñxþšaÚ< #´‹u (àà rü ÀæS€°¨udølµ–VY¥PþçÕCeqÎÖ'ì×IOϧöáÅŸÒšSß @ö艠nÝ ØXÜÊFea!Š÷îAéÞ}Ú|¼<"=XÃÃ[9ûø¡…äüÈ,*ÃC÷ÅS— Ômó¾A P€-À §£…y P€FPCÓf-ÜŽg܂޲¾ÍË3F`´LÐo­¨‰øoÈ$|ܨue‚’’Ú«|#[?¿µ{ØÂ±&IY}TzwJv¥£,¿C#ñØä~ZMkiÔ"¨*9ÁSÒ³äí7gŽÆÄ^LN` Ïšu¤ìR€AŽ]>V6Š @;6,Âmÿ]µæÍÿMOÁÃû¶š-­\†¡½,ͬE©¨k4!H›ðä¾Z&´v~”ÍžVž—‡¢ÔT”8ˆ^]‚ð÷ËS´`§µå•Uã~é ›³9·H’0z¹·v*Q€ @Ç 0Èé8[Þ™ €qÔZ0³nÃ_¤çat÷(¼%=j]™æEõò¼º,Oÿ¸UõMíÛ}%r?ó…7›†­ìW=†¼Í›P’™…þ’œàÅk†b|.­Vîöƒ¸çã•puqÆÿn‡±bÌB P€çM€AÎy£æQ€0ˆ@ÎÑJÌ|o)ÖeàWÁï&ôEkC°~Ø‘d®Ivq%Â$°‰èß_Ö­a¯DUq1ò6mÄуٸl@f_; ‰²øhóR\Y‹Û?\¡ešûäþxæòÁp•…EY(@ P Ãät81?€ €–ì>ŒÞþ áþ^øä7£Lp‹Ú”…;ïüh¥g#DæÜD  ßÖ´¸Ø”:„¼µkQ+‹þiêy]wé¹i^Þ^µ}¾ƒâÂðÅ)ö, (С r:”—7§(` çdAÏ?Ïݤ­óöÍcàãÂöïeixìÛpööAô¨1ð“tÐ,m ˜­ =Mzv6IoŽ/Þ¿e FÈ"£ÍKZîQ\õÆ"TÈšCsîžØê9ͯá>(@ œ±ƒœ3¦ã… lD F2…ÝúÁr|µ% ÿºz˜¤9NnQóŸáÝ¥Ø k¿D €È ¹•^‰ò€&P[^C«VBõî<:©ž•¡iÍ{uJ%3ÝÍï/´ɾ6¿qõ(@ P cätŒ+ïJ PÀj^Èå¯-Dºô$|-=ãZ™(ÿ‰¬wsçÇ«àäã‹ØqãáÜr›1ZcüZed wíôŒÀ·GÏÈ@]¥Õ⣞»—DOÊz:O_–¢{Ÿ; (pNäœFÞ„ €2‹Ê1ùåÑÐÔ„^Í~p«ž{>YVï–tÐɈ–¹7Î..l‰mU©¦¬ 9Ë–¢¶¤ïÊ𵛆tkÑ€wWgànɾvýà$¼ÿ«‹˜ …P€8+9gÅÇ‹)@ T`WÞ1L˜=QÞ˜wÿDøé'»«ä—¿¾»òˤ÷f»v5hKl³Zj®Îá ë‘/ëëÜ?>/\3 n͆ÿ-ÞuX›§£z×¾¼s"<\[&-°ÍÖ³Ö :]€AN§?V€ À9Øv¨“^ú=dÈÔ<éÁñ÷Ô¯i³r_¦¿¶žÞˆ›8 žþ-×Ç9ÇUrØÛ•ìÏDÎÏ+04>ßÝ; ÁÞ:‹u2jê+ó‘"™×æÊûÞÍ’AèNæ(@ ´W€AN{¥x(@[Pθ~hóGó§÷ãW’„ÀOznâ.gW}†5[h£­Õ±ºä(²ÌG”·+=0¥Å¢«æ ´wTæ?0^n2hkϘõ¥ 'ðûÆ ÷LX! 
P€g& †¨M”!jªWà‡û¦´è˜µp;n|g)‚{÷AüÅàœói_å„îW\‰B“?7êî1 &K¾i¹%¸òõE¨klÒ½Ï P€8}9§oÆ+(@ N@%PspT6/5ì©ùüŽÇ¿Ù€Ç¾Þ€Ø‘#3lœœ ×»®›·’¦O‚Ã0îÅyX±÷ˆ®½É]‚°ðÁiX—™o-‘d* (@ œ©ƒœ3•ãu  " ÒD«,jÇ“ LÕõਟÊ÷¾³¦"þ¢‹Þ§AjíxÕp‘¡ “&Á+:“^žé‡tƒº†js¨í<„ß~ºJ÷w(@ Pàô䜞Ϧ(`(•Z­ƒ£ÒD«,jÍ“ < ÎkËw"aÂŹ »¡êqrvFÜÅÃ?!—ÉsSÖ¬ËȤ|zÇÅxgÕn L·[¿Åm P€8 9§ÅS)@ MàVI" úTëà4O­†¨ýgÙNÄ¿A F«ºÃÖÇIÆ v{‘èLÿÏB¬’lwÖå²~q˜}íp¨ç÷åæLë·¸M P€í`ÓN(žF PÀhÏ-؆¯¶dáë»'¶XèS½÷Ü‚íˆ;A‰ pŒöìÔœ¨8>è‹)¯.ÀÖœb]µuî×*ˆÝq¸D÷w(@ PàÔ rNmÄ3(@ N`ÉîÃøóÜMø×Õà“´.*MôãßlÔ’ pˆšµŒ±¶UNœô²¹…†kNÎÑJ]_”Þœ”ø0Y0t1ŽU×éÞã(@ œ\€AÎÉ}ø.(@à ¨Ã7¼ý®”ˆ‡.NÖÕO-ô©ÖÁ‰è×—It2ÆÜq–9:ñ& ÊÕ“eQвšzKE]ðÅò^]n~o˜oÍBà P€§`sJ"ž@ PÀ8*³ðÌ÷–"Üß ïÜ2FW±ƒ%˜þÚ"m¡Ïè¡ÃtïqǸ.îîH˜<YÇj0C‚Wë`Fͳúò® ’‰-¯.M3n#X3 P€`c°ÂêP€8™À¬…Ûd-•|ò›‹áãîj9U˲öúb4zzË\q\Ç"cî¾¾ˆ“ôÒ‹$ÛÚÓ?lÑUzDbž¼d þ(ë¥I’  P€8µƒœSñ P€†Øt°ùnþqåô ÖÕéžOVcW~â&N‚³¬ÇÂb{¾ááˆ>ý~3~LËÑ5à‰ibP\(n|ç'Ô64éÞã(@ ´`ÓÒ„G(@ N ¾± ·ýw9FwÂï&ôÕÕïã ûðÁê ÄŽOÝ{ܱ-°Þ½*ëÝ$spŽ”VY*ï"I >ºm<²ŠËñ·õ==–“¸A P€9 nP€0®€Zr_aÞš9’}ØRWà®W!<¹»vµç†í ÄŒ…zID0óýåºù9ñ!¾øÛåƒ%5ø6[³ÝÇËšS€çI€AÎy‚æÇP€8SŒüR<+ÿzÿÓSv¢§F%!¸áÝ¥pòñEôСgz{^g0n;~<–eäâåŸôÉî—Œ]Cqû‡+ ž? (@ ´.À §u¥(`û?[ÞQAxx¢~˜Ú«ËÒ°!«@†©‡³‹‹aêËŠœ½€OX¢J²o6`¿ôà™‹d•–¬zc±9»ï­Þm>Ì¿ (ÐL€AN3îR€0’À÷©ÙX¼ó^™1j^†¹¨tÑ»À;XŸ„À|ÿÚ¶@¤<[w™cuûG+u Iî„{ÇöÆs7êÖÕÑÄ P€.À ÇÁ¿l>(`\•là‘9kq]JFu‹ÔUôÎVÁÙÛ‘ÔçŽý8ÉB¡1£Çb… [û`í]Ãþ*Ce¼Ú3ó˜„@à P€¿0ÈáW €A^_±9%•øçÕúù6ªwgQz6¢G‘ajü¿qƒ>¾sR-Ÿð0„õîƒG¾Z¯ëµ òv׿h½" „ª^= P€Ð ð¿ŽzîQ€0„@u}#þ!Y´î‘aIqÁ¾–:©Þ¿\‹¤$øEé{w,'qî¢RRPQoÒ’OX7ìÎÑ=èƒgÙ›cÍÂm P€šƒ~(@ Pà?ËÓQ^SǦ ÐÕîÕeéÈ.®D³©é\ìyÇÕÃR0{Iš. 
›ôâ=ué@m(›ur{¶`Û(@ ´W€AN{¥x(@ó$PU×€YÒ‹s߸>÷ó´|jym=žþq+Âúö…‡ï‰ÞË Ü°[°^½àáï‡'¿Û¬kãÍú#1ÔŸsst*Ü¡(0Èá·€ €ÁÞ_³•è<:±Ÿ®fjÍ”êú&Dôï¯;Îûp’ÜÑáá³û°+Á*ãÞãSà“ ûpøX•å87(@ 8ºƒGÿ°ý €¡Ô³—¤âWÃ/@¨ï‰^œcÕu˜µ(!}ûA _bq< Ä$øáɹ›t¿qH7í»¢’°P€ Àq9ü&P€0À·Û «¸OÐ÷â¼!™ÖêMˆ¡j,Ž) –I ¿p¾Þš…½'u—¹9÷OÆ›?ï„ÒÈB P€®Æï(@C ¼º, —öC÷pK½ê$£Ú‹2T-Hæe¸¸»YŽsÃñà-ss^”Þ>ër÷˜ÞP™÷>Z·×ú0·)@ 8¬{röѳá €Ñö–aÅž#P©­‹šoQ\QƒðdöâX»8â¶êÍ Nî‡÷dÞV‘|'ÌE­›sí D¼½j·ùÿR€ph9ýøÙx PÀHïÈTµîÉ”äX]µf«^YÇÝÇ[wœ;Ž)ÚãÀÅï®ÎÐÜ1º¶fa³¼X(@ 8ºƒGÿ°ý €!M&| ÿ:ÛÈP³Ìe‹ü`MÍ)B¨ Uc¡€pvuE@·îx}ånHž K™^QAx—½9nP€Ž+À ÇqŸ=[N H`YF.ò˪ Ö=±.jø‘oP |##­sÛÁÂzöÄÁÂR,—ïuQߟ/7g¢A¥éc¡(àÀ røá³é €q>߸»†¢[؉„µ Møßú}졟£cœZ³&%à ÿˆp¼³F?dí:™—£æê¨ ™… €# 0Èqä§Ï¶S€†Pÿêþ¤Žž18IWŸùiÙ¨’”ÀAݺéŽs‡J ©;¾Ýz5 $ ’SâÃðù¦ý–cÜ (àˆ rñ©³Í €¡TF5•=MeDz.ŸmÊD@T$ܽ™pÀÚ…ÛÇ‚P]Wùi9:õ=š+A3G¬éX¸C 8˜ƒ{àl.(`<é9Ú„ñ„?Kåªëñ]êAøÉ*÷,hMÀM‚߀¨(|¶1S÷ö´ä®ÚµM uǹC PÀ‘ä8ÒÓf[)@C ¨‰ŸÒGŸ6zùž\TËPµÀøxCÖ™•2†€Ÿ|?~” Ù:Ñ@r— Äù¶èá1FY P€çG€AÎùqæ§P€hU çh%ÒsK0µÙÚ8*ðñ áPµVÕxÐ,à‹ŠêZ¬ËÌ7Òþªï“ê!d¡(ਠrõɳÝ €!T»« Fw×§ˆþ~G¼bô½;†¨0+a(Ï€øøc~³€fB¯h¨áj•u †ª/+C Pà| 0È9_Òü P€­¬Þ—Aq¡ð”@Ç\²K*p@Ö@ `c&áß“xGÇHsXwÆÈ¤H446a}Vî8w(@ 8ŠƒGyÒl'(`HÕûó ~Z—Uûòàìì ŸðpëÃܦ@«¾‘QHÍ)‚JVa.ÑÞˆ“D«å»ÄB PÀä8âSg›)@C”V×É|œ£äDèê³z>üÂBálÕ»£;;°ðŒ@cS6Ð÷Ú¨ïÕšfsu¬.ã&(@»`c×—£Œ,z¸&“I†«…骹bo><Âõ½;º¸C+w__xËK }´.ê{µ5»Øú·)@ 8Œƒ‡yÔl((`4ÔC%òñ@l¥jõ2b÷‘øHO Ú+àŠ-2dͺô‹ F~Y e¡Y P€Ž&À ÇÑž8ÛK F`‡ôäôÖÕgWÞ1mè‘Wpˆî8w(p2Où¾lÉ)Ñbþn©`š… €£ 0Èq´'ÎöR€†H“õqÌ?DÍ•R³‹ <̇ø—§ð Ö2òY'ˆðóB˜¼Ô÷Œ… €£ 0Èq´'ÎöR€†È,,CR˜¿®>*¯8N’]…íð ÒæweäÓ]Ò-Üê{ÆB PÀÑø_QG{âl/(`š†Fä•U#AÒüZ—¬¢r¸øéë÷¹MÖܵïŒÔ÷ǺÄË÷ë@q…õ!nS€p9ñ˜ÙH PÀh凧ʬªrö–ÃÍWÌhug}Œ'àìâ /_ïAN‚|¿²ŠØ“c¼'ÆQ€-À §£…y P€­d—ÿ×õ¸`}@£þ%ÞÝ_¬•Ëyˆ-<$8>P¬ïÉQß/ów­Å<@ PÀŽäØñÃeÓ(@ã IZ_Wù×÷ owK%›L@IE5ܽ½-ǸAö 8É÷&·´Jwz˜Ÿ'Ô¢³*59 (@G`ãHO›m¥ # ‚œO]}J*Õz&&¸zêëNâÚp‘ïM~¹~Mów¬¤ª¶«x˜ €} 0ȱÏçÊVQ€(–€&Ä×CWËâÊã?DÕU œ®€ Ž ›9¡¾Ç¿K*¨f¡(àH réi³­ €aŽJ@äÝ<È9þCÔÕƒAŽa” UÄÕÃG›õؘ‡C«ª³¡–°ª Î^€AÎÙò N[ ¶¡ žn.ºëªëµ}gWýqÝIÜ¡@ή®¨©?þ2ŸâñËw¬ùqóûüK PÀ^äØë“e»(@C Ô56ÂÝĘcª85;n膰r†prvA¬¿d]<~ ˜Íß-ë÷¸M PÀžäØóÓeÛ(@à ÔÊ¿¸»»êÿ/¸NzwTqvÖ7l#X1C ¨µrê›9îrLówËPfe(@ t ÿKÚ¸¼5(@¶e!P×fÁLCÓ/i~›oëŽù»eäú³n Î¥ƒœs©É{Q€h§€š+Ñ|‘yøšé—¹9í¼O£€& ‚c·fI+j¾fþn‘Š 
€£0Èq”'ÍvR€†Ps%Ì?@Í3OWÿ"ÏBÓPßóóµæDæï–ù8ÿR€°w9öþ„Ù> PÀÞªkÐÕÍ×ÃMÛo¬¯×çÚ# ¾7Þ¿|‡Ì盿c^ÍÒ•›ßç_ P€ö*À Ç^Ÿ,ÛE Z ØÇÅÍV¡ñ=¾8hCM­¡ëÎÊS ¡¦¡¾ú…d‹eÑYUBš7f X+ P€çN€Aι³ä(@ ´[@ý5ÿ5_dþª~¬²PàtTpÞ,˜)®<þ] ‘ š… €# 0Èq¤§Í¶R€†P½6¥Õuh°Jùë'C\e!P9†yL6U‘¦ÚjDøéƒ™"é-Ti¤½Ýmª-¬,(@³`s¶‚¼ž ÀDøyÁ$kå”W뮎 ôA]E…îw(ÐÆÊ ÄùèNÍ+­F˜>Nº£Ü¡(`ÿ rìÿ³… €âCý´Ze•ëj—懺ò2Ý1îP =5¥å0¯Ìç(–c!Ç¿kæcüK PÀä8ÂSf)@à t ð»¬i¢~„Z—nü4°'Çš„Ûíh¨­C]]š4*ˆNø% nÇmx (@»`c7’ ¡lIÀYÆu öEËžÔ—•ÚRSXWÔ–ÿÎ$…ùëjÞw(@`ã@›M¥Œ%Ð-ÜùÇt•êŒÊ²r4Öq­ wN*P]R7Wéµ9䨜{ JÑ=<à¤×òM P€ö(À ÇŸ*ÛD Ø„@¿è¤*ÑÕµŸ9ª¨­,h¯@uI1zw †ê!4—}…¥¨–gUàÌB PÀÑä8Úg{)@Ãô‹ Æî¼c¨ol²ÔI aóñtG•ühe¡@{ꎖ`P¬>˜Q´³D=}ºµ÷6< €Ý0ȱ›GɆP€¶& þ…½®¡Q†¬éçà¤Ä‡¡ª ÀÖšÃúv’€JE^YXõ½±.©‡Šµ¡j^n.Ö‡¹M PÀ!ä8Äcf#)@# ôŽ ‚·»+Ögéš±Ý"Q“ŸgÄ*³N¨..F½Ìᙩ«ú^ êª;Æ P€Ž"À ÇQž4ÛI NÀU† IÇêýú€fdR*KËP_¥_(Ôp `… !P‘—_/ݰ4•t`9#%`f¡(àˆ rñ©³Í €aÔ¿¾¯Þ—¯«Ï°Ä™KáŒò#GtǹCÖ*ór1Jc«œØq¸eÕu-zwZ»žÇ(@ Ø£ƒ{|ªl(`3#»E`¤‘Î/?Ñkãï醡è”ådÛL;XÑÎ0I—M𨖫«ÀÊ}GàïåÎÌj:îP€Ž$À Ç‘ž6ÛJ N`L÷(¸»º`ÑÎCººMï‹ÊÇ £ŽX(Ц@E~>êêê0µY³0ýÆ÷è¢K)ÝæMø(@;`c‡•M¢lGÀGŒî‰ùi9ºJO鋚Ê*TéŽs‡Ö¥ÒÛ€na'­mh²Œ\LiøX_Çm P€ö.À ÇÞŸ0ÛG ^`ª4ª'GM7— cCì‡c™™æCüKY™¸v`¼î¸ªVY[õ½b¡(ਠrõɳÝ €a¦õíŠâŠšYÖnœˆrùËBÖ*¥—Oeᛑ’¤{û»í‘,k0©…eY(@ 8ªƒG}òl7(`^‘’þ7_lÒ4êÇ«ú«zd¡@s£û÷#6Ä)q'ÖÂQ½s6g⚉ÍOç>(@‡`ãP›¥Œ*p]J"ælÉÔ YS 9&„¢xï£V›õê$•U­tï^Ü2´›®j¨Ú‘Ò*éÝa£ƒá(àp rÁ €T¯Mžü8]±'WW½{Æô”³ûÐÔШ;ÎÇ(ÍÎF,û›Q=tŸoܯ¥î)½ƒ, Y€AŽ#?}¶0Œ@ˆ¤Ä‡á½5º:ýjøh¬¯ÇQ& й8úNIÆnŒï„? 
Eu}#>• gæ°î–cÜ (ਠrõɳÝ €áîÕKæSdᘬTo.á~ž¸b@;åëü´]Ý›Ü%¢úè1Ýqî8–@Az<]qû(}0óÂâTô½c „­¥(Іƒœ6`x˜ @gô•E'ËJõÏ/Ú®ûx5/§—üˆÍÛ¼Iwœ;Ž#ÐPW‡béÍûýľðõpµ4\¥ŒþdÃ><<¡NôíXÞæ(@‡`㦌,ðè¤~X–‘‹ÍÙE–jª¯¿<%™Y¨*.±ç†ãäK€£zqº¸¯®Ñ//MC·feV5 w(@‡`ãП§Œ(0¡g´–Nú¯ßoÖUOõæô—Bó6mÐçŽý ÔWW£X<.Ãý=Ý, .¬¨Á–§ãaéÝñˆ… Ž ðÿùM (`@g.ŒRb}V®v/^3Gf£ôÐ!ÝqîØ·@î†õñÀ2g˺ÌZ° >în¸ï¢>Ö‡¹M PÀáä8üW€ €¦ô‰Áˆ¤H<ù~Îø]pÙ€ä­] S“u6#¶‚u:UEE(Þ“® owý\œ×$ßcS莟‹Ïä=(@ غƒ[‚¬?(`·»b0ï<„…ò².³¯†Úò2¨L[,ö- ÂØÜ5k08!7Hzqë¢àP_OÜ=V¿(¨õ9ܦ(ਠrõɳÝ €á.º jÎï¾X‹«^›ÄP?üiê™›³I‚ ÷ƒ¹mœn8šJ­†/Þ3¶7Ç…Ùv#Y{ P€(À §qyk P€çB@eÔzsæhÌÙœ‰¹ÛênùìåƒÑ3"9Ë–2ÛšNÆvwªKŽ"wýzüeú@’®kÈC2LM}þqåÝqîP€ €^€AŽÞƒ{  )0±W4n~îùx%Š+k-uTÃÖ¾¸}V‰ïï› Y‡… NO€AÎéyñl P€†øõˆ ðä%q·¬Ÿ³x×a]ÝT’‚¹÷LBiæ~\.ÃÚéè|:kGõàìÿþ;Äx»`éï¦Aeͳ.|¶FK4ðݽ“‘âgý·)@ P  rÚ ÅÓ(@ UàéËRpýà$-Áº¬]5§ô‰Á¼û¦ â@È"“Mœ££ó9ß;jŽ pýܱêÑéóõÔUá/ßoÆ?ïÔ2© M×½Ç P€h¿ƒœö[ñL P€†xÿWa\.˜úÊ|l;T¬«çÄ^ÑXòÐ4ÔæFÖɺVw"#›îDît¨€Ê¢¦œ>a>Xùè¥-œ-JÅ3ó¶à-‚x倸­ oN PÀÞäØûfû(@‡P×¾¼s"R$µô¤—~Dúý:£ºEÊëép/?Š}òC»®¢Â!\ŒÒHµÎÞ¾ÇØÄ0,ûÝ%-†¨ý{y:þðÕ:¼tÝpüfd£T›õ (`³ rlöѱâ ô®Î˜+ëäôŠ ÄØç¿Çæì"Ý Ɔ`ÓãW ÖØûí·¨(ÐmÓÌs&·};²/Æ#/¡ƒ“uIÔ‡¨œû?]YWÅã“ÏÙçòF YÀI&¢2·¨#Øv PÀîªëqåë‹°.3ó‘Iº6–Éú:3Þþ ‹$QAÌðáëÝ[÷>wÎ@c}ƒd¶[ŽcYðüµC[,ô©>EÍÁQCÔTœsãλP€—äð{@ PÀê›0ã­%X´ó>½ãb\Ö/N×Jõ¯[Oÿ°•Ù¡tGÌÈQpqcªbÒYìT—E¶$zpo¨ÁW²ÎEDéî¦ÒD«,j*É€šƒÃ!j:îP€8[9g+Èë)@ U ¡É„ß~º ï¬ÚÙ×¶ÞSð£,&zÓ{ËPïêØñãáfÔæØL½ ÒÒ‘»~=Å…â«».FL ®îj¡OµÎÒŒ\-‹“ èx¸C Pà\0È9м(@# ÌZ¸³÷ëƒ%ØQI ¬Ë‘Ò*Ì|9–Éî¨9`œœ9eÓÚ¨=Ûu••ÈYù3Êrã/Óâ‰iÂÅIos´—¿¶P[èS­ƒÃ4Ñí‘å9 N[€AÎi“ñ P€6(ðÅæLüúƒåH‰Ã2|*ÂÏK× 5|íåŸÒðG †Üýý3z,|ÂÙ«£CjcGÙî܉¼ ¤×Æ ŸÜ6ÃZYãæ§Ý¹ZN„¿7¾—\è³ P¦(pö rÎÞw (`;—È‚¡‹QU×€/‰ú„ªû ËpûG+±BzuÂz÷ATJ \=Üm£P˪¢"ä®YƒrÉT÷ÇÉýñÔ¥áéꢫ‰ ‚þµh;þôÍF\3(ïÞ"¤;ç?é¸C PàÜ 0È9·ž¼(@c «®ÃÍ2gAzž¼¤õ!Uª¬ÝƒG¾ZŠz"¥HÀÓ N͆^»¥[»úêjän܈⌠NˆÀ[7Bÿ˜àª†Þ*=hjþͬ«†âá }[œÃ (pÎäœsRÞ €ÁTÏ«KehÚ×´ÉñÝ6ñ!¾-j­RM?ûãÌ^’?„„ ¤$èg™´¸Ì®4ÔÕ!?5Åi;êãdm›'µÚæ¹ÛâöW ÐÛCK00D† ²P€ Ày`s^˜ù!  
(–{7¾ó²ŠËñ·ËKb‚d4ËI ÕZ a{ò»Íølã>ø!üÂALHž6ªƒªÔP[‡‚ô4K€ã)‹®>>¹Ÿ¶®w+ÃÎ +jðÐçkðɆ}¸mdO¼ªg›=2[sŠñ ôÞ¬ÞŸ‡{ÄïWŸGëîøC P€!À §#TyO P€¶( ¿íñÞêÝxbîF4ÊÎÿI sçèžpsi}Íœ"–õîê ¼¾r7–Â?"IÝ”˜7oo›#¨”LiG÷ïGéÞ½¨©ªÆ¸žÑ¸gLO\50¡Åz7æÆ©ÄO~·I¾ I—f Ç`éc¡(@N`Ó©üüp P€P ž™·¯Hr‚è@--òÍú·ùC_%2X.ÙÃÞ‘úßn=ˆêºzDEÁ/>þ±±ð 0`+ÕcS‘Ÿ¯õÚTde¢²´ ±!þ¸eh7üfT“®c£æÝÌ’!~¯­Ø‰P_O<'=77éfÈv²R P€AŽ>t6™ @»–TàY vT:éÄP<>u€öCÞ½žuÓš†FÌOË‘$™˜—–Êš:øøÃ;:¾‘QòŠ€»oËLníªÐYžd2™P]\ŒŠ¼|Tæå¢âÐaÔI¶´¸Ð\;03R’zÒOQ=7/Kð÷Ÿåé²Ö›2wíÕbmœ“Þ„oR€ @G 0ÈéhaÞŸ €­ ¨ìjªgGe S½÷OÆÝcz·9gÇÜÞé)Y—™ù²&ÏüôÃHÍ)’apMð– Ç#4žÁ!Ú</ÉØæîçç“Oæ{¶÷¯Ê†V[V*AM ªKJPw´•……¨—^&_/ŒJŠÀ´äXL•W·0ÿSÞVÍYzaqªf$)¡–÷]ÔGK0pÊ‹y(@ œo9ç[œŸG PÀV«Ò†°½ùóNÔ76áÚA‰¸ct/Œ”€¡=¥ª®bõ¾|l‘€gKN È\ÕÃY}ÇË×¾~p’ù<.žžpU/-{›“³Ëñ ÈÉY†™5ÁÔØ(ÑX_†šyÕ¢±¶M¨)+×zhT<Ü\Ñ»K0Å#EÖ©™‰>’=®=Ù¯«ë1gK&Þ–9G+÷‘ûÉbžý0shwxH¦5 P€0¬ƒÃ>VŒ €AÊkëñѺ½x{Õnl•ll½¢‚ æì¨á^‰¡~§UkHdäCVQ¹ö: köäʰ¼²¨Ä%•µ¨•!puòª——Jˆà&†›dus—¿Þ’Á,ÔÇá~žˆWŒdBK:ćø!IzhÔ0»ÖÖþi«’r{¬ÜwŸoÜOåU)m½¬¼s=1©wL»‚£¶îÍã (pÞäœ7j~(@;P)§ß•`çËÍ™ZP¢zKTÏ´ä®m®·c4µV l¾Û~s¤jÞMßè`Ì”ÀíÖá=´Êhuf}(@ PठrNÊÃ7)@ P ]jþÍ2ɰöù¦ý˜»í€ðÄùjs^&ôŠÖ†‰E#­´ê­Ùq¸«öåaáÎ,Ý«õØô‘am*@›‘’ˆž‘íj7O¢(@C 0È1äca¥(@ ذ€ "6ÉÜ•em$PÛ 2‡'N†©ù;ƒd™~1ÁZoI„ŸW‡¶TÕeŸÌûQAÍvY°sÃB¬•deÕuð÷rÇø]0E% è‹®Á“õ­Cxs P€Ž)À Ç1Ÿ;[M Pàü TJÂõY’p «÷çc›ùeUZÂ$Èéî¯Í¡Qsiâ‚ý&skBdžÊääí.É\d¢¿ÌÁ‘ìk®òª“áeu’x Væó¨dÅ2o§¸òøž¼Òj¨y=ê¥æùì- kr޳LÌé€A]C1²[¤Ö³¤†¤Î|ó'ÆO¢(@³`s–€¼œ Î@@-¦™z¨i¹%ȔՊ+$()C¶¬ÍS*½,­–êràh.Ð¥G«o»8;k’J: ^*hR fTF5/ –X(@ PÀ!ä8Äcf#)@ Ø€JOmî9VU‡é±Q=7Ë~üÿúÃoµ*³šêÝQKˆôø¨žŸ@éõiOjh¢`U)@ PàÌ^r=³ëx(@ P cÜdHZ¤¿—ö²þ„ŠaÚîô~]­s› (ÐB€«™µ á P€ (@ PÀ–äØòÓcÝ)@ P€ (@ rZð(@ P€ (`Ë rlùé±î (@ P€ @ 9-Hx€ (@ P€°e9¶üôXw P€ (@ P …ƒœ$<@ P€ (@ زƒ[~z¬;(@ P€ (ÐB€AN  (@ P€ lY€AŽ-?=Ö (@ P€h!À § P€ (@ P€¶,À Ç–ŸëN P€ (@ ´`Ó‚„(@ P€ (@[`cËOu§(@ P€ Z0ÈiA (@ P€ €- 0ȱå§ÇºS€ (@ P€-ä´ á P€ (@ PÀ–äØòÓcÝ)@ P€ (@ rZð(@ P€ (`Ë rlùé±î (@ P€ @ 9-Hx€ (@ P€°e9¶üôXw P€ (@ P …ƒœ$<@ P€ (@ زƒ[~z¬;(@ P€ (ÐB€AN  (@ t¼À‹/¾ˆ×^{­ã?ˆŸ@ PÀä8àCg“)@ P óÞ{ï=|øá‡_Ö€ € 0ȱÇÊ&Q€ @Ç œ«ÀdýúõX¶lYÇU”w¦(àÀ røá³é (pz*(ùÓŸþtzµq¶¼¼¼Úx—‡)@ Pàl\Ïæb^K P€è ŠŠ |ûí·ÈÈÈ@ß¾}1yòdXª²fÍÔÕÕ¡W¯^øïÿ‹‹.ºC† ÑÞ_²d 
T/JPPf̘ËujcÏž=X·nRSS1räH\yå•Úû*À¹üòËáää„7ß|]ºtÁôéÓ-מ꾖Ù(((À?ü€Ûn»ÍòÖ®]»——‡±cÇbþüùZû®½öZÄÆÆ¢©© «W¯ÆÚµk1fÌ 6ÌrÚh«ÞÖ')—¥K—Âd2i)))-Úºí°¾?·)@ F@þŽ… (`x/¾øÂ$ÿñ4I `š6mšiûöí¦úúzÓ 7Ü`’@Å´ÿ~Ó´÷Ôy<ð€I‚“···ISmm­éöÛo7}ú駦mÛ¶™®¹æShh¨)==ÝÒöÙ³g›$ 2I@aÊÊÊ2ÅÇÇ›$9€öþÖ­[Mô˜ÂÂÂLð˜Ô¾*í¹¯vâ/ÿÓÐÐ`zÿý÷M~~~¦ˆˆíhYY™é‘GÑÚwÕUW™î½÷^Óã?n=z´ÉÅÅÅ4oÞ<­êœ˜˜“«««I1ËmOVoóI¯¼òŠéÒK/5ÕÔÔ˜–/_nrww7I g’Ñ´yóæÓn‡ù¾üK PÀ€³Õ¿æ°P€  /`r `zë­·,õU?ÐÕöï¿ÿ^;¶wï^-X8p IÒcb*,,4=ÿüó¦¿üå/–ërrr´óÔ|séÖ­›é·¿ý­y×tÅWhA“ù€Ú—^ó®ö·=÷Õ]ðËŽ fÌAŽù}é2 <ØTUU¥RÁ›››ièС–c•••Z{Ÿ}öYóe¦SÕ»´´Ôäéé©W拤ʨtêØ™¶Ã|?þ¥(` Ù®f˜>5V„ Ú# ½0¸ä’K,§J0ƒòòrH £SÃÈTQçH/¤çEÛW)›Õð, b´}õ?=zô@II‰e_z8 æÊ¨²sçNH  4,ï« 5\ͺ´ç¾Öç›·=<<Ì›–¿þþþHJJ²ÌÕ‘ÞmX\÷îÝ-ǤgJ¾&=M–ëNUïÇCzppèÐ!Ë5#FŒ€†PCÿÔçœi;,7ä(@ 0È1ÐÃ`U(@ PàÔ*1.æ³ÍŽÚwv>žSG8ærìØ1äææB†«éæÑ˜ß7ÿŽŽÆ¢E‹´¹2j^Œ 8¤§Èü¶ö×:Èiï}u78ÍÖ‚!éÝôèXîtªz÷ìÙQQQZÛþüç?k×åççkózT€s>Úa©,7(@ œfW;Èü P€8wêǽJp:Åøìر㤗=ùä“a`˜5k®¾új­'¨ùÖAN{ïÛü§³oýyÖ×Y?U½Õ¹*ÉêÉùýïÏ>û ûöíÃǬÝò|´Ãºîܦ(ÐÑ r:Z˜÷§(@s.ðÉ'ŸèîY\\Œo¾ùFwÌzG KHHÀ믿Žêêjë·ðÑG!;;jø— pfΜi¦2šY,466Zµç¾–“;h£=õV­†¹Ý}÷ÝZo–Ê6§†ª%&&jµ2B;:ˆ‡·¥T€AŽƒ>x6› €­ \xá…ZZhõƒý§Ÿ~‚dÓÒ0KÆ5­Iæa\EEEº&ª Õ“1~üx¨9,’ ’ˆ2)]»vÕæ¦¨ T/‡š‡³råJüüóÏ8zô¨öžš÷£†|©Ï™™™lnÚ±SÝWW «Éʦ}¶$GÐŽÊ„]í~ê¸uQsf¬ç ©÷TÕUÔûªœ¬Þ*ö¤I“´ùFªªMÊB}¦¹œi;Ì×ó/(@C ( «B P€hSÀœ]M~œ›&Nœh’^í¥R>«cªÈº9¦[n¹EËšnz饗Lò_{O¥…Vi™Uúeù±ö÷±Ç3IÏŒö¾úY³F;®²•½ñƦ9sæh™Ì$02Io‘–:Z]¯²’©”̪´ç¾Ú‰¿†ošw@IDATüÊœ¦®Ui¯U=þð‡?h鯟yæm_¥¨–€Å$Áˆé©§žÒŽ©tÓ¯¾úª–aí¹çžÓŽ©:È@Ú]OUo•j[ÖÖÑ®SŸi~©lnï¾û®vÓm‡u›¸M PÀ`³T…äÿìX(@ P€†øòË/qÝu×YzÔdy5œ,88ø´ê­†«©ž5|M áj^TO‡šŒo.ªgÅzò¿êùQsX¬ÏQçžê¾æûuÔß“Õ[µA%P™åÔÐ>ÕS¥ê«z¥ž~úiHÚm¨dªtv;:ʇ÷¥Jà%fWs¨çÍÆR€°éÉ8£Æxyy¡OŸ>m^Û>^{Y¬ '½S–CÝKE¸A Pà,Nü¿ÚYÜ„—R€ (`\õë×ãÈ‘#Z £ÒI« F¥Æ^³f¶Vu¦6ã¶‚5£(Ð~&h¿Ϥ(@ ؤÀ¼yó ½þúëµá}½zõ‚ÊP7}út\uÕU6Ù&Vš ÀÉØ“s2¾G P€°ääd¼÷Þ{ZKT¦5ëÅSí yl(@ìÉiA (@û`€c¿Ï–-£N0È9aÁ- P€ (@ PÀäØÁCd(@ P€ (@ rNXp‹ (@ P€°9vðÙ P€ (@ Pà„ƒœÜ¢(@ P€ ì@€AŽžôvlj™;Ty~,(@ B€AŽ!+A PÀ± %€I=T‚´Üd–á@q²ŠÊ]RÒêºÖ1œdM7Oü#½ÿØýc‹s\$!A˜Ÿ'âCüê§ý퀾ÑÁèÓ%H ˆZ\Ä (`— rìò±²Q Œ#PY×€õYX½/k2ó±5»ùeUZÃü¼Ð-Ü_ H¦÷C\°Ÿ¨¨ž™Pé¡ ’Þé­Q½6NMxòÏ^øûs÷h=;uÒÃS%÷.®¬•WÖó“WZ-S¹L嘷#û Jµsœ žA]C1²[$F&EjÁf¡(@;p’µLvØ.6‰ :I Iþ«²é`!æ§å`AzŽ¶Ý 
CÐ⤇edRÅ…¡_L°dDHs:¥¦¦žžží¾DÕe_a©Ök”z¨X ¶ÖIÀU&½Eþ^îߣ ¦$ÇbjŸXt öm÷}y"(@ Zà%9†~>¬(@ÛhhbYF.>ß´s·ÐzUb‚|1Uˆ ½¢µž“è@oC4F>;—`å¾#X˜~H«wem½ i Ƶƒ1#%=# QWV‚ ÎH€Aαñ" P€Ð6gáÝU»ñåæL-°Q½4×I0-¹+’eŒ-”Ú†&-àùnûAÌ‘v)­Òz™fëŽ[‡÷@¸Ìóa¡(@›`cS‹•¥(`réõøhÝ^¼-ÁÍV rzEáf ®“^¤0ÿÓªau}#2òishÔ<5Ÿ&W‚Œ‚òíuT²¬ÕÈ9jþM½¼¥ÆÍÕY^.pwq†·‡›6w'\æïDH0äƒxI: CãT]Býq:ónT/êáù|ã~|*/ÕÃsYÿxÜ1º'&õŽaö¶Ózº<™ @§ 0Èé4z~0(@8|¬ ¯,MÛ?ï„Jó¬†vÝ1º—6Ϧ=MQ͆*A>¶äÉ«d¾Ìñ©¡Nðòõ†‡¯œ¼½á"ón\ÕËÃή®prv³5²“$ 0Éç«¿õõhy: 5µhª­FcejJËQWwÙ°Oë5¹|2îÓ[Ë|v²F¨y:ë$›Ú|I>0?ý0R%°ilj‚·¯/Û˜‰%¸©¨®…O€?¼£cà%¯¸KÓEõU£"/•y¹¨8tXëñ‰ Àµã%á@RâBOZ55_çeéÍúÏòtø¸»á±)p÷Ø^ð”ás, (`9†y¬(@ƒ”ÕÔk=7jhZt žºt 6çÆÅ©õA^2Ë%³Ú;k2ðíÖƒ¨–Þ’€¨(øÅÇÃ?6ži™¾&éiªÈÏGiN6*²2QYZ†ØÜ2´~3ª‡6¯Gʼn=µ˜é¬ÛðÚŠZïÖ¬«†â†ÁI'Nà(@ t¦ƒœÎÔçgS€0’€štÿÞêÝxbîFm‚ÿÿMOÁ2áÞ­adEòCÿÝÕx]欔¹5þáHꎠĸɼ[+•EE8º?J÷îEMU5ÆKêë{Æô•Æ£­Oõì<ùÝ&¼/Þ°„¼4c8K†9 P€èT9Êϧ(`5çäöW@¥„¾wloüUœ o÷Vk··  ³—¤âÝ5{t뎰ž=µù5­^`cUOiv6J2vãXvºHƶG'ôÅí£zÂ×£õ98[Äí¡/Öbõþ<Ü#~ÿ¸rü$ó (@ tŠƒœNaç‡R€0ˆ€Z#æo?nÁs2ôj`×P¼sËØ6×·Ù•w OÎÝ„¯·fÁÛßÁÉ}Ú£‡–ýÌ Í9çÕ¨-+GAÚ x2dÞ3~/ º¸/ü=[`þ·~/~'ÁŽJHðÚ£piß®ç¼N¼!(@ œR€AÎ)‰x(@;P½77¾ó²dmš¿]>÷KnuM•]íÉï6K2}ð‘LháB`BÚ˜¢c—Z*[[AzŠSSµ`çñÉýð€d™k-»šš¯óÐçk´Ll·ì‰—gŒh³È.±Ø( P€/À §óŸk@ Pàü ¨D¯JR?~½ƒ$›ØG·G|HËŒg*Á³ÒË3{I<¤ç&|à %%µk™óÛ¢ó÷i ²þN¾:ÅÒ»êã®n;áÀÜíµ!€’rúãߌÇY§‡… ΋ƒœóÂÌ¡(`cÕu¸ù½eX 韼d ž˜va«“êUÚèG¾ZŠz"¥ ¬w/é¹i=»šAšv^«Q_]ÜQ,ÃØK·n…þ1Á-ê ÜúÁr,•ìs*ÛÃ2·‡… :\€AN‡ó(@ D`Çá\ùú"T×7âË»&`DbD‹š©¡i·´+äGyXï>ˆJI«Gë Z\쀪$#[îš5(/(À'÷×Òm7_3Gõœýsáv<ñíF\3(ïʼ'."ê€ß6™8 rÎ#6?Š @§ |¹9SëQH‘!S_Ü9~^ºº¨â/ÿ$CؾÙwÄŒ–âá^¥CjcGÙî܉¼ è…On'é¤Ã[œýÓî\\ÿöDø{ãûû&Ÿtžó(@ œŽƒœÓÑâ¹ lQ`–ô"<.ÁË}ãúàÅk‡ÃÕY?ìL ©šùþr,“Þ›¨9`œœm±©ZçºÊJä¬üe9‡ñ—é­Ì9Z‰Ë_[ˆÃÇ*ñݽ“1´•`¨SÁ§(` rìã9² Z 4Èz/¿ýtÞYµ³%¸QÙÀš—Órp“ÌÑ©wõ@ìøñð cïMs£ÓÝ/HKGîúõZR‡¯îºXzw|t·¨¨mÐztÔ<•àÊñº÷¹C P€g-À ç¬ y P€¨klÂŒ·–`ÑÎCøôŽ‹qY¿8]-Õ«§Ø‚¿~¿¡tGÌÈQpqk}¡KÝ…Üi—@uÉQdÿ´î 5øJ†^tA”îºF“ |¶oü¼o̓ߌì¡{Ÿ; (pV/¹üUÊYÝ‚S€ €¡Tb+^[„Uûò°àÁi˜Ø+ZW?•úÊ×ã=É ;rº  gOÓ!厛—‚.èÒÂb¼5#ü½Ü0Ü*у³dª»D •Î6<:g‚$Í4‡®%:/§(pB`ÿÙî·(@ ؼ@U]¦¾:é²ÐçO_ŠA]CumRsB&¿2YÇjÐ}útø†·œ ¯»€;g, zÆ&L@ÞöíxäËuØ“_Šß0R—²ûÿ¦’…BÝð ,ZÓЈ?LêÆŸÇ )@ Pà„ƒœÜ¢(`Óµ M2©}v9†NGŸ¨ ]{¶æcÊ« P%óoº_qÜ}[.ª»€;çD 
²xàí¥K‘U\92|Í×ãÄ~?©¼Ü]pÿ§«¡ÒO·6wêœT„7¡(à@Ÿà@›M¥ìW@%¸ö­ÅØt°‹šÖ"ÀQC×F?ÿ=êü‚Ðmúe pÎóW!(>Ý/Ž™…7{Ô¢¬Ö律úàŸWÃC_¬Å»«3¬ßâ6(@ œƒœ3@ã% Œ&ðëÿO=ÿ©¢«Þâ]‡1á¥áÑ% S¦ÀÅ‹{ê€ÎÓŽZw(IÌôÂJ 8@aEî“UΓ— ÄýŒo¶нÇ P€8=9§çų)@ Nà©ï6á³ûñõÝ“Z,B¹ ý.ù÷øÆ' þâ pæú7úüÔ¡æâ"ë³;& Zþï…¨”Œy, (pz rNÏ‹gS€0„@^Y5®yc1¦$ÇâÙËëê´¿°LK퉸ñãÙƒ£Ó1ÎŽêÑI¼ä¤å—áY¸U-j.*ûÚÜ{'#¿¬ ¿ùp…ù0ÿR€ @;ä´ЧQ€0Š€Ê¤vü(öõtÃÿ~-ÃЬ*VRU‹I¯,@£·âd'ÎÁ±Ò1Þ¦š£?i –ìÎÅ}’BÚºÄùh=:s6gáÅ%;¬ßâ6(@ œB€AÎ)€ø6(@£ <9w#6Kªh•h ÀëD¦´úÆ&™Ç±yÕH˜,YÔ\O¬Åb´6°>'TÖ5ÕãöæÏ»0û'}0sqÏ.øÛƒñǯ×C Ad¡(@ö 0ÈiŸÏ¢(`•LàŸ‹¶ã¥#ÜE¿Øç#sÖaý"ÄK€ãæíeˆú²íPëèt2~¹Ë÷Ñ]ô‡ÉýµD*ãšõÜÝIÜ¡(@ƒw(@ W@- yó{ËpY¿8Ü1ª§®¢o؇W—¦!vÌXxëƒ݉Ü1¬@dÿþLˆÇÕ2ñбJK=ÕpÄn½Çd(⃟¯±ç(@ ´-À §m¾C PÀPênƒ I{ç–±ºzíÎ;&“ÓFD¿~NJÔ½ÇÛè:ö"Ô¹zâê7Ò%"ˆ ðÖžû{«wã‡Ù¶Õ(Ö– @'0Èét~$(@ÓX¼ë0>\»¯Ëz8!>–Ëë$è¹î¥ðFô¡–ãܰM7Wt•E[7,³ó¶êqyÿ8Ü8¤îýdÊkëuïq‡ ô rôÜ£(`8*Y'å®VâšA‰P?t­ËŸ% ÁîüRÄŽ“TÑÎÖyÖ¬Ïâ¶- ¨á†]†ÅÓ?lÁº¬]ÕÕ\,õ}xü› ºãÜ¡(@½ƒ½÷(@ Nàéy[pTæc¼zýH]ÝÔb Ï/JE—á#àéï¯{;¶-–Üþ±Ñ¸Qæ`Õ44Zæë‰Ù× Çë+vb£dØc¡(@Öä´î£  ! ö|IÒ ?}Y "ýOdLSÃÔ~-ópbbÚ£‡!êÊJœ;Õ';zŒ$ ¨Özt¬ï|óÐ‡>_k}˜Û (`%À Ç ƒ› Œ&𨤅N óÇ=c{ëªö÷ù[‘YT˜Q£uǹc?î>>ˆ”´Ò³nÇöC%º†½,ÃÖÖeåãÓûuǹC P€Çäð›@ PÀ j½”o·À‹×‡«Õ|Õ»ó÷ùÛ™’?_ƒÖžÕ:a½{Ã/<w~¼Jw» cCðë=´EB­‡³éNâ(@`ãÀŸM§Œ-ðÄ·1©w &Ë˺üîËuÜø#¼O²õanÛ¡€¶ÖeÄll¥×æÂXTQƒ7Vì²Ã–³I ÎN€AÎÙùñj P€"° ýÖìÏÓæâXÀÒŒ\|¿ý"‡g65k;Þö EˆÌ»zä«õZf5sSÕÚ9÷Ê0ÆçlÓ7¿Ï¿ Y€AŽ#?}¶0¬À“ßmÄ¥ýâ04!\WGÕ‹Ôµ«–p@÷wìZ ËàÁ(ª¬Ã+KÓtíü㔨¬«Ç¿—§ëŽs‡ €£ 0ÈqôoÛO N@-ü¹é@!þ:}®nj~NjN1"ÑçŽý ¸yy!$9ÿX˜Š²š ª”Ò¿½¨^\¼µ MöÁR€h§ƒœvBñ4 P€çKà…Å©ß3ƒº†Z>Ò$[š» Á‰ ð ¶ç†ãDôë'kæ4i)Å­[ýàødm¥Öïµ>Ìm P€-À Ç¡?O M`Çá,LÏÁ#ûéªöÍÖØ•{‘ƒRtǹã8®îî‘@ç_ÒkSQÛ`i¸š›sãnxqI*T0ÌB P€ƒ~ (@ Hàe™sÑ»K¦&Çêj5kQ*‚ããà¨;ÎÇPõToÎ;«v뮂â/ÚyHwœ; U€AŽ£>y¶›0œ@ym=>“ÅÕŸ*u°¹¬Í,À†Ì<„Ê¿â³8¶€«‡;‚%ÓÚ¿–ì@£éD¿M²Æ£»Gáí•úàDZµØz PÀ‘ä8òÓgÛ)@C |²a›L˜9´»®^jŽŽD8ü"#uǹã˜áÉ}qäh%¾Þ’¥¸ctO|'éÅ ÊktǹC PÀä8âSg›)@C ¨!H×JD —»¥~ê«ÊªÜ«å7[ÀÃß]cñúÏú^›k&ÂÇà ¬Ípl ¶ž €0Èá×€ €2òKµ´Ñ¿q®6ÿ]».nnJLÔçÎÿ³wðQ_øA! 
¤‘ÐBï½WéAz¯" ¨¨ ˆå¯¢ ŠJµ¢t¤£4é]º€ô !´@HB éܾ=î’z—Û½{ï÷»ÜÞîììÌ7›ÝyóÞûžm#àY¡í¹t›Fêpv°£~uKÓG˜eM o0Œ€Í"ÀJŽÍ=wœ`Ô„ÀÊãש `ÉjV®°Q³°ZŸ¿lÊmog´ŸØ6ùýŠQž¼Î4զPrÀÐwé^¸mĽg›G€•›¿F€P«ŽûÜr0œ¸BÂÉ«ly54‘Û "r‰%Ù²´øè5£V5)Sˆ@)½RÜO,Œ#ÀØ2¬äØòèsßF@\«îçï„Rï:Æ.i°î¸äÏG.ž'UEƒ¹ª@À£ti zAÇCôí’ÜSÄu­>ÁJŽÞ`›D€•›vî4#À¨ Ígo’—kj\Ú˜=mÙ1r+i¬ø¨©ÝÜË"àâí-•`(ÆòJõâRi¾e¸›·F€°)Xɱ©áæÎ2Œ€Ør>ˆÚV*jäªv2è!Ý$w&P㩦M®B ^ýß £öÀe ,k¸¯XF€°UXɱՑç~3Œ€*ˆŽO¤ýWïQ‡*~FíÙ*&¨y\òR^±Zϼ†<¢k"ôEœìsS‹ò…ië9Vrô ð#ÀجäØÜs‡F@Müsõ.Å'&IKŽa»6ž "—"EÉ€‡Àð0o3W__rtt¤-Éšv•‹ÒîËwHä–eaFÀ&`%Ç&‡;Í0jAààµ`*çëN¾nÎú&EÆ%ÐQÿ`Ê'VéY´ËškÑ"´9™’—µˆ˜xI'Öù|Œ`kE€•kYî#Àhƒ×ïQã2¾Fm=|=˜žPÖ'¿tYËh=\Ž`kA€•kIî#Àh $i¼)\ÓJz»µûºÈ\ïÏ8FǨÿ`RAÀéÙ=ã/îC)áåF!‘†»x›`›@€•›fî$#À¨ ;¢)>1‰0 5”kbBjïêj¸‹·t°wr$GGG xh¬Ð@‰¾‘l_º•qF€`¬Vr¬`¹ Œ# =n<[]Oiɉ$G·|Úë·ØâänŽÊ}¥4J4+9 üÍ0¶„+9¶4ÚÜWF€P Á‘1ê_7ㄟ÷£ɑ-9ª'-5$·¸o‚¢š\0¿3=ˆŒ%áÉÂ0Œ€M!ÀJŽM 7w–`Ô‚Àè8ÊïìHöˆ&‘q ”˜”Döyò(»ø;ˆ ¸H÷–|OGväàUMw);'g  ¡x»æ¡¤'O(üq¼ánÞfFÀê`%Çꇘ;È0jD $*–05샰’cˆJÎlÇݺN!Ρ۳ÆSüý[9sQ_Å>ÝO¦äx¹<½ÇF+?&¾4WÇ0Œ€ê`%GuC b[@“N/'£®ÂºÁd•%gp*Zš¼{¼./šËÎ>g/n¢«A9I¦Ì(÷ØÃg ´‰.ÅÕ0Œ# zXÉQýqFÀˆ‰O”‰ û%ÜÕ vœ#Ç—œÚΕëÙ+1·6_¸o?»‡ÌòŠd ˜„$e3Œ#`hs¹Ê&††;É0ÖŒ@|Òr²·3êbœ ”†ä²3ÞoTHÅ?’GQø¾õ{ãÙ»¹“G»~uby¶ïG !w)l÷_¤KL |õÛséÊy|=¾rZöÈ£ewr,XLß»øw(âÐVé:æZ½1å«×J ÿg#ÁµÌ.¯+ywNIÑ‘ôpÓbY¯ƒw!òlÛG_6"Žî¤èsGÉ.Ÿy¶éCöî^FÇ#ÿû‡"Oì¥ÜN”·B-y „Z”\¹í-ù£¦;9<½—”{Ëè ÿ`FÀŠÐær•w`lL:íÁÊ“U­IlàºúFrôõ£Â#>'Ÿ¢t¡w úþmÙ( ž>tkúXŠ>{Dîs«ÓB()r#E üÜ3‘ò–¯IÎ%+Òõ÷ºÒÍoÞ”‡Ý›v¦uséÎo_Èßv.näõò`º3çsº¿|–R=Iˆ§ÀÉ#(1<„ò7é$ªó=+PŒÿ}™Û?}B¡›—ïÀ÷„BÖ—îÎýòÙ1*9B9NĆâøLaVî-Ãc¼Í0Œ€5#`ü†µæžrßF€P Â’£L@•fÅ?› æ²ÓÖ£Y§ÓQÀ§ɵv3r«Õ”rÙÛK‹ŠKÕJ×äwž’•Œ~ãC5èÆ¤áTô½²R“¼G ˆ ºF®É'}¡dÁL§¼•êè-@8ìT¼<%F„Ê’÷|My+Ö&;×ç‰W]ªÔ{Z‹FÝÕH÷„ì (ÉÑ™D¡LCô÷–üÅF€°~Xɱþ1æ2Œ€ @¬DX²Ü%ʪû¡äØiHɉyWã\ºJ¶‘޽~žàÚVìß²\Wbd¸ˆº#cvàÞ–š<¾zš–„©Ш%G',ƒŽÉ”…p@¹·Rí/ïdFÀ `%Ç •»Ä0êGÀIÄ&£õÍóŒ ëÉ3–5õ÷âi áJ‰:ùÏÓÊßdîxJþE)RŠ~ç-[]( Ñ"çWÃÝëÌýÕ?Ë}¨G÷L±2*ôì\Ð —uü"êŠ1*òpóÒÊBƒXÿó”ð0Øè¸–<Ê›r)ýˆ{v±’£ Âߌ#`+°’c+#ÍýdU!àj§îjO“* S²Ó'Æ=µŒ(ûÕþíÞ¬ Ù{ú Ö³y„øÐDG:˜ÂRâT¼9*AaÛWPÜÝ@I5¶sµìÞãK'EHÉI2à ÚnÍ|Ÿî-þŽb.RèŽUtó«‘äÕq,›¯A[I"²a%ÅD¾ÃRÜms&Ë<žîߢ+£Z fµ½„úïüú9%E=’TÕ¾C&ÈrAß¡'ñqòÚ¡;VÊ}Q§Èúä ýIŒ‹#¼Æ‰d—H÷¼Žê 
7•`ì#ÀJNö1äF€H@hh()V[JrÖ”þöOÊÇ™®¾ÕŽÎ´/"‹Ýä,¬2†‚ü3…^ûÅ\;'é¥ïüþ¥Œ›Ýt‚ ±@nG'*ûã6a‰)A·g@zUÔΓ¨àÐ,j0®¹Ti@_£Kƒê’ÈɼåjPøî?eï£ä9Ñ E§]RO(_ñT çhyÜ«Ã*òÎwôèÀ&:ÕÜ. ©/YÖìò‹<:‚-.þÞMYNKÅ}ãã–ǨÉ!QOï%oWãýF…ø#À0Vˆ@.Aý©³Â~q—F€0+!!!týúuºqã¦øDFFê¯ïëëKÅ‹7ú܈w¤™—“(qÎÓI7 ?Oc»Q¿S©Ö­È£T)ýùZÙ€%Ê£ c†Bsm\Š>sˆªï|`Ô…'ÂRk”|“È úãäkê1LjX&!ì9x»Pgn§”y¸«ÁÂãT¤¤PÂòž.·ŸËÑ·¨l ^‰¹´iõðß¹“Zz­ÙZßϵ§nP÷_¶SüÏÃÉAcÔäúNð#À0™G`&³«e4>ƒ`lD1ù½yó&ùûûKe á¶¢ÄØ‰@ù"EŠè˜š5kê·¡Ø+VŒòäI9ùÞqñ6M»¼I2¬y~~~E'³RÌÓUž)â(„‹Ô3)éíFAÏ­@Ê~-~'†‡ëÈS µØ~­µ9.*’Jx•4jöÍÐ(Rî5£üƒ`+G€•+`î#`ë ç \Ê®\¹"?—/_Öo‰ð t‰‚5J ”™fÍšé·±ÏËë¹b*<‹{¹ÊëÞ‰¤EŸ×_¶€ù?ж’å&tërŠ>{DÄ·<¡{‹¾%ŸþïjÖ ÌTcnÎz#'&ê1AI6”q•ô~žðÔðo3Œ#`Ͱ’cÍ£Ë}cl¤Pb ÐÀB'X§ ø/_¾<•+WŽZ·n­ß†"ãädÌJenèòˆ|&ó9SÀCc…“Ô$ÿ@s_Þ¬õÛ¹º“ç˃äG¹PZniJþÎ:ñ‘âd] %熸¿ê·”õÚùLF€`´‡+9Ú3n1#`Óܹs‡.\¸ ?çÏŸ×oƒÍ ’7o^*[¶¬Tdºwï.¿¡Ô@¹qwwWv¥ ä£ë09}.• {PTøIiœZ0þó’êÝBœ‹½`d%Çþ‘±â~iäg¬Ì( tµ¢ÆûÓª‡1Œ#`-°’c-#Éý`TŽ˜Ê.]º$E©Á7ÜÏ Pf`•©[·. :T¯Ì˜#è_-Paòþ8Ž‚Â¢ÉÏÃE6 ¹L*ò¤BȳLµ4•Û¡râÄbA­•ŒZyæV(ùæËK2™ÖÒ˜˜*T¨Q}üƒ`-!ÀJŽ–F‹ÛÊh*2°Ò\¼xQ888HײêÕ«Ó»ï¾Kø®V­,XP#½3]3« K’]ž| WrP{³²¾´ôâ=Ó]ˆk²jâ£¢è±ø4.ãkÔOÜW5‹yI*tüOÓýû÷S|c¡ “åñ¿9cÆ £úø#À0ZB€•-·•P!ˆ9qâ„üü÷ß„ÏÝ»weK}}}¥&³qãÆI…±3ŽŽÚÌ(ojø‘¤D¯S×%ôÕ7.íK¿þs‘ž$&QnÁÂÆÂ¤…@Ô½`éâX¯„,–$bº&MšD«~ür?'‡·“Ëæîö‚ +hÖ“KË–-“ïâߌ#Àh Vr45\ÜXFÀ² A¦¡Bƒm¬þÂF³Zµjé•Xg ä°¤@ãÒ…’clµy©LA9ñŒ¾Ln… §]µy¢îÝ¥j~ÞzÒ$§þÎxúbçer8¶6>PjâããSìWvàÿ¹iÓ¦ÊOþfF@“°’£ÉaãF3æG 4¡ÄÀ2£(6 À*0šÚµkÓ„ ä7Hòå㄃Y¸-8t™b…Õ¹s ÈP¦¬GÂJÆJNVPµ­sß¾EÓDò&ûj­èä¼/iôˆ×èÀ2ñmFAl\þüù3R”Ë0Œ# ZXÉQíÐpÜCàÑ£Gôï¿ÿÒÑ£GéÈ‘#ò[Qh_ Í'Ÿ|¢Wh\]]s®q¹Ò¯¿þJ+W®”NGpÂ7VÍ•oe[ù︄DŠ¿I=CöÒšE¿Sž/¸’b‹%J”xQ1ÞÏ0Œ€ê`%GõCÄ dL‹&ßgÏž5RhÀz† wñâÅ©~ýúôñÇ‹¹u=ªQ£¹¸ ,–I¸±)Jîw¶è¢ËÛŒ# 6r‰Õ[ÚÅía¬!w($Ô„ÿýþýûå7ØÏ0qAM(2°Ôà¿•‰vÖ®Æg%Gàƒ> ™3gÊÉcòc†¿Ál… âÚµk©yóæ´ëÒj=ãoòŸÒJz¹é‹ö»›¶ÝަÒ:ë÷ñ#  +Î.]JŽjCÝ Øù¦n;Mßo?MÁß&ÝG9M~ƒDäÕW_¥Ý»wíybð %22Rº¼)Jž1¸‡ñsb›6md~+…±Íð\ÞfFÀBÌd%ÇBÈóeS KÇŽÓ+5‡"Ä×€ Q£FôÒK/ÉoXl`¹a1/È¿´A¸¶ýý÷ßreeŸè¨àø%ôA»êôAÛêúÓ׺AÝÝAU `—5=*¼¡ pÿÜyzxâ=œ>HOZcu¦¬¥~^4wPú isçÎ¥wÞyG϶Eý«¯¾R.‘ê7òí@áÙ¹s§ü —9é®ß—ø„¼Þ[L5j‘¯SFÀ«ë×Q§’ùié«-ô»¯?ˆ 2ÿ[AÛß}™ÚT,¢ßŸÖ””aÆI…ñ8PR2#—/_ÖŸ w8,´øùùQÛ¶må.n^^^™©’Ë2Œ#]frLNv!äó3"€,å˜4ìÙ³G _¸pA^­bÅŠR™5j”üF0KÎ"+ÜÍæÏŸ/Wµ}||hРA’MmöìÙF±Pn 
9ÆŽ--§ vQSRR¥.b¾ýö[EçŒò¸ñ= ¬hna²&Lh 3s›zElèpñ°§eÁ½`9Án1IqÑ~Ä!ètoæ$¹8¥F«ŸTÌ ¨ŸQ'\vqJ (v9\)Jˆ)ÐÜ‹ÐO´Kkó믿®b4Ð?M€¬M8§Yˆ4‹b‡4¦¨íq ;ƒu‚ñÓs‚ñÔD›gZn׿bÜô¢Õ‰cÌ@¦¬ƒ°2B*:ﵺñ½Ó‹F9‰¹j隸S3©GSVÃz„ï(b›DŒO*3v538äƒAàÅ×ÄÁ“&¤5H‹¤e!†&þÙRt e•‘ã‚@UÕ-XÄ@ùÌ®`Š Šóv˜X±Pl`ì‚¥è”Qÿ[o½¥¨kýýý{(9øW1—N–E!9 V50TFtÓ`¨bëŠ*§ÿÇîm&&PóœÝ„L`rƒ ,XÑÚ¶mkm.è¥Ù ­Ú¦¿´½¼ë®®ûÙ'Ÿ|¢?mºõÖ[M/XÌp f ÒçD…&~HTß-¶ ©:Þ{ï=EEWô ýxì±ÇLlÁRu²’¤(¨AQÍyNL7Þx£éÃ?,¹(†Ño\ºmVMœÏD±²á¨®A­-b|ðÝÐæè½9g’úžh”ͬ„+êoP5ƒ=ã‹ù æ?|Ÿ˜4Cå8çiRse `ºí¶ÛLO>ù¤úŽéÙ€Jyóß0 ¢.Ôï^Àn†s¬ø›ØNÊaï9OÝ“ó9ì¶*fk˜b‡ÔSÐÛ*/窄À»µpO®ùŽ'¡ìÞ” "oŒ‚Ø”àzb×£ôÉÝÚŸ0SÁFÜ ÝmtÑü`Gš©h•u.^±æ\ÃX!(i,vLaõ`Ê[ueô終¢[êÂ.·µ€}­ly¯°¦ÀU 5q_k‹ ¬E8o-»ïœ_ªäRôY,Ž{à:}™’ÂoÐg$EpÑÓ×cQT>º0`ÌceDÍ_D :ÝÁ÷–0²’¬Ö_fh.f–uWdÞ[^ƒÏ˜Ÿ° ‚ ¦‰mYÙSŒˆ5uOËûÀÚ 4PÄuÔ»|žìc—j¤A õ$2šÃm£ª³5ÐL¾2ZÃÿ19"‚€#Ð?`á!_Spp/œ³T`£)¶ü8÷GIs«£à PööîÝ»¤>koÐv(Ae‰¥òb©˜h´Àe]¯?\4Ê`ýqyï^€|.‡P°õ´Ïöè%6âââlVU‘yo­ÌÏšTp ìC!t6Ë@=ÏÖUå’jm³Ã^r¬âHLNű’’NFÁzø1–5'D·ÇN3'“Sq Z@wEå° P-°ó+"æ ¾ q]–Dæ¥*þ …ĸ¹“ F Dˆ¡s¦`CÊÖúõë“¢3Ûâ®÷%Ç]GÖ û…¤~$ö+wÃÞ¹~—Þÿ}å΀Ý)AÀQ`£CDJ#" Ѓ=¯º’d@8GQ-ë‰)ª[¿3¯ç˜4•”ltÎl¢¬í¥œ:»OFº¿(9F iK¹ÀemÁ‚f,1å^$Žz@ýÐCÕ¨ÛÃ;&70$bÉ1ä°H£ €\Öìñ°\¿~}ÂÆÜÃ7lØ@Lî¡ò= ‹Õjã(šm°.E°n½nkî¢H[QrŒ2ÒŽ !%AÁø¡1LEìƒFÛkœ–IKÜ ±ä¸ÛˆJ쉳¦©@vËÜJ•½\©`éÐÿ¹ÃwY‰Ùâõue1qTyÄâ€VZË‘å¨ûxb½¢äx⨻pŸ@Œ ç0Ÿ‹ü0ðÁÊÜn,mŒJ+Ü ±ä¸ÛˆJì…,9ØÜ·oŸ½ªt›z€1½F²âhà‚Ôaâĉ*•÷h¨TÿU”œêc(5Ô0W\q…ŠËÑÞÕðíåv<õÔSÔ”iEï¼óN‹3òQ°?î°›lT¤FAà<ˆñ€UÝ.kî†)6ã°QŠ F” &Ð\@œ'«$¡¯ÛéJm%Ç•FKÚª€’ƒ%K–"NF`ݺu*‹<²§caj±äÔÊrWDTãmÚ´%Çbð²³³iÊ”)tÏ=÷Œ(ØÀAwïÞ­,:Fl£«µI”W1i¯Jš“±VDŽUQr*”3Èìܶm[š>}º±æ!­A2µcÇŽ)6é²tÓ@ˆ%Ç@ƒ!M1:u"lˆËÚù¡·Á!CT¢Tà V j×®Ê;bŸS§N•QJ—‡€(9å!$ç ‹ÀÕW_­ârÄœ[³C„äpð~à¨I“&5{s¹›Ç# –Ÿ@9€Ê?!!ÁŒ| //@³eË–r®v¯ÓÛ·o'Xr~øa—ëØsÏ=Gááá÷5‘ª! 
JNÕp“« €\¥°Ã±téR´Æsšì4O>ù¤çtZzj(Ä’c¨áÆ ŒŒ jÔ¨‘Ê'wã7RË–-)((H¹¯\¹Ò`­uls`Åiݺ5 >ܱ7r@í*–èǤ?ÿüÓwpÿ*…óÕýÇØm{Ø¢E ‚Y?ƒ rÛ~©cû÷ï'¸ªMž<™BCCÔ4i‹‡ –héf…8|ø0ýðôaÃZ»vmI,hý‹R\\\RÖMOl‚N›6M) ®ú»1jÔ(ºüòËõõ¶mÛÈßßßS†Ï.ýKŽ]`”Jœ…ÀرciæÌ™’8«†àñǧfÍšÑwÜQCw”Û¥KNiLäˆç"€ §7ÞxCYnôdH˜­Wp€')9Ø«S§7Î¥'Ç{ï½§ˆ–&NœèÒýpFãEÉqêrO»!%'99™þþûo»Õ)YGtÑP(ßzë-Ã&S³Þr9êN¸êެ;ôÅX€ùÕW_%°rÙ’Úµk+76[eÜå\nn.}üñÇÊâêÖ ÐóÏ?Oo¾ù&Áj'Rql#*^”œ‚@\\œò3†ËšˆãÀÎ9‚/¼ðB9r¤ãn$5 å %G,9å€$§=»îºKYi/Y–4mÚT±®•uÞŽ#ùgvv¶RrÜ¡_÷Ýw5nܘ}ôQwèNõA”œƒZnä(`ÍAbÊüü|GÝÂãëÅ‚±yófIüéñ3A#"忣>*垦µ›H»à ‚Mwß}—n¸áŠŽŽv‹.à ‡>͘1CÈ–*1¢¢äT,)jLÀ²–™™IsçÎ5f]¼UØ {úé§U¸ûEg" –g¢/÷62 à¹ôÒK Ä–‚c`Yó™5kíÛ·Oå™q§þ‹bĈtÿý÷“¤Î¨ØÈŠ’S1œ¤”¨_¿¾bWûî»ï ÜJ×mÚ믿®Ìþ/¾ø¢ëvBZ.‚€ €Ý~kO! Ã%—\BHîn‚ñݹs'}þùçîÖ5‡ôG”‡À*•Ö4`O™7o¥§§×ô­Ýú~G¥wÞy‡ž}öY·1û»õ€y@çÄ’ãƒ,]¬2ˆ»yâ‰'J‘Ã`çß”äÍ[³f ÔÖ¸ &Ð3Ï<#Ï;`Qr*’1>à‘ÇÃüUE*v† ÐX K0»Ü{ï½–§ä³ à4„xÀiÐË]ün#k¢^š7o®ÿè–ïaÅéß¿?õìÙÓ-û‡Ni ¿ð²±€(9¶ñ‘³.‚ò\|ñÅ$.kU°Ÿþ™°¾ð ”““£*Yµj•J0Êh__ߪU,W vFÀòÁÍÎÕKu‚€Ë#¨‚Ôõ›8ëò}³Õ­[·ÒüùóÝÖŠ£õ¹'‹ü9Ç×Ë«DɱŠrMಶlÙ²’lϮ٠ç´zýúõTPP@¯¼ò %$$(å”ÑZ «sZ%w¬# x³^BŽ žy`ÍðññQ@àwÝÝVœöíÛ{Dšƒ{bbb”UÇÝǵ:ý;?û«Sƒ\+áÇSxx8M›6Íá;9i9t(9ƒNœÍ¡ä¬®¾úêägä à`7çÍ7ߤ“'OZeKgû¯›ØwÃúk×q‚!½/[K†³â€Þvlbw¼q½rXâKºÝææ³Bö+d+ØÊ„cC[7 «»&Ðe›R»¼UEKMY”±[„,Ë(T•[È5‚€Ý¸ûî»iïÞ½ô÷ßÛ­N©Hp%'³xÏ úiÝúuóaµ֚ݰñ… 0(þ¬Lh¢mTùûûk‡œòšÄ›ˆØLÔ6áöÖ¶~íOWwK –±u*Ý®´´4jÒ¤‰zÈ·—›z¥áä hàÀŠx©_¿~Nn¡n?Y”C‡4Æ€R±qãÆê¡üÊ+¯TUB!€Bó9»‚ÍÞr„óZ®îOwlB!~U·Ô€`@‹³IgB[ÄaL0 Åª’™_Hs¸?óî,Qت¸„ûq{¿ÖJñ©Œ’Öƒ–Ù”ûî»OÑ‘BÙœ”œ={öÈ®¥³Bî_ãìçXš/V즯þÙKI9Ô¥qíž@Wñ:ÇîgU¸j§r¬(<°–å±+õùuìqQxà-醞ͩ¯¸íbâ¬c;N¤©ø™}gÎÇѤfår5|£Jг®Õåö$D‡P3ŽïIàö`ýª,iú¸”㈦ü³‡fl8DÞìw +o÷n§ê+«YYYYÊŠƒdÕÈñæÉ²xñb|üõ÷ôÁW?БöW(f—[û´RÊM›zåûÿÂ`Kb ï2%ÑrÞiZ±/‰N¦g)p|™¶10<Œ¼‚CÈ—}€ñW; €|üüɇ]¼ý|©ÿð{yyóëyˇ‰—sçŠÉÄ”œÅùT”—GEùyT˜›K™™êï\V&夥SA~¾ºO½°`êÛ<–ú1á|;r°¨·Eöjk£µódºRv¦¬ÜÍ;p&ºÃ†¶/s—oÍš5¥2C{sû¡ þòË/‘oÀŽr̸@ÉÙµkaQܰ˜Me‹Í»m¥CÔ?º}µy·êЬ`…[3Ö±¥¼†í>™JÅl•ñâß÷à°:äJµµu,(ðüúÅkÖ2/vu;¿ŽñÆ|¬aX¿L¼ŽsìfQ^þùuŒ×²V4 23¨ˆ_ 3ÎRvÆyÒœ Þăef€"Ή¥^¼ŽUÄòƒ ÅïÖ죗Œ]Ä.x ë@C[5(5ÔpKGn7Xq>ÂÓ±9 –þóÏ?= ­ÿ¢ähHÈ«{ €Åà¥ß7Ò·«÷˜Ï`ò€Ûú¶,÷Ç,jì8Fs·¥ìÖ–•›O¾þ~Kœ%:02Š""È7ر1)YÙ”›šJ9)É”Ëùj²O'Q/(Á~tÇߌâ@ÍámX×l ÜÙ¾\±G1Ü€Á »~ÏîRJÙùä“Oh„ 
„ü8˜»###iÁ‚Ô±cG[·s‚€S€eqç΢ä8}¹©£ëæ‡KvÐl¦lVtnêÕ‚ÚÁ*A¾-¸¬jˆw™³-‘Ÿ9Ëú‰…DG‘_L] â×€ˆHòg§–]‹ ϯa©)”é ò’NQöÙ Õ– âcéâöçIs:óÆ-] }y{áÔ½i ½Ä$ Xÿ ¹¼IG7Ýt“bUS=üˆ† B+W®¤Þ½{{8ªû¢äÈ,pާçÐKs7ÐÔ•{Ýóó£»*s·-~~(636¢i¼¹šw»`Á©_‚5¢PæÛ÷ç¡ÊĶ8IüÐçq`eÇe'&RƉ“J!éÉ»b×±9ÿJ¦µ¥ðÀšó#÷ïÅß7(Zê[زóܨ®ìòp^IºóÎ;iÊ”)Š= N«V­”‚S¿~}GtGꪔœ;vÐ’%Kª]—T ¸“}²l½6e² ò„Amé‘ ;¨βڈxÐùÛÒ×òÛÖ#”Ë›[¡Q‘аÕiØPmÒÁ*ãl)ÌÉ¥L&ÊH†…‹Á'ËvÓ߻ޑ»e…23KX|<+6 •™ÞÈ]?ÇYª3x‘Hg¢€ 6ѱëÀÖ é®þ­hL‡&Tû_9Ë>@Ùùfõ^eåMöãÃ;Ñ£Ã:R¯î]iË–-j— qLÈ- j–èÉg#!ËãöíŸ JŽ‘†EÚR æl=JÏXE‰©Ùt÷€6ôÿ>Ç„”Í„¶‰Ý©A¢óÍšý”ÊMzu)$>š6%ß@ÛVþj4Ó.—bã.'9Y­a™‡*+O\tÝÍk¬V¶ú :êg[O2C[Ь—èŠ1£éëÏ?±K»Ü¥’¹sçÒèÑ£iÛ¶mÔ®];wéVUû!JNU‘“ëœx÷™±šÒ˜Ñì©èA6é—ÅX«Íû‹wЧÌL“Æ´•al­ oÙ’ê0 |”]Qα‹ÙÙ£G)™¦ÒÙÊÎl7w1áÀ„mË´î`ç>Þ¯ÍßLaþ>tbÒ-TÌ9p„AÍg€g¶JpЦŠ®ŒòÈÜ÷ãJE« å7¯¸€“s[íRoÐý°v?Mú{;må¤ÕÁÖ²…7kfxÅÆj‡þ=˜Í‰¿¡? h^Ïj?v3UæsìÃ;}•±ãÙ¾E4oÎV÷ÎõrŽÏÔ}û(eÛVÊb¶¶«8ÉÚKw¥VV²JÏš5‹vŸÎ¤)Ç|9GA.½qyå*aP9((9p±\¶l™AZ$Í*ŽÀú#Étë×K9o^¼¸=ta{«li ù?¶Ú¼±p+›Ô]L»öäËLhî.™§NQ2od¤>B­ë‡Ók—tSÊŽµ~cÃî^¶†Í`ÏŽÙÝ ceR0X«ÓÕPSv]7nœJŒîêý©FûEÉ©xri "€LÉ·~½”0ÚcuT±7úŒÎZS@@ð䬵ô=[n@õÓ¥+ÇÛÄYÝ Ò®qÇWø=§¹p­Ü¼y³(9®3dÒRF€ l½ÙLϳå¡oÎ}Æ–䕱XyàZýÒ¼M”SxŽ¢xƒ.¶=+B¾UOXmyWùœËu§6¬§T^Ë:29Á¤+/ Á-­“â —<<|x#óÛ[•¹ê*}¯n;_ýuÂßQvi -=Ϫ[¿‹\/JŽ‹ ”G7sÓHÞÂ;_Aü#ÿýmƒ­úê‚€T“¯r¬‰7^ÆvíÆfýfüïÑб"C”v`?%ñBQ̉?ŸæØ¥G˜p  vé8¤&Ѹ/3mi¡RtFr>AÀh@ÉÙ´i-_¾ÜhM“öVH䨑qSÑꃧé5f{ˆcN¬-M¿s ƒû~^EGS²)š›X¦ñ÷áÜkž.9))tjý:J;r”ÆtŠ£w¯êÉ ´CJÁ’’Oã¿YªÈ…ãuîåKº—IBTêb7;žžN8öøÅ_¤‡zÈÍzWáSa¨¤`#€/øÛ‚RsÜÍéÃëúXÍw3G"ÝþÝ :Í„1:S ï|¹»[Zenl§Ù…íôæMßëK#8Éš¥ÀEâÓVªdlOè¬òpòiAÀ0Üÿý´qãFQr 3"Ò[üµû8]ûùßÃq£ÓnBF”*~„wÞÁkØÂG)’cnê]pù[' (u±8{ìZµŠò9ù(Ȇ°FùZqAÿœ †øéêÚ$š~¾ch™1»î\{çÏŸOûØ…^(¢äxà »D—Ssòéº/©äf\Ó‡Æ÷mUªÝgX©¹ïÇ8Ì~Šd†™={Qí@ë¥.öÐÈWp|õ*JÙ¿Ÿó5£÷®éMѬôXʼHL`?ç-êÓ´ñƒ)‚;E# %gÆ ´bÅ #4GÚ ”‰ÀëœÚà™Ùë骮ñôÅýÙÁœ y,ÞNOÌZG^AÔ o a:h‘²01h§wlgËÎz¶æÓTƵ7'µ”í'ÒèòORçšq×…VËX^ãnŸ‘4¹mÛ¶JÑAŠQr‹Žs.·¿„Y0ÁÂsË7Ëèz¶à5kAÍ/½Œ#JÇèTþΞ{E'õnqùt¤À‹:½2“¾çç½ åU¶ò|qãšÈì ÇŽ§Èe—]F±±±ôé§ŸzJ—Íú)JŽòÁY|¶|·úñ¹wP[ú‰-é¡go9B]^ý•Nžóá¯+)<>ÞYMu»ûK` l1°Ö ÆÁ›0a¬Dg" –g¢/÷¶†À.¶"ô{ë7‚õ`õ—Rk‹Üd èùÆo4mý!JàøˆF½{³õ¦4Ë¥µºå˜mü™"¹ÙÅ—PXë6̺ˆîcÒPqëå¶>-iÞ½#TÓˆÕÉ/2?¯/ëNï}||èöÛo§/¿ü’?ÇÓD”OqöwÒ_ÛèÎï–Ñ £»ÒÛWö,E­ù"çp¹ô£…—@ 
£Çop{áÚM¦Àk`®ì†al0F«wÿÞ¦?-ïC@,95µÜ¨‚l>–BÞž£haùŽ 1'ÀY¾ÿud+ÃìbjÎ;ëaì.,b_j1 hÞ=)nÈPúxùÄ.ƒ 0ÒË…­ПŒ¢åûNÑhŽûµtÑÖ—u§÷Pr’““é÷ßw§nU¨/¢äT&)ä(ÞâlÎO_E“®îEÏŽêbvìÄÜ0u ½4w#5î×—ÿú‰ß²BöýÚm` ¬9°·Ü Ãa¬â\;AÀˆ%ǨË=­!gÐ;¿35t$-àœPsj¸øánïØzÔü’K VÇ!‘O-ç'3©ûÄÙtàL†ÙÍzÆÅÐâ‡GÓ·‘ïÿAȱçîÒ°aC2d}õÕWîÞÕRý%§$r ¦øhéNzì—Õôc{ÓƒCÚ›ÝùZ†r ÈOS<›ö£[·6;/‡°æÀc€±Ð Æ c†±ÃŠ5‰€Xrjm¹—-ࢆXÅnLÒò;3PZPD¿±`‹J…Ѧ-5r!y±ëˆãˆWñNgLµ©ûë³iíá3f7íÄ ),nÛO¤Òe/¤ ×6³Ânòáæ›oVTÒ§OŸv“U¬¢äT ')eg¾^µ—&ü°’^»¬Ý7Øœ^&æïÌ¥µÇÒ©Ù˜1T‡w!Dj`ì1 K³?Æ c‡1ÄXŠ5‰€Xrjm¹—5@2µ–±u˜Ákùù˜?N=ùëZzbæZjÔ§·r£òÌ\ŒÖ«™cÈ™—pñÅDÑ4hÒÜRì íê‡+ËÛêƒI4ö³¿ÜžŒ4mÚ´šƒÜÅü[iFI3Ü?v£ñß.£'9cñ“Ã;™u6)3—ú¾5‡v§äP?d óŒ<5úØc v§æ¨1ÁØèc‡1ÄXbLEš@@,95²Üà‰¾èÿæQ½:4—ƒÙõðvÝËïo,ØJM¤NÆ(â¼Ùr7l4hHÃþo~©uªkã(5~ w£ÿýàÞÉ…誫®ò8—5QrœóÝóØ»n9–JWó®Éõ=š)ZG=X8`58šS¬¬⻬GÇ9ïϳ֌Qc‚±ÁéÔœKŒ)ÆVDp4PrÄ’ãh”¥þ² ô%-àÿsü€<¼T ˜½>Z²“↡ÈÍ˪FŽ×µ¼¼¨ Ç£„ÆÅÓ·?w7»sŸ„Xúáö!ôŊݬ˜n1;çnಶeËõçn}+«?¢ä”…Œ·;'ÎæÐ¨æS÷¦Ñôù ýÍêGfâÁlúOÌ.¤øQ£™A-Øì¼|p Œ Æc„±Ò ÆcбÅ‹‚€ à®ÜüÕÚq"MÑ[²¨ÁEíCNLÙtð ‹sW\®_Øi<` Rt.æDâ+˜íN/c:4¡w¯êE¿éêO¹Õû¾}ûRÏKOrY%Ç­¦°q;ƒÀ¾+>ù“BüjÓ/w]Hµ™ÉKìŒ]Ä,'ûÒr)~$+8AB­ac”WŒ Æc„±Â˜i‚±Ä˜bl1ƞĩõ]^k±äÔ<ærÇó¼ÎÉ$Ùxˆfòïeœ{ý-*Árx¼(8F›3ˆ‰j2p 5jDÃy Û”˜bÖDÄ™Nà\pPb·w_¯¸¬Í˜1ìïîüá¿'Mwî¥ôÍé<À&|ì~ýz÷0 ãdišÀùú/Ó¦ãi7rù…†h§äÕ``l0F+Œ™>g4Æc‹1ÆX‹‚€ àNüµû8=3{=½uEOÔ²¾Y×@ýä¯ëÉ€¸¨™Ac¨Ø iÂV¶ÚQ1JÑILË6kß$¶ætc¯„Ëy³.ÝÂcÁ¬  €’sðàAÚ°aƒ ÷¢âM%§âXIÉ*"ðÍê}ô1S uó@je‘ú±_ÖЬ-G¨é…Q@XXï —Õ#ŒÆ c§ŒíÔ›¨±Æ˜‹Ž@@,9Ž@Uê´…†¯ýüoºªk<=0Äœ ‰>oâÝÿØí…dÀˆ9çÅ1:M‡¥?ºè½ù”‘÷_ŠN(úó|® ˆn˜b¾‘gæW»ݺu£¦M›zŒ5G”œjO©ÀûNgÐ=ÓVÐ#Ã:Ò囚úÏ^z{áö•@!õêš“ÆEc…1ÃØa õrEç85ÖsŒ½ˆ à„xÀ¨JÖ8Ç&ëqSQLh}q£y,é‘Ô,ºø£…Ò¸15¸ §µËå˜ðöõ¥¸‹†Ó¡ô<ËÊ«Þ+qVÓïÊLl‰ôþ¢íl}õ›tå•WŠ’S}¥OG ˆW‡ë¿\¤ò¼viw386M¦;¿_Au;u¢ÈfÍÌÎÉã#€1ÃØa 1–zÁX#wÆs@D°'°äˆ5…À 6ÓꃧiÚmC(H—ìS±¬}ü'ûr¬Ç ’iYS#bŸû€P§ ÓK/d¶µ—~ßhViïøXzvTzœómglw¸¬íß¿Ÿ6oÞìn]+ձ䔂DØ ælàTš6ž}`uD !ËCPݺT¿»¹òc¯{K=ŽGc‡1ÄXê©¥1ÖsŒ=怈 `oÄ’coD¥>k¬?’LÏÿ¶ž&râ㎠#ÌŠÜÅÖê]IÔäÂaäÅùXD\à˜jØ«—Z§æmO4ëÀÓ#;S×&QtÝS~Ñ9³s®þ¡GÔ˜­3gÎtõ®”Û~QrÊ…H T0—`ìí+{ª]}}7½”R M8˜w¿dWV+½?Ä9Xå-<¦z%ç-{ÌK}9y/Tùͨ,bR¾*2#è­_/¡~ÍëуCÛ›U1mí~úzåj4hI>73h\îCt›6ÅùŒ®çœ“ºÞülòÝ­ƒéPJ&½:ÏÜÒãr´ÒàQ£FѼyó¬œq¯C¢ä¸×x¢7Å&ÿf)õb“ï]Ú˜µé³å»iîÖ#Ôhà`òñ÷7;'\Œa#vÕøÇc«—»yì100'D{! 
–{!)õ”…Cî?“AŸëGú­¸Ã)Yt»éÆ´kGa¼.âú4äü1…LD0nê³øœ¦‘Á*i9èÁÝÍmmäÈ‘´qãFJJJrý´ÑQrl€#§ª†À¤?·ÑŽ“i*á§~qØ{ú,!tlÇNB4P5h yUH½zjL1¶cM0öHй€9!"رäØE©Ã{’ÎÒ+¼{ÿâÅÝ(!:´¤(B ¯åXÃZAÁL4pAÉqyãÚx³»a#ö,Y¼çýßßædµ£.£Ôf;…˜æþú2Ãüùó]{ðÊi½(9å$§+‡2Þ¿øûzzDg375ìãßüõ2ò­Fõ»u­\¥RÚð`L1¶c½Ínk˜ ˜˜"‚€=KŽ=P”:ÊBàÞWR›záôÐ…ænjï/ÞNkf7µÁäåí]ÖårÜŠŽ¦z]˜là×µt€-xš0«4³ê   L°3e¥¹·‚VÆ_i³¤Š’㊣'mvO0Itˆ?=zQG³6|ºl­9˜D û ZÌS/â^`L1¶cŒµ^00'07Dê" –œê"(×ÛB`ÎÖ£ôçÎcôÞØÞ„¸ M@ýä¬uˬ’æ$ZyumÀêJã¿[nÖ‘võÃév¿~zö:³¼:f…\ðÈ#háÂ…T\\ì‚­¯X“åi³b8I© °†w¸¾[³O‘ øûü·Ë…üG8qdt‡Yš¤ˆ+"€±nßAµÞjƒ¹ Ì ÌA ºˆ%§ºÊõÖÙÀÃ3VÑÕݨo3óÜmw|·‚¼ƒ¨nç.Ö.•cn€€Ú¬ë7€–²ÛÚW«ÌsÀ½À®‹Åì¯öò\÷!!@\Nzz:­^½Ú FÏzDɱދ­ÌXMý™‰ !õòØ/kÉäëGõ»ˆ›šw|_¿kW5Ös½`N`n`ŽˆÕA@,9ÕAO®µ…ÀÇKwRbj6½y…y¼ ¬; w¥}û³›š<6ÙÂÐÕÏÅDSt›¶ô0oÌfä–t'<ÐWÅh½Ç BaÕsiÑ¢5lØ/^ìݱÚù¶Z…EV…lÞ_±ÿ”Ê' ¿víá3ô=ïà×íÑ“s ügÝÑ—‘÷îƒÆc1ÇØë¹&0G0WDê –œê '×ZC ·°˜&2‹X!›D—uçþé«(2!AsJPqï7õºu£,Nsò ½Üѯ5 ¢WÜÈšƒ¸œ¥K—ê»éVïEÉq«át^gžå„i#Û7fÊà³FüïǨNýzonÝ1+$Ü Œ5Æc¯Ì ÌÌA ªˆ%§ªÈÉu¶øpÉÊäû'†w2+öþât4%›ê ›š.îüÁÇÏ—b»v£wÿÚnFB€D×Ïî¢\Ùô䮌”œU«VQQQ‘+w£Ì¶‹’S&4r¢¢ÌÝvT1μ4¦›Ù%³·¡õƒQ¿gO³ãòÁýÀ˜cìãü9zÁ;‘evi}y/”‡€XrÊCHÎWœ‚"zƒ­8µ¥&IÑ$3¿^š·‰c Û“_ðÖí¼¼º/Ñ­[“_hoÊm0ëä =›S|T¨ÛÄæôïߟ²³³iýz÷Ü|%Çlúʇª ðÖ­j‡¾+sÉká§yÇ>"¾)“ üw\;/¯îÆcÿÔìõf”Ò˜#°æ¼É‰öDª –œª &רB`ê?{)›G.ì`V 9Sr Ïq0s¶P³BòÁ-¨ÅÜÑ1Güãºý´ëTzIÁ¸÷äˆN4mí~:žîúiZ¶lI±±±´lÙ²’>ºÓQrÜi4ЗõG’iéÞô°ÅâðËÆC´óx*ÕíbnÝqBå–NBc9€¹ ÌÌÌA *ˆ%§*¨É5Ö@‚ÇwÿÚJ7õjAQÁÿYqÒs è ÞÀ‹dÆH¸/‰xá‡NÏòf^®ëÑL͸ƒÀšã®q9¢ä¸Ã ubÞþs uæÝùÁ-뛵âe6ñ‡ÇÇSÿ@ˆx&{Ì̽`®`Î`E@,9•ELÊÛB`ÖæÃt(%“jnÅù„™Ö ŠMË®j"ž‰@-îvLç®4sÓ!Úwú¿¡¾›sïàvœn'Á¥ÑÕ¥OŸ>´fÍWï†Õö‹’c9XNeäª]zËÅa1sÌoML¦΋#âÙ``.`NèsÌ!A ²ˆ%§²ˆIù²xñvݾ 5 -)RÀŒj“ØU-œã2¼}k——7ž‡@X\rlÎ$¶öéå®þmÌ{ß­Þ§?ì’ï»1›\JJ >|Ø%Ûo«Ñ¢äØBGÎÙD`ê?{(Ôß—®ìjΜ?ص‚¢£m^/'ÝÌÌÌ ½`Î`î`‰•A@,9•AKÊÚB`ÿ™ v=I Ö â-R²ò(¦Xqô¸xâ{Á¡ˆvh Çm%óœÐys®êOŸ¯Ø­rÙ×N:‘——mØ`N²à²Ò5\”ò¶â€XàKþrƒií5­â‚í‰Å~Ì"‚ˆâsBO¹‰9ƒ¹ƒ9„¹$"T±äT-)[_ðïòž o×ȬȻ°âp<†oP Ùqùà™DµlAäíM_®4ß”»½_kÚt4™6ðŸ+KPPµf«¥;2¬‰’ãÊ3Ó‰m_ÂîGxhß×| ‹F@p ÕiÜØ‰­“[ :M«9¹¡ÌÌ!Ì%A ¢ˆ%§¢HI9[›LôïÎßÚ§%1K“üÀ Û(~耀—ÕiÖœ>^n¾)×'!–Z× W›u®ŽT×®]Å’ãêƒ(í·?®? 
‚ÇÛÕÿX ˆij¾àE#¬EK’ûaíê5a.`N`n`Žh‚¹Ì%A 2ˆ%§2hIYk N0)#GY”õçá~FÁuëêË{G ºU+:ræl©M9x$LßpÐlmsE¨—#îj®8rÒf»#€Õ™4>¶[‚YÝó¶¥d$äA@æææˆ^0‡0—ôÊþ¼¼, KDäsUøiÝê›,Í¢ÿ#È/:Gß®ÙOa-e « ¦î|M@D…ÆÆðf¹ËÚÕ—ƒXKrWâsçΔššJ‰‰‰®Öt›íw5›ðÈIkü½û¸úR_Ý-Þìô¼h„Õ«+™¡ÍP‘@ÙÂ170Gô‚9„bÏ)A "@ÉKNE’2e!€M•_™:zlwóºù¼ “ÔÀáÍš•u©÷`ê$4§Y›ŽP^Qq ¬$wkM?¹¸Gbr »ví*é›;¼%ÇF±†ûðë¦ÃÔµI4ÅE†”Ü9·°˜fo9B!ñæ‹FIyãñ„pÎÌÌM0‡0—fòœA &£ØÓÀŽ¥—Ö¤:¼ã(„z\äýyÂãã(· æ3‘Ž^0f³Ò¬óÆÖŸv‰÷‘‘‘E»w›ÇκDãm4R”àÈ)ë`·kT{sb|éó Š(œ9åEk„ÇÅ«9b¹@`.aN‰A@,9AIÊØBà‰*`Ür£nÎ6Ù¨³…›§Ÿ«ÍÊozõèGV†õ2²]cå‘°þÈýa—{kŽ(9.7lÒ`{"°ód:MÍ¢”›XBëÆRíÀ{ÞNêr#070G0Wô‚¹„9µëTºþ°¼AÀ!`£exÛFfu/Ù{‚rÙU-¬iS³ãòAÐ#Âóc.oÊéãHA¢Ó0<¸”…G+¼oű³¢ä¸ÂHI†vÀ"‚ü©GÓ³{ü¾-‘‚š/fäƒ À`Ž`®ès sÊÒ£/#ï ±ähHÈkUHL˦'RKmÔá÷'4*R\Õªª]Ú¨eçÐêƒIf½Æfž\Y äHLŽ+ ´½Ú`·kPËúäU뿪6Ka欪Ã_~AÀ˜#˜+˜3š`.aNan‰‚€ àHð;ãËɈû57§ˆžÃ›/²QçHèÝ¢nÿ:u(¨N(Í·Ph†¶n@pWËf·}W•–-[Ò©S§(33ÓU»PªÝ“S 9PÈpòÏ$êÛÌ|qøk×qò Zl!€9‚¹‚9£Ì)Ì­ÿ²èèÏÊ{AÀaW3ÇC>U•û“˜ì„‡XÑÑ9JQr4HäÕ ²’c¾†õI¨KEÅçhÍ¡Ó6®4ö©¦ÿºj=ê>1²¢ä{Ϊu{8fŒ4}šÅšµkù¾SȉÓtƳóòAÐÀÁ\ÁœÑ ææ昈 ` qW³…Žœ+•NHõ²bÿ)òòò¢ s7l}y/h×­G[“͘B„Rf ]ÉsÉU¥Ñ¿Þ8î”+G”WNh÷?ìƒèëC™[l–ó|`¬¹âã„æÉ-]Ì•åþ̘S˜[˜c"‚€- äˆUAàlnÇ㤱’c¾^­ä5,$:мtÖªÔ/×xÁL S|î­=lnµÁ¼rå5,44”BBBÜ*!¨(9žñ´K/7'¦P‡†‘ä£ ÈÙw:ƒÒ²r)ˆwçEŠ €¹’–™KûÏd”ÇœÂÜÂÊC@ÜÕÊCHÎ[C`ëñT•H¹¹ô²t_ùÅȦÇDÞ—€/'·ä?¸>êójÓQ×^Ã`ÍKŽ~Tå½Ç °õX*µoaÖßÍl²­UË‹9‘”ˆ P0W0g6M6+޹…9&"ØB@,9¶Ð‘s¶ÀïKx5 *)VÈq»O¦R[rDŠ"àÇñ¥ùùG/FPëœa×kWQr\uä¤ÝÕF`ëqXr̕쌇…’—÷AœÕ¾‘TàÖ`®`Î`îès sLD(±ä”‡œ·†~_,7êŸ ®G²Qg 39fž/Í×0mn¹òf”œcÇŽYï´ w54g4ùÄÙJËÎ/µ@læ/ymYœ1$.}OÌ̽`ÀÃ\ÊB@,9e!#ÇËCñ8Úƒ¨Vvo¶`ãÅ?¬ŽvH^rˆˆPŒ|¹…Å%ecC(šÿ¶s&W•&ß8s挫6¿T»EÉ)‰°†ÀÁã'¢CÍNoåEÃ?<Üì˜|ÊCssG/ÚÜÒæšþœ¼ôˆ%G†¼¯(ømÑ~g´k ø³‚S‹ÙÕDŠ"®â»ö$™3‚6‹ %W^ÃÂymNK3_›+ЉËÉ·Úˆ£bÀ6NÉR Ôê×ùÏ—ù'59‘ÆÇ™‘CD¨ ˜3˜;˜Cš`n!I暈 PbÉ) 9n ¼¢b:•‘KqLó«—CÉ™ä"k˜y_>¾jÎÔ"̽4åùåÊkX[¨RS]×¥ ¼%ÇùlCÉÔ8"˜tÄjt<=›“_“SŠ•AÀ/8DÍÌ!M0·0Ç0×D[ˆ%Ç:rÎGxóó¦i”ùzµïL&Õæß#A 2xy{Q@p`)%'Žç—+¯a°ädeeQQQQeà0lYQr ;4ÆjXbZ6'º 6kÔá”ó;¾¢ä˜á"ÊGÀ7ôüC…6‡´+0Ç0×D²KNYÈÈq[M=o!na®Ð`'^û=²u½œ,Àf]©5Œç—6×,Ë»Âg(9wqY%ÇfÚx†óšÄp@^N"@œóÕÔ–÷VHúnžþ‘•3žyHÍž;jé ˆ Ì5AÀbɱ…Žœ³†@2Óúúðî{x oÉi¸Ë¦rž7ß@c­aî°^ä;H‡_¼• ’܇©«dâüû¦ÏK¢œèBÒYP“»¢À] â..k¢ä¸â,tB›S˜õ*2ÈßìÎ)¼høùûAÏ)äߦPêÜoÊ)å9§1g0w0‡ôìO˜k"‚@Yˆ%§,dä¸- 
äX®a©Ùøý1‘¿ùÚf«žš8çëEÎî”2g*è3@IDATåîßV9åÞôJêø×ò sß$«PŽÏœ4_ðQÁ:JiW_ßó–΂‚WkºÕöŠ%Ç*,rÐtÞ•4_ °Sáûñ"å#àD^þ®÷ƒW~Ϫ^sÇr· s sMD( ±ä”…Œ·…rpY®aP| >Û¬sµõ"sýb:þáS¥àwgçü¼ñ£4‹õJs‡LÏqM%ÁÝ”±ä”úZÊkä#ÿÚÞf§r Š©–ëO¡ÂÔÓtvÅ\*âW¿† ت ¿Æ«¾žËË¥Ì K¦÷Zœ0.bä äÓ@Ã{ööµê½OX$E]:þßãKøøò‰ˆ¡¨1·¨cê˧¨KnUŸMÌ\‚ë‰s3wèEéËæPÞ‘=1ìòoÒB•ÑþçdQê¼o©àÔQòkÔœ‚Úö ÿ¸Öª=ZW|ÅÜÁÒ æ暈 ` ±äØBGÎYC ¬5 e½˜ºÞHb¹^ m¹‡vQQÊ) î2€2þ™Oy‡÷PøÐ«È·n#2;GY[VRöÖU|¾?·ïYÒÄĤ/û¢¯¼›²6,¥ŒÕ ¨vt^‹n+µñ†u.sÓr:——£ÖÁÐžÃØá?ô¬-ÿ©°@­?)¿M!]rÙlÚÿÐ%*>÷Ì/ŸrÝõ)¬ÿÅçÛ´q)yóšÕ]µ§àT"¥-žI1c良ƒ;)}élncŠq}© Ä7塞Í+)}É,Ú6&Ž7èæÑ¡gÇÑYVNÿô>íßOm²¡?)ó¿çõ¢ƒºþèë÷P o–åìÛJ‰oÝK{x­1ý÷ ›8é!:õÕÖïbªÓk8ÿ¿Çhï]ƒ©(=…òO¡}÷¢=·õáûþJG^»“N~þ"—s …S@óäUÛ7èZ’ol#Êeæà“cÕõ9»6(h±‘‡µìØ;Ðéߣ¤ï'Qö¶ÕtøùéÔ×o˜ÁúÇ÷éÔÔ‰{ã£ܹ?xpŒZ‹÷M®6Í ;ñC-¬aë•ß¿ ³åÚæÄfVêÖš’#îj•‚M »:ø"ûù˜ëÄùØÁpñ,Ñ©ó¾#¯À`òæ?,t îyµä‡»L…É'K¬&uxwªàÔÊ=°½d8=ô.¯^JùÐÂâÚc¨²ø Î¨‹o¦Ð .ÔN“Oh85}~ªú\˜|‚š¾ð55zd25yæsu?(Nš$}ó™ ò)¤s?‚ C½ñϨSï£Fó½]Yxî¨9¤ëæ˜å¢¡;-o…€Xrd"TüÖøZ¬aÿZ½ ²ŽY[/¼ƒB¨áo“wpf*KTïLxšýß\eý8ùÅKÔôÙ/Õñv3÷¨ØÏŒ5)x"ÙBR§ï(2åçQÌÕ¨és_Rs¾®Þøg)gÇZJž=E•CõŠpk í=œ-2S”¥ÇRxሺü¼-‘Züc­/å^‹¯O(qùÃÂ’¤Iþ±¼“v†Î±‹$ yGv1R‹VÆU_1w0‡ô‚9†¹&"”…€XrÊBFŽÛB¿5>kX»y)±8n«žš8g¹^àž^A¡j½Ðb;¡üÔŽª¯\˜µc^þÊ}-ÿÄ¡’fzñæ\ƒÚ–ƒgyûPÖ¦eêXÒ“ k!)Mà6í[?ŽRçGP<|Ù ¥ Ê}Ìå÷²[v,¥{E@p/MüãÚ˜­ygŽóÆ^ꨧƒ;öæ6¤Ó9ö®0”` ³X¯´ç¤’¹e¨—ߟC$hùXI 7B;`–¼ïµyÇ¢–ɵ]‹Bº¦ØqPêÓhû% ”ÌîcØ=‚Ôâ…¯vD,øä9eZ÷oÚZ‡ÿ³^b®úŸ²ÀÀïçröm¡ 6ÝôE*ö^sýÓ=øÃ%þÑØ=ƒg¦±2U`fªXåÆ+…¹cÍ}Är·Õx-ÿö®>ªjy@ Ti¡é]:J¥è{WTÏÞŸ]ŸýÛvQ)Ò‹ôÞ{oJ¤7Øÿ|nػٚl¹»;óû%»·{ÎwÎÞsçÌÌ7R#_# –_÷€ÿÝó•¥ ‘öü1ùiNkÊP©Pº˜m?¡²R†ªÔT {ø-åp¼Oޱ”òìAÉ9¼[y,¨ mžRæÿ®Äî˜ïuô ân4Z©ÇÍÓvQ~ÊIŠlÞ‰ ØI0߇ZXµ÷$ml©¾®Ô%Pž±þ½ ïJɹ%B~¦–±Ê÷ÔO' (25Çþ|ÂÁ˜ü`=òêÝÊ/Çs¢]·¶VAÕî~ŽÊ²ÉÞšDwHekÔ£3‡“¶rUè2ÐÚiÅÚ2ƒøÑÿ¡£o=Dgü®®cÞâ{ (Vy†ºˆÇŽ¥B¿æÓl¨ºJe ƒ€åбa*&14x®XºiÏÄÂø¥˜‘èêokÿå“.² t>“„ñ¼…ß\¨3w®S1¤æå„3Ñ ¤ w(îéðúË' > ãÿ¦¼SÇ8–èIJ;‘½'öSÝ×q¶¯‡qjAZ¡½'icËk•‘YE@”«°ÈNKàgj+}¦‹~:9\nà™©ß*ëKt§~Ôô×MÕ¾ ÞÄ᤯^Qñ9¯tél“Þ‚s¹5I€¹&mõ|:ùóû„xw Ü  |Õyé;*—Ð’jr ”ž@ŒÍYkƘå>í˜| ²Ê¨µG>=ž+Ú ¨v7mAÅßç1­=Î~‚… .a.Ïm‘Í;²+X:eíÙ¤+$r-SOG2Ãi:ϵg9õXáqí:#|bÜD\CZ}´÷¤r)7´ãòé]äm»xûíÝâ"ÃIË­5™} rôS혿|‚ö”–¹WR!N‘ hÌg°˜dí\Ož¼‘*t½N±Ëd2w?h3UfMÓ$¤B,¨ 
òMh»Ô'è¦ÏLý†Ò™´+gÇ?{^­Pâ@OüŽÏ1-4ró$÷¦ÚÆ7ÄâD6i«&ìýÛ˜Š³—:¦ý‹bö¶º¯þÄÇ«j»üîcÇZrŒ5AÀÚ ‘½sä˜ `Ž@,æ0ÎBo.Úó§ '—¢|ïam¾¨tãt–cF/°2ƒØÌÔy“Tð?˜7ó9Hÿ+ –FÞP3ç3[:/„1-nÐí—šËnÙHQ€t`hƒÂ’ðáŒB(Âë4¢†Ÿ/`*êÛTì æ(B`aC®·œ#{éÄwo¨óÏΟÌd TeÄÃLhpIID¾žÓ}¥(¢«?ø*ÇÏt äŸÞSç§ÎŸÄó[k•/çì?©}Éޤ꾦rЩ¼<¬ÌœøúUªv÷óŠ.Œ ßSX?|)BÍP˜kNwÐG˜Ãâù=È\´÷¤8‹ýæçÈwï!PŠ'‹+_|ßßÿFŽiH­Ù{°È,è;~&Õ«M_¾ˆˆãß®ØCM\I-ï¼Ëòt¿ÙÆ \Â| „æì2hV³`Y}³ÚæŸ r ”Õ[ p °¹Sà¶™ò­º©•2Ü+uçNa÷µ*;ïçͲ¶~?>¿¹ ÝÓµQámïÿy<“F Æ^_¸O¾æ|üñÇôöÛoSRR’ùnù.ØEà«e»éé?×ÐÙï(¤ô%&°(v5 aW,÷HpØøË'”­ZËß`Õ)ŽX.§ \s˜eG¶èLaÕë¨?ór”kS_E°¯ŸÃΰµ4ÒýTÁ1 ¶îª‡qF‹»Z$åx*QåhÑK1#Ú jÅ\²näedPxë+CÚ¹òé:Ùœ™1Apw‹îÈÉE«ÖæÌÓ‡U7d­®v׳®j+0f ÚÒª•|>›ÚÕ®¤mʧ P‰É)‰ìpxžÃ ŸJϦê®XÜ«Vä¼c—ŸGNãw§ÀúÄ"éµÑ%sû5ïE¶ì¬òé”b¥&‹ü$;¼v#µÐh”6\ÈÌà9L¯8b«NÅ#Ô6J˧þ¹ü8øûMKêÄEÑá}À{J—|˜óÒõûý¦Q¯hÇü Óu꼉´cD3Úܳ"fŸiLVð{v×Ê™/`ÐÆŒ6†´:bWµÚ<ÖD{ˆ%Ç:rÌÚ³æÐý|U¿rå¥_ ˆ·v?ïK™ýËå|3&:þñÓÌž¶ÙðÍI`×ø0¦®>ôìÍ´¥W,íÞD屫xÍ`йœ¨Û(È9ŸNÚ¸Òê„÷$¼/‰±ä£ _‹º¬Ðœ8ŸÅœÈcrI7Ž£fÇÊ%Ç#ý‡ëøÑãÔ‴ OÜÌË…bÌ`ì` i‚±•œ–Mk"‚€-Ä’c Ùoê"9/WµX×µ~|á© ü¼Ùrø’e¹pg€|{Z…nWâA´ct)—Мê¼üª&˜Ö¬Å¿¡ ¹ã”—Gu-(Ñ2‡¡‡.ÕA,9Æé C×+X==’ª_Aîy8ñ(¤à(ŒŒsÁ Ƙ¬‚™£"ß­! –k¨È>{ çªØòTÔ’Mùi9‡ÁÚU±ð¯tx9{î˜Q¥½÷Ô¯\t“9Ì8CI”ãô…¡k’P%ZùÂîIÖO­kÆRîÙTC×]*g<0fÚÔŠÕUlÏÉsjŒa¬‰¶KŽ-dd¿#ðlÁsÆ\ZÔˆ¥Ì´tºÀD("‚€³d§¦RXh[m®ÌW™«xß©óÔ Jg‹‘ó<Œ€(98PŠ V'¶OÑ5©%”œTý>Ý ²!XAc/æ²íxªck"‚€=Ä’c9f –5âhë1ý¢\ËËÏ!¼´ŠÎ"ÍsXÓê±t™¨O]¶ÿôyÊæ„Ö–s›³eÊyîG@”÷c°%â‡km‚ÈÎÌ  ΀í\/6 t­3ÚË…vkŒ-™44äÓbɱ…Œìw„åv'Ÿ£|¦°×.l‘˜%‹u$òéyìÐÖÂsXiÖzšUq¢9ň’ã ”ä˜ ¶Ó[mṶ֮́+E§NH+¥žFàÒX)E—ÆÎ•»alaŒ‰ö€’#–{É1[`%¯à»¬éÝ®ÛÕ©LY2‡Ù‚Mö[ €çOæéÓ„qc.˜ÃàªV.´ŒùnùîCDÉñ!øþvëv¬Ð`r8Ç Õ4‰‹ £„øŠ”™œ¬í’OAÀ.+3;š`LalaŒ‰‚€ à šV‹¡ˆ²!´æ~Q®GBUÊ9)s˜'0Ä2³SRT2ë®õ«êš‡qÕö*Éó¦ÅÇ¢äø¸üéö]˜v+«žÔU»gƒxž ôût'Ȇ `†Æ ÆŒ¹`LalaŒ‰öKŽ=tä˜=BØ•¨CÝ*´â€^¡¥tæù4ÊÏʶw¹É'©|¹0[HV³’Ó•fã JŽqúÂð5©\>œò üŠýz…¦ÿ¨ÓOŸ¢‹œUYD°‡ÆÆÊ5 ªéNÃ˜ÂØÂAÀS`õÝrëT/žc)JSú‰žº­”@d&'Q7VŒ™•¼P@œ“Æ –ÖÂä‹O%Ç'°ûïM»&ÄÓ²ýú‰ ³Zt9¦'é÷ûo+¥æžB =)I•k›ÖÔÝc cKDp„€Xr!$Çí!€çÌ^¦‘>™~ÅjJYÑIK‹ƒ@$\Ó *ÍÞ®_”À 9LmŸuæLqŠ•k‚ólí«]©%T¾’4—=YþÙ“D,Ÿ ÄÐÍ%ÇÐÝc¼ÊU☠0`ÍÙ¡Ÿ †µ®£7e‚0^Ÿ¥FîÅX1Œ¥ö<¦0¶DgKŽ3(É9¶È ,9פu­8ªEçÔvɧ PŒCiD›:ºýpUËdŒ+c! 
JޱúÃ/js}‹«èï­Gu^RkÆFSªL~ч¾¨dêƒjŒ`¬h‚— Œ¥ëxL‰Î –gP’sì!€çMJFN–µµ¯Géü+"XC óòBݨvõu‡§o9BÍ9ËŠ QrŒÕ~Q›ámêQÒ¹LZ¾_OÃyk‡z”!JŽ_ô¡/*‰0ŒsÁÂX˜œE@,9Î"%çYC IÕŠLÿK“×ë¼¼ÂÚŒD"‚€%g ZqÑìÍr%ê¦l8(s˜%XÙ%Ç áOÕhZ­¢Zµ˜¼á€®Úÿê@™iiBéCE6€¨Y160FÌc+`S"‚€3ˆ%Ç”äGŒlW¦l<¨óH@"Ǻ•+Rʾ½Ž.—ãA†XÕÎïÛG·wÔÏapU;q>‹Fñx1¢ä¯Oü¢FXñš²á ×*Ü’_V[×®B)»wk»äSP`L`l`Œh²Œ!KÓ¿v\>[ˆ%Ç2²ßYðÜIæ—Ó%{“t—<Ô½1¿Ìîç¼otûe#¸8ô(åp²Ø{º5Ò1iÝEݘ­ƒ"ÆC@”ãõ‰_Ô+ò§8ÏÀ¬mzL碂Ü<¿h‡TÒó`,`L`l˜ ÆÆ¥uÇüù.X" –KDd»84Н@íêT¦ +õV›;:7¤ ùùtV\¯‹kÀ^“ºg7õnRƒêÆE¶1;ÿýÆJÎèN ÷Éc! JޱúÃojS¯Rõn\¾^¾KWç›Û×§°Òt†"‚ÀXÀ˜¸¥½Þ̱ƒ1„±$"¸‚€Xr\AKε…À}ÝšÐïOqŽ3ÕkR%*œnhU‡RwíÐvÉg#›–NçŽ&òB]pw«ÚõÖÝI²áSDÉñ)üþ}sLX?~.«°!Qa¡ôÀ5)uÇv•¸ð€| JàÇŒ±€1Q>,¤ŒŒŒ!AÀÄ’ã Zr®=néPŸÊ”.E?¯Ù§;í?ýZRÚÉS”ž¬'×Ñ$AƒÀ©íÛ¨zLd‘ô_/ÛMC®®CPŒEŒ‰€(9Æì¿¨òÄD”¥¯–é­9÷iN¹œT-õ ž˜À/%•t+ æ‚1VdÒ0?G¾ ¶KŽ-dd¿+`QÞŸ/ÙIf)s¨s½*Ô¡^U:³u«+ÅɹˆÜ­S÷ì¡'ú¶ 2¥J¶p{ÒYZ¶ïÝÏ x"ÆE@”ãöákV¶Liz¸g3úlñ‚oª&µ™+”À)[¶è&í¸|xi8³e³ š`¬`Ì<Ô£)a ‰® –WÐ’s!ðxïæ´“_Xgo×Ç—>Ó¿%¥>BÙgÏ9*BŽ0§Ø!œÝ­ïí¦WfÞŸ¿•šV¡~Mkpëý¿iò†áÿ}èÓ<ÂJNFný°J¼ùÒ Ö”‘š*Á›>íßÞ»™©g cÁ\0V0f0vDâ –œâ &×XC 3>öçLõïÍÛ¢;Œ¸œ&ü›¼a½n¿lyy”²m=Ù¯…Îݔѿ®ÝOãú¶¤+¶àÁÅŸZ*JŽ?õ–ë _ÔÛ˜Yäƒ[utÒͪÅÐH¦è<½a1S°H!€>Gßc `,hÚhŒŒñcÖP‘OWKŽ+hɹΠðĵ-éŸ=I´áè™ÂÓñòúæÐvìv}ˆ²RR ÷Ë—àAà$»+†—)Ecû´Ð5ú£EÛÙU?ŒFwV50Ü%Ç€âoUj‘{ˆ2ƒÖ/köëªþê¶”uî<¥Jb5.Á°dzè{ŒsÁ9t&žêµùnù.¸„€Xr\‚KNv€@߯5ô+36è΄5çjNš¼~­n¿l>ùÙٔ„ϲÛbtxhaƒOgäЧìn=Ž­;` 16ÒCÆîC×îÌ™34jÔ(zìÞ;¨M©dzõï TÀlZš4¬RîãÜ(Ék×rÞm·|8èë“Üçè{ŒM060FnïÔ*Gk»åSp ±ä¸—œì$¯ iOo=BkÒ]ñÁðŽtöÈQ:ì˜n¿l6Ik×Q%&ÇyŒc¶Ìå9›)²l(wksX û]”Ãv±+6eÊjÚ´)­ZµŠæÎK“?ýMÍ ïWîÑUüõ!í(Ät’7oÒí—ÀE}>Gß› ÆÆÈ‹×·1ß-ß—KŽËÉЬ&u©_•^œ®Áéݨ: iU—’y®%¾Hà#Å ¸){÷Ðû7u¤ˆ²WR ç3fâ{f@+ÝþÀGÄ[(JŽÿöOj~úôi9r$1‚n¸áÚ¾};]{íµT'®<ÝwMz™Íý*פRùpzup:½u夥i»å3@@£¯Ñçè{M0&^â—ŒŒA ¸ˆ%§¸ÈÉuŽxã†ö4ç1šËæòáˆN”›žF`Ú l Æ&­\IíëÆsëúºÆBƼö`Éï¦ÆÀ¢ä¸sŒVµ3fPóæÍiÍš54oÞ<úꫯ(:úŠÛÑs N^½5Goµy´Wsj\­_ºÔhM’ú¸ô1ú}n. ŽÆJŠ€XrJŠ \o ž «âpþ=y•Îõº^¥(zn`+ŽÍYÏÊN†µKe_€ pzçNJ?uоºµ›®E›Sh{#¼scG¦”.£;&ÆE@”ãöaj–™™I<ð 2„HÛ˜R±_¿~Eꇎ—µ¡æo£Ã)W&‚Î(ýýí=(íD2Ù­wg+Rˆìð[зècô5ú\ŒŒ Œ sëŽv\>WKŽ+hɹ®"ðÞðNtðLšJj~ís[S½JåéØòeæ»å{!Çï:ˆ!~š‰q®®«kÙã“VR'+ÖÝI²a8DÉ1\—«B°Ú´jÕŠþøãBÎ÷߯³ÞXÖÁxµÙé±I+t‡ÚÕ®Dcû6§kVS^F¦î˜lø?èSô-ú}m. 
¨iŽŠ|/ bÉ) zr­=ê3) (ƒá^›œ–]x*O¸½»" 8³Gë /pSK\¶”jV,§äÌ›õÓš}´â@2ÕÙ|·|÷DÉñƒNòE/\¸@¯¼ò uíÚ•êׯ¯¬77Ýt“ê„òDðÅ­×0KÍQš¼á îüׇ¶§Ú1ttñ?’;G‡Œo 'ú}‹>6ŒŒ…/G_C"‚@IKNI”ë!𓣠ʣõ‹u]êÅrê$­Z)1¦Ž@ô³ã§·ï ´Äãôëݽtîh Œ†ûâC=šRûÚ•ý¬UR]yë1PÄÄDêÙ³'½ûî»4~üxš={6U«V­Èy¶vÀ¯ùÞn鱉+élV^áiåBËÐäûzSæÉ“”¼esá~ùâß /ѧè[ô±&è{ŒŒ… œ?Úõò)ØB@,9¶‘ýî@ŒZX˜™Â‹4Ӷщ…œÆñ(ñŸE¶¦CÆ7²SÏR{­¼Ì„9êVÑ5d,»©a<¼5¬ƒn¿lø¢äøG?y­–Ó¦MSîi)))Š``̘1Tœ•Ów™za–+a­jÆÑÛÃÚ«Îôäd¯µKnäЇÆEŸ¢oÍ}1€± "¸ â<Üuo)'xèפÝÞ¹!=ôË2JÉÌ-l8ÜÖ&ßÛ›rSSéøÚ5…ûå‹"€¼nG. ¶ìfýüu­u€‚ûëÚýôÙ¿ºQTX¨î˜lø¢äøG?y¼–¹¹¹ôøã+ZhPC¯ç×-Zû¾Ë•¥oïèA¿°/+æ2®_KÔâ*õ`ÉÏÊ2?$ßýô&ô%úÔ\Ðçè{ŒŒAÀˆ%ÇhJY¶øhT a¥æÞ—èNi\µ"}Ëñ9'·n¥Ôz·l݉²axŽ.YLe rèúP™R¼*wYý~wׯjŽÓö˧! JŽõ—Gj»ÿ~êܹ3M˜0~ýõWúöÛo)""¢Ä÷ج=ÊÙ‚þu9á$šà1òû½Ö(_–Ž,X@/^ÔɧŸ €>CߡїW¦R}>Gßc ˆîD@,9îDSʲ‡hð|›¾õ}½|·îÔ[;$¨g\âÒ%w'ÿC yË:wè0ýq_&ˆ,lHîü~1Uä¸,(º"þ‹€(9þÛwn©ùôéÓ©]»vÊ%mÓ¦MtË-·¸¥\­¸*ÕŠ-O·|³ò/\Qf¢ÃCiúC×RÁ¹T:ºD¿J¦]+ŸÆE}†¾C¢/5A£¯Ñç⦦¡"ŸîF@,9îFTʳ…â ŸºöjBlÆö$½2ó>ÓMw¬S‰ÏCùYW˜Øl•%ûƒÀÙÇ)‰é¢ßÑ‘Gl.ïÎÝB‹ö$Ñ/÷ô¦òa!æ‡ä»Ÿ! JŽŸu˜»ª ö´çž{N¹§ >œV¬X¡XÔÜU¾V’fMæU’mÇSi,3”˜KÓjéÏúÒÙ(‰ÝãDüôú }‡>4ô1ú}. ÓÌ‘‘ïîB@,9îBRÊqטl -3kÝøÅ<:Ÿ}…LŒ‘Ó¾–ª–+C‡XѹPPàl‘ržÈF_£ÏEO %G,9ž@VÊ´…’cá&#'Ÿn›À)ÌNŒe—¦y  ì åÂk÷k3tŒ÷5çüy:¹¥«®‚‰g3鿝Ðð¶ui\_½ò£;Q6üQrü¦«ÜSѵlžmÓ¦ t¥_÷³)5ëJîÜïÝ;*ÙÃÌÚ%´œžè╉¾8Ä}ÿeô‘¹ Ñ—èSô­ùʘùyò]@@à®. 
éE^èyóçÌßu\×$Lc2–ó𢻵‰¦£ÃÇW°à˜1jF”¡Eÿ¾®HZ$­ÑÀô‡ûSݸ(_USîëDÉñ¨F*2--† B}ôýðÃôÞ{ïQéÒ¾ëv¬Ìx¤?åä_ ϦôÜ|\Ÿ±ìãL=|xÑB:³g¯î˜lxôúb,÷ úÆ\ÐwèCô%úTV¿ÌÑ‘ïžB@,9žBVÊuWÙe÷æöõ¥ûõ€f5i昔qøær ôŠÎk¼RöѨΊÇþè²åböw€§;ÃŘ#‡ú«–悾BŸ¡ïüûzB_ŠÞB@,9ÞBZîcÐGÿ~?jÇÔÒ׎ŸE;NèsètK¨Ê/Öƒ©lúYÚÏ/ÚyWbÛ+W޹Ä‘îû{õ¨W™þáy ‰]Íåf}êÕ4~dç"D:æçÉwÿF@”ÿî?«µÿ믿¨W¯^Ô¾}{Zºt)Õ¨QÃêy¾ÞÙ¼z Í}ü:Z{ø]ÇnO`71—ÇØEê/öoNÛ¿—òjXA®¬†™ãã‰ïÀXs`>0ôú }†¾kVÍû±]æõ‘ï‚€ ø °Òôú5U)lÃ_ÔýÓiÃÑ3ºª´®G럽j…í›:•2Né™Eu'ˆÛ€¡ùóé¾® Ùu°¿Žd7çÑßVÐ;œ¬ÜrŽs[%¤ C JŽ!ºÁ}•?~»Xþ¯nEÈpƒóôŸkØK¡ =uíÕ”dHDÉ1d·¸^)¸;–ÆGo¿ý6}öÙgT¦L× òÁ-kÄÒÒ'†Pbj&õxocGsiW»myáFj_žöM›*„æà¸é;€-0Þúâì‚¡W`Ð'X©D¡¯Ðg"‚€ +;YYéÝ»75kÖŒfÏšI3ÇV1:׎ŸIÓ·ÑÁÊôÒè%Žo<º|…¢ã¿P ÷\Ð] .#ÊnSÿ":}‚Ž»žþÝGŸÌ4ѰõæõYéëÛº‹Çe„ýóQrü³ßtµÎÏϧѣGÓçŸN'N¤'Ÿ|RwÜ6ÅW O ¡ MÔñ­©dÈY%*œþá׸>ÍU0­;G6ЇÀ©í;h»é7 S ¢=VÓ¤}r.· +÷Ðî#ËdÖº“e# %ÇÏ»3;;›n¸áš>}ºzØŽ9Òo[„\+PtšT«H×°ÕÀr5 9XÞ¹±ƒ v/“r’öþ1…ÒŽéóømã}Pq` %€­ežôú}‚¾A‰‚€ ¬ìÙ³G)8 4kÖ,ŠŒŒ,„"¤t)úòÖkèÍ:ÐØÉ«è±I+©«Cfr]óZ´óåáÔ±zía—òÌ~jâ\v"®#—™IæÌ¦ã«Vq’êV´ò©ÁT³â•þ@‰‰gÙ á½é´îÈi^Ä$,j®Ãì×Wˆ’ãÇÝwîÜ9êׯImáÂ…Ô·o_?nÍ¥ªW`”Ù¤›ÛÕ§>›G/M_¯,æ ëÓ¸ºš$®kX…ö±›À‘ŋ٪£O,j~¾|×#P“«0vÀ.05ÌËÀ}€¾@Ÿ oD_# –_÷@ðÞß¾}JÁ©[·.Íž=ÛfÌëÓý¯¦‰÷õ¡o—ï¦>þM'Ó³u rÓ ïDg¶l¦½ýI ;q¨§Ø]p÷ï¿SLN­xz+9mŠ,Ò-ÜDm^ÿƒò .Òêgn Ƀã¾tVH 5&˜ÚròäI¥àœ={––-[FMš4 ˜æƒš>³íëTá•°jæ—{zSlSÔ\–¸È06;÷¥©›Ó¿® =¿O¦ø)®aC’°d %ý'&†޽9¹n E‡–Vìi7XÉ š•K·~»ˆþá Ð_ðªäý×4Ö$[‚€w5tBUaÿþýе´V­Z4‡(£¢¢ì¶~dÛzÔ¤jEN:Ÿ_´ÿdW¶¾Ô¥^|á5˜§Æ²‹ðà–Wѽ?/£%Y¹i3ªÖ®…„É‚R!P_²Îœ¡¤•+)™ê L¾Ä.€á!úøcÌuïÎÝBÏO]GÃÛÖ¥o9§‘$«¶2H6Å’ã‡}âÄ êÙ³'åääЊ+JÁ1ï¼\#ÈùXZ¾:…°*c)xIßûßtwÇz”¸d)íg¿ÜŒd=»å5Á¸ L€M"SŠ+`fMÁÆÀ˜{Qp‚q´»Íš%ÇØµ”ÚH¬´ Õ«W§¹sçRt´s¹ÁZ0A˺ç†Qf¢©Î«œxðæR¿r´ÊãòÝ=¨àèÚ5i"Ú±SrÙƒÄßóÙ5ÿÏ_»ykŧ7>#»¶/¢àœ8ŸE>šE/L[Gï2EôÄ{ûˆ‚ce0m–âÕ0Ý/îw6ÿ!®Ãbw0abè¶?~\™Ë1Ñ/Z´H=t ]a7TîlVÝÿóRúcã!úO¿–ô?Øäi)ÛùÅü±I«èŸÝÇ(¶NŠç±ˆØàfËJM¥“ë×S*'FëÕ¸&}ÌÉ=‘ŸÈRò.\T«^ïÏßJ7µ©«2@ÇDÈj¢%N²í{ìÆo$®„„ˆ3‚ï{$°kp˜Ÿ=zô Ê•+Ó&i©X±¢Ë ÆKÖÿ1ÁÓ®¥¶Ì\ùóݽ©N\ÑøÆ´œ|Åþõá‚íEUÚ´¥˜úõƒÚ;¡ /NnÝJ)Û·Q%öàxŸ—[Ú×·ÚÓ¶¡{\BÙëÞêT¶zžì´Þ/'OžL#FŒ°¢ñŽ%ÇøTXÃÄÄDµš¦œøø+¦ï“øË„•{•û‚ß¿¹­u®WÅjkgn;JÏN[OÛS(®~=Šoݚʙ²“ åfÓ&J9pZpBº·†¶£ë[\e¯UOѽ?-¡£©¬u¥»º4´zžìŒ€ÀTNª8lØ0QrŒÐ^‡#GŽ('–çĽÆÄ] r,Äýë›…t(%ÞÚžÆôjNÌUPDœN£§o ‰ëöSdl 
UiÝ–*rP0¥ˆóç©Û)…œpNºúlÿ–Šö9¢lÑ…Ó94–I~]»ŸîîÚXåÀ)Vô¼"@Ë«ˆ’cÙéIð°…‹ü€ñ°ÅªR0Êá” z€­: v§‡{4£7‡µ§¨°Ð"P`åìÏM‡èž(v'¥R ûQWjÑ’¢kÖ(rn ícÚ™m[é,+Ä«ÇÒëCÚÒ­yr´ÒÈôÜ|zî¯uôéâÔ¯I úrtw««‹V.•]‚€ÏДœ<^á -úÛ÷YÅ䯅aÁkæÜ¸¸8·´/—ƒàßà\-oÏÙÌVÊ*þÔšu7Û•|Ž^ä;ÌelÙ‰mÞ’*5jH¥Ø‚™›žN§¶m£Tf±ƒród¿»Ô‚kÈšü´fý›™ì ü|ÆÉ?ÙX̳v­ì³Ž€(9Öq‘½B ))‰ºwﮘ\Üù°õPu½R¬ö`IÁ[Ã:Э@Ù™³#‘Þž»•–î9NåyU¬b£&× …„_!2ðJ¥=t°¥¥0ëϹ=»(ƒ¢uoTƒžáU¯ÍjYUn€É«öÒ³­¥|vSûpdgº­cÕNŠÜ‹À4¦Ým¾(9îÅUJ»‚À±cÇÔ¢"òßÀ-¼R%}rä+gÿ¬:p­Úpô /Ø5¥W·#[.ÂûN¥Ñû ¶rž—½ÄY¾©BBªÌ9zÅCÁÄtžçRêîÝtîh"U‹‰¤'û¶ {»5&[™Mì©ñ8[oVH¦‡?¼X[ð,~ï•¢äoß{½å§9YV“#µtéÒ µàX>%3Wѹtµ¾*Ž>Ñ™º%TµvªÚ‡‡âçKwÒ/kPî… T¡våï]ë**m%ÆÇfA8p‘•“4žÎ8@ç¦0žøníX_M–­jÚ^q\¾?™þýû*Ú¸mÕ90MÿƒjÇÛ>ßM•*:4%'—iãË–•¸18²Qb°¨ˆ9náÿüóGç\Põ·b7=ÏAòH„ý_Vt@ö‚Å;kr†Ý²¾]±‡>_¶›Žœ>OÑñU¨BýS¯.…²Bæo’ÉLijãEºœ¬lŽ­AuoL7r\¨eÎ6­m x‘Ó ±g§ºñ4žãLÛ³ELÄ}ˆ’ã>,¥$; ]ÒÒÒM4˜]DŠ"€1˜«ì:F8ÑÚ«CÚÙ}èeæ°¯óúŽ­«ø¥?„]^¢k×¾ôW£¦aé;ᣜvü¥±ë"þ 8ðº3+uw³ëfÄ´G‘‰DhÈ{3g{"õmR“žïY†÷í¦ ?ÿüó¢ ÊAÀ  ññСCI”ƒvWKc.¡œ*U¬Ç}º»‰ x™×>fr‚œÌ´È·uj`óEÖøÅLñÿ ¿èOÝt„²óò©BµjÅ„;X´ ¯àû›»ÛᨊJ…•cƒpEdP†W–Kñ±Ò¥Ëðç%¿jÇÏ\¼xL<^à8ð*´)7›ò3Ò)+-ƒ.ò1HlTµªKm˜ö¹Sdw­_•ªF—SÇý;v.“ý»÷зËw¾nY›¼–vã”ΰOt³fÍèú믧ï¾ûÎÑ-ä¸ às4%'›“†óïND( ˆ{…î˜sä~ü\–raû’cIA3‚­÷÷]Ó„ççÒGd±k6\•Wì?ÉsØþK¥ÃËs)b)*W>‚Â0‡q†¶ªí2 ü‚÷0­'¨¬I#¸@:•žK8v>;WMFyìBPÀT¡~8—åI£,+?Ñl…©Ì+sU¢Â”òR›sþÔe·$†kTµ¢:f¯-–Ç@=móúó)€E..2\1ÍÝÏaƒ*ŽWËPžFÉû÷ß+eÇò²- ŒÓÁƒ“(9Fêÿ¬ yz÷îMYYYÊ‚S³fMC6ÏùŸWyk³±5©£bvàîU¯’í8k"±çä9ž¿ÒÕßaÎÙ·íä´±A*úäòü…9,Ÿÿ@ˆÊsX(Ï_eù3‚S6Tây¦JT8Åó_MfB«Ëu¨Ãñ4õÙB7;G^æõ‚çÁ²ý'hÇÉþÆ™ÜÖ!W×ae®1]Û´¦SÊ‘yyò½äˆ’Sr ¥+|úé§4fÌúá‡èöÛo·r†ì* X CFd°²À-­LéÒ¬ðÔ¤›ÚÔ£þü åÃÊÔ\v©ûƒW¼àZw„~<ÜÕ¥ åX$[Ì<öÚvÛm·©\;vì(qÂ;{÷‘c‚@IД¼˜–+眕³¤÷”ëŽéÓ§"ö§çRóå4¬õ¿s,)”XK`á¹®ùUd+ߎÑÚ…\APl¦ó|Œ˜XÄÝ´¨K£™láÎΔe´:S}DÉ ¦ÞöR[a¹Aî‡×_`Íñ,g³òè¯Í‡ÔêÑâ½'”õ¥-@äÜ2=UWq,öØÊ<[;}é`ƒC|Øtàk½áÈ¥ÈôâzŽlW†µªëÐW[_bÑ­³gÏRóæÍ•_ú/¿üRôÙ#™3gª˜EQr Ò!~X <ï à¤¦¦*µÚÌ®éo‚ø›xN˜´þ[ó+…§fLùÂØM¸‰Õ¨hŸ˜Æ[m†µfÛñT¥Ø`õ†Å¦»µAAÅóXcöd1¢ä£¦úîÒ¥ !ü›o¾ ˜vùKC2r h' C ÀpC áxš«9ßLö}ÆgKŽ“iÆ.ȪìIõŽgi+Çÿl9–B+™8`3ÇÁZ·6Xœ¸Ù›~ÚJ’VÜúÍž=›®»î:úí·ßëZqË‘ëO" )9™™™„d"‚€+ 5Cß¾} ±8°às)”ˆõ{ƒ9 nËø^Àž µÙ… ñ;m9 æ0XK⣢+ö[=Pq{"‹ò &.XÍ”íü°†òQšáw ŸcÄÐ@é€?r\d˜bÀ‰cw·èð²Êo9 18—%óØ4ÉÇ™x2½g »ÀÍÉLÍTJb|²rÿ苜o 
–RÍ:sŽãUsØbË ,8¼€fPx&`ƒâ‚ù –ŸŠlõ±\ûåq•ÎHÇdÙö ¢äøv¢ &¨øäÃAÀ·ˆÿ"›•AeË–¥Ž E'lUÛ¶méå—_¦çž{N¹v´hÑ›!upÄ’àì¦æep—Ò¡C‡”‚Ó A7•8Å€‘ùØœÍÉf¯å`©;v옽Sä˜ à5$»’× ¾r#Ä;ÀŠóÔSOÑСC¯o~‰ü¼+V ,f˜gžy†ÚµkG£GVIòü²c¤Ò‰€Xr²[=Ò(ÄmLeÿþýÊ ·Q£F¹zÄÃb&"Qr¼Ü ð FÈÞxã /ß]nç QÉ)Ãî ?ýô“Zý|þùç=›”)”±ä”¾€¿ã×_=íÙ³G)8Mš4 ø6¡°äˆ’c„ž:Qr¼<îºë.‚ù|âĉ„IÿG”¤*Tðÿ†X´ nݺôñÇÓ|@óçÏ·8*›‚€oKŽop÷§»fgg«\JHn ÖÒ¦M›úSõýº®¢äøu÷\åEÉñb—~úé§4mÚ4¥àT­ZÕ‹w–[yXrQÉfwÞy'5Šn¿ýv:uê”'a”²—KŽKpÍÉ9994dÈ‚[8‰yõn×CÉÁ\‘—W<Æ6ïÖVî舒ã¥Þµk=ùä“*˜»G^º«ÜƲ’ü¾üòKëÚwÜ¡èν©ÜC°…€Xrl!#û¡à ÎuãÆ ü–-[ (^F zõêjž@."AÀ׈’ã…ÀŠÆ­·ÞJ`©z饗¼pG¹…7€»Z ˜ã­„⥮k"‚€KŽzÁ8uÈå|eÆ £uëÖ)÷ÚV­Z§rAT“5j¨Ö"‘ˆ àkDÉñB¼øâ‹´wï^úå—_($DX»½¹Woè–€Ù±cGzíµ×”%rÆ ^ÅWn&˜# –s4ä;ÀB"}V¯^MóæÍ£6mÚ0>B®øøŠ’ã£Ûê%G‡û7–.]JÈ"ÿÑGQ0& s?¢Æ+1” Êók®¹†n¹åEža¼žbÉ ¦Þ¶ÝÖüü|>|8-[¶ŒæÎ«¨ïmŸ-G<@hh(U®\Y”O-å;…€(9NÁT¼“à|ï½÷*žþ{î¹§x…ÈU†G ÐÝÕ´(]º´¢•F{~øam·| ^E@,9^…ÛÐ7ƒ‚3bÄ•ä R3ˆøÄåˆ%Ç÷ý 5 iŽW^y…Nž>^”†@@”t²Ä#/Î×_­XFТOš4‰¾øâ £uÔ'@KN€v¬ƒf]¼x‘î¼óNú믿húôéÔ³gOWÈa_!%çÌ™3„>|‰€(9nFÿÙgŸU«úø |‚Õ]MëÙ~ýúÑ /¼@cÇŽUYƵýò)‚€»ÀËòÝwßMS¦L¡iÓ¦QŸ>}ÜU´”ã+V7(:"‚€/%Çè¯]»V1Nýïÿ£ˆˆ7–,EÍ’mÔ*z¼^/¿ü2uíÚUQ¹jxxü¦rƒ E@,9ÁÕõˆ½ºï¾û”{ìÔ©S +"ÆF–ˆ»Ÿ‚¡v¢ä¸©—ñ ~ì±Ç”ð¨Q£ÜTªctðR©bTŒ^WOÕùs~ýõWB^(K@¸§–rÍqfŽF`~G#æ”õþù'õïß?0`­B2PÈéÓ§¬eÒC@”7õØüA°äŒ?ÞM%J1þ€”œ`cV³Ö/X¹ƒ+ɼyóèÕW_µvŠìÜ‚€fÉqKaRˆa€‚óÐCÑ?þH˜_A]/âÄÄÄP™2eÄ]Í?º+ k)JŽº¾§ˆK@b²V­Z¹¡D)Â_@LN°1«Ùê›Î;ÓG}¤ˆ7À|$"x±äx]ß—=fÌúî»ïè÷ß§Aƒù¾BR§€u?66V”§“=…@ˆ§ ¦r‘þÀ4sæÌ`j¶´•€%G”œ+CáÁTÍÛn»Ö­[G WÊ7AÀ ˆ%Ç ¼¸~õÕW4yòd2dˆÁk+Õ³†@¥J•DɱŒìó*bÉ)!ܹ¹¹jåúÞ{ï¥úõë—°4¹Üß%§h}öÙgJ¹6leff=Aön@@,9nÑ€Eüûßÿ¦Ï?ÿ\ à"⟈’ãŸýhµ%§„=Šü  I|ñÅKX’\îÀ]Mbrô=®|èÁ¬ÚWAÀˆ%Çh«¬ÿüç?ôÿ÷*ÿÖM7Ýd¬ÊIm\B@”—à’“=„€(9%6//Þ{ï=ÅþR½zõ”$—ú+bɱÞsW]u•JŠ€áwÞyÇúI²W(bÉ)x¼ôé§ŸV1}`R9r¤k(UrÄ䤦¦ºr‰œ+¸QrJéO?ýD§N¢'žx¢¥È¥þŒ€(9¶{¯wïÞôþûïÓsÏ=G3f̰}¢\@@,9.€å'§"y6 øáEàã'Õ–jÚAZ²l;§É!AÀ£8M<‘‘Aÿüó-_¾Üé•Ù£Gª`ü 6Ð7ß|ãцx»pd`Æ 5¬kÖ¬I‡¦U«VV£aÆԶmÛÂmW¾ £3òÀíG“ƒÒš5k´Mjܸ1µnݺpÛS_¼Ý‡»víRcæê«¯v˜ô---MåOغu« þÇJ Vì p|ýõ×Í1ú-??Ÿ–.]J`C’9WiJÅ]ÍÚD?þ8mß¾n½õVZ¹r%5oÞÜþ>ZÒþvµz®ŒgWËö‡ó=Ù~±äøÃp\G0“b.íÖ­õèÑÃñr†!pôÛŽŠŠ¢ôôtCÔU*¼8mÉ™3gŽJv9qâD§Ð‚R´bÅ õB‰kM@k F5¼XCÐÖýë_„UÆ^½zQƒ \n2ØÙÚµkG7Üpegg뮯R¥ 
uéÒ…jÕªEwÜqÁŠäiñvÏ/¿ü’ž|òI:vì˜Ãæ!Þ£Y³fŠøx8“£hãÆ4aÂÚ¶m›*Ÿ`ðÁµIIIïiy‚Xr,)º "P«ƒ% ñk¾”’ö·+uwu<»R¶?œë©ö‹%Çzß¹:¾üòËôæ›oª|8Ë–-+|.;wµœå+œùmGDDPVV–¯ª(÷N+9ǧ:8Ù½|ùòtË-·PÇŽ‹5€YÞzë-1bDefàÀTµjUŠŽŽv©ú°˜´hÑ‚`²&À³víÚjµ«FÖNqû¾’ö¡«;Ý< . ±odDâUX¼ºwïNHD kÎóÏ?ïð–ÇÈÂŒ~‚´iÓ†yä‡×Ù:A”[È\Ùª²•cåÁݦøJJÚß®ÔÛ•ñìJ¹þr®§Û/– ÖëùÚk¯þ¾þúkúôÓOuÏeëWk/æ‘@\Àueg~ÛåÊ•+²XëLÙrŽ àNœVrpS$xŸ+‚—UWWÞà?~£ l[¶l!ø»K¨¿:u긫H·•Sœ>,î͵ñ¥}Ú*gÇŽj,jc L.aaa¶N×íǹæ¢)TZYæÇì}Ç*^Ø…]ÍJ—ŽsÄåÀ’öðÃ;¾Àƒg·¿‹S%mkŸÅ)߯ÑÚ­}º£-®þNÝqO)ýÀz+ØIï¹çU¸åsÙ½wtoiHÏ ¸©«h¿iíÓ}Ô5Pp†ªî×%°– <¸°¼âÜ«ðb7}ùøã AÕˆ±'p÷Â*XØð €õ1 ˆ%A€%^’o¼ñÆ"Ö {e:s /“0'Ãü=ð‹…e /äÕªUS}„r€ýôéÓ•«À’%Khîܹ+&¬Â8¼°ÂÅíÀ ùµ×^[d|ìÝ»—V¯^­,-ègky³xñb¥¤ ˆµq†ýÀô—_~QõF\Æccn~( u+S¦ŒŠ™2·|á´Ø´oß^][ܰâ@$¨sbìÿúë¯Ê³iÓ¦„|Έ³ã¹  €.\H‘‘‘ê7Kb°0朱(ãw2kÖ,‚¯9ÜB1žñi.ŽÆÎue˱¹~ýzU¿œœ7Bsqô|µõÜ6/£8í7¿ÞÙïbÉq)c‡øÄáÀzsÿý÷«ÊÙz.ã·*zÄêÌž=[mxPàw‰kà&ŽXXXô;uêTØPgç6g~óˆ%ÄÞ¤I5oÃmîÍø­Às\q1·Z“`ýmãG”k#Böyž(tÂñ &®€i÷îÝ&~ 4ñÜÄ/Ç&~™4ñJ¹‰Ý© ÏçD˜&~‰6ýöÛo¦Í›7›ØÈÄ«1&^e/<‡H&ð.ÜÆ—E‹™î»ï>¿ˆšp?~á4ñê®:gÓ¦M&~6U®\ÙÄ Ûgî¥Nôð?~0ñCÍÄ/Pº;1í¥ÂÑuûÙ­ÊÄ/Û&~áÐíç,»Iéöaƒ­CªV0‹Óv°µÇÄ/ˆÚ¦ÕOŽUÑáΊ•‰]èL;wV磾¬”šX™1q–zÇ·˜8è^Ý›ÝMüP/,×ZâþLóiâ0Õ-[¶4õìÙÓÄ1…×}øá‡jOF¦C‡™PoŽÏ(<Ž/l±Scˆ_ÐL¼*fâàSU~Ö§m`¬[·ÎÄ®ªðhOX&VhÔ¸á^»B˜ØÅÏÄJ˜ºãc㛓ÍiEªñŠ}LŽQ¸Ï™/<«²ØUΙÓåœËüïÿ3ñhúóÏ?ÆÄÑxNLL4ñ‚ê~á0]ýõê™Â/&^Q4ñbMá½0,ûÏ/v51嵉_JLÌô¤žK¼Qx£ñ…]ϸæìÙ³&&)Qããc›cþpH‰£¶ã$ŒEü~Ù¬ž×(ƒ›ÔïóR)&¿Xš^yåõ{@{ËØ±cÕagž¯öžÛÚ=ŠÓ~íZg?1'¡ÿøEÖÙKä<ƒ €ß>úî“O>)¬‘µç2žçœ3G‹ß5Þ07^sÍ5j>åõ;Á9x¿ÀXæÅ4U¦³s›£ß<æ#mN|ì±ÇL¼øjâ8Ó AƒL¼x©êÆñ£j¾ÁoØšóos,žA"þ‡~£x7ùÖhJ¯|šðÖ/ª¼²¨Srð"À&gí^4Žù ½å 2^P^j5a˺ŽWdÔ.¼7ñJvX}:s/ÝÚ€b‚úó …î¶”œÄDêáh®A9ÄCÔRÜ¥äàeÞR¹äÕáB%÷=z´R˜ýª°œÔTõ”0M,û/~P˜ÌÛå}25IHHÐ)rèWLšðª¹š°Ø"¢í2¡l”cKÉÑN„’…esA`¯ü©Ý˜ÄPMM `Ÿ;”ŒW”ÅñTZñòé$=ôR°µG—93ž÷ï߯úãUŒ,˜à·€Åˆ¥’ƒ|f+4½ôÒKÚeê“ÝQLeË–-\´q4¾Š;ž9ù¡©G…÷fë“nü;j;#¶È˜¾úê«Â2˜ÑRÕ­`j”7,˜ ^ÙB¯v9z¾:óÜ.nûÍëäÌwMÉÁ|#â?|ðÁê÷ùÑG©´µç2NbK¤ZlÕª üpŒŸ ï'Ú¾ÌÌL5Ö™5³°\Gs›³¿ù}ûö©:cîÄï ‹£uñìÿöÛo ïiíK0ÿ¶ÑvŽ—µ‹ì38Û¢äØtW]1+0ÜÖK“,Ü{øÅQÛEüÐRl`æÛ5²›Š­>ʥ詧ž*,æh¸hðKJ¡ÉÙÒ]©8÷*¼›¾À= nf¯¾úªK±IÀ×ñK’ rçåB"_ Üzà7 †2Mžyæ©\N4í˜ö S=(¬Í]a@˜P·n]ÕF¸!€xnc¸dçÎÄ/%ÊUO+÷Ͷ9IÈ- –ý¯]cïDpóÜqà–ቪÐ5ÍÙ¸{÷ÑŽ‰»š†„ëŸÈj² 
¸Â®Y%m¬™»`a,°ÕX18±5Ѫk(ÜfÙr]øìÑêw/¸×ñ‹ŒÊ÷ãh|w<ã·„±Ê/fÄÖOõ;r%¹0\ìð\fë•Vuõ;Às†•4µï7ÞÐÇN¸"kî$Žž¯Î<·‹ÛþÂJ;ù¥8Ï'‹–Ó<„\¼Ç§ÞØ*Rä.¶žË˜ðn ¹Oƒ–¿ °—jûÀâ÷5ü¾5q4·ÁµÌ™ß¼ö;Äo îϼ`¢nqüøqõéh,óo.~ÚóGëù¼€M%±ÌiaþƒF~øg#æÃ·¹ò›G¡x^YGóU0ÿ¶¿hžëÏ~²Oð4v©ÒÌ“OjÑ~Ô˜ð!Z¾í¸£O<,‹ x{¢Ýç÷^öÊwõ&Täû@Žs †³å@¹Vx™C%»m9{©WÏc3¾ ô´µ²Ž~ÁÇÁ®kÔ^ q®o*OMAl>Q°é_Xc¸Ö¼ÿ±íŒ`% R¡`ÏÓ–2(9Xi,N]iO ŸƒsÛcÌlŸž#GލbmiM¹ÂoÓ\0~ðò¤g{ã«$ãÏ7¼‚ø+ÌXDÁoÆY<Ža´z ž]8Öę竣çvIÚo­NööÉïÍ:Æ:ÆnÁ*¿ÞÛo¿­ywÔÎVÿÛÚ¯ÝÓ|nsö7¯]këÓÑ=ƒù· %G³¶ÙÂOö žFÀ®’æ6ï—;¸'á!†Ál.pË‚+Š5#&dPGš V¡D@ðàÐÜ(°]Ü{áZw \¯àNgË…ËÑ}4Ö90Jq¾î…ßѵÅ9ŽZ¸l¹*xÑÃu\ióR0UÁ†I!tç€Ñ l3x™Ä !ûH+íAgnQAýÀVƒâ“'OêÊ)îÆ+¯¼¢”g­îæ÷+n™ö®Ã˜-ŽÂk¯Ì`;†œRPt°pÛm·Ù´ºw<O<Çà‰{Yy .šæÂ±jj<1Y‡Úmo|•d<Ãcµ_¿~ê7¦I¸óiâ¨íȯk¹¤¤¤Ð_ý¥\Rñ[ƒ[ ØæÌL…Pä=Ë=·KÒ~óú¸ò],9® åýs9FLy.`Ð’f{¿Wîh>·9û›¿rµþ›¦Ü˜¿§èϸ´Ì¿m¼Ghs¿5ldŸ à l*9ð%Eymâ‡küÆA͈ċX¹Ó2ÓÃ$ %/½ˆãÁ 7r¾@ðJ6!áeþ³pÏÀê%\A8ÀIQIâ%‚ÕLÄé`B}*®wæ^êbýÃà í…ÂÕÛÀl jfP¸ÂÅÏ–0‹:TżLÐß"»ü„ ~øÄK0ÕîóÑš;¶9@YÑujŠöYö!Våà2€ñ¡ ^Ò0‰àV}Ag k‹*ªiŒ%ÜÇ $iߣ>ªVñQ\˜ Ë—/WõUVþÁ'çÃIŒ“'N(`´]Sš1v¡@4kŽk‚öA´:kû}â:Qr¡äø8bÂ@} W­1cÆX½ÀÙñŒ‹Í­Ë'°:š[F,û/ð°Ðb|š/Î` Â:©ÑÜ:_Åψ›?¾j7\j˜ ƒÌs†8j;èkaÁDÜ3%*mÄöÀ"¤YŒñ\Æ3¸W¯^ŠNÖä;ï¼Síˈ£ç«3Ïíâ¶ßj‡ÛÙ©½`Ú9EùÌ—‹ÿýïÊygí¹ŒñŠßœvLkžÓHoa.8Ïrδ7·¹ò›Ç}Ìç lã‚9õÄ;‘5 æß¶¸«Y²ÏëðT'»¯Ä+V®bÓÇÞ(Š_0Sñ6ñK¦¢uäU<8G+G\WÌc8 až@Õ10ñнºûÃ*–6\ƒ?ŽýQ4ÄZEØíB•ÅIM°¨vÛ»—v§>AçÌJŠŽ½Èò^`^B[øeÚòPá60Ø5 ðbKˆ*çöÛo7q<‹µS³# iV"Lœ7@•Å«¸Š®lJ`¾ý%„­RŠÝŒ_,“ÞÍ7߬ú 6{}ÈùqT=@A :mÔ—ã¬ÔuÚ?ÐRcl€e lm ñ[+Å&V¸Ôi å;…/˜žÀòÚIvïÓ ­L0ê0ñ‰Ý TÛ@Í: Ö%v1RTçœE±žšTÙ¬ä)ŠQ0Uicîï¿ÿ6±»œÂûøEц(g¸ÒTÄ=°ÕAG0üYŠ3ã™\Õ·`*c#˜ Ñÿ`ÓÄVc¬cÌ®ùûï¿Wtâ ¡6gÎs4¾pWÇ3®Á³‘•L`#« èjA¯¯‰3m2[‚["+ŠºÝ’b¿{ZYÎ(98×UŒ´òÍ?‘oÆ‘’£þÓusÁD:N^æx…Ûü°Ãïx1B>%ä©áU4«ç[ö¡µó@í«ÑÁbá•;«e9³“Ý Ç+ÎGKRž½{‚¦Š¿ˆûà`õâ -nX–lo„ã¸;˜Ó_Úëø?{m‰-7 âÞËÖ}œÙÏI"‰_:Å¢eiZ·,î(%~AqºsÆ%{L'p!tUà6Êp{bÉ $‚@O´k‚:j,SÚ>w}²§£¿vW¹Á\\¬àòøã[ôˆ-È:8œÏø!ÎÄUÁsÇs VŽ3ãËÕñŒó!ˆg³'δW³í¡H\4Wbk':ó|µ÷ÜF™®¶ßZ=œÙǯ Μ&çx Äwn‹l…ñÒ]߯ÑÜæè7oë˜[À jO‚ù· wÁ’Îçö°•c‚€38Trœ)$ÏA0=¨Qµø[mÅ ˆ·A 2»^©@b[绺AÐÈ瘼\ÛSXœ-›WmUL|œm)°Î–Œç±eSålƶ{²ÍÈ¡…X+ÄŒ€iÀ€NÝãÂ+³N/'ù'x¹1È£„¸6ÄÚ¾ùæ›>¯œÌm>ïE…E!AÀ—ˆ’ã}0!™½Õ]Kþ<%ÈY¤å-Bbµ’ VÝë¡XyCÂDó$Š%-?®KŽçzùÝwßUŽ#SD={ö´{3v]*L^ ò 
°‰!•§¬xv+#½‚€Xr¼³Ã›€8¤AŸ©#øpx¡‡N¹ÍCÀºX,~Ÿ¢ä¸šœîvDɱ)~¤`ûŠr  ØÓÌ3¤[s# ´6»»=°äXºã¹ûÁZVëá&ÊqdŠÎ9dºvíjd&í²9õ²¸J؄˯ˆ%Ç8݇,&€!“Ic Q1™Û Ñ R AÀˆ’c§@) Êl˸;—øÍ![1O~ÓTT,9žíP‘#ç,¤ BÍ2hÜ­ ,6bµ±†LàîKŽoû9˜˜‘“zè!e¹QDæ6£ô³ƒHÜœq:#Hk"“v:®jàӇ니 `‰€Xr,qÿ6w‘k‰©º‰éÏ 1r"Á€Xr|ßÿÈi…ÜIÈ!å÷iß·Hj ˆ€(96z•©‰óº¤ÇF“e· 0}¨JT² Ï" Üb`ÅAbLódŸž½³”ndd•Ø7½ÃùÅhĈ*Ùì'Ÿ|â›JÈ] âqä7jøn ø Š’c£‹À9'”9ÞÆ)²;ˆ€"19Þˆ›:u*µhÑ‚úöíK»víòÎå.†C@,9¾ëN–L7Ýtq’Lâ¤à$}Ỿ0ú¡ä¸’îÂèí‘úù'¢äØè7øwìØ‘ìå”°q©ì4%G,9ÞëläpÁ*rƒ ˆ³‹¢ã=è u'íÅZV‰½Û-XøÛ!'A¦¯¾úJïÂïwwƒ’AÀ—ˆ’c}ü0g̘A7Üpƒ•£²K •«8ˆ%Ç»£‰^gÏžMõêÕ£ž={òG‰‚€géæC |ýõ×¢àxî€(Ä1bÉ ˆ®ôëFˆ’c¥ûV¬X¡’Š’cÙ¥Ð,9¢äx@s¬*ƒ¤W¯^´yófïWBîè3Ä’ã]è.\HC‡¥áÇÓwß}'¹O¼ ¿ßÞ ¤1~[©x` JŽ•~„«ZãÆ©Q£FVŽÊ.AàŠ%GÜÕ|3`ÑA|Øöîݛ֯_È]FàŸþ¡!C†Ð°aÃèûï¿'€ûÚÝM%Ç݈JyÅA@òäXAmÚ´i*7‡•C²KPˆ%Ç÷!""B¹•â dsæÌ¡N:ù¾bR·!–KЇŸ?^•‰8ìƒ5oÀ€—öÇÅÅÑÊ•+åE\!RòȇĚƒ¦ü±ë’—,%HÆ,–œ`èic·Q”‹þÙºu+P³ÿþûï6ñv¾D93˜%'˜{ß8m%Ǭ/À$ƒ@9¸½ˆöw5{èøîÐ}öÙgôôÓO+egüøñ¾«ŒÜÙm@‰±åßý·Ür‹ÛîŒ H¦L™" N07·ñ’¹¹¹n.UŠ\C@œmÍðBîÖ­[S•*UÌöÊWA (°äˆ»ZQ\Œ²ç7Þ ØØX7n¥¦¦Ò«¯¾j”ªI=Š@÷îÝ©råÊ*™ååU«VU¤–ûeÛ96nܨ\´ÁLøÇVàE’"F999%-F®J„€XrÌàƒ’†&AÀbÉq„ïÿç?ÿ¡o¾ù†Þ|óM3fŒÃàuß×Xj` Xè@@`鲆mì—J[ÈÙßDºð\hß¾=!?^LEw KŽ(9î@RÊ( ¢ä\Fïðáôÿ~‰Ç)Éh ¢kÅ’ãøœÉ“'+egôèÑ’·Á?ºÍj-­¹¬‰«šU¨œÚ‰t È/Õ¦MEÃŽ—RAÀ]ˆ»š»”rJ‚€(9—Ñ[²d‰ZÅêܹsIð”kƒ±äøOGßxãô÷ßÓôéÓUrCŒˆøp%®W¯ž®âØnÕª•nŸl8F`ûö튮eË–êw! ŽcÌä ×KŽkxÉÙžA@”œË¸"»sÇŽÅ\ï™qp¥ …´u)V¬/^LpÏéÑ£ë&“I”?˜€A™ûÈ#ÐM7ÝDŸ|ò =øàƒ~Ü"ÿ¬:—IgiÛñTÚwê<NI§CgÒÕçéôºÀ E$‹-¥JÓ÷ÉÑôýdz‹ÆŽ åÊÒU±å©n¥hªWžêUަæÕc©eÍXªÌŠP0 XC¡à\uÕU4gÎynSçû°­Úû”(9>ì¹5‰’Ãî+XÑœAñ8Iê ZÆ=§L™2ôÅ_PÍš5顇¢#Gލœ:’sÅ3}v‘M3PfVH¦û“iÃÑ3J±¹È"ʆPB• ¬DQǺUhT»úTµB9e™‰‹ §¸È0uNÙËV›wåÒ3Ï>Iy8°™­;¹¬,e«Ï¶øÀòéHê%…iÝ‘ÓôÛº¼ïRl@|tµ¾*ŽºÔ‹§® UÕý"ùþ(T NõêÕ yà䙈½lÌ6i–x=ˆ¾B 0Ÿì. 
‰ 8°Í<ûì³.\%§3ˆÇDEE3 Óö^xjÕªE÷ß¿ŠÍûñÇIó'˜Fú¨!GS3”Kٜ퉴hO¥eçQ4[Y:±"3²m=¶¬Ä)ëJBå Tº”ó•|ŽŸ×eØ% niš‚R­‚ý ú“¬ä@ÉÚz,•6°âóíŠ=ôÒôõÂe´«]™4«E›×Rß]©‹óµöî™Èý N||<Í›7*T¨àÝ È݂͒#JNPŸ7>蕜7*–ž:ø¼3¤þ€Xrü£Ÿ\©%¨ˆëÔ©CȩӳgO•;/‡"®#°;ùMZ~ßp]ÑR)2,”z5ªNoÜО®I¨F-jĺ¤ÐX«¬p®J|T9Šo\ƒúòŸ&ÇÏe)ËÒ‚]Çé›å»é•ë•õhh«:Êš„z‡ø¡Æ«$Æq\\ÍŸ?Ÿ*V¬¨5Y>¯ JŽW`–›8@ 蕜µkב)"8ƒ€XrœAÉÿÎAþœÕ«WÓõ×_¯rf!hóæÍý¯!>¨ñ)vû~Õúyõ>e-Ue8[jÆê¬›°cÔ¨¡,J°*A¶s|ÐÌmG•‚ö-+= 2ÁÇîéÖ˜Ú^UÉȺ~ËÄÄDeÁb³`ÁЉ‰q½¹B(!eË–¥ÐÐP’äË%R./¢ä°’#Vœ¡ »X”œÀíò (EgذaÔµkWš4i 0 p\‚–ýlÞÎcôõ²Ý4}Ëae±¹¥}}ú¿[º*ÅÆâužI£ƒ§Óè“fòij™t’•§SgƒX›¬Ü|ƒ“Ï18ù© ß ”ct§Z†b˜U­JT¸ú«ÎJâ{êVºô×(¾"•ãsIóê1LPCO÷¿šp]&³5ê'VÜ>_²“ãx*Ñ}¬ìŒîÔ€¢Ø:eD9vì˜RpÀž'66ÖˆÕ”: ‡¢äIg´™A¯ä¬[·ŽFmÐî‘j¸«!f´ã"‡^ áâsß}÷Ñ Aƒèã?¦‡~8ðZÌ岂ñóš}ôÁ‚­´“-×4¨FßÞу†·©ç”"åŒj Xwø4mLLårR™<àR’O¬‡Wˆ¢Ò‘å©LX$…ÄÄRHµpŠæUáR¥ËP)vU+Uš-C¦‹t‘‰L/ÐEN z>7—RrrhûÙ2?C¹‡(;9:L2‰:÷Ó¦V,ÿUbÂxêP§ŠÝúÖgF¶g´R+œden=1e5=û×Zz {Sz¬ws‚%È(’””¤œððpZ¸p¡òP0JݤÁ‰€(9ÁÙïFjuP¿¥¥¦¦ÒÄ’c¤éu%GHü £JPE¼hÿðÃËΘ1chïÞ½ôþûïs°»ck@ nkèK³ò è“Å;èƒùÛ˜É,—þÕ!&Ý×WY>U|?[Ef3ùÀ,þ[Î CFv.…– ¥ÈÊ•©lLªÚ¹ •‹‹¥°è VÖQqN‡”—žFÙ©gù/•¥¤Ðì=;)kêZ¶•¦–¬ð lVƒÿjQ'f[³Óµ>3±ñ߇#»ÐKwÒÿ-ÚN²’ ^¼¾ A!ò¥ ¡5Hà„P•WAÀ׈’ãëûµ’³yóf5#GDpXr„ŠÕY´üû<0¯Aѹ뮻h÷îÝ4qâÄ  âÎa÷°/–좷çl¦Ì¼|z¤g3zœ­ŽØÌÖ9Ãè÷‡éÈ™óű|Íóÿì]xE=Þ; % ¡…Žô^¥ ˆ b òÛPD,(Å‚ ¢(6lH‘"U¤Wé½… 齇/ yI^^Û;ß÷x»³³³3g–—={ï=·y+øûù Rã%YX*ò©,”ÓìEl }‡bv¦ýs Ä7¹°‘eÇÕ^¿1;«W¯݇+V¬ÀÀ f~ûöm‰à䋤©;wîÿ-+€†7L&9&°*‚ªIΩS§Ð¤I•ß<ýÒ"À–œÒ"fþí[µj…#GŽ€ Ú¶m‹%K– oß¾æ?±bf@ 4__º£ñ¬xП>°U±–›[-©—ywæ~¸H‚“š•‹9ÛÎ ðÝ¿ðøÏÛq<Ý µ{õBÃ'F¡† ‚æBp´—ÅÆÑQF¨;h0†t_Lù÷ª¿³//ÞƒK·“µO‘’’ŽïÒ—¦ÀˆVµ1N´ë0sµ$I]¨q)+H%Mv­ÎËËÃ01¦ß~û Ý»wG¦ZرcªU«VÊ^¹9#Pñ%GÎ+WñWã+0…P­%‡Ìû¡¡¡¬šTøžàš ’Ã19%€d¡‡íììðûï¿K/GÞ~ûmЋ’Ÿ~ú ¤heî呆âI`à÷gº`tÛºEN)93G›Ó˜%2…Òš§pÝkÔ» ìÄ¥¡´СòE²èØ Â ïÄi!¹¶ˆÕ8pàz‰œ1}úôÁÌ™3>ßU'®¢ë—ÿ u v¿5þʘ²ÈÄ4tø| 9"Lûö¨Ý§/l9î¬LëVIÚT¢ ÁƒáàÍɪóï™E_ÖB•à!5=gx{áx/.ú¹ùwí>ÑÑј?>4cqˆØØŠ8¨qãÆI’¼?üð ÇP).Â;Œ€‘pw¿+±ž˜˜hä‘ðåÕŠ€*IÅãPᘵÞöeŸ7¹ªQaKNÙ1TÙŽ"¡äâÅ‹1kÖ,)V‡8Ë7ý—½0ìÇ-x¶C0ÖŒï g;ez´o¡ÙŒ•8›…`{C‰0¹”'Ô2ÖÕðÈܘúϱBîk¤¾¶j\/,

Èh#@®jT˜äh#ÃûE!ðÆoH.k{öì)°QòPC’,~þÏÝx_$£ünd(^³P—î_®|ª¢ÎàGáàé¡y˜·Ë‰€•5»tA@ÇLû÷8ú ²£-5=°i 6¿þv\¸‰¾³WâkAdÈbLD™ÎFFFJdÙ××·œ£áÓÃ"À$ǰxóÕ # J’CAdê§<9\Ò “vW+ jênKÉB?ŽªU«¢]»vRøƒ‘ï±µÑåØç›Oâµ¥û0sH[LÐRq Åߌ[¼o.?€ªmZ#¨GÐ9—ŠAÀGÄ6Õ0;¯Ä¢ÝÌ5ˆH¸û²D¾ZHm_l›Ðû×ý…ì|àÝ÷?ÈÍÇ o}aâo3CÀÅÅEÈHHH0³‘óp-U>å“Ù? ÀRÖça@Ø]Í€`[Ð¥ˆàlß¾¯¾úª4þÜsÏ!##£Ð é þñ0¼jÕªBÇJSAœIB"z®°ÞLìÕTqjjV.ùv~Þ{A={¯Y3ÅqÞ©œ«TAÝÁƒ‘´ wÇ#âjáï…Iƒ;Á}ìW8äÕŽ.wƒ¶x‡0#Hu’¬9“cF‹faCU%É!iWÚ´°;Ù@Ó‘ß²³%Ç@€[ÐeHò÷³Ï>ÃÚµk±zõjɪCò¿š…r£PÎ"A·oßÖ<¤ó6Å༾l?>Úÿ뢌¯¡$””ÇeוÔí?"!2Ã!`ëìŒ:"ÛÅf¯Ãž°¨‚‹Óá‡o½†­“†âȵ<öÓ–1‚‚F¼Á˜$>À$ÇÌÍ‚†Ë$Ç‚“§Rñ°%§â1¶ô+ôïßÇŽ“Ô²(NgÅŠÒ”)n‡ríP¡ûlìØ±ÒviþYyü*^X¸ˆÄ“Úœ˜ÔLñ`ýBcÒP[>ÞT†ÃãPª$9œ9Zew¹§ËîjzSå]MŸ>]rSÓ%)q^^FŽ ú– ¹6MZyŸ<ÚÝëW“«¥ï¯¶Æ»Ï!PXXEMÉîXY[#¨wÜJÏñU›‘“'²‚Þ+n¶X5®Ž iéÖ–«ù›0yØ’còKdÑT%Éñóó³èEåÉUlÉ©8lÕÖóÌ™3%)éwß}ÝÍgC‚²Âš&ä¶vüøq|þùçR5ÅÚŒúe»$40±—2™ç΋·ðÖòƒ¨Ö¦ çÁÑÑ ¶m„¼t_¼‹7WPŒ¸q5Ìy¼>ß|²H¢!ï0&„‘œ¤¤¤B¢*&4DŠ# :’CêjLr,øŽ®à©±%§‚VY÷Í›7ÇG}„3gÎàúõëøúë¯ÑµkWX‹·úDvè[.ùùùøàƒpêÔ)¼¶t…KÓïÏtÁ]Ñé»­"Ó0T¸4¹Õ„_3%ù‘ûáoÓFÀÁÓþwÆÜíg°èP˜b°Ïw¬M1ú× Õ<.Œ€©#@îjär˹rL}¥,s|ª#9$<Àîj–y3bV,<`”Õy Œ?Û¶mC\\þúë/ 6¬ ÌÆÆFrW{dðPüºû æ?ÕUÝî«¥‘ÐÀз!ÛÚ^(vuQ'ˆ2kÏÚµàÛ´©•ØóQ‰ŠYѺç W6"º\SG€HŽË1õ•²Ìñ©’ä°%Ç2ofCÌŠÝÕ 2_ÃÕÕÇÇ’%K¤7 D|(gNÕjÕ††·ö`P³@P3Ö1±I&­lî[€xÇl¨Þ¦-ìÄâðùÛ‘­Ÿãåd‡ïEþœû/b˹f3¨:`’£Îu7•Y«ŽäÐRoooSÁŸÇaf°»š™-˜ —\Öº 9sæ`ÈËà6zmÓ11÷sª¿iÿCµ¶mAîN\ÌJ•+Á¿k7œNÂûZbDp‡µ¬%òçü‡t‘'‰ #`ª0É1Õ•QǸTErH™(%%Ç…( ì®VÔø} pX(kÍÛysÇÇŒ?€Ômfnž1®þÕQ¥ñ]}\û0>ö¢W­}ÌÞ|ªP2й#B Ⲧ­?füòbpqq¹Ú²»Z1qu…" *’“˜˜(À1É©Ð{Ê¢;gw5‹^^“žÜëK÷£c?Œn[W1N²àD&fÀ¿ÓÊzÞ± ¼ƒƒáV£žñ9šnk~®˜6°æ¹pí¢–1sž…¥ @Öò¢áÂU‘Y݃IŽ¡o3˹»«YÎZšÓL •­áÑøZHk–“‘ñ˜¹é$ü„\´­““æ!Þ¶ jtì„+±©ødÃqŬÆunˆÚ>®xKKnZшw##àååÅ$ÇÈk ÖË3ÉQëÊó¼Ë„[rÊŸTÈíU‡ðl‡`4÷÷Rôô¢=p©R> *êyDz°sq†_«V‚äœd'¥`rÖ"nçËÇÚcõ‰« üH\SD€H»«™âÊXþ˜˜äXþó õˆÇäèLîJ'~Øu±©™˜.\“4˒×qXXwªuè È•£Ù†·-*ÃÎů/Û¯˜Tï†5ÐS|Þ[}XQÏ;Œ€© À–SY õCu$§råÊ yV.Œ@iÈÌÌ”ò”8±[Pi¡ãöeD€”³>Ûx/ ·$Íœ8Tÿæßá%â5Y-²Œèš×i¤¶æ×¾=Ö¼Šín*Oxßå(l TÔó#` pLŽ)¬‚:Ç *’“œœ Rú Lâ\Ò"@V*ÎÎÎ¥=•Û3eBàÛ¡HËÎÁ¤>)ÎÿfûĦe£ZëÖŠzÞ±lH€À#0VTL´mPôoˆ)k(êy‡0(m ˜ÂJ¨o ª"9ìj¤¾\Ÿ3&Ñ*lÉÑ'ªÜWqdåæãË-§1¾K#ø8Û4KÎÌÁ§›NÁ«qcØ88Ôó†:ðkÕ'¯ÇJq8š3þh@K¾z[Ïs‚PM\xÛø°»šñ×@­#PÉIOO‡£££Zךç]NdK“œrɧë„Àƒ—¤<(¯uk¬hO’Á™‚ù6mª¨çu 
àèå ÏZAxwÍÜјrËot«_]Ê©£QÍ›Œ€Ñ KNll¬ÑÇÁPª"9lÉQß ®ÏË$‡ÝÕô‰*÷UôðúåÖSx¢ME,NjV.f 뎗 8Ö¶¶EÊu*@À¯e+œ»™PÈšófϦØÓ7âU€OÑ\ KyBdgg›Ëyœ‚€ªH[r,ä®5Ò4Ø]ÍHÀ«ð²›ÏFâ¬xˆ¥‡VÍ2ÏyÉŠCJ[\Ô‹€ƒ‡;¹«±|tyäsKBà|T¢¤Œõ¸ 9šå—=ààä7ÿÍjÞV9•­­àV·ÈʧY:׫?7G,=rY³š·£!À2ÒFƒ^ÕVɱ¶¶Võ‚óäËŽ»«•;>S7–¹"IFwªSUq‚ƒap­S•*Sh9Fà>^uƒ“ˆ£"A¨\è6!wÇeâ~â˜LrLaÔ7&9ê[sžq`áŠ2ǧéŒÀò£WDfM$ÇÁÁA]qŒ@!6ž‰@×àj°³¾ÿ³“ŒëÂU ÁÅ8 +Ÿ½¦Ø¨Ehú6öÇ—¢–«Ñš7Ã#À–ÃcÎWîÿ5UwDÆðJ•8pWK]!SdKN…ÀÊ DîFl¿p}ù+ðØ ˆ­­-œ}}õ¼Ãh"@Õœª×ÀºÓJ« Yr²só°ûÒ-Íæ¼Í08ä|A€ªHœüü|^xF LpLN™`ã“t@àôx$gd£c?EëÉq®QUÕ¨ðNQ¸ yñƒW¢‘’•SpØ×Åõ|ݱ7,º Ž7c @–œÌÌLP*.Œ€¡PÉ¡„TLr ukYÞuØ’cykj*3Ú{9 ®"S}“êžCÆì¹ '¿ju¼Á‡€KÕªÒß·ýâžÑ,!u|A÷FÀ˜ɡ kÆ\õ][u$‡\Ö¸0eA€crÊ‚Ÿ£ {âо–¯B::ôfR3²àìÇ®jº`¨ö66Ž’ ß^m’#TÖ…ßF.ùDraŒ„“#¯ò˪Šä°»šÊïörNŸ-9åO/Ê…Ó6¨Šâ8½}·±µƒ——¢žwâ°÷õÃ.A˜5 ÝWéBxàì­ÍjÞf ŠÅäPaKŽAaWýÅTErØ]Mõ÷{¹à˜œrÁÇ'ƒ@FNÂb’дÆ}W5jzøj œ„‹‹¥WBÀ±J÷f öuƒ­µ(î‹ #`,œœœ$uR&9ÆZu^Wu$‡ÝÕÔy£ëcÖlÉъ܇6ä––/\‰4ãq¨Í±ˆxØz°G/Þ/GO/¤efãšH,+«ÊhPÕ§"™äȘð·q¨"H8“ã`¯Ö«ªŽä°ð€ZoõòÏ›crÊ!÷PzÃîhk:>n)|âìÍx8x*­; xƒ(ù~ѶÚ>uã~¢Ð"Nå*F Â ¸&93_@U‘Ê7‘••¥1}ÞdtC 77999pttÔínÅèˆÀ¥ÛI¨SÅM!:p%6Y9¹"‡IŽŽ0r3€•ˆárru)äš,d¤Ãn'3FŒ€Q ’sûöm£Ž/®.TErììì˜ä¨ëþÖÛl)‡ “½AÊÝC <6AÞ. <.ÇÜ} µs½oÝQ4àF lÄ=#ß?rº¿® 6X“áoc À–c ®îk2ÉQ÷úóìuD€\Õ¨0ÉÑ0n¦3WãRPÓKIr¨Ž,ÏÖv¶:÷à BÀÚÙa‚8kº¿²sóp3)M³š·ƒ"À19…›/&`’÷# 2ÉqppС57atG (’CÖ{ávÄ…(-¶.®Â’£$9²¥ðªù)mßÜž(ì®VôøÜ²  :’“]œø•# “¶ä¨üFÐóô)=cLJ&üÜ”ä92! VÎLrô ·*º³–œ¨D¥Å¦Š‹ƒ$Er×íV@ð$M¶ä˜Ü’Xü€TGrXxÀâïé ™ ÇäT¬ªï41=yùùðv¶W`-ˆOe;e¢ï +ò ®N}ÙÑ‘Œ†ÖööÈÍËCJVNA­uåJps°E\* ï€ÂG€,9ôÂ0-MI >¾`™8wîfÏž-[¶èt>‰5mÛ¶ o¼ñþý÷_ÎÑw#U‘z /¿‘×7ÜŸe# ÿ(³%Dz×Ùг‹KË”.éå¤$4·ɱ¶·3ôpÌêzéç!nÝoÈ;mVã®èÁÉ¡›z÷Þ’¯GDZ»N>ÆßŒ€! K–‘6Úú½ÆåË—ñã?bâĉˆŒÔíÅÒéÓ§±lÙ2Ì™37oÞÔï€tìMU$‡2ÞO’¦#FÜŒ€lÉᘾô‰@ܽQ/'%¡‰äG~XÕçõ̵¯œ„$íÛ¨¾Gah¶5n!}õjߑɱ¶ÕÆSÜcñ÷HµÚ1âù™ä°Œ´qð/ÏUk×®_|QêÂÚÚZ§®Z´hñãÇëÔV»á•¿ùÚmtÙWÉq¾ÊLrt¹-¸6™™wߊ2ÉÑF†÷˃@FNžt:%Õ,éÂÕÈÊÆF³JµÛw„ëUø{O ûæÕBX»{ªS{…|ߤj¸«&t¥g窞¿ w5*LrŒ¸å¸tåÊw)ƒü­KW2!ªT©’.Í¥6yâ7ÿ‰'žÀÕ«Wu>§¸†Ê¿¬Åµ²z²äÈnG2%ž† KŽ••äÿ°º,_ÆÂȲ¾Tìl¬3ÍÎÍG¥ÊÊ:E3Ûɼvi§ײSpj®šAÚÙ#H=¶ùÙ™Â:ÓŽÁ‰í,„¿? 
)‡¶ÂÆS¸ºˆ?”îÂÆ»*îˆX¦Ôc»PÙÁNZ#åȤ9$õkíîïÁc¥í”#;EýAX‹ó½>[pÝäƒ[¥z+Wxö|tŽ%”JâwŠŠ|oÉs²³¶Bv^¾¼Ëߌ€Á wozc’cpèË|ÁÝ»wcçΠ<“d™¡¢MXȬ.䯂îÝ»—x½âΡ¸ùQ£FaëÖ­ Ë]kàÀ¨ZµªÔ'Õ N2ã ™Ë÷–<[ëÊ…ˆ|Œ¿C!@®Lr …vù®óÞ{ïáÏ?ÿÄ›o¾‰#F`Ú´iR‡š$gÇŽøè£Ð¼ys4hЃ.ÑEíAç×LŸ>wó«W¯Žàà`г)#?ÿüóˆEÿþýA}Ô¯_gÏž}à$UGrèü;$ÜÊ…ÐúÇ$Gw¼¸¥nd 2C…@5KŽdÉQÖi7§í˜eßÁ¡V#éœ]µš’…&ñ¿ ¦°}%âÖþ†j/|ˆÊöp¬× îDê‰=BFÛ Ž [KmíkÖ‡K«.°vqý5Dµç§ô!oøOøJ¼f¬Œ$þ³£®ÃµMØV©.5‹Y:D˜<{®Eçá!òe ¥’ÕÝûF¾·ä9‘ÎaKŽ  &9F¾”—=~ü8fΜ‰/¾øB²¾J$C³ ÿ;v,¾úê+‰ä<öØc’ueÞ¼y8pà€fÓ‚í’ÎqssCëÖwó‰ÄtéÒîîî˜;w.ˆôÙjÖ¬™tM"<&<øw[Uîj“C‡@vqáwo”ˆ‘cû{ªE%6挀ŽXßóqΟ6÷NéT+!ù+ü±tìÅ´›Õûi§p)s’I˜ì¨ä¥% :ê×%ëMA…بõù à,¨×òé®d£k vv5jÁUX~âÖþ*HÓG¨$dãÖü ï!/t½ðKAœZáúg÷bí„…(79¾ Yo7>*ò½%Ï%WÔÛé0,ŸÃߌ€¾ ’ÃêjúFUÿý­Zµ -[¶„««kAçmÚ´‘¶eKÎ’%K$/—·ß~» MTTH¤ ,, íÚµ+¨—7JsŽ|:÷Ë/¿D«V­V"²òÄÇ?øw[U$‡Ø •ÄÄD&9ü®ÉaKŽ®hq;]-8+¡IrlDüD¾…¼u' JòÍ ëK‹Î‚ˆÔFúù».f$*q9î݇) “þ¸i?k‘Å ;U°×Aâîµpï2é—N¢ÚKS¥¹)‰È‰½)Åë¸?<@ã,ËÙÌ¿Grä{Kž¹¯y:YNœ—°Ñ ¿€gŸ' ZY9»Â¶Z$| õ]pˆûw¡p¥»®Qcž›ù÷ûä{Kž‘;­Ø/ù3†B€HNtt´¡.Ç×)#5jÔ,5Z+Š!Åâ~øAq²¼P\NQE—sdrCRÒTÈe.((ßÿ}AÎB¹ï… âúõâ·UErHrŽJBB‚Œ3:!À–`âF¥DÀÝÑV:#!=[q¦‡£rÅ›4s/ùw“/'lþ y©ÉH9þRïñ/ ÈKO±9)¨*„9G(¥uEÜ? 
´w®~ôŒ¨º##’‹¦’vz¿S™~é”´'ç.>$ ]¼>'TØìaç_VNÊøK¿§&"çv¤¤ÌFòÒéçãæŠñ%ÁÖ/@»+³ÛÏͺ›ÓËËÉ^1ö„ô,xh%U4àFÀÉ¡€q€2Øå¸Ä Aƒ¤³_yåU‡Ü`—.]*ÕíÙ³qqq’È€¿¿?Þzë-Ìš5 çÎòeËð /`ôèÑRÛ¤¤$é[ÎQI²Ï%#ËEïß÷7ÿÔ©S˜8q¢$QÝ­[7IÒš„>üðCPÿÅÿn«ŠäÎ7Ëä”¶ä”-n«+ÞÎwDcSï>˜ÊçUq±Gî½´r9~;Ôi/A8ˆÜœ{²%2…ð€ÿĹÈçò„A¸“›nCøþÏ ºúÑÓ¸òî85i/aý¡bãá—ÖÝ»z>.ë.©«QÞ››?O“ŽÇoY*ÉOK;÷þ±vóêi#á3ôn†nÍcÞC_‚ß3“‘vîˆD¬Î=ÝFŒ#[²þh¶3×í¼{÷—¡¡{L›ø˜ëyÜæ‹‘roâ—ͦ½†:u’ˆËúõë%u³¶mÛJ*k”—†*YOè™zÓ¦M¨Y³&H| aƘ>}:&Ož,Ž:tS§Þ‡üã?°aÆÏ!T(i,åÚ™?¾ôM¡&/½ô’Ôï‘#GеkWÉJ7î@VƒUè)/_¾Ç·X–M ‘àÕW_} 0|ÐD€ðèmÆÊ•+5«y›($ékûò|¬× >T³ ¯a?mÅvap®Õ£GA9oÅFÓ¢BI>+Û*ÕÑ(¹'YXlªÔ*ÐÊ÷oôg*'æf ´.Xäg¦ Kc±MÉ]-ëÆØUz`»b;0Ñ W® |Û6ä~ÿ[ðlWŒl][®âoFÀàÐ[yrY¢·þ$ÌÅô w1²È$t®p›&Å4r_#rJ¿Å¶¶w=4GNBtÞƒ¬*šíiûAçÐuHl€d£5 ½p¾"~ãÈ}’Ë–Pæ¨J]Àðöö–L¥%Ç䮦Ã(Å9¼Ã”„)ª¹9Ø"&EiÉ©ææˆ;7 »a•ÔŸ©×$84Fm‚CuDlŠs£?žržj«KyÁ¡ó)'CíFºteVm²ÓÒ…[šƒ‚à;$ɔ˖C³šÖ¢ KJÊ$Çô—ÖZÄMÁ¡bccSì€)Ni˃Ρß|m‚Cý“Êm£Fºÿn+_—•v„fØžIŽ.š ™ÝÕL`,tžÎ¸Ÿ¢˜]M/d¥*ë x‡(ì”y+ãäû‹î5.Œ€1 W$z€%’Ã…¨hTIr8UEßV–×?‘Γcyëj 3 òvEx¬’ÐÐCjFjºÅäÊ1œÕ2†AŽëú(IÝ_ô`èÅ$G-÷©Î“$„)®ãAª]¦:v—ù! :’CoHÙƒ #PX]­4hqÛÒ PS㘾¾¾LrLa!T0U’¶ä¨àÎÖóÙ’£g@¹»Wóˆ¹8ˆd 5}Ü/Wñ7#P"™‚àä‹zMª{*Úž¾ºÏ¸0¦€YrØ]ÍVÂòÇ :’CoˆäPb#.Œ€®°%GW¤¸]ihZãîÃ'=ˆj–þžÈŒÓ¬âmFàdˆûÅJ¨Ô5ðsW´;_ˆø(ð#`@Ø’c@°U~)Õ‘???ä‰7]첦ò;¿”ÓgKN)ãæ:#à#‚úº:‚D5K odqü &$¼]i1±¨_Õ$M.—ˆ„4$¦gA&Ór=3ÆB€-9ÆB^}×½ÿK¨’¹S2P*·nÝRÉŒyšú@€He÷åÂTù{áèµE×!u|‘žšŠlñáÂè‚@Öí(t®ë«hJ÷)«5ÕraS4âFÀ€°ð€ÁVù¥˜ä¨üàéë†@VVìííukÌ­R"RÛ{/G+ÎjS³Šäz”¥¬W4âFàù¹yH–º—4 ÝW$D@Ig¹0¦€¹«%''ƒ^ra*Õ‘)s}TTTEâÊ}[¿•““Ö ZSS›JH?\2¿7Ó †FâM…ËZj[ @ábH»-Åšv÷’fÙ{9JeæqÞf Yr¨°Œ´¡‘WßõTGrh‰ÉeÝÕÔw³—uÆdš–œ²"Èç•„@Û *°qô@ªYú6ªŽô‘šU¼Í‰@Rd¤¤Èày?ág¦°î½ mâSd\ɲäPa…5®âË0ÉQñâóÔuC@6©sLŽnxq«Ò#àdkV>Øzî†âä¾ü‘–$Ü:’’õ¼Ãh#MüÕÿ]ŠB¶ :ëÝEUäFÀHÈ$‡-9FZ]V•$§FˆˆˆPÑ2óT˃[rʃŸ«+}¡ÙpFù»Ô®–/œìÌ¿WºÂ¨ÊvÙééHŽC߯J’C÷S#‘ÇßÃI•¸ð¤MGGG899±%Ç4—Ç¢F¥J’ãïïÏ$Ç¢n㊠[r*_îý.ô€™Š37 ±®\ ýùI¹zµ Ž7mïÂÁÎ]êUSÚQˆø(ð#`$ÈšÃîjF_E—e’£¢Åæ©– ¶ä” 7>«t»š·È™óï™ëŠG´®…$!yŸ#ÞÖsaŠB %ü26 ‰UÈ%\Yœ»•²raL &9¦¶"–9Õ’œX‘dO~xµÌ¥åYé ¶äè IîçA£ =TËŽ\Q4# ƒ­ ®„+êy‡ ÈU-éVF´ª¥dùÑ+p‰½×4¥˜…¢ï0FB€HÇä |]Vµ$çÎ;ˆj4\’É0««•„//·ª-%½“\Е½µ7DÒåKu¼ÁÈ$„…ÁQ¸ªõm WIßK_†ÇÕýhÓª%zõê…­[·*Žó#`LHFšÝÕŒ¹긶*I PañuÜäå%[rÊ‹ 
Ÿ¯+]ƒ«I.kËÄ[xÍ2¶C0’£o##>^³š·$^8§ÚÖõý?ça‚$»‹_-ÅÆ‘——‡ž={¢eË–Xºt©´ÏÐ1ÆD€ÝÕŒ‰¾z®}ÿWQ=s½A ·ò×®]SѬyªeE€-9eEŽÏ+-$4ðXËZøó€ÒjÓEŸ@7Äœ?_Ú.¹½#*’Z§&$âùN ³¤ûÇ×ÕDš{÷îmÛ¶áÈ‘#¨S§F…ºuëâ»ï¾CFF†â<Þa …»« iu_G•$§R¥J¨Y³&ÂÃÙÇ]Ý·¿n³gKŽn8q+ý 0¦c})`|ïåè‚E¸Æuª¤°KÈÏÍ-¨ç u#{îšø ¹¿WyÂû×½ðL‡z°ëä"[q.\¸€>}ú`âĉ ÄgŸ}†ÔÔT¹3A€^6ÇÅűUÑ h«÷"ª$9´ÜAAALrÔ{ß—jæ²%‡“– 6n\FZx£¹øüüß9EcB‚!ž{ᢢžwÔ‰@vZ:._ÆÝ)Ø(rãÜHLÃXA–‹*µkׯ¼yó$O†çŸŸ|ò‰Dv>þøc$'ß+ê\®cô…Yròóó£¯.¹F Lr AÂŒ€²äØÚÚ‚,€\C ð¼x@%u¬„ôì‚Ë‘¼ôñv>þÌ)ˆ—õ\TŽÀí3§áí‑mê(øQã.õª¢Ž«¢^{ÇÇÇDl®ŠLãÇǬY³$‡iÓ¦!))I»9ï3zE€HÐ+¬Ü™ª%9쮦u'ðn±%‡•ÕŠ…‡TO¶« «Êøa÷YEïoôhŠôä$²«­µíäeç A¸ª‘ÇVÜ'r¹t;ëO_Çÿº6–«Jüöôô";¯½ö¾úê+ɲóá‡"!á~bÚ;âŒ@)IËH—4nZjîÿ:–úTó>ÜÕnÞ¼‰ììûoJÍ{F<úŠB€,9ìªVQèr¿E!à"$_|¸!æn?ƒì¼ü‚&u«¸bHó Ü>~lÌ)€EuѧO rS /un¨˜û—[O!È˃E¾¥Òwww±!Až·Þz ß~û­dÙ™1cÇì”Ln_"t¿‘‡[rJ„Š”U“òe…µrÜ=*9•-9*Yh›æ«Ý#65‹…)F6}P+¤Å'Hñм£ r³²{ú&õj wÛ‚9Ó½òÇþ‹ k%–-kquuÅûï¿/YvÞ|óMÉâx¾ùæ~)XVPù¼"à\9E•zD@µ$‡¤4©„‰Dj\!À–œ¡ÃÇ* êîŽxBÄ[|ºáH1K. üÜ1¢uÜ>&¬9õòqþ¶l¢Ož„£Me¼Ö]é’6{Ë)8 à³"nKÅÅÅS¦LÁ•+W0zôhLš4 õêÕÃü!Œëã܇º —5¶ä¨û¨èÙ«–丹¹//]R棨hÀ¹óC€\Ù]ÍüÖÍFüÁ#-p%6¹PÞœé["KÄæÄœU*°YœyÅ#%¤žc„«Ú”~ÍA.r¹’‰ow„âíÞÍàhk-WëåÛËË ³gÏ–þVRBѱcÇ¢I“&X½zµ^úçNÔ‹‘ŽÉQïúbæª%9.%Dc’cˆÛ̼¯Aîjä;Ì…04µ…BÖ3íëaÚ?Ç£›CõoôhŒè£G@îK\ÔÀ­ƒàå„Wº*e£?Ûx.ö6ßEY¯OTjÔ¨Ÿþ¡¡¡hܸ1† ‚:tHŸ—á¾T„»«©h±4U&9lÉ1Ò­g>—%K“óY/Kéä>Íqî~úï¼bjï÷kg›J¸%2Ùs±|RnE!NäÅùfx{IyOžñµøT|¿ë,ÞíÛ6Vru…}“ËÚÒ¥KqøðaX[[£]»vxòÉ'Qa×äŽ-vW³Ìu5¥Y1Éa’cJ÷£IŽ…ÝÕLrYT1¨ÈÈH<3t*m˜‹–ïRäÍqoî¿ÚV¸¬…"í6'Ô³ä"_XñnìÙÞпI€bªoÿ}þžNxéáŠúŠÞiÙ²%víÚ…åË—cÿþý–bxÒÒÒ*úÒÜ¿… À$ÇBÒ„§¡z’Cêj999&¼D<4c#À–c¯€:¯¿víZ4kÖ ±±±Ø¹{7lñÑ:¥Õ†\Ù:WCä»pG¨Er±L¢ŽC~z~ÕQ1Áÿ¢°ìÈe|1LiÝQ4ªà¡C‡âœÈÙ3uêTI,=¿ÿþ;‹bT0î–Ð=‘œ˜˜¾W,a1Mtª&9ôcœ——‡Ë€ #P“S2\_ÐýöꫯbРAŠŸDNM¯ûnjdÄ{aá¨#,xo÷jfÒ5j”ô÷õñÇÇóÏ?N:áÔ©S&=fœáðööFåÊ•9WŽá WÝ•TOrêׯϖÕÝö¥›0Çä”/n­;ä¦Ñ£GÌ™3óçÏ—ÜÓìííØÁ{B::ÈËOþ*¬6".C.Uݱ蹮ˆ½xI(®•«ùÛ È1V;wà™`<ѦŽb_n9…ÿ.ݯO RkeúÂ)ñöܹs%ÉéÜÜ\*Û›o¾‰ôôtżxG}Á!¢Ãîjê[{CÍØô!+ ¶äT0ÀÐ=[r,`Mp ”g¤U«VR~‘à¹çžÓi”vÖ•±xlw½‹ÿ=®8§_c|4 %"…5(õömÅ1Þ1ò¸¶e3øºâû'”ñ6'#ãñîêØ:°Zz›Ç„î²yóæ’•rÞ¼yøõ×_ѤIlî•\ÔËH«{ý+zöLr„»Zjj*'2«è;ÍŒûç˜3^<:Å*<üðàØ";$]šBnk3‡´ÁôõǰïJ´âÔ)ý[ Wƒê¸¶y3²ÅoóA€ sׄÇ*3kÆõ„½õýäžiÙ¹9ÚÕª‚I½2ŸIiŒ”„~Èmí¬°4É!+æ‹/¾ˆda¹â¢N˜ä¨sÝ 5kÕ“zÈ 
j(Ìù:f†[rÌlÁLx¸ä®óúë¯ãé§Ÿ–d¢ÿý÷_xxx”iįtkŒ>üñØ[’QÐÅç,}¾;‚Üí¾i#ò²³ Žñ†i#pãà¤\¿Žu/÷RÈEÓ¨Ç.رΠŸSŠ˜öŒŠ]ÕªU±zõj,Y²«V­’Èþúõë‹n̵‘œÛlu¶è56æäTOrè£Zµj8}ú´1ׯmÂpLŽ /Ž -^ä;éÝ»7~þùgéánæÌ™RÐmY§@dæOƒãhká?mE®†®´«½ 6½ÚŽBˆàêÖ­’b[Y¯ÃçÛâE[ô©Óøã™.èTÇOqÑ9ÛÎ`ùÑ+X"È«¿‡“â˜9ï YuȪٿŒ=ôÿ„‹z \9“£žõ6ôLUOrp2›Ÿ9£”d5ôBðõL¶ä˜îÚ˜ËÈ.^¼ˆvíÚ!,, {÷îň#ô2tw[¬|©'Ž\Á„åû}ÒÃðÆWú 'ö6®m߯ò½ tLk'NˆEDì݇τ âÈÖJe½nbâß0cP+ô¨_Ý´®‡ÑPàù¢E‹°víZIvš¼+6mÚ¤‡ž¹ s@€ÝÕÌa•ÌwŒLrÄÚ5nܘ-9æ{WøÈ9&§Â!¶è ìܹSJŠHVãƒ"ïÉCé7ž¢IuOü&Þþ»#ßlW¾¬iîï%´ˆë±S‹†Ú,'— À^ÛµKHƒ7±6ÊØ¬sQ‰òà m$I‡›åuô€¤—]ºtAß¾}%wÎÌÌLÏæf抓s]9ó7“±NdÉ9wî(_F@¶äh#Âûº"@*R½zõB÷îÝAdÇÏO醤k?%µÞ²>}´ ÞÖœµ§®)šw®OëÆ÷Frø\ßED‡™Ž #î$\ ÇÕmÛ0¾kCiý4‡BqVýæn@#!2ñ» ±j(îîî’+çŸþ)%%¹é'N¨aêª#‘œŒŒ IJµ ðÄ+ &9Z²äÐ#r%áÂh#À19Úˆð¾.¼ûî»3f Þ~ûm,]ººœVæ6dÛ±>Fþ¼ {/+×z µµµ/ß%:×ÄCõüü2_‡OÔä¢.Öâå. ñÍã&gæà‘¹a-òˆ¬릩²¦hh¡;”D”’†’+[Û¶m1kÖ,Ž+³Ðµ¦˜*—c¡ läi1É аaCXYYq&f#ߌ¦zy¶ä˜êʘæ¸rrr$õ´Ù³gã÷ßÇŒ3@Ò¹†(ßìˆ^ kˆä 8z=VqÉ>j`ók}‘q#áB^:O(½q1$2pUXöÈEm® 8šwGºŠ& έ¤tlz­¼œìŒ3H#_5 @ŠÑ™6mÞ{ï=ÉzóæM#Š/¯oÈ’C…I޾‘åþ&9zÃZ¯^=;vŒï F ôÐjkk[¨ž+m(ç©D­\¹ëÖ­“ÈŽv›ŠÜ·®\ K_è!r©ø¢÷×ÿ"ôV‚ârëVÅŽ àNÜm\ãËI¿/=­hÈ;‚y FŠÄ¯²ÈÀ'ƒ[+®“•›Aó6ãbt¶¾ñjy»(Ž«m§²°dMš4 ”,÷Ö­[R<ÛV¡ÈÅr`KŽå¬¥)΄IνU¡``&9¦x‹LDrlllŒ?I#@o";w'OJñ7$mŒbkU«Æõ’b9º~ñNDÆ)†Ñ¦¦ŽL ŸJ9[³ñJ"¤hÌ;zC _XήnÛ‚ø³¡X<¶›Bd€ò'%§g¢ÿ·qäZ 6¿Þ üÜõvmsï¨E‹RÒÜnݺI2ìS§Ne÷5s_Ô{㧈$ʹr,dAMlLrî-‘œãÇ›ØòðpLza’c +aºc Y:t@JJ öíÛ ˜6fq°±Â†Wú¢Y /Ñ9~[1œÚ>®8ù}úôALLŒ²ï™%œ+Ç,—Í,Í$çÞ25oÞ\úÁŒR«\²âPa’##ÂßÚ2cÇŽáææ&åÀ©U«–v£ìS’Ðþ×êú¡çœõØrî†bžŽvØ)¸Çu ð[%7ª; Ey§Ì$^¿ŽK«V¡¶“N¾?¤P¢ORQëþÕz¤çUBö_Sq|_™¯¥†Ç/ý?»téèï6åâbÞ°Œ´y¯Ÿ)žIνՑsW°Ëš)ß®†“ÃcnNWn9‚ B]¯±»-ÎL†'Ô.s¶A/¡~×[Èz“ŠÉD“5°FRSJL}æÌ4mÚË—/×>÷ï!@Òì“'OÆêÕ«±lÙ2IfšãtÌïö`w5ó[3s1“•’MàU¼©rØ’£ò ˆéo I&šÜŠ(ɧ9É‹?Û¡v¾5G®Æ õ§«pæ¦RY”Ù>Ò{Þˆ*¹©8¿b9¢OŸÉCµm?E£òª”[Q ¶‰¡§ñÕcí°wâ@ÔôRº§¥‰8#çoÇ[+€Hç_c»+,<ýúõ+ˆÿ#Á“´´4 >ãÆ“V«âb§?`Àìß¿”G§uëÖ’Âa±ù€É!Àîj&·$3 &9KI$‡~$9)•(*ßd’£ò@kúDpèjðàÁX¸p¡”DX«‰Éïv9t޽?~®Žh+ˆÎÏ{Î3µ9ýÁP¼×§)¢ÂÅ¿W )2²P;®²Dn¤«â¾¸¸n-Bª¹âÜGáÕn!R)ÊÉÈx´xo=‰")ë;}R§îÝ»ƒÈvùùçŸ%·È ìF¨ 
MÁ~£FpèÐ!É""åª*8È&¹«%&&"++ˤÇɃ3?˜äh¬‘*첦ŠÊ7™ä¨üИ>%!$‚óè£âÏ?ÿ4K‚#OÇÏÕÛ'ô—Æ_Zôý~3âÒ”dÕù¨KœŸ:]Üö￸²i#ÒãâånTý›•‡ãÜÒepM‰Å:¡d·éÕ>…ÄÈöåÖÓÁñv¶—fúÕ‹ÄŽb¼îPÆP­Bîk/^”’a¹æR4^^^زe FaÆáóÏ?/º!ךdÉ¡Ââ&µ,1&9Ëèçç‡êÕ«3ÉÑÀDí›LrÔ~Üÿn£BîiC† Á‚ ÌšàÈ+j-L Ÿ>ÚF";G¯Ç¢ÉÔåXsòš|¸à»–· Ö¾Ü Û) ¸“sÿpAø2 Ú¨i#/;7ÃÙ%‹‘qéœpïk ÂzÓ¿I@!.Ç$£‡‡~gåA|4 %vL§Bíä oooE¢¨BžÌÌLéþé§ŸFzzzQÍT_gmmï¿ÿ_}õ•¯óÚk¯qâP¿+˜ä˜ø™ñð˜äh-)%±%G ï2ÉQñâß›úáÇ¥œGyüñ‡EÍUí\·*N‰ùž k`°$xì§­ˆJÎÐl"mw ®†ï=Š•ãzÂ73gE@|¸P•K‰Š*ÔÖ+²ÓÒyð Î.^„”³§1¥o3D|:z4)Øi–"öæäÃ@DHÛÊ£}žö>jr¹*ªP=ÉLSŽ¦Ù³gÃÎή¨f\§% %¼(¦©S§N’¸ÆaÞ4ÈšÃ19&²4Œ¢I-h‚¥ %`«\¹²$GI J\Ô“u®”x{ß«W/øûûcݺupppP ޶ָLÄê<ß±¾$uÜGäs!W6’Ú/ Xi-7EÝ\}ûö•äé÷‡îCéïÒ)a]*J}­¨>¸N‰¹öíÙ³GzyѹsgPBßzõê)ñžÑ K»« ~‹½0“œ"–¶}ûöÒ[Ÿ"q•Ê`’£®§õ:t(nܸ!½è µ+µ—.õªb¯Hº14RŠ3iõñJtÈoöl*ˆ€‘"Aô¡Ä¢Ç#âĹXw:wl—,då±÷õƒ“x°qðôOXÙŽoÑöw„$31 ñq’+]Öí(¤—:²ŽÔôqÃð¦þb.mÐ¥^58ØÜy)îÚ—n' IèS’ÕÊIX½H5m|—F%žW\EÕS\Îûï¿/"9ä9sæ ##õë×ÇÔ©SY¹(Ðt¨ó÷IÁ“EG&: 4ÐáLnRÑɹ~ýzE_†ûWLrŠXp"9”‹€xÔà‡_\u&9êºÆŽ+_°,Œ@IDAT‘›;w¢–p³âr>j€>[ÏßÀl¡öÈÜ hXÍã:7Ä“m뢨8:»¹¿—ô!u±”¬ì¿½â³Kè9ri™ÙÒEœ\]`ãêk„lëâ [ñmmoïc+TA÷•*‹©™ ‚B$åŽÈ!CV™Ü¬Lä ‰å\ñ!׳ìä䤦 /%©‚àä‹vVÂí«~UOtnà‹þѱŽ<ïO²˜-RKÛx&?ýwÿœ¾† /ÌÚÏv¨rïÓw!"4_|ñHmŠ««+>ýôS¼òÊ+xꩧиqc}_Vý¹»»K¹tÈZÖ¥KÉu±4þÒ“»Ú‘#GŒ?E! 
ÿ_g €‡H½5;!Ü,Z·nm3â)”&9eEÎüΣ7䤠öÏ?ÿHÙåÍo†1%²¤Ïéñøzû‘æ&®8ˆÇZÖ’ú; kHqRɤFFâô‘ ìS_ô¡¼2a±)¸|å&"Ó+ˆIéK%x:; Hä÷©ëã‚ h$ÈX“êžhàç^*w²01ž?\¯{/à†Yµ–¿Ðƒ…èBqs,ýx ŸA j䚦ý’í…^Àï¿ÿŽ—^zIr½*|&×肹 ’êZÿþýѵkW‰ô$7ã!ÀîjÆÃÞ’¯Ì$§ˆÕ †‡‡‡ä²Æ$§€TT%“œâÔŽT…EOu¹ÈùB$‡’öîÝۢ窯Éi˜?úa|5¼= Ãü=çÑíËàçæˆa-jax«Z©íW"–ú•L“,?û—š…T±%,6Ù¹ùÈäÇZXelEÌŒ­ ¶Vðr²;xŠïòð¸,?zKE<Ñ1[äëêˆg„Åf¬b¨#â” U´ ]—æÍ›'½|[´hFe¨áXÜuH¶˜bîHŠ»[·nRŒã-3Yrbcc¥˜³J•L]¦Äx8ñ•K‡“œ"ð¢ÿ`$%Mj,¯¾új-¸J-P–q9/…Z欶y=z”Aþÿû^|ñEµM¿Üó%ëÌ‹HŸ ÑI’àÀ²#WD@þx )e²ÚPì}ûº”Np€ú¦¹‡UTÉÄé¿KQØ ÜÑ(vèÜ­iÜ$‘=Sˆ tIP­Lè¡«E‹3f &Mš$‰ã899U4ß/©Ô‘å–,:ôrƒ¤¦YŒÀ8ËN–ú{///ã ‚¯jq0É)fIÉeí—_~)æ(W«WE|¹ŸQ¬ÅüœŠŒGbzÈZOq:D¶H8TÕü=ÌÇ"B’Ò}ôÞyçɪÃ"å»we×5R\#¢C˜‹áI%¥4\} À$§)›œ”„¸¨vW³Üuýõ×qöìY>|”‹KÅ!@RË} ™fñ¡’›g…[‘"ûŸÄæÕ'X£…dñ¡6DDÜlá-\Þ(ÖÆAôa'H‘‡“+”ÕäøœÌœ<$òB±;qiBa-/Ÿºúðsu@-KC¤fxËÚhZÃME<õmÎ…Ü+øá¼ûî»øë¯¿Ìy*&1v’—Þ¼y3:vì(ILïØ±CzbƒSÁ H®Ÿž¹ˆäpaô…“œb$õ•FIq9LrŠIÕì®f™‹üçŸJ"+V¬ p1,d¡!¢A*?Ü>†w¿[‚Œ¸ïp3)M²ÀD§dH¤EÄ%CX~²y!ññm'ÜH=é”´ÓC! RD’†ôr†½ F–XÈÂüÙgŸIÖÈ7ß|“•@õ°ÈäµAytˆèPœ‘rgãRñÁ!¢Ã$§â±VÓ˜ä<`µÉeĸ¨¶äXÞÚ“4/ ¼õÖ[RâOË›¡ùÍ(&&Fr¢0˜îNÒÇüfaø<!!!˜8q"(·—ò#$‘ŠÓ#A²’±ÚWùqÕ¥rYc’£ RÜFWDF5.Å!@<(9U¦H.ÇE°%DzÖ=55U"6mÚ´‘+ZÖìÌw6ô`#ûä›ï,Œ3òY³fa×®]X·nq`W%/Ž¿ÿþ«V­’bs-pŠ&9%&9&¹,f=(&9X¾."rVV[s€‘¥báËZáñãÇ#)) K–,aA ZZÙ’cBC2›¡Pºƒ¡C‡â½÷Þ“rŒ˜ÍÀM| ”$”òf͘1 .4ñÑZÆð˜äXÆ:šÒ,˜ä<`5@¦kvxH~ˆÝÕ,g/^Œ à·ß~cÅD[V¶ä”oA¦M›†ÐÐPPR[.úC€ò‘+àØ±c%iiýõÌ=…“œ¢Páºò À$§ôÈšC®\Ô‰»«Yƺ‡‡‡cܸqRrßGyÄ2&eA³`KNù“$wGŒ!ÉJç Õ9.úC€Äúöí‹G}W®\Ñ_ÇÜS!˜ä‚„+ʉ“œ$Ýüp\N 8YêavW3ÿ•%¢úÄOHVÙÏ?ÿÜü'd3`KNùõÃ?ÄÅ‹AK.úC€T¿È]<; €´´4ýuÎ=)`’£€ƒwô€“œ@”ãrˆèpQì®fþkþÅ_àøñãX´hìììÌB6ƒ;wî ..Ž“/–s]ëÕ«‡'Ÿ|Ó§O[sÊ ¦ÖéNNNX½zµ¤üõ /hå]}!@$‡b&)š # ˜ä”€b`` jÖ¬Éq9%àd©‡Ù]ͼWöܹs’:e‡'Å$.¦‡úF8\ʇ% “”ÁÊן­€¿¿¿dÑ!IiJÂÊEÿÈ¿ä¾Ê…ÐLrt@‘¬9,> PØ„-9滨ôàüÌ3Ï iÓ¦Rð°ùÎIJG.?ÐøøøXöD 0;²æÒÚ§Ÿ~j€«©ï½{÷Æûï¿×_GUçN’IŽ>Uw_Lrt\Ê—³ÿ~lÚ´Iúa£? 
¾¾¾’r“Ž]p33D€HÇs˜ÏÂQ‹×^{ £G½˜àbú0É©˜5¢ÿ¤ÆŒƒ/õJñ9uëÖ•ÜÖX¶[?8³%G?8r/w`’SÂpþüy|óÍ7øí·ß““#ù:ùå— •Îôôô,¡>lÎÉ!l.æe'OIIÁìÙ³ÍcÀ®sçÎá믿fTÊ[rÊŸZ&9… ¹_AÊ4U«V-Ö]‰¬;lɹ—%nqLŽy¬*½Imß¾½” Ïeì‚Oc `’SEá ggg,Y²¤HW5jMfi~Y7KªaKŽé¯&I»oß¾Ÿ|ò‰é–GX&9… ÑkÅ£>* PK.‹IJÓCú„ *öBÜ»ç̪€¼Èœ“œÀ&-|z;ST\[rJÏsLŽé/"ɸvëÖ $óÎż K)©á±E¼âÖÍÎÎN²p.\¸°â.Â=KØÃܹs±lÙ2lݺ•Q)2Éa…µ2€Ç§B€IN!H WL:ÁÁÁEÆç°%§0^–TÖÓ^M²à%‡þr1?%K9“œŠ];’“>yòdAꃊ½šº{ïׯŒÿýïÈÎÎV7e˜=Y¨0É)x|J!Šª/ÔT½ÈLIÕZ´hQ„òüqÎÌÍõ¸T\OElj¦ô‰KËDBZ²ró‘-Þrfåä!OÄþØ %7[ëÊâ»2m­áédog{x9ÛÁ×Å5½]PÍÍ •+"W”zÓü ñ‰rtͧê²âôìÙ;vÔCoÜ…¡ W5*åù5ô˜Íñz:uB@@€$Îñé§ŸšãÌjÌÏÛ°aCIÒ~òäÉf5vcÖÉÉ LrŒ½r}&9:.$iá“Ï?ùÜÒÛ}¹èbÉIÊÈÆ©ñ8ÓâûÌÍx\‰IFTrFA¼µ /^‚¸iñp´ƒµ•ô!bc-”[Ò³³-ˆO– FéÙ¹ˆ¤(N!ê[–·¶çx:£NW4­î…¦5<Ѥº'Võ}0û‘׬4ßlÉ) Z†m»wï^ìÙ³GÊaeØ+óÕô…“}!ùà~*Uª„Ç+V¬“œc¥£Ò³aM*­²uB}«¡rYcKŽVºâçÈ$§SRÐÕ«WãСCZøE½ŒHHÃ΋7±7,{/G!ôf‚DDÜy!âñ¿†¶BM/ L §‹ 6e“½ÍÍ¿ƒÛ)MÁÕ¸éûBt"6†F`ζӂåI–Ÿ6AURÛ!u|ñpݪpÖ .%#À19%cd¬³fÍ’ÕØŠc¬(ÿu)‡Š‹‹Kù;㈠Ðÿ™ÐÐP4jÔèmù`ù ç…~øAJúí·ß–¿CõÀ2Ò*Zì ž*?é–`ÒÂ_´h‘d†&e52©’+]oIÄbÙAjâ…k™Zz£O#LØJlûÀßéWÓ­)Yhª¹9JŸÚ¾ÿgï:à£(Ó÷ é•TB€zïE@ºTElXOÏvzg?Ïrž]ÿv=½³g¹Ó³¢" ]ŠHï=Ô@€ ¤wöÿ=&™I6›ÙÝ™ÝÙÝ÷ýý’öµçÝ™ç{˧*TUsžöåÐúÌSÙúnóazqþ©o#;%ÒdÑ·)½ÚS·Ä(U9Þ©G€-9õX˜i+##ƒæÌ™C?üðƒ™ºÅ}±ââb©2Y²‹¸Æb¹?þø#“c¡–j •Î]wÝE÷ß?!Å4‹6Ø’£ '¾ªy˜ä4‘ꊔ”¬ LÐÁ¡átÇç¿ÒÛŽHîcÝ„[HÍ[× #ˆ`AtÜ)®gR´ôwÛð.RWr…Õgñ±¶ c/-ÜFÏZG=’bhæÀ4ºf`:uIhåÎ.›®m^'Çt*‘:ôÆoPçÎiúôéæì ÷J°ä2€±‹\Öð}7Âßþö7cãÚ%n¹åzë­·$×µï¿ÿžQш“@ñeÍ"À$§YˆÔ±4û£{R`j_:{îmÉ:CLìCWH£Tá~æ¨c勤ˆ³Aòr‘pñ7ˆÃ©1@râÄê„H±;H<!¶í$)¸iH'émÂîÛM‡é½•{è©9›h`J<Ý1¢]78Ýîºí釧\Ë–ói ë'|þùçRªV¼¸±x.°ä°«šëô‡¬_~ø!eeeI‰\ײo¶„¥'^yåš6mšäæ>xð`ßÂÎQƒäìÛ·ÏÎR|9#Ð&91itd`¶°Öüsù.ks’ÚF…ѽϾJ{¿ÿ€~~âŠF×7uîc{sÎIɧƒ8š§kãhò‹ËD1Ñâ/~DEÒã#¨£ˆïI”’ ô „À– ÁÈŽ‰ÒßÛ3/.wÙôÉš}tÿ7kè¡ïÖÒµƒÒéþ±=¥úlÕãÍç8&Ç|Úýè£x—ų€%‡]Õ\§C¬')Ysî»ï>×5ìÃ-M:UŠD&È ø0Ú‡’sæÌíøJF  ˜ä4 #‹Ù§köÓ[KwP¦êŸÖ«ÍýÓDšÔ³ù‰äS‚cKú·ƒ9ÂZ’K+äRÆÉ|ªV™–‚˜„Gµ"¿ˆH oM¡mÒ)*,”üÅBb~âÏ?(˜Z ‹M qâ€Ä?²‚déŒ-çEJéª*ª.¯åÒ_¥˜ Ý]THÛSÕŽTRXÌ(YfF "ƒxaâ¯)ËϘ.IÒß?®N_¬?@ï ëÎÇ¿eÐDá‚÷ð%½i|×¶¶†ë•çØ’c.µÂ}3ÑpEL‹g#À–×êË!`¸¬1ÉqöXÇë’K.¡µk×J„Çu-{fKì®æ™z3c¯™äXÑJ™p{wÅnzEĬ”¢só°ÎôÐøÞÔI¤fVJô(‡¬jˆw™»ó9] ‘”ˆø8 jDí»ô¦˜X §È‹ÒBÄ×þ(@"CA6²ÕTVQY~>•æçÑ.áÚ³iÕznÞf©/CÒèÒ^í¤Ø¡~"Ë›5‰ ¤{F÷ 
?‰?ŒåõÅÛiÂ[?Ó ”ÖôœH¢0©G²µb^yŒcrÌ¥V$8qâýñ4WǸ7!À–‡`sª²¬]ýõ”——G±±ÖŸN5À…! ¯åõÔSOÑ’%Kçj@rðÛPQQÁñzjhxÏN˜ä(ƒ;Ù¿î¥ÿ[°•ŠÊ«èž1=èá ½¥E7—©6±hç‚]Yôµˆk™³ã(•UTQd\,…$§Rç!ÉÖ:A²Ê¨ ¼ã@በҟH'µVUZFE'OÒþcYôÜâ]ô×7P[‘ºúúAi"é@: hרWˆv˜"¬VøÛ,bž™»™&¿3Ÿ.©¨_¼|îܦQo;ÀîjæÒ(’~À¿=%%Å\ãÞ8„^d8&Ç!è.4yòd)+è¼yóèæ›ov¸.hÏ=÷Á]k{qÚ{ÛØÅÅÕ¾`­œädß™Tµ Ÿu&9P›»#‹þÏÏ¢ÓÅåôë_.£oîg•àœ+«¤—E|N»Ç¿¢[ÿû+ަ.⇪ËÕ×PBïÞEp¬a&fMÚ_4œzÜp#µ»x-9QL½2›†¼2GZÈZη‰Ý“iÇSWÑkW¥…‹_ÏggÑ’½'¬UïÑÇ, [rL A¼Œýç?ÿ¡ßÿþ÷µÉ8LÐ'î‚ó°%Çy ©AðRZcGÊsÇ@¦µ=zЫ¯¾êX>RªU«V’K%,9,Œ€3ø¤%gÓÑ3tëVÖ¼yþ²AôЄ^R¶´†@ 7´·…Õæ•Å;¨²ÆBÑݺQž½(PdBóFAF·ØÎ¤¿¢œ:¸c]ñþê&ý¿é% rÜþ"%Û½"n kÝ+¬a—üýgúHÒ€tÔH^à Â$ÇZ\´h1e·Þz«9:ĽÐŽÉÑF»+éÒ¥ µk×N ‚6l˜Ýå¹€cÀ+à‘G‘~Ç^|ñE^«¨ ârØ’Ó@|X3>eÉÁz7/ ‹ ¬ñbQÌÂñ‘é •+Ï›KwJ–›çl§ˆî½¨ûõ7Pò!^Kp”ãÇvDb"¥ŠÙ¾îW_M¹ÁQ4C¾/þHËöe7¼”#Cè»;ÇÓw_BK„[ïçf‰”Ù']ç‰@rXÜÀ'Ÿ|"írÂ÷ëBÏ”‹4øœ \ODµ×…Œ_œéK;^z]yÝu×QBB½ûî»zUé•õ€ä°%Ç+UëÒAù É9v¶„Ƽ9WÊöÒŒÁ´ôÁ©VæçíÌ¢.Ï|Gü°‘B:u£î×]OIú2–ù¢„ˆ¸£Ôñã©Û•WPV‹÷æ<š.Ïa±iCA,ÏΧ¯–2µ×!¾§ÌÒÃ…crÜ«ÀsçÎÑܹsyñO÷ªÁÖAr‚ÅÚ`,®G$gݺuRª^×·î»-úûûÓ]wÝEü1áþg±ŽârØ’c>ªŸ 9K3NPÿ¾§3"öfÃã3¤ zµí†è¨X¸sâ; éÒ.¤Âˆ8ê6ój;xX˜Ó;Ü®´ßÖ¯ ë)¤MœDÅBrËŽž£®OKψµwŒA)±aA’EçƒFÒ;ËvÈNNa™òÚfw5÷«ëÇ”‚£¯¼òJ÷w†{ +xÉ ÒµN®LãÅäbÝ–/_®­_¥wÞy§D.¿úê+Ýêô¶ŠØ’ãmuÏx¼žä aÀ¤·ÐønÉÁé“£B†¼ŒwÖ›ÕÙ…ÔùÒË(E¤× ·žaMUØwZ‰œõ¯¼Š ¦î ^ÏOkç6BâŽ]i½ ”9…¥Á´vM£B&<À$ÇýJÁ‹À¥—^Ê멸_º÷€-9ºCª¹B¼Döë×]Ö4#¦ß…XHüjá Žu¿X¬#À19Öqá£ö!àµ$§¼º†®ý÷/Òš.XËå«ÛÇRX :Ï‘¼bþÚzð»uÕ£u¾â*ŠëݰØF …H8 Ò_v½êj:åN#^#ÜûÖ7²êô 6 ¢30%žF¿>—>]³ßvÅ&<Ë19îUÊ©S§¤µqn¸á÷v„[7¬hÎîj†@«©R¸¬-^¼XÓµ|‘¾üéO¢-[¶Ð¦M›ô­ØKjc’ã%Štó0¼’ää•TÐø·~¦E»Kkº<0®g#˜¿ëÝôò»ÎUR—+® ¤©¥ŸWÂÑhìzЧt±zvû‹/¦¿/ßC_šM9çTÕ·YÖ~úãDúËÄ>RF»§ÄB¢ž&“ã>}ûí·FX׃Åû`KŽ{u ’³ÿ~ÊÊÊroG|°udµëÚµ+}öÙg>8úæ‡Ì$§yŒøŠæðº·zÄ}y68WBkNcº$©P€…ç÷b­›>^Fa;S§ËgPhŒÚ…MU€wšE N¤#í|Å•t´²%õ}áúŸ JAüÓ‹ÓÑ¿7JÊnwÓ§Ë=&!»«)5éúíï¿ÿ^rU äØ8×£ol‹ˆ©®®æ˜ca¶Yûˆ#$KÚ²eËl^Ç'Aà–[n!¸ã¢ɢF‰òòòˆ½)Ô¸ðž}xÉÙ+¬#…û¬ë»œº%F©Ð@r¡bË/7eRº˜nwÑEÂzë=«@rp'82’:^:¢ºu§¼ï›5„TÜJ¹mxšïdš½í]ñÁbª¨VŸW^k–m&9îÓp«V­"N8à>Ù²œYŠÝÕŒDÙvÝHú0hÐ Z½zµí ù¬!ÜtÓMÒ¢¬sæÌ1¤~O®4V$;Â$­eaEÀkHζãy4JÄ}¤ÅE⇦Q‚XG)«æPae8TRCfÌ ¨öí•§y[«“Ý…ö£FKDçÒwÑo"ÛR.ëÝÞºzAßm>¬<åöm¶æ¸V‡¦°«škawykˆC€´“ 
,îE$7ovoG|´õÄÄDêÓ§-X°ÀG°>l¶äXÇ…jGÀãž.//ÜFßoɤîšÐhœ{yávê0jE§1ÁÑ~¸æJÄDu=šÂÚµ£I"«ÚÖcê€ÂûÆö¤{Æô Ø'̑釭8®¹7”­Àm#F,ÐÛ¿åaÞö2Ø’c…vîÜ™²—“¸O'ˆ?„›¥¶äÔãÁ[ö#àQ$giÆ úÛO›èµ+‡Ò˜.IªÑ"Môã?n”’ °‹š Sí€4tV¶€¸ÖÑ9v¶DÕ¿7…5g`J¼X,t +«TsÇúË–×"¿téR3f Ïð»v—·Æ$ÇåÛlYÖÖ®]kó>iXìàÁƒ”™™i\#V3“S˜ »ë1$/Ã×}ô ]= ×S%ú¼YÌþ'ôîÅITȘsî))ãÇS©M|g–×ûû‹E¿½Sœ‹„ÞôÉrB w “×iX¯\¹’Æ‹ûƒÅ»ÝÕ8&Çz0`mÙ²ÅñÁ^ 2„Ný×_õÁÑ[2“ë¸ðQíxÉ9/ÞtoüdµŽ ¡ÿîbÕèŽæÓ¥ï-¦ˆöí©í¡ªs¼c^ü)uâ$Ê}š]„ܨž¤¤$ÊÎÎvcÌÓ4[rÌ£ Oì‰iHNie5½"¬8÷ŒéA­#‚ë°,ª¨¢çæo¥ø^½((¼ÞºSwox-ñݺQPd=9g³jŒ7 íDiq‘.‹Íaw5ü†ì€ä €Åw`KŽ9uÝMüîBJšÅ=À¢}òäI÷4n²V™ä˜L!ÖÓœO×ì§AtžÐ[!ÖL)«:O ü¤ÂÅvZˆÜÑ­û ¯7¤½9çꆌŒ{OîK_n8H'ΕÖ7bƒÝÕŒ@U]'Ï‘2•IŽoßcKŽ95Ü®]; e’ãFõÀe-''Ç=0OÓ 9œ]Í<ú𴞘‚ä`Ç·–î ›‡u¦¸ðz+ι²JzeñŠíÕ›à¾Äâ{D§¥SXL4=ùÓ&Õà¯ÜQºW„ÀHaw5#Ñ­­;33“JJJ˜äµ©Z`KŽ©ÔQ×üæ!1ÇåÔAâò œÜÜ\—·kƱ (Çä˜Q3žÑ'SœÙÛŽPf^=4^mÅù@dZ«¬±P‚pUcñMÄó–Z÷@?lͤ§ê ±9÷ŽíIþº‡àÒh”àEŒSÜ…nm½Èä„™c9«±­qífA€-9fÑDã~ .‡ÝÕãâª#X³°°þyçªvÍØ,9gÏž5c׸O€€)HÎ?–ï¢i½:P§Ö‘uUŠŒjo WµháìPwœ7|¨ÔT ±9o kŸR;!óÞë(ëº1^¬PWHU’ƒ™cù¥·Ñ|À+`KŽyÕÊ$ǽºÁ‚¬EEEîí„IZÉ©®®fÒg}xZ7ÜNrž.¤•ûOR+ñyÅåÔº'[q”¸øâ6¬91={Ó'"n댸'dÁº9WH£~3.@–-92ÚÆ}‚äÈÁÎÆµÂ5› ©Kxa1ø>‹ô²¸p‘d‰IN-î 9vY«ÅƒÿÛ‡€ÛIοŠ*Ö=™Ô³ªçoÁŠ#ÖÅ UçßD ®Kg"á:öñê}*îÙ¶f¡ÍâÏaKލªë„ï?“5&¾°\Y^^?qá ãö„1Â’ öÆYÉ=wõ$§¸Øu ^»kœZÚŽŽ–.c’£-¾¦!n%95 }&fçoÞ…1K–-â…uDZ3w!•¥|œ?}–bÖ·UÇNôþª y*êdxzukMdÍÉᘜ:¸ Ù€ï?^ªX| &9æÕw§N öïßoÞNzqÏ`åd g­‚eKÇåxñ oàÐÜJr–ï˦ÜÂRº'JûQxt…‹ #,Œ€Œ@¼x>zº€VˆûF)¸¾Û|˜ª‘¦Ogaw5mPÝ©S§$· &9 €ñ]&9æU2tƒ _GŽ1o'½¸g˜\ãõÙjŒ$ ÜLr¼ø†7phn%9ßl½€ø‡„„¸¯&kÉ8»šÉ”â!ÝqÉÁLü¤ê´Ñ+ögS™pU‹frF )"Äý1_de¢žIÑ”ÞÈÂÓTZ³»šV¤»î0lÉq ;o(Å$ǼZìСƒ´#» ¹^G 9²;§ë[7_‹lÉ1ŸN<¥Gn!9ÇΖÐîì|šÜ`mŸÈ¸XvUó”»ÇMýŒl׎ŠË*hÝá\Up?ÁB¨§ÀëE°ƒÀ±cǨÐ'‹o"À$Ǽz‡%ÂÖ$\ú-9j¸™ä¨ñà=í¸…äÀbèïG#;©SDÏÝyŒB’ù…G»ú|óÊ`‘R2¬U$-h@hÆwkKpW+©Ôou¶äw!Eê‰'˜ä±ékf’c^Á’ƒxN>àzÁzåú†MÚ"““*ƺ咳ú`. 
èGÁ‚èÈ’•_LGÄ(­˜äÈð§ BÛ& ’sBuÅðôDª®9Oë3O©Ž;³Ã$Çôl—Å9•••”œœlûB>ëµ0É1¯j¡›„„¶ä¸AEyyy놖ÍÙ$“sêÅzå’s(‡ðBª”ßæ^(ÃZ·VæmFÀ*á‰mhDZ3„d²´ ¥"‘Åjq/é%쮦’ë«„ÝÕcã+GÂÃ鸸ØW†ëqãääîQ“5î°j;wN}÷ ¸œä”UŠxœ³‚ä$¨º·úP.EÄÇQK…uGuï0 ¨æüyÚpDmµÁ}µ¦A¬Ž¢˜Ý›lɱ2Íઆլñ¢Ëâ›`‘¿ÂÂBß¼Œòd„t×kºÈ$G­Jœ‚‚‚‹3 #`.'9;NäK7ê€ñª~®f&¸/æE Hã#yjKî/ù^s¶çœxÀY›.[9kNÓWñoF–¸ƒ"КÅ|À’'²X²»šëtsèÐ!JKKs]ƒÒ“Q”ɺéR’sF¤õõ³ïÑ¡u0œk;å—Q`hhÝ1OÞ8_QNkÒ±×ï·9ŒŠã‡éȳ·Reîñºë*s²èô¬÷éÈó·Ûwî\zôQíøfeeÑûï¿O·ß^¯ogûë)嫪ª¤[£HŽ7èÑ1@ÿŽ”s×½’Ñ{APG0:bп7ß|“Þ{ï=ƒj7¶Z¸¬™•äx2®Mi $'==½©Ó>{$c¶dïÞ½ôúë¯Ó’%Kl]VwÏ¡_~ù…|ðAš?~ÝqÞð\NrbÂUèå—`= ù Ko‚Uóèø›Ò©¯ß±9œÒŒ-”7÷S*;¸Sº$¤xÛj:ùï¨P$ˆµcÒ þ¡Îc¯ÝGgí@ióñ÷Mn‘zMùË/­pºãZ,\¸î»ï>úúkmøâ%lõêÕô /ÊúšÈnF‘oЇ½cï!{˹ó^lÕª6£¦|?ÈcpöÓ^ œm¯¹òŸ|ò ý÷¿ÿmî2Sž73Éñd\­)V̓Rçέöéc 9Mbrøá‡Ò_þòÂ:CZdçÎôí·ßÒßÿþwÊÎÎÖR„¯ñ0\Jrò¡‰ RA”WRû"Š—UoèñWQ«S› ®ë³ô4µ>YºÖ/4œb&]Ga=‡Ô•µv¬î¤h/¬Ç`"?;KšûrãÓ HN\xí½Ë¡³¢Õ’sÕUWÑàÁƒ ×k‘ððpºîºëhÈz}k)'_ã©/Lrÿå9ù%W>®×§«õ¡W¿•õØ;¹¬½åÜy/ ‚$zнèÙöéÓ§M\¬_¿ž–/_®g3uuYk¯î¤ˆËAîkãôd\­áyøðaª¨¨ îÝ»[;íÓÇð¬°Er`ýúÃþ a¤õ9Ü¿úÓŸþä®ÖîG‡*âB†"àR’sVšèІ$§öEÔ?È;Hޤ-DÂ?ªö¯Ôp ¼$‹@O¥X;¦<¯y[¤ènÑÂ¥*×Ü5G/ôYWÎ6°ØÈîçJ+­¶®œV’ƒ-¾ø³GP?{í¼,ýõ¯µ§ˆé®•ÓãÛ(q•>Œê?êud Ž–sǽhÉq”sF0ýõ×2d)™²BBB”‡tÙnª=]*¿P ¬­ò¤„žõÚSWSãôd\­÷îÝÒó [·nÖNûô±æHÀ‘Ÿ¿ò§ÀdBdÏs¸©ûQK{|kÐ6í¬SŸ*ªÏSp€Ÿª¶²Êi¿¥¿ú¸ê"ß9'\تÏÕÎTD·–ž)d9žŠ·¬¤–!áÂÂ2È¡®_J%»Ö“_d4ÅL˜IþQ±ªzª òéì/³¨2û…v(¼E”/Ôª M¸ÓR„òªÚ{Hî^Ð…{¬áqù¼=Ÿ¶äççÓ¬Y³¤šJ™­ýP.]º”0㈗…™3gjJ› Ó9\n`v>|87Nê6ÎôéÓ¥!LóIIIté¥—Ö É‘¶ê »pæÀJÏ?#õ±ÿ~Z·níØ±CÒÇŒ3$´àÏ-/Þ‡4§W\qásÆ ´gÏIçЗ,ÍéGëäúäO-嚺§ä:šúlªÏ¶îE­mârô°ähÁclj<8WVVF+V¬ -[¶¾û7ÝtµmÛ§êî}³gϦ}ûöQ¯^½hâĉ„0ÌÀßpà RýrV²Ë.»ŒÚ´iCXøvÞ¼ytë­·J1S}ôUVVJ/e“'O¦ž={JqIÿùϤX5ÜG:u’ÚlêÞ³Õ ÚgÝ`4l(¾×¬Y#õ/áèëèÑ£% ¶–öš‡ÜgqE=¸'ñÝƒÄÆÆÖÅ;B§ø †^~ÿûßKç›ê«p•:ÑàßöíÛ¥xNݱÛÉùõ×_¥ï,~wa™4|ký=’ _ø×TwÞÊþñ¶6ì›vÖVg“WUŠY®@ñàP ŽAZ48®¼ÆÓ·a±9óýÜ¡ E½„Êï¡ÃϤýw¥Ò½›íÞùªJ:úÂqj5rmZN»¯ê*Õ+WV~d¸w…tìEIw=']{nÅl¯#9-ZúQ¥XI)A³|o)Ïٻݔ%/8“&M’^rž{î9é% />ÊW¼ÄÜqÇÒ¹iÓ¦Ià®]»J/¿¶úõ3Ïy¿¹>7u/ÚÛ¬9Î’-47¼dƒX€x?öØc„Ôñ˜`ñ‘%##Cš¨Àwðé§Ÿ–ÈÜdàfT^^.ý&àZ#|?Aâ>ûì3êØ±cõÖË#FÐOû,½üòËšÚ³5tE\QϘ1cD ú“qÅñQ£FIñ—\r v¥Œ¦¾Ó®ÂUêHƒ ×øÝgiŒ¾ ”à;ôùçŸÓŸÿügºöÚk Ïbˆò¹bïïÊÛ*ãÎû}c±—’œ 1ãè¯n²RXw ö˜í¢{¯.Ú´‚òþœ:ô+…÷Fp= IëNIw<åpÇNó 
ˆo36ð²@IDATK1¯¥ÐÎ}¨ÝCoI$æø›ÕÕyä™›)bÀèº6ãfÜA­Õ3’u{ðÖÊ©j@rÅ1ˆ|o93¼€€B–†róÍ7K³˜xÉ™i8ãûüC:†ß>}úÐ[o½%½Ìá%¹)Á‹²­áZ<쮾újé¥ ™™`IèÛ·¯´PFÃ,*ö!Ž´ÕT\q–=]ՌҰx÷Ýw©GÒƒ3%%E³òP‘Ý綾^’¶—-[&}âßÉ“'¥—,9€X‹~´Œ¡®ÅFsåš»§U©6›ë³µ{Ñ‘¶@rœùh ¬¹ñüôÓO’Þ0±+,¤X}~×®].pQA&@rð½øá‡ ®—°Úa¦yРZ«<&3ðý„5á–[n¡ &¨°Åu7Þx#­ZµJg°iÓ&)pZ¾ØÖ½g­=X]š§\·ÖOÔ ½¾ýöÛR‘ß~û¾ÿþ{:"\òþõ¯ijÏÖ8ôÄÄo'Þ'äï(Ž!{àøñãë~£mõÇU¸¢_ ú0`@Ãü/hhÉY°`4‘ôÆoHf:tžÃJ°ù=j®Œ;ïåØx[.uW«îRþâÇG)ÕÂmK’Ç•×xêvþ¯DƴߨݣÿTÍ,`<-jg|[îoJîgY/×Ì +Qua¾T]áÆe’[›;ž®«3aÝQéþmuǼbCÄÕ ¹Bü.ÜKu÷–✽›ÖH^fáú€™\Y€/^\¶m«ÇéMáÆ¦ lÄì.Üj𒝾úJš9~ä‘Gê.AúVÌ#ëÎСC¥ãÊ™*p¤­ºܰ![rôhÚH} +„«‹ì>‚—YÌ´+ga¥Ã‹1tpÛm·Ißõ/¿ü’~÷»ßÕ ¯9ýhC]…6´”ÓzO5¬»¹>Ë×+ïEGÚrÖ’£ôµ¹ñ€ÀÀÝ%!!A²Ê¬\¹Râ¤ï6RÌâû ‹†,¸$¥(1ÁqÙ§¼¿ pùúâ‹/¤ßÔƒ?¼¬ÉÒܽ'_§l¯¹qÊe´~Â’ƒ?¸zA0~Àøøxi_K{¶Æ¡7®XcVvd^{FXÄAF±}çwJýÅ?[ý©»Hl‰«²lÃ= 2QnxÞ×÷’L.Êiè,èYoŽüÙSFnmjùà:×"àR’ fØK… Rd÷5‹XӤŅxåyOÞÎþ×3wªóe%„LizHuÑ9ª:“Mq—ßNQ×Çb(ë.Û¿]Ú I¯uƒ¨;'^ĽMÛÐÀ:(¯#ß[ÎŒÙÉß4Dé}åRâ⫌2f×ٞ‡3¶D¶lÕoô9Xr¬½ø9Ò®‘ú@`¡[¼x±43 ·ÎÍ›ëÝL¡ ¤-E¼^Øðˆxˆûï¯]+K˽ e Ö°ÑRNë=¥¬_KŸåë•÷¢#m9Kr´` e<˜ýÁyê©§–Rùeóü…‰8´²+¿ÜËãoHpp\‰‰|]ÃOÔ?ÄÖð ýË×+ïEGÚip&&G ZÆ“™™)¹ˆb6Ù •ŒdHøë7'JLl]‹—qüF¬]»–à~3eÊÕåZî=ÛÓ2NUv`ÉàE®¡hmÏÖ8ŒÀÉ`ÑDì+ÅV”׉«²lãy‹°Ò2Ñð_ÞÉ 4âå0QfíŒkd½9ò{dO¹­ßôŵ¸”ä VB~•‡)‰[ΫƒÇåóžü‰x™öüSZ€3÷‹7tŠ_¸NMJ¥Ó³Þ§óåõ±¨îÈ'HŽE¸YÂo\dT‚ÀM¦)Áƒ*55U PW.ãz¸§ÀGÜš v/R|ðê4^0äÓñêì£m©pñHœ^–#õ_,ØŠø 9œ<³¯„ „í^€aÕ‘38á-úÑ2e{ò¶–rZî)¹>ùSKŸqmÃ{Ñ‘¶œµähÁ@ËxàÚ„ûz–Û+¢RòòòÁøù¥GùýT^Ûp[ζˆ×çƒ,Y´Ü{ ÛÓ2N¹~­Ÿ²%ÇZi-í57#p.wß}·´ò=&Ö[–æúƒë\«ÜùDWvG–ñg=J’ƒ $BL(åææÖ_Ô`Ë‘ß#-eÜq4ïjDÀ¥$'4Ð_¸«U«º í×X îV]èA;çËK¥ÞZÄlCÜå·QÌÔßÑñw%)»Ù…qXªjA•SKËë).ÜÛðb-KÃc‰¿û U:.egCbƒÒŒ­”ýÁÓ„ëÛ 7¶Ë(8¥+剄E[~•ª©<-¥¬®Ì=N¥vúæ ‚û&ôÂ=$G¾ÇB¤+—ÏÛó ’QZVÅÈê‚ô•¸¦Á‡)Ÿ‘f3MòÊËcÇŽ•|À·nÝ*Åñ`&ª}ûöR9lƒÔÈúÆK2¦! 
ùµ×^£½{÷J+2ßÙ½ pgCœ2:a•g”×Ò–TØ$ÿ€§^–#õX\‰`1@ 8tŽ>œC …,ÈÖ„1¬°ä(¥9ýhƒ²Nlk)ëSs÷êjx/6×g”ix/‚ hi eeq–ähÁ@Ë÷ß#$Œ€Ë!t(O*à»I´ƒd ˆ£¹ë®»¤Ìhr‡›¢l¼°â;ß,—ò ´tàÂ?¸Å!Ž çpmUŠ–{ÏZ{Zô¦l§¹mÙ’#¿L6´º5×^sã@F4½qŘ à‹ÌvuÃl®?øN»׺‰ ÜŸ¸ȆÅ:J’ƒ+}ôQéÂ{ï½Wú~aRâ›o¾‘Ž!9&´°mš%fÒõ–V#/µ„÷aiÿØû´ç x'¼»ÎÒû…T÷Ô–¬3Ò=uàTê¸#;ÂÕAÂP¼àªŠ‹Ù@‹ð§—Î ·‹˜)´ˆØ‹H kyÿý÷-Âz#Â…Î[üq‹˜u’®Ã§Hm*<ìj¤óâÉ"¬Ò9`/R"¸Ý"²rIÇñ]±?‘Z´®}á.#Õ)fX-ï¼óŽtÜV[uM´ñüóÏ[ÄÚ?ºõÈH}ˆ—% oñ²d6‹XÉ"šEW‹xªÆ ^~-"žJu ;Zô£e *´”³uOá^µv/jé³µ{ÑV[Öú/R¯K÷ºð··vZÓ1-47‘zØ"\Ô,ÂÂhë Y„µÕ"›-â%ßòé§ŸJý“‘)Í"fr¥¿Ñ£G[pL)¸¯ñ½/ïáz*}GE–5é˜H(R÷=—Ë ï"c›¼«úÔrï)ÛÙà4ÝkªFšÙnïÈ‘#¥1ˆ‘Ú"b#¤’Íኋš‡¸ÊíŠø9©ŸÊÍõ׫²?‚àHØ Ë„ò0o+%Œ„EÝQ1h.-‚ÌZD,“åõ×_·à»&Ü@ëž™¶~„»›E¬s%Õ+ˆ¶ELpHuÛ*#7îÊûCnÓUŸøýðy«!T'ß}÷]sÍ5u3Ëu'tØø×ª zô‡õtö­›ëj+ª¨¢Èû>¥Ž“§P«vÉuÇy£yà®Vqâ0µM¥–Á¡V T=-ó £šÒbÝ XmÌ ¬ü•úù•Ð’ûë}®—fœ  oýLùoÝBÑ¡ê¬Göv.iâÇLšЉ‰iT©o‘Fɘ ²–îj°ºÀ} ×j¤¯…Y\¶ú(Ëaö ~ÀÊJœw´-eÝ®ØFf:¤Ä^´h‘®Í¥Ìî*±ÆÌ¼5w;¬Ã!$»ø4œýhCÃz±¯¥œ­{ÊZ8Ö\Ÿ›ºµ¶µqãF)+’üiªZŽkÁÀÖxÄ »4^9›֬ްìàZk¿ (ëb´´ â šúmhîÞkª=[ãÔÚ/ù:Xs“¦ÌP&Ÿ“?›k¯¹q _ÃUÆîŸÿü§û‡Ì›²+”|Ž?k@Âüæb=:å˰‚Á³!99Yú®âû`ÍK@ëï‘o[e\ñ½Söŕ۸ñÃ.wivµØð *(«¤j‘ò׿em¦¯ájä/ü«EÖ ûhB!éj·˜†5DצùÄq½2¼5lÃûç+ÊDF¤`UΗÒHG9IpP©5w5ecÊLKÖ®E,GC÷%eMm7 |V^'›î•ǰíh[ ë1z/Žr°¦žm¥%ÁA­dûA°sSå´èGËPWCÑRÎÖ=Õ°>y¿¹>7u/jm+))IjêĉÒD€Ü®#ŸZ0°5Ü“2ÁAûxØ[{a²¥c”±‡à ¦Î5wï5Õž­q¢^{ý³%͵×Ü8P·¯á*ã‰L]²XGßCü,+ñ9 8ùy­P2Ša}¸x±˜XiàŽÆ¢?¸ïáuÑEé_¹—Õk`CKŽ— ‘‡£3.%9I­Â(P¬i‚—Ðáé uCé(ÈÏö#µYŒêò#Ð Õ•ÒÂ`©  HtêBÝLÍž–IüYôCÇ,9XÌ~ôÞBÚôÓ°}5Á½‹-9öaæÊ«Ù’cÚ«W¯–,™C† 1®/©™-9^¢HÃ¥)¤†Ó>&œðª”ôøHª*¬Mã§<ÎÛŒ€-*.¤~Äý£=-9rì“%ÂÎocö›Þ ð ÷«”;u¸&9îÔ€í¶Ù’cgÎÂU k$5ÓéLÝÞV$‡-9Þ¦UcÇãR’ƒ¡tlIûrÕ+'÷jC%…ETSYeìh¹v¯B LÌ ø «M=É9-H¤Ž¦N­kWGvvÀrÐ1“g‘T—)к`¢º$ïy#lÉ1·VÙ’cœ~`É>|¸q xQÍpWã˜/R¨ †âr’Ó»m,í8ž¯ZoAr xiea´"P–ŸGÝ“bèB¢>©ØÁÓT&œqÖCd’ƒ”Á,ú!À$G?,½¡&ŽÉ1¯&Ù’cŒn±nÛ¶m£^¶ähŠ/«CÀõ$'9†2rÎQUÍùºNÀ…-,8JÅK+ # ʳù4 šÌ€@·¬§GR´Öjl^Çîj6áqø$HÖ7`a€[rÌ}°%Çý i 2w²%G¾lÉц_U€ËIfØ+«k„Ëš:g`J<•ž:Uß3Þbl €xޱ'î¥ì8ž'¹ª…ø);¼-[rØ]Ía­DL»«Y…Æ'"&§\¬•†$,æC€-9Æè®jXãÅÚ‚ÏÆ´èÙµrâÏÖŸ;zïr’Ó½M4…úÓúL5¡Õ1‘ÊssÜ·é”ååQ•ˆážž¨ê=î«íãTǜّI»«9ƒbã²lÉiŒ‰/‘ó;~ü¸/Ã`Ú±³%ÇÕ€äpêhíØr iíXñ•µ¸œäø W¢Á©­iõ!5¡AJé’‚Bª*å\ü|s6@qN.…‡©ÜÒt` 9ÃaÖKd’Ö½­­$3÷,ŒWÏÌÌd@Lˆ[rôW ¼Ö®]Ë®jv@‹ 
tżp¼ˆñ¥.'9€³ï«æªÐš– ¥a-:yRuœwk”ädÓAŒEVò:Ùy"Ÿ Ë*Ywê.p`/ãHÌ$Çðl f’c_;KAbb">|Ø×†îãÅdÿꫪ½{÷ÒÙ³g™äØ+“;ÀâK%ÜCr:&Ð~‘F:·¨Þj@CÑ)<–Ūal"`&›âã'hJÏvªëVÊrŽn#4»j8ŠžõrLr¬ãâËGSSSÙ’cÒ€IŽþŠá¤öc ’akŽýØùj ·€=YXr,.K¿v±Ô6&‚α_¶ ZA 8ó0]Ý?Eu®j%UÒ}¥:¡ÃHÏ餢 œ’’ÅÞôu`É9rä! ›Å\0ÉÑW§Äräx;ae’c'`|9¹äLéÕžòŠËeY»~P‰—XFÀ%Âʇ,|3¦«NÏÙ~”zŠ5˜°°¬ÞÂAòz#J’/:ˆ#\X ’SVVF'9ùŒén&9úªYÕZ´hAC‡Õ·b/¯IŽ—+Ø€á¹ätKŒécèÛMjBƒ—W¼Äb¡GF !g¢v±‘4°CýZ8°ÎÚ|˜®êŸÖðr]öaÉátǺ@YW ,9¢¢¢ºc¼áÛ€ä@8.Ç|÷“}u²fÍêÞ½;EEEé[±—ׯ$ÇËlÀðÜFr0–k¦Ñ¬-‡U.kXÈ15>Šòì7`¸\¥'#€¬jÐï†tT ®j' J…uÇ’K»«© wzG&9ì[í4”^SARRaBÓH›O¥LrôÕ H/j?¦LrìÇÌ×K¸•äÀj“#^NWîÏVéáî‹»Š—Ùƒt¾ºFuœw|‚¬,*‹ÅÞ6¢‹ ˆo6’ÒFwÖA#„ÝÕôGë¢@Ø’£?¶žZ#֤¢ LŗA&9úé.º›6m¢aÆéW©Ô„Òžó…ë0L·’œ. ­h`J<}²fŸj(7ëL5â‡à,' Páâë;ùû2hl·¶”[ûC<ʪjè+ArnÚÉ0xØ]MheK“ý±õä;vì(d{ò¼±ïLrôÓêöíÛ%÷g&9öc@øc’c?v¾Z­$ ß1¢›ˆ§È¤sb¥zYZGÓå}S(ïnùú8"vã\Ö1ºûân*$àj· S[wT9¹Ã–'´RÄXñ›…èÚµ+a%xs!À$G?}¬[·NŠÅéÒŸg–~½5_M˜ +--5_ǸG¦DÀí$çºÁéäײ}±þ€  ?OèM…¹§¨('Guœw|S;wRRtÍè—¢à£UtYŸ16J8&Çdccc™ä­ÇÖŠ`쌌 N#m2 "§öÖG) 9ƒ–²«éS£oÕWg^~À·tîÌhÝNr"‚èÚAéôþÊ=¤\aXZkœ–Hgvìpf|\Ö ¨®¨¤ü}ûèáñ½ÈObМMTÇyÇû¨*+£¼];éqá¶P7àÓÅåôîŠÝô°î d´„„„P™è ‹¾€ääpb}Aõ‚ÚಶgÏ/‰÷ IŽ>ºÄú8Ô§2­…ÝÕ|TñÛø7D;:öüeƒhÞŽ£´>ó”ªÔ›W ¡³G³¨àøqÕqÞñn²7l¤¸° ºOÄl)å•…Û(,0€îÝCyذm&9Æ@Ë$Ç\=½V¶ä˜K‹ 9þþõîÂæêçôfóæÍLrœT»«9  7É™Ô#™.JO¤'ç¨cpÆvI¢Ëú¦RÎÚµd9¯ÌÁæcÚò¡á– 7¦¼ýûè+‡Ph`ýñ8ï‰L|Mê«:n$4øQeKŽþƒääææê_1×èÑ€ä ëÞ©SêÉ.”‡wž8¯@ÜÏÇÅD퀜¯Ì‡k`KŽ+ß¡›Šä ÿ/^>ˆ–ì9N‹ÄŸRÞºz(U2m±x7 ±ÙkÖРÔºN¤W p\x0Ý5J½(¨ò½·Ù’£7¢µõÉ19˜%fad“a—5÷²»šó:زe‹TIÿþý¯Ì‡k`KŽ+ß¡›ŽäŒî܆‡óà·k©ZaµI‹‹ ¿Nî+bs6 ²SìÀP¹ˆ§ pZøã‰Y¯Ý0BÕå­ÇòèÓ5ûè•+†ˆd®[³$‡WXV©B—øøxÂúàÛ÷µÑ÷ë×¶nÝjísLrœ$§OŸ>ÎWäã5°g…ßvß”$'=>RJü”pMÊ)¬Ïjè×’>ýÝÅR‚3ûöÙ9T¾ÜìÀMíØª_)9*„žš¦6é¾þ­>”CŸ9ÌåÃÀªÅba¢£3ò˜±oÑ¢Çå茫7T—&9æÑ$H'pNLrœÃO.Í$GF‚?µ `J’ƒŽ?5µ¿´ʽ_¯V㢴iMìµk¨¼°PuŽw<Ó»vSá±ôå­cTîhH ÷Å»Gu§Aâ]>Hü¨B8ù€¾Ð\ÖØ’£/®ÞP,9ˆÉ©¨¨ð†áxü*Å¢Ìø¾²8†îãýû÷SïÞ½«€KÕ!À$§ ÞЀ€iI2j}xãHšµù0ý´ý¨j(/LD]ZѱåË8Ûš ÏÝ)Ë?KÙë×ÓÓ—ö.iê…Ònj¸^š1Ø-d’cìmÛ¶e’c¼[3H2zíܹÓcÇàMÇü;èMãrÕXvïÞ-ÝÏì®æ<âLrœÇЗj0-É&tkK¿Ö™îþß*Ê+©ŸÑƒÛÚ··¥ ‘fôĆõ¾¤/¯kMU5eý²”tˆ£'¦ôS÷Ë é½ëGPD{få‡;[rTªÑe‡IŽ.0z]%:u¢ððpvY3‰f™ä8§¸ªá9‚ûšÅ9˜ä8‡Ÿ¯•65É2Þžyù RsûWªtÓ51Š>ñ9¹;vPþ¡Ãªs¼ãYd­\AÕåôýÆ‘ŸˆÑkâ@ï·ïJÓzµ—»üÙ\ 
Lrô‡žIŽþ˜zC-[¶”‚´9.ÇÚd’ãœ`‘ìÑ£ùù¹.+¨s=6oiĈUUU™·“Ü3Ó `z’HŸ‹9;ŽÒG¿e¨€»apGºwlO:öëJ‚»‹ç!#f¸Îe¡ïï/„Õ InùlE…ID·î„6d’Ãi+õ$çèQµ;ªþ­pžˆgX3Ö˜ä8§‹ŒŒ Â"·,Î#ÀžÎcèK5˜žä@£:µ¡G.éCˆÍØ•­&3oˆtÓCRâèÈ¢…TUZŸ‰Í—”è©c={äe‹tѯ_=„°>’R^]´–í˦ÿÝ6–ƒü•§\¾-“œ‘ÞšE_@r233õ­”kó @rvK=fmYÜ‹“çðÉéÚµ«s•pi &9|#؃€G èy‘l`€È¬uÅ‹© ¬²nŒ•mÎ/¡Ä?ÊD§F«²˜’S§éè²eô‡‹»Ñƒãz©:üKF6=1{£´èçà×gSSuFì„…ÕZ˜˜ä4DÆù},Êë9¥·Õ’ëé>^.Àíªe’㸠ðÛk5“Ç1T–d’£Dƒ·›CÀcHŽËô­pi*.¯¢›>]Npg’%F¸4-¾où•ÑÑ¥KEƵóò)þ4!åtdñBß5‰þyÝpU-¡k?ZJW H¥‡Æ«ÉêBîàGë¹°»šþ cAP¬AtDXõX%ˆa¦7*ó¶É‘-ÚnhÞ£›Dêèóâ„IŽ>j”IOŒéƒ§·×â1$ŠHŒ ¡YwM …»ŽÑß~R?ø°€è¢û&SEnŽd!À‹‹ù¨(*¦Ã?ÿL="i– ­ÊDÅÕ4ý½E”*’JŒ2Uçñ€gKŽþ*éСƒT)“ý±õôiÀ€´fÍOŠÇ÷<òË¥ÇÆÅ€«tìØÑÅ-{gsAAAÒÀ˜äx§~õ•G‘ ‹~xãÅôó·ÒÇ«÷©ð€kÓü{'Rѱ,ÊZ¹RÌ«N󎛨ÊÃóçQzT0-¹²*Ö¦F( œçJhî=)L¬‹c&Ë“ý5Iqq"¦Ž-9úƒë5^tÑELrL GvWs\ 9iiiÒÎâ<2Éá…‚ÇÒjð8’¥üþ¢ÎôäÔþt—X?gÉÞ*=!IÁOw_B‡ÑÑ­™Ž wíÀ‚shîJõ£eN!dÍSÊ}_¯‘ ÌùãDJPž2Å6,9ì®fŒ*ðÀ$Çl=½Vœ={öPpqeqLrÇþÀ¼>Žãð5* V“œFÐð+x$ÉÁ8ž»l ];(]JD°.ó”jh“z$ÓÏ÷L¢â#™tD,2 X÷!€œ´ˆ@úíáK)>¼öGJîÑÓs7Ó¿î‘2© Im-6Õ'[rŒS“ã°õôš‡ &ý~¯_Ï‹>»S—LrG8©©©ŽWÀ%UÈ–vWSÁÂ;M à±$ãùôæÑ4¦KM~gm;ž§â„nmiéS¨"ûe.Y×*ë3²©.äC@5œña´êáiÎk‹wÐó?o¡ Ä}S í‹3•ƒä°%Ç›.›žžÎ–œ¦áñé3 ’«Çå¸÷6€«®œeÒ½=ñ¼ÖArRRR<¯ã&í±LrØ’cR™¬[Mrqí»;'Ð@‘Zú’¿Ï§Ý'ÕkèŒè˜(^¬/¥À¢³tP¼hW› ~ïîÖÁ90o.J‹§åNmä¢öÏ»é‘ï×Ñ߯F· ïbj08ñ€qêaKŽqØzCÍ—ã^-âe­ZµroG<°õJ1¹šÍ$GGÝ1ÉÑL¨Ê£Iôäß’~ëätkE£^ŸK›³Î¨ÔÖ¯],mzürjDt`öl*kr°@Îö픹d Ý1¼³pœ¨J2€ÖaÁ¹÷«ÕôÊ•Cè¾±=ï“-°»š“Ú(’“››Kp‰aa"’w5v;nˆŒköåx(&9öã••%Ý·ì®f?vM•ÀrHâÀ–œ¦âãJ<žä`0¡"×Âû¦HqoΣՇr•c¤vÑa´îÑé4:-ŽÌK§E +‹1ÔTUS¦X«èä†ôÆÕCéýëG¨ÒD£UÄà<úÃzz{æEôÈ%}ŒéˆÎµ2ÉÑPEu 9C‡)Žò&#P‹HNaa!íÚµ‹!q2ÉA&Dûª°»š}¸5w5¬9“ÓJ|xÉÁ@BühΟ&J1:—üýgš³ã(×Idp€H/=‰žšÒ²~[MGW¬ šêêºó¼á<eùÂ-pöD§OÒ/M¥Ç©óDšè? 
ëÍ ó·ÐG7]안ˆ***’wùSG’““¥™9¤Zea"гgOÂ÷oíÚµ Oñ¾ Á„°%Ç~°ArÂÃÃ)66ÖþÂ\¢I@rØ’Ó$<|B€×Œ)Я%}÷‡ tãÐN4ãýÅôÎ2õÌ_ qÍÓÓúÓÏ‚ìTŸÈ¢ßO%§O+ààMG8µk7íûñGêDÛÿvîÜFU•´Ð绋èÓ5ûh–БÙcpT;xPsLWCXtÙoÙ²¥ä³¾oŸzÝ+]*çJ<,¤8tèPúõ×_=~,ž8Ù’Ã$Ç~íNJ1ýœXxðJI”ÖqùäæQTuˆö~ó5Ú½‡, ®S–ñÅí*‘â÷¨ð‹Ï±7]B‰¶±i“DþrÓÿnKë…{ZŸä˜F ü´ý(õ~n>SD«™NWgXkTÀCÀ’SZZÊku¤/Xr —cÀ^P-\ÖV®\I555^0Ï“Çu’Ó¦: ãµqI¶äÈHðgsøÉpMâ“°ê–UR¯ç¾“²¯5ÈI@H5ýêC(ãÙ«é²NñtD»îÿ~=œ),;ÍÁé]çá¶—-’2ìýòK*ÍØCÏMíGŸ»†®”Þh §‹Ë醗Ñåï-¢Ëú¤ÐÖ¿]IƒSâ]ç©@r lÍ1Fƒ111”À$Çx½¢V¼po.,®C˜sÒûñƳ ·nÝÚþÂ\Â&lɱ ŸT àÙ>DŠhÝì™-ˆÎô¢X«åáYë談¤5[p\)paûò¶1ô¤x±ò§MôÃ/K)42‚bzö¦¸.©¥¿÷BW!Öƒ9µs'å‹”¾Áþ-é‰I½EìR/‰*1’·?_€™ì°(ëÜ{&Ñ´^íåS^ó©$9üÀ7F­—c ®ÞR+$ÉemÈ!Þ2,Óãܹs­~>š¾Ó&è pƒ –ŒE_Ø’£/žÞ\›÷¾©ÛÐZxqî²tÍÀtºý¿+©ŸÈþõÇQÝé™KRth ªd·Ä(±®Ëx:pªÞXºC¬ó²Žr7o¤V;Q¼È"f ½A,¤Up,‹òÅ‚Œç²ŽQ›è0zmÆ º}DW ²~›l=–G÷‹Ý«åÐÝ¿—f ¦ˆ ïÌÀ£$9Þ o3Žq9ì®fF͘§OH%äýë_ÍÓ)/ïÉ‘hÙYìC€IŽ}xÙs5,9œBÚÄ|÷Zëo¯>‚¬7k½œ>YAOü´‘þ·á =+ˆÎ#»R€XXT)ZGÒ× 9úxõ>zUíÙµ‹"ZS«ôN–J¡"ßäD<ÀΊ¸£‚¨¼´ŒÆtmKwß1–®èŸÚ(´<4$xrÎ&iaÏ¡© R»A¼Ç5M§ò‰ òêßÊs¼­˜©Ÿ#²ö±0M!—µ»îº‹ÊËË)88¸©Ëø¸Ž€äÄÇ{÷ﻎpÕUÅ$§ Ý7Ø]MwH½¶BŸ&9ЪX;T²VÀªó¼È¼öÐwké%;è©iý馡½èÇ…ÖàyDü­Ø—Mÿ^³f ËÎñµk¨•0ŒHI¡Èví(¸U+SÞ4°ØçæJV›âÌÃTRPHíb#éÏ£»Òm#ºØL󌸛Wn£÷Vî!àðùïÇÐõƒ;šrœzwJvQc’£7²õõÁ’süøq:{ö,»ÇÔÃÂ[ &L˜@´|ùrš¹¯ô"ØÀ²ƒDcº$Iå7ÕЂ]Çèë‡éç-›è˜X?&¬U$…¶M¦ðÄ6â//®Ëí¹êi°Ëòò¨8'—Jr²©øø i­q­è¦!©4S»l?À`¹y{Ù.zwÅn‘ :@¤LwêÖ(u´«ÆäŽv˜äºœamûöíRº`ã[ä< ¶mÛRÿþýiîܹLr\¤¼<ñü`’c?Ø 9ˆÁBª,ú"à/b¢«««õ­”kóJ˜ä4Pk‡˜p)Ác“úJ–;Ùùëè^‘™í®‹»7ŠÙAq¬3£oŠôW-,%ëÄ¢£ v'hÇŠ ª9žBÉ ³aÁ1±ROHL4FDRËä©AwìÚE6´ŠÂAjò©,?Ÿ*ÏæQÉéÓTUYEá!A4"=¦\1ˆ&÷lGEb…ædWöYɪõ¥pã‹ IúÓ=£{H š+ëmçñ° ¢"‘”ÅjÎLrŒÁ×[j½ôÒKé“O>¡÷Þ{Ï[†dÚq ³b˜äد"N:`?nZJ€ä s #ÐLrš@ÙÕ>»e4½(¬ï+Ü´`á¹z@Ý1² „Ášø ÿ·¥¿§¢ÒÊjÚxô4­>˜K[އ(së– ¶ ðP  "žÇOø˜ûã/(Xdoó£-ýjIP‹–dDÉ"Ö‡°œ¯¡ñЩ>éÕåTSQFçEªÊòÂ"ÉBƒ>øS÷¤CǤ‹¾&RëSsRVUC³¶¦DÌѪ'E=ÑôžˆEºqH'BÂ_XsØ]ÍØ; oß¾´mÛ6cáÚ=œgŸ}VºOp¿°‡Ài1Ia’c?Æ%%%$'¬±¿4—°…€ŸŸ¯—e >W‡“œ:(¬o´ ¥W®L©¤¿Xw€>ú-ƒF¼úuk-ÅìÀÝ+-®6(ÝZ H«<ªSéO>"±/÷eŠ3ñw$¯ˆ²…KXNá9:s¶œòK*¨¢º†*Å_•ø«Ö¡A0Ä;P|†Š f aÁÔ:"X¬-LÉ]ÛQªèCJlœ¥Å ‘Fs¡C¢zZuð$}#Òi#¥vIE•´ÖÍÂû§Ð%Ý“5‘#ylÞüÉ$Çxíâ¥uÅŠÆ7Ä-x,pWC*i¸¬1É1VˆÇ0ɱgNŽa?fZKÀ’ËkEË·¯c’£QÿHŒ4ÉøÛœu†>dçM‘ ®lÅ¢—°ðLéÙž®·c­ú?ê›+ýY;ïŠcÕç%b3gûQšµù0!î¦WÛB 
¹ûwKÜ‹×I€WILÇ£?¬—OTñÐØ®_O÷:½v$>˜9s¦®‡…ÿ—ˆ­ƒÖÑ%¶mÛfI>ƒtø°ÊXë)lêô±ã—“@M È©i¢<žk@`0ÜÚ x0áÇõwŒMdkÕª•kÀ8ƒ«„û >|üñÇú3 @n¼ñF-xœ%vçȉiýï9òÁ¿ËMýÛY(í>®2µ=3O"”À ŠŠ²¬çŒk(Ê/]_Í‘§.é&O\ÚÃrÑÉYyÒò±92ãêóM±[–çx®º(> qƒ¤°R#‘À¤I“6§á^‹ â-‘,1McÆŒdÔ„[ âŸØH€œ’EŽSÞV^T­ˆ×b/îHOTªíÚµÓ…ðǃS¼È³•%øO?ýTgiBà>b`ÝAÂ{ˆG({ƶ­™þýzùü¯}ûâõâi‹3îýå²äÐ iwõ8aísÛX:[¯£›6Kæ®íÚšÓȧä%u”–튗}* _ý3x8D¥º/íˆ#C†4¸“:tH§àôß.GL¸’››+K—.ÕÂî±p¹;ï¼ó´¨°AªmG¼.gû?Àë!Z @‘S ù.F ??_Ÿã‡vÙ²e·ÔVAý TÇ­QoÅÅðTz¹ú…u™.÷ꫯւéÅ$3¿P§)ÓGu—ÇÔd´ý‰Òþ©¹Òzø Œlm¬æ§‹(.(”]s¾”gGw—GK=žž+ó&_$Wõˆ¨Ä• ^fÓ¦M6Õ¨úóÏ?åý÷ß×…’‘1rpICfIGkpƒ5¬êÈ”‰¿ÁÈŠ Qƒ Nl$@.G€"Çån9/¸Ö  X¬;†èÕ§Q£F:x×=ˆía+!‘‘¡+ŠCðlذAÚ´i#'NÔõ)Õ»«wÉÃßþ%ñ/ßhª‹s§©ÿbK¼´¿æZ%ÚJ®—s®G ~Ý:9{@[s¬-}cg-•Ì<•Ö~êe6C%fòäÉ:e;âá.[^KOO—Ï?ÿ\‹$UéÑ£‡ÜqÇÚjêH–f¸¡A¤Â×âïï/#FŒÐî¨eƒ"¼l$@.M€"Ç¥o?/¾N ²!xP`.;;[¢Tlây¼SÓ¦MëäÜìñKï„—¸¯¾úJPœ¯ÿþú¥l„ jŸ4{>ÿ½tm$ŸÞ4Ô‚1Í#a}ÏgF5 ×)ÈΑ*6瓉ƒe’UÌÖ‚í‡åòw–(—µ Ò&´òtõˆ£™6mš.Ê ’(@‹$sçÎ5]§¬6X‹èµ×^«Å ,ÊŽÒ0Xd$~Y²d‰À̓F¦KX{«ëªç(×Îó$8#9g„;‘@ €ýÚµkµ[‚}aµ(,,Ô!v˃OŽJŠ¥†ûß—_~©ýíóòò´ëªÃ%‰ì¡m:œ,½•ÈYóÈXU2šüò’­òÔÂ-Òéú¤¾ze#Xõ¾y^šl}â* Œbå¦ñØWrC¿6òÒ•}-ëKÏ êÔ`ÀbÇhp=KJJÒ.kø¿qƒzM]»vÕ >¡Ö® ®w4ˆu„PƒˆCý CØÐ ͸ëü$(‡EN9P¸Šꌬ:(& Á+ÏÆu1I΢#–¨5ãÊ ‰- /qxÉÃîØ±cµ;\ñ2TWínå’¶jß1Ù¥ >픚iýÄ\É —–«ùéâ²TÒ½*£Ùæ'®–áÁOÿ²IÞÿm·yåSêq£Cll¬Œ5J+F¦Bë†Z^ø?€X<á’†˜@{o¨ÅƒÿÏ6pñEÒdCõBØ žÑ‘Üêì7ÏœœEŽ“ß`^žƒÈRÅ‘Õ=Ù„?z§N´èàAqRWvoËÐ7ß|£üôCBBô7,Tu–Œ†Të·Ýv›Î]^}õUËöÊf—óÐCÉ‹/¾XY·ZÙ¶oß>AZgL°H#¬­p=ƒ°Á—46 ¨a95 ”‡#:%«^î уøžcÇŽiÓ»womåÁ >üóí53Ù¹ˆÑcˆÔß…‰®¼òJ-xëdà nÖ‰`ýAL,;x9«¨)O5 {ø2íânòÈÈn–n?l‰•«ß[&]”ûœ‡¯¯e=gHÀ ppþ/rqs?ùú¶aÆ*‰IÉ”È%¿L¾Pf=ù€¶|Te½±ì¬fðÿ…v+óÖýkj>..Î"j l2é‘0`ذa:‹d÷îÝm¶ªÖÔyñ8$@.G€"Çån9/ØåDGGkÑÁñƒÖp‹Äܸ0¡f†+X{àÆcÂàà`´ÔÙ@2¸Y¿PB!¶á믿Ö/iå=Dëc“¤ß‹?Èö§ÇKçf–.>\!KæHÔe¶×=±ìÌ— ¨„wò†uÚeÍÇÃÍrÍm|GÒ¾!©GiQ€Â݆Û%>ñÍúƒX®¬fŽåKÎrƒ°ÐÖšè¿#°$CÔ`êÛ·¯i á,¿’»“ €-(rl¡Ä>$àLP_nmÆ´~ýzÅÊbˆ|:bõóêÜ+@Cð }7Rë F£ñ¢hÝ0"á·¡çž{NàdÝž¿Y>Z³Gâ^ºÞ²‰MùL‚zöV®j  ¶€áŒ‰\Ö¶ñ…,¼ÿ¹¤S¸Þ†ÂƒÇß,±Ç办QúyÃÿQŽX»gþüùz”º°°Ð¸ìr?QŸªÇc´üº©OK±?£š˜ú/ÚyD|›·0­ã ”G ¡Êþ·OYP§fIË ÝÅ[ œ^­Bä •Š¸V¾ýöÛòïÿ[[`±3Ü× rP ´¢†LŒèƒúR˜Ö®]+'NœÐn›HEÿý÷kQ« ŽÇF$@ŽD ž¹9µèHgÍs%° ©©©ZðX‹ŸƒêsƒÈÅÜ»0EEEY‚¦íâJÄÚÑ£GK­­z±aã&âsé½rlöã–ι…Åpÿli9ôB 
jeYÏ(ÀI•j|ë§³åó›‡Êõ}ÛXºLûn,VÁíO³¬³ž8¹ýöÛ 5¬“e \M‘bÞh‡$ 1D ²B µlÙR§œG, Ä þ¯"É 80º«9ðÍã©“@@Ì2’a2F‚aé1„â\žþyý†T²p ëÚµ«Eø`ÙüùwìØQ©ÀÁKÜv0á:0ἪX›¿²% v½9rD ”ÐÖÇ&j뎓0 ?I Bõ•Õ& TYm&˜DÎÀ¨0ymÙ6IÏ-†>%ÖÔ›yøá‡å³Ï>Ó‚¤´ÀÁÁåµuàv aÄHH0dÈac<³ž7 € »šÞ4ž2 Ø3¤Z6RÇ癃‚‡JDlݺÕ2!32H! .â k>!‚°ÎH•kç\~B´|ðÁ:6É1~~~AS™»N“i_Ȫ>ŽõË"\Œ|Õ1=ÕÄF¶ðRÁÕûÍ–Ä^­Bu:èmGReP›&z°±`Ó¦M“ÜÜ\}ØÒ™ïBò 0 1À}÷ݧ­5( ¡ÎF$@ÎN€"ÇÙï0¯쀂”‘ “u‹µˆ¸Î|¡Òè>ûì³ú¥V’Ž;J§N,Ÿ˜G‚€sÑà^‡ŒT×\s>$b°¥%eåIBFŽtmdê¾9.Y¼*)jêÌPü”%gÏÎRX|R<Üêk&ȪÖÈ×K¶Å§Šoú]à ƒ¶xšÃúøòË/ËäɓɗH€\ŽEŽËÝr^0 ؈ˆÁ4vìXËI!ÙË0!Ó¦… êêíè„Z6†øè1DP ¸}6 ©q‘Lîu°2Mš4Ig”C¼Be /Ÿh]››ºmŽKïŒÅ1AáB¥|‚‚µ‹ãîã'ÔóT"š;{ʬÿ>.÷­øI[7m8ø"XB‘X€"§RìÜH$à¤(rœôÆò²HÀQ ÀU¬ÿþz²¾$9Ø¥*ßȆ– »!6Æ?:tLíÛ·×no¶Q£šáú«RgßqÇòÄOHEÙâvM•Ðià­÷Ç?H:›”.­»–¼¨Z6r†* àݨ¡ÔWÖ—íÊ5Í9Há|ì§w%~Ï6m½1 p¢f\Ò*jí¿ÿþ{E›¹žH€œšEŽSß^^ 8$9¸à‚ ôd}UHm Ác-€,X x9DC 5\Ï xJOÁÁ%Öd…C_#}´ñ‰8>úHî½÷^yôÑG%44Ôúë%:)CÚ4n`Z·7á„~!õ 4­ç TF ž-þJèì<šféq}ßKïÈŒ%[%ú?ã.ž°:žÛ}ûö 2§Á j4<Ëìû÷ï—ììlA| ¸ŠWºÛ¼VpB*FåuëËËÌÌ”½{÷š&Ô´™9s¦%`;DÅÌÂ)xËk GÃ~³fÍ’©S§ê¬V§LlJ–D˜vIÎTËõÄ3À,~L¸@åpSÏÌéç§dckõ|ÏPIÜ=´…VÊòyâÇú3::Zg5,o®# g%@‘ã¬w–×E.N‰ PÄ“uƒ»ÏáÇMâbÁ܆õƺ¿1±ƒé•W^‘·ÞzKyä™2eŠz!ÍË»µ2ºéO¼¤úøû*×£ÓÁã¦\ JxøÈþ¤DSˆm<¤uû°†¦mÖ pÙìÞ½»ž¬×sžH€\‘EŽ+Þu^3 ¸0c·jÕJO#GŽ´@‚¸õTÕ ±óôÓOËk¯½&].–¦ý3í›’)^êe•ªKÀ³A€ÄÄž.¨kìÛ*èô³t8µr‘côç' €‡ù ¸<Ä. P¢- õrŒú=½TŸ«ïįߒzn¶ý\øvè)¡×Ü#É?|PíK-LK’œÝ›¤á€QÕÞר!pÄ8I[>O²wo4V9Ô§!ŽS²óÅú¹ ö÷’”RÂÇ¡.Œ'K$@µLÀ¶_­Z>)~ Ô6$0M»víÊšòÎ'M½ˆ¢ÁȺ¥ªõ^^Ž/rpMõêýcðWémm%‚¨ž­»È)å2óøõ8ìj›÷©°£:WËyWØÉ>7¸ÿóÜœ4%I}½ÄxÞìóÌyV$@$`_(rìë~ðlH€ê€X'!°õò‹Nê®^n¦]ò‹ŠÅÇݼÎÔÁÎ27ÿ&™›VI}/e͈C2N½ é¨À5 nlþÝJƒ¾ÃM•~f¬[.Ù;Ö‰[ƒ@ ºh‚¸7 –“ùóÄ ’¹~¹x5Æ—I£!cÄ#¤©>VyûXIQzª¤ýú­ߎ*›žÊ ‡c8b«ÿÏs“[Pl:}oõŒÏ›iH€H€Ê%`ûÐ\¹»s% ¸.e}@ó*%h ”È©Wß1EΑw—Ô…ŸKØIàÅ×ʱþóÏ . 
™Wʱ÷Ÿßö=ħõyrð¡+äðK÷üÓ¯ü“…rè¿·Kщdi8è2Á1vŽë ¹Ñ»äTAžÅEÍ#´¹x·j/õ¼|¤²}ŒoÉ‹Ý+ûï%>mºH³;ÿ£bÕ+rê¹~nŒg˸NOµ¾ô:c?I€H€Ê È)Ë„kH€HÀ&ÿXrsöo•ÀáãJj‰›6äÜ©ceZ>úN©~/ežÐé¤C®¸M ¾¼âŽØò˜²eŸÜ}[õ±|¢:›é ñ8¸ˆSŧ…³ñlW5_O/c‘Ÿ$@$@U(ùU®¢#7“ ˜ À… ‰¬›‡ŠÑA¶0Gj§TaÓ“y9Ú2RîyÂA ;ÄÁ è§­ÍÈt†:U5#Á-ûggèÃ!‘Aéf§ôz{_6žãÙ2ÎBºô:c?I€H€Ê È)Ë„kH€HÀ&FÂ$°nˆÑ9uÒ¼Îz»=Î#žIò¢wJaJB…§èÛ¶›CÙ*ç=SX^çÍ2­3àþæÙ¬µÚç]µo®±Z¦,üB ޶XpŒ—|[öA²´Ì+ô§3üc<7Ƴe\„téØ/c?I€H€Ê È)Ë„kH€HÀ&>ÿ¤ŽÎ)(2õ÷õòâBÛ-¦ëp!lÒtýíq3îÓiár—ºl®^—µeJ¢“ x„…Kü›ËñÿÍܘݪÏ7røùÉ|é¿tßâ¬ôÓŸ¹Yúÿ4™8M Uºé}wS¢d•äìù[޾÷´ ¯g“––tÑÙÛת Ч$gÿ¶*÷i4xŒ QAÊ‚Ïi¯ÑÚ:kój)Hˆ×Ç€…Ê‘šñÜø«gȺáóõ¤‡¹5Γ @e(r*£Ãm$@$P `ÿÓ?QÞº¡R}Q^žõ*‡˜¾äiþÀ I_³@¶ m${&õÓYÖÜëÚ3°¸ÔWq!mß^¢,3rdæ#²k|G•fú9irÓcâæ ÜÝÖ˱žÕ×›2ÿ3AÆ6´«ïÔ}²woTBçBÙ=©¯ry+Ðqwéí¡Ðg¸$ÿø‘ì¿k¸Î®VÕ>°>µ™¹H¼•jßä!²}l”Jˆ0M%Oè­Ò[w—ì­*·AÇ9Ey§Ÿ¥`süMJVžù9GY}Ãù œcõÔˆÓµœcÈ<< €sHË) )ŸÊ²)£eD‡’´Å½µHþ.ö“ˆ!ƒòÂaý(TÓ<ÃZèØüLÔ÷ð,s-ùÇé"¡°ÄØÚà®–$Z¼š·–úÞ¾¦Ýð=…ÊãY*teû(LKÒÇCêk¤¹vóõ769Ôgz\¼X´P2fÞ,VÖœÀ)ŸÉËWõ“Ƀ:8ÔõðdI€H Ž¼IÛw‘ç×’ 8>F¾žâ¦Rû&«Qvëà-Å æØëíö> Z=w1çW+9{ÔÑ©n«ïí#>QÊÝ ÉJ t¬lã@°ÍQÎ@w•ÜÁZà©”äé¹RÚºc\/?I€H€Ê »ZY&\C$@6À˨4ÇÓÍ‚&<ÐONf—Ä£Øt0v"E  +Kš4ò3±HÈÈÕqJa>¦õ\  ¨˜ENÅl¸…H€ª$ ±)™¦~!’—n^gêÀ¨€@Af†D…˜¶Ïž+6  ÛP䨯‰½H€H \­Õ‹gL²YдV§  @Šò Ê݇+I "EÊ’Ó¦”˜ÈñTµ—š54[x*:ד (WgB  8såYr"CèægœN¥|æGçž®F P=3Qÿ¶ÊþÄtQ±á–Ò@¼<Ü%7%Õ²Ž3$Pâ‚BÉÎÈ”.̓L]÷&œ6O gÓ. TH€"§B4Ü@$@UÀ i®*Ôx ©Äjƒ÷ŽÍÔúÔ”ªÀ$ðÜÔÓ¢¸k)‘³->Uº6WµŠØH€H€l&@‘c3*v$ ²:5 ”úJÕàEÔºõ ’‚4ó:ëíœ'Òr”(öóöÔ®iƶÂâ“²çø éÚÂlÝ1¶ó“H€H |9åsáZ °‰€‡›´ m¨DŽÙjÓ'"T²“’tê_›ÄN.O '1Qz«çƺíMH—‚¢â2.lÖ}8O$@$P–ENY&\C$@Õ"лUˆ¬‹I4í30ª‰ª‹Ü³ø1uâ XÈK8.CÚ4±Z#ú¹òõt—ŽMMë¹@$@$P9ŠœÊùp+ TI` z1ýK‰ëäpcó÷ñ’¬ã UîÏ$P˜“+Ùé20*ÌãƒÇ¥oëÆâÎÔj&.\  ªPäTEˆÛI€H  °ÚdäÈö#%18Èö{zaÍ>~´Š½¹™D2S±]õ¥i‘s A ³u‡¼H€H€ª&@‘S5#ö  J ÃZOùýÀ1S¿K;‡KVü9emâ1õà œ&wXúE†I€—‡IBf®ìS飶1[w,8C$@$P!Šœ Ñp ØFžDÃÚ7“%;ãM;\¢DNAAd%Ðe͆ &(±”}$^.ïnZ¿tW¼xº»Éà¶MMë¹@$@$P5Šœª± TI`”4+÷•ü¢“–¾mTåúV! %]Ò³‘@Er’“%/;GFu2‹œE;âdPÛ&â§°‘ TENõx±7 ”Kàõ‚š_XÆem|ÏÉŠ‰.w®$8-̓¤GxIÁOx8Â’ƒçŠH€H ú(rªÏŒ{ @-ƒü¥S³ ùyë!Ó¶ ½£tÖ¬l5ZÏFåÈT"øú>‘¦MȪ–’•'—viiZÏ  ÛP䨯‰½H€H Jã{EÊ·›¢M©¤QC'<¸¤¯I#ëÕœ' °‘EŽ ØH€ª"0¡w¤KÏ)ã²6±_Iß¿ŸYÖªè‚ÛSöï“ÈÆ¤WËËÕÃUíÛÍÑJø˜­;–œ! 
¨’EN•ˆØH€l#ÐAº#ôÜ f«Í­´—¬¡i=H€H€l'@‘c;+ö$ * ÜÔ¿½Î²w#ëv×àrâpœägfZ¯æ¼ HݽS®è!¼-N䨸®¹ý‚ó,ë8C$@$P}9ÕgÆ=H€H BxaÓ-B>üÝlµ¹ªgkiè'‰Û·W¸/7¸ÌãÇ%#!Qº¨«é¢¿X·_ÜTuÙëúš˜:qH€H J9U"b ¨ÛUŒÅïûÉŽ£i–ÝêÕ“i#º(—µ½R”_`YÏ×$¼m›ôl"ý#[¨|òîê]rmŸ( ðò°¬ç @õ PäTŸ÷  J ŒìØB:6 ”×–m3õ»í‚âí^_wî0­ç‚kÈM;!©±‡äÑ‹ÍVœE;âd—Æ ëìZ@xµ$@$pP䜨<$ €k¨§.ꈮ2gýRÚ áïå.Ó.ê")j¿¨€Öƒ‹«}ß´QÎS"ñ8ÖíÕ¥[åâNá:CŸõzΓ TŸENõ™q ¨’ÀýÚJ ¯—¼µÂlµypxmÍIPB‡Íõ䤤JjtŒ¼0¶·@ mÓádY¹÷¨<<ÒlÝ1¶ó“H€H z(rªÇ‹½I€HÀ&^Ê-mª²Ú¼³j§$eåYöiàí!)7¥”Û¥07ײž3®AàøÆõÒMþ,mÅyúçÒ'¢±ŒP©ÆÙH€H€ΞEÎÙ3äH€H \÷í$~žòòâ-¦í÷«˜‹?/9ºaƒi=œ›@z|¼¤:,¯ëgºÐu1‰²`ûaùϘަõ\  8s9gÎŽ{’ @¥|=ÝåÑQÝe–ʘuLU°7Ö¿vu?IQ™Ör’“Õütb§Nž’ãk×ʘî­eXûf¦+}RYqD5‘QZ˜ÖsH€HàÌ Päœ9;îI$@U¸sÈyâï-x‘µnש4Á}SmFÙ!IDATZ‡ÉÑ?ÿ¤fsnȨ—Ÿ™!oŒ?ßt¡KvÅË25=EÓz. œŠœ³ãǽI€H RÞînòòUýdöŸ{åï¸Sßn¸@2%i×.Óz.8üÌ,9¾q£üû’î`¹¸"eÝ™òÍZŸ3´]SËzÎ œ=Šœ³gÈ# @¥`µ9_Ym˜û§©_·A2ýânr|ýz)ÈÎ6mã‚óˆ_ó»7þJäô0] F'gÈ«ãÌÖS'. œŠœ3ÂÆH€H zÞœÐ_þ8x\>_·ß´ãS—õ”|$î÷ßLë¹à’UÜÌž8X<ÝJ~rgäÊSÊ…)Å£B8ÇÅò*H€HÀŽ”üŵ£“â© €³èÓ*TîÒQ»'Y§”†;Ûœ[.”Œ¸#’¸c§³]¶K_O^F†]û§®}3 2ÌÄ⾯ÿÐu”žÝÓ´ž $@$@5C€"§f8ò($@$P%¯ì+Ȭö`)·µó[7–§/ï)G×­“ÜÔ´*ÃöOÙÔâV®a å¿cÍI~ÚzH¾Ý-ïß8H?ö5]eª‘Ó¢‘Ÿ|7y„œˆ‰•ã[·Vyv°?p7ŒûmµÜ7¬³Üзé?\³G~ÞvH>W1X|ž8DÂøÊØYK$+¿$«|«^Š;‡5è $?3«ªCq{8uê”¶¸å'—Å÷*SØó‰Ÿ6ÈâqòíI“>up†üJ pM9®yßyÕ$@vBÀOÕÍùåÞ‹åȉlmÑ)V/ÍFCìβ.‘¨FÞ½p¾ä䛸ip«¯^-™q‡eá} \­ÛÇì•þ­êá –ÒÅ@­ûqžH€H æ PäÔh°xú™û˜vàB¹N©ôÐÇ·l‘c›7ë i_ܽi¨ŒïIz$@$@öG€"Çþî ψH€*&]P$·þoµ|»)F^¸²LS Jljäk‹ÎËK¶J@ãÆÒlÀñ ©ø .¾¥(¿@ŽmÜ(I»vÊåøÑƒ$*´A*F'Èø÷—‹¯§»|çEÒ¥yP™>\A$@$`(rìâ6ð$H€H š^_¾]¦¿NÇéÀ¢PÚ¥ ‡ÛŸ*“¿\#b$¸}{iÖ§xøøT󛜷;òî$íÚ- ›6Š¿G=yíê~rSÿve.®€Ï/ü[iÕ)\>W1:¨YÄF$@$`·(rìöÖðÄH€H  ë•ËœÈÉ—&‘±ÝZ•»ÇWÊCß­“äì| îÜEºvwO×}IGJÑ´ƒ%qó&ÉÏÈ”)#:Ë—ö,“0cS²äÆOVè¤/_ÕWîVÖ…­\è\I$@$P—(rê’>¿›H€Ζ@V~‘<0÷Oùä=r}ß6òæ„êï]æ°9ÊÍ ÙÀ^\²MòŠNJ°:;uw/×;(˜p"&FÿÞ$Ù©irmŸ6òܘ^度©Ü*±Àyü§ Ò:8@æÜ6\:7czî2W €} È±Ïû³" ꘿ý°Ü=g@̼qMùW¿¶å #¯PÞüu»ÌX¶]‹ åÆÖ¸Kñ (·¿3¬<©êÝ$ïÝ'©;¶I޲Ü\Õ£µ<7¶·œ×¤Q¹—·ãhšÜ¦âž6N–GGu—Ç••ÇËõoÊ…Å•$@$`Ÿ(rìó¾ð¬H€H ú2ó å±ÖË»«wÉÀ¨&ò–²êô.÷@°}´fÌP±=ÇÒ²¥QËp êÐA†·”zõK§2(÷v¿275U’öì‘ôûEŠ‹åæíä¡]¥mã²Ip1i9òÌ/e–â׫eˆv¤õÆîo3OH€Ê#@‘S®# G&°áP’<8w­ü¥Ü< ½rÉê]nb\#‚ê¿ß#ïþ¶GVî9">~¾Ò M ŒŠ?ÌÈV˜“#iÑ1’~p¿d$$J«Ð†r× rëÀöRŽŸ”~ß#O+ã¦ÞócûÈ-;ˆ“h=\" ¸ŠW»ã¼^ 
×!0gýyTYv’³òäî!eºr½*/^Ç ‚:<¯Ù+ÿ[w@âR2įaño©­;þaavkáÉKO—Œ¸8ÉŒ•ôcÇÄÇÓC®èÑJnSo¨J ]‘] ïó¿öëtÛGNdËý*©À“£ËO@`0â' €C ÈqˆÛÄ“$ 3$€š9ï­Þ-/-Þ"Ù…rÏÐNò€z™//å´õWl<”,s7”y›cåPrºxªllþ-š‹_“fâß$L|‚ƒ¥^½Šäƒõ‘j~¾ +K²Ž'¨é˜ä‰—ìt%ȼ=etç–*™@¤\Ò9\¼ÝÝ*üâe¹|qщNÎÐi£ŸPâ¦U…ûp €C Èq¨ÛÅ“% 3$€„o¯Ú)¯«„i*å42±=tQW›2†HÊE;âd¡šÖTâ"7_<”µÄ/4T<ƒÅ'(H‰ž ñjаF³µTb¤ 3CrU&4Ä×䥦H~r²ä(‘ãV¿¾t ‘K:5WS¸œ&îUø—!ææ½ßvÉÿ©,s°n,7åþPäØÇ}àY @íÈW飿X·_^_¾Mv©,bƒÚ6•ÛUÌʸž‘âãQ±õÃ8;Ô˜Ù©öûãàqÙ¨êôlŠKUÇI•üÂ"ÝïRßß_ܼ|ÄÝÛKMÞâæá¡ÜÝܤž›š”@‘S'"æÔÉbAö³¢ü|)ÊË“b5Rq5ùY™’›•£ŽyJ[Œ"TlMÏð 5…ÈÀ6aÒ7¢±Mç‹“úC ³ß-ó6E‹‡[}¹cpGíšÖ¼‘¯>gþC$@$àt(rœî–ò‚H€HÀ+KwÅ«—ÿ=òóÖXñóòëúDÉ5 jÓ´ZA÷¨)·¯ƒÊâ«âzb’3%^elKÈÌ“D5%gçIŽÊüV V¡ÊrV¨>àï¡\Ê<Õä­ÄU ¯—N Ф·4kè+ª6MëÓSû°F6 ãÒ£Õ9ÀÝ17»¥I•-íö :Èç·•u­l$@$@NM€"Ç©o//ŽH€l ™+Ÿ­Ý'_(A°ýHªŽ××+RÆtk¥£ÔˆA}›…;k‹ ¬LȦ6^]Ç­JÜ %4 €Ë Èq™[Í % ì9~BY@¢µPØ©ÜÐ`áÖ¡™\Ü1\.hÓDº4ª–•dž¯<ã.GNäh·¹å»è˜¡ø´,-lÆv ½£äB•Y­ª83þrîH$@$`Ï(rìùîðÜH€H . NÍ’E;ãd±J8°bïQÉÈ->žÒ_ù÷kÝXº¶Ò¢§Š—©"æÿ¬/Ö&X™¶Å§Ê&Uq6‡”kœ»Š±éÝ*TF©äȪ†ùs}.g}1< À¹&@‘s® óø$@$à w‘„PITŠéIérRmðõt—6êÄÒ`jÒÐG[U‚ý¼%ØÏK÷ñR±7ž*ñ€§{})Bæ45å«×ù…Å*ã[Îx–¢âw’TÏ¡ÔÓ±=ˆñ9˜¡ÖåjŒa |U|M° PBk ²,Alù©ïg#  +9V08K$@$P ¹Jœ ÓÄÏþÄtp¢„JñÉ“Õ8ZI׆ÊZÔRÕ¬iÒ@ &‰ m R]iËQeÅLKŽÀ9  '@‘ãâ/ŸH€Î do;¡¬3°Ì¤¨š4D°Ú ÃZʰæ®ÒHâ˲«5òõTŸÓV¤yf#  ³ @‘sð¸+ €ýx“ÃeöwSxF$@$@$@$@$@gA€"ç,àqW     û#@‘c÷„gD$@$@$@$@$pþæyÏÝIEND®B`‚glance-16.0.0/doc/source/images/architecture.png0000666000175100017510000013625713245511421021573 0ustar zuulzuul00000000000000‰PNG  IHDR   9¯€IDATxÚì½ \ŒÝÿÿkßÙ³–5„²ï’ÜvÙ×È…Tn»²E²¢RmH¢}SÚ#iQ)D¸-÷òý|?÷÷ñ{üÿ¯æpÝc¶fj¦f¦ózœGk®9sÍt-ïçyŸsÞçýËÿGEEEEEõ³~¡§€ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²ŠŠŠŠŠ²Mÿýïß¼ycggסC‡_~ùåܹs¿ÿþû¢E‹~ùYt¿Hûq2qJ_½zõÏ?ÿ-þóŸÿkqýúõÏŸ?Ó³J÷Óýt?Ïý_X‚õ– >}ú4lØ0ssóǤ“ÒÒÒ–.]:dÈ\Zá¯Å‡ȵÈÊÊ¢çŠŠŠ§Ð‚D»ÓÂÂâýû÷dÞ={`Œèé–„ÿþûoᯅƒƒ½TTT­OGGG ²áÎ;¹¹¹ôDKHÿùÏ„¿ºººÔ{£¢¢²sC‚l@[•žeÉ©mÛ¶"\¹_~¡gŒŠŠJHÁbH Ù>}ÚÂÂBz.e•ôëýû÷ÉÉÉ‘‘‘/_¾¬OcBÙ ²†Šÿ–¨´²²‚SÆï%e»ÊÊÊüÅÛ²eK''' Yʆ:)))‰ÌëÚ¶m[Ùàíí½k×®ºü˜û÷ïÛØØP6PQÉŸ***ôôô444.\¸PZZZUU•ššzðàÁsçÎ5òóó)øjÆ ­ZµêÞ½{»víàëÕ… uÐÒ´iSÊ**ù“ƒƒžÐ[·nÕ1F"Mƒl\lxûö-|ºÍ›78pÿÈ7T¾råÊ Aƒ´´´ttt¦L™Â}9÷íÛ7vìXæåñãÇMMÍ 
&<}ú”½N^^ÞŒ3p33³äädòÖ‰'@)8›}Xª¬¬¤l ¢’áé:t¨ Í   yóæuëÖ­}ûöË–-C¿FÂÓF °EP—.]è<%Þòôô„=…gWXX¨¢¢‚ÓͯæÉ“'Q)111::ÚËËë£Àñ\ mmmww÷ÈÈÈáÇ6Œ©Ó¼ys˜þmÛ¶]½zµS§N , oegg/Z´H]]=Œ¥>`ç‹/„¿öööô ¤¢’BÀlÚ´IH6œ={ˆ Mß¿„§`‹>Jz¼^‰ì^°ñãÇ›˜˜í™3g*++ð¬ ÷(®ªªp9Ù_¢€‹D¶¯]»†kžžNê(**úøø·,--qd}Jÿó?ÿ#üµøóÏ?éCHE%…‚±† Ã΢ö)Á5jSŸái£Ø"‰³Avý†¬¬¬&Mš8;;°äáá3uðàAîšÏž=Ã[hÔ ¾œÌKR_OOo8K†††x '‘û#vvvp°A$§ú TTÒ©ììl[[[Žý°Úhþódp‚'zõêÕp˜Î(~„Ÿ`‹>ÒyJü´cÇŽ_¸Ô«W/îš¹¹¹xkíÚµB²Ô_¾|ùu6‘CQÙ@Ǩ¨ä@>|PUU555åØ'€t4qX†5kÖ4oÞ|çηoß;vllàg£Ø"ÊÞ‚çÕ¡Cxd¯Ù´{÷nü;<û”Ú·oOÆ„éSB}žãN‚Ù€—jjjì^!e•|ÈÂÂBAA!66¶F6'ãäÉ“äåÌ™3kd?%ÀQ6ð–¿¿?~9ÇêCÏŸ?WRRbúæØåììŒúÛ·oùòeNNÎåË—³áĉ¨+‡Ëüöí[𯑠ÁÁÁøÀNÙ@E%g‚hÑ¢E«V­®]»öæÍì)))-¸ÙPXX'cõêÕhàÃò´mÛ YÒd`@xÚ(¶è#oà©3fŒ1‚{¿™™ÎuEE·K¸oß>uuuü¿@}ß¾}?Ö}ìØ1mmí&Mš¨¨¨tìØñÑ£G5²—?LYYî$e•œéÙ³gÓ§OWd Íy‡N:Ávs[†ß~û­Y³f¨0iÒ$___´YIÛ_€ái£Ø¢4¾AŒzÿþ}FFØ+|'cVVZ"}KQQ” TTr)„„„ÈÈÈW¯^ ¨öúõkÆÀ&®\£âi‹h|ƒ ‹Æ7PQQIH4¾A†E㨨¨d’ Ôo¨h|•L²GÿJbu¼ž1***I˜—Ú°áo*‰IT6Ð3FEE% óBÙ@Ù@EEÕ(ôòåKÊÊ***ªŸôÿ÷” ” TTTT?I²ñ ¿ýö=ÅRâôÑkAEE%¡¦§Èl€WBO±”8}ÿüó=cTTTRÁÚV•§^ ***iaíã–ž‹G¯e½xôZPQQQ6ЋG¯•øDã(¨¨¨¨8Eã(¨¨¨¨8Eã‹ÓG¯•„šž4¾A†>ß@EE%-l¨u[õyWý;}Ôo ¢¢’6ÔÚâƒÙÊkèxeeCƒ±¡ªªj·½Cï>bñ™ôôôlmmqLúøQQQ6P6È*`ć1fÒ4¿°ø'Åëø¿g”~¹ùdºù,CCCŠ**©•”Æ7P6Hlv9Œ™8Mì'Álælxô ¤¢’NIi|eƒô°¡Gox b? añ™úúúô ¤¢’NIi|eƒØ¾º\‹ºw%q—ÌÒ/txœŠJj%¥ñ ņm‡ƒ¢Óe… õß ¹kAÙ@Õ€úòåKFFFqqñ_ýžßÃÃãìÙ³ôüHo|CíÎÔ™ó´[´L+ù]¤OuÕïáêÀ½¿n§é³2/Qg¨É˜·‰õß@Ù@%gzñâÅøñã•••µ´´TUUMLL˜wçÏŸoffVÏ? 
ßHfñ5iÒ¤cÇŽ£GöõõåYA__ÕªUééé²Í††oˆöZ«i³¦Í´OzøÕXÙÈØÄÉͳF6àºà*Ul¨ŸñÊ*yRaa¡¶¶vÏž=SRRà1üþûï÷îݳ¶¶np6%%%ݽ{×ÍÍmذa€„³³3{…¡C‡†……íܹSMM­_¿~” µ)ö‡OM™1gþ²µc'™²ï7l|ã~Ù69Îóvĺm»UTT[µi×E¯{Fé°ÁùìÕi¿Îo©ÓzñêìlÀÑt;uI)¬¢l l ’]YZZjhh”––ò«ÐPl;v,óòóçÏS§NUQQ)**b*L˜0l/Y²MÕ¯_¿R6ˆ\úôtÎ;ÈçN¬¢’RTV ³¿…N««AÈvÇÎ]Q')ÿ]>ýŽœ&ÕÀ†v:::Ÿñ½f0c`C@ÄcC£VÖ” ” T2ª?þøCUUuíڵ갳áæÍ›¿þúkÛ¶m àêêJvnÛ¶mÆ =1bDÇŽ:Ä|öÏ?ÿDc¿wïÞêêêݺuóóó#;ÑØoß¾=޳wï^aØåääÀuسg7V¯^=pà@IŸ+9Œo¸õ0¥uÛö™e_Y¶¾§Íž#Ø@@ÂÞ§ÄÔ3Žœ¹Â°ávä¿°xeeÿÉ” Š ɲ(ØõüùsÜuB²áøñã§OŸ†™†TH´&*ôèÑ£OŸ>xËÑÑÌÏÏ'õííí‡ÜÜÜ{÷îedd`§ꇇ‡_¿~]KK+ @6@¾‹ ¯_¿öôôlÓ¦ σˆWrß°xõÆnÝ{-³Ü‚Ò»ï€î½ Db3Þ/ƒ ؘ¿l-ö»\º!lprròòòúöíe?$ Ñ‡Ææ7–¾ÊŽðkÿ` ¿Ÿ‚QDD°0!ÙÀTÀƒƒƒI…-ZØÆ©VVV>þ<é‚»0~üxöb§¦¦&C^N:uÞ¼yB²ÁØØ˜Ù‰ ,ág¸»»×ù’·ø†´’ß›·ÔÙh»gǾ£(Ûãl2¡[`Õ[°‘^úY§U† jdCBÞ›–:­i`CÝãÐÞAC®.š6LÛ‡²†ÀÁÁ¡˜ÖªuéÚmëVëòòr&/‚â(-- çÇÇLJ}'ÚàÛ·oçdž‡ÂiX½z5>xãÆ î àldee¡û2³sàÀcXÒÕÕ…Å’ zzzK—.e÷Š‹‹ñ];w^¹r¥¤Ï•¼Å7œt÷íÓo ûã‘ã®XG¶û²ëÀñ¤üwæó–(+«6L4¹fó!Ù€rè´~Þ£å#¾aï޽Ġ())áFLLLlälFŒ1vÂ7ïûw_>xòZ¶JØã²‹7¢ÆN230è‹ÖÇ@¸Îî:€Ÿ}çÎ;Û´iÃÍœ± tëÖíСCp5jdCzz:êpÄFøF¿âéµp³áõë×ðH8À=Þ\ᘒžÆ*oñ £&Lµ;p‚}L¹vóO^~Âöa×ËšZMáXì9êÆô)÷ †+ïAH6TO{6RØ –øX }}}öV'Ú5¿ÿþ{×"&» èPd‘ xŒÇOœ"sHà.c&N·²²ª¨¨xÿþ=®&õˆÆߺuk@¢F6dddà}ðàé;ª‘ 8É***Ó§OçèSÂN\Á¿Š› øUðé™ùTìl€‹ÃŒË$¤pü3¥°Š; .½ôs\n¹,®§¤¦¦Ö•¥îÝ»ciÒ¤Ihþ¯d ¾ð>–.^¼H†àBCC#XJIIÉg ¶;¹û%´µµmmmŸ>}*àZÄæ¼3qš€"‹l000ðð —6\ ˆíÔ©3.qYYð#õçŸR6<~ü.òðáÃsssÉž­[·òdN]“&MΜ9÷ËÆÆ·«‹‹‹6Cá‘ ‰Ñ®OMMÅN|¶yóæ×®]<Þ½{ÇíšÓo_TPPnii‰éææÆ^aÔ¨Q/^¼ÀÇÍÍÍñÃx‡²®§ôý,ýç?ÿ)c©¸¸8–¥ÈÈÈëׯ_c ¾0aš-¨ fLžwŸh6mÖ©k÷ó×#à1ÜI|yäì¹KÖS6PQ6Hp¼¡þÏ‚Ü_>>:::""",,ìÎ;AAA·nÝò÷÷¿q㆟Ÿßõë×}||ð—lcg@@@```hhhxx8>˜””„ã<}úôÅ‹d]|ôüïRß@Ù slxRüÑÉÍs¨É˜&MšÀ´™Í± Ë¡Ë%.ÞˆD5û#Øw¢!?Ùžl8á§ÁtÖ|jÏÑK<ë€p È:H¨fi½—cq$ììÞ«ÿ€Á&(­Ú´ïÝψ²A¢Hx÷îL6 wff&H@¤yøð!`ã~ÿþý{÷îݽ{¶>88`¸}û67¼¼¼®^½êééyùòåK—.¹»»_¸páüùógÏžuc ÛW®\Á§pØ´´´‚‚pH !¥ñ ” bwú$krëaŠÅ*«fÚÍQ³§AM--ïhùo‰+Dµ%k·³ï´X¹¥EËVv?<µbüÔYít;¯Þd?C6\¼…jád'¾Ññ¨)¼ʆºTÀ¹ÊÏÏG£M{8qqq111hé‹ ®®®§OŸvqq9qâÄñãDZ:8,hľR¬Ž7Ô:¾²AìNŸ$â⟽î;`0*èèëwã°S·S—á‰rß0Èxtó­ Álp¿½@:Ž„aÃÄ—JÊÊÃ~î’º›X‚3篤ó”$ª¯_¿–””.#€!11±>ÙpôèQggç#GŽà]\5f'9dõ¤Çé“„ßð(½HAAAIIyëî‰Ïß’„ 
ç|(**¹˜@öÌYlÉͯД&Mšl¶sò½Ÿ1ùF|ÃŽC‚Ù€2wÉ:Õ½'<ï§¼‚ÇpÞ÷!vâãZMµí<£óݼîS6ˆWýõ׫W¯²²²RSS– NNNååårË:Þ ÷ã á)Ï-·îjÓ®ƒªšºù¼%ׂ# PÎú„ë÷ê‡úÍš·ÔlÚL]]ÓèÇ2»ÝÿuájeHÕbÕ³9ËTTÕHw6Àú›ÎZö ¨kh‘™¯p€JM]ÁÇ)Ä«7oÞÀcHHHÀk(6@ØCz– ((d• ¤d–}uõ 3qY“uÍæÂäJ’õ”‚b^  ï}'…}=%Ž‚f>“^„GÆG|î>á8lØã2ø"øKû”Ä.ð ** g <’Y"C’fþž;wîÊ•+ÞÞÞ!!!ø±±±øvÊÊÙfS"žl°ql§Û ÍdÓY è:¬4.Z†;~çΜ1bšá@€ì>q ?€‡Ú±Â0píÚ5À××ÁÇÀ ---55_‡o$ðlÐøÊq²á»ñêÛ9ï ñSgP6P6È,--œœ`Öa¦XgkXêôôôÌÌ̬ÊÈÈ€0@ ÔAM€„a>ûàÁâX„……Ýg)Œ¥,¡ê㳸.L|Caaa~~>¶qü'OžÀGföíÛ×€gƒÆ7P6ˆŸ 4eƒ,ÊÜÜ|Þ¼y666çÎC‹Þ,>,5ìuvvvnnn^^±ãhS—–––ýЫW¯ÊËËKYböUTT¼aéíÛ·UUU¸Ÿ>}úðáÃû÷ï+++±uJJJpLß…ë…ïõðð°³³[¸paž ß Ãl’øÊʹaÜ9s,,,V¬Xaee…fû•+WÐÞG+>>>3YPP€G vF†vÿãÇ8ß?þòåËׯ_¿}ûöÇ|céëá]&ÐÈÁ‘áDFFúûû»ººîÙ³gÛ¶mëÖ­ã™YZNÆh|ƒô8}’ˆo l lW6XZZnÚ´ >ĉ'@ˆëׯ‡††FEE%%%¥¥¥Á“xúô)q#ŠŠŠ`â œaX|xïØ„=@0ÿ^ ŸÊÉÉ—äà°ðQÜÝÝÏœ9süøñƒÚÛÛË?¨ß =NõêΆá£'“ôMš4iݶƒ¡ÑGg÷z0ñµ^ñ›²¡Žlؾ}»³³ó¥K—`¾ýüünݺtçÎû÷ï?|øþD\\\bbâãÇ322à<{ö,Ÿ%X²òRff&€ ÉÉÉd4Ÿ"CÙ8N``à7®]»ÖèØ@Çä{¼!:»tô„©Š =úºy‡qóÛ²Û¹OÿÁ€ÇÂG’(»žµÝ{в¡AØc}ñâÅÙ¤§§ÃŸàRø `¤¦¦’™N” ” ˆ R_ˆ”TØ€µ.l0d$û:CM&()+_¿—Fû”ä• p¸Éд¨l€ß@&Q6ðPÇŽ³³³)$!Ü8½” ¢ÊÃÃC__ßÅÅ¥²²²Žl@¹|+®Ã²u¶LNžUwëvÖÓÐÔê7ИYÖ‚¤ñq:{£Wß:­ÛmØq(ìqÙŒyË›·heïûŸ=tÚ‡ywÄØ©½úž üQ6Ô\]]­­­—,YÂÁ†}ûö]¸p¡vlÀ6Y`CT6lܸqïÞ½” ” —„ü™‹æç7hkkcgff¦Hl°X¹E]CëFxI×DAaÑê­Ì»Ú-t&šÎž üQ6ÔÞ¼yc Ó̰ÏŽ››Ûõë×kÁüÍÈÈÀ%ž ;wî<|ø0êS6P6ÔÇÿ(v6*//ÇI^¼|÷îGþ†ââb CVfÝ·oßâÅ‹ðlH<¾²AŽç)e”~i¡ÓÊþð)€¡U›vFÃFf–}e†©«lJ>eeƒ ±‰}ƒqüøqØèÀÀ@’(N@vvvQQ =l:Y‚›IÛÀž³'ù/–pL²xu˜Ü>®Hii). 
|ŽÄÄÄèèhPÁÃÃL”’óõ”(ä;¾ÅîàIÔÑëÑ›x@Ùw₦VS9X‡õ^r™‡LõøAJyƒØw›ßNmµ?ʽMÙ Q6pÄ7¸ººÂ“$âãã™Î¥‚‚ ‚dn€­‡+£TÀú“$?DÄExùò%\²‚7Ž“œœqóæÍK—.5®µö(äÛoø>"ù20*cç­‡)7ïÉ4¼ï¤VRRR×Ðj¢ €>ý×?LR¢l¨6àÙ9tèбcÇN:uåÊ•   ö9¬$í3éq"´€õJJJà|ŸØŸƒŒ0ÓuX)ÅVÝM—Âe— p4›6ëÔµûy߇á©w_9{ƒ}Q}îܹ+W®`?ü rð†Í Jã(êþO6›ís'ví–ÚÍ[X¬²RPT6j¼àXé:¬tV)|‚àà`&·q Hªg’Þöûñ.l7^2ûQ3þ‡HuàXØ€%@ |…[·n ø"xð|}}á:øûûEDDà€  (‚Ã☀Pž ß@ÙP{6Ä?{ݤI“ØLòrìäéêŽN®YåÈúX4eC#ÔÒ¥K8píÚ5ØqÒl'¹}ˆ“““M2ùl?ØC2ü`?ñ8Òû}"9>þŒ—Ln’öÇ@#x®®®Û¶m[°`o lÉø†êØ7EÅá£'Ò£w_í-™—(” ” 2Ç&ömçÎgΜñõõ ¹wï^dd$ûð@˜òÒÒR ªª Fæž_nŸ¿¤÷ÁN¼… ¨ŒOár ðàˆÀQ€›rÿþ}éèÑ£t­=ÊÙöä5ö²²;ö my//¯àà`8±±±0è€DQQ|ˆ²²24ÿI®Pâ:O¢ê‡à@T²D…ââ✜x$>¼uëK×a¥lhDñ ´OIJØðøñcʆº³‰o€÷ôô¼~ý:Zú¢Îa%#à ¼ºF7eeeC³¶ løôéeƒ¸Øàããs÷î]2aIT6¤¥¥EGG3!” ” ” ” òɆoþ½j­Õ/?ëø÷’ª¿æZ,­ËþötƒCB¤“ !!!0ñµ`jÆÇLJ……Ñü ” ” ²Äy’¤ÙPUUuÚíÜÓ×&¿x/‰›änBnÒ³Wßþ”F6À²GFFÖŽ ÉÉÉdJÍß@Ù@Ù 3lx÷ûÿ°—ÊOÿ–·ßËßo?V—7låõŽòWJÕ÷RΔ÷ÕåÕ÷ò')eï~*¥(•Lù£„”·ÕååÏ¥ø )ßPŠHyýo©6Ø;8L1›%¡;„)§.^Û»w¯´±! àáǵcCJJJDDDPPÍß@Ù ‡ó”äž •<ÀÀƒ \`ø‰ `L…2žTàv*pƒ¡Uê :è2Á’+÷ruuu¥ 7oÞDÛŸ„Oׂ QQQÁÁÁt¼²Aãä˜ •’wxPALîBa=²Ar·Gýß-‹-Z¸p¡Hl¸wï^\\\íØ€††† `ÃÒ¥K)(¨ß El`ÃÛº» µîDª­»P]*¾¢P6ˆ¤K—.íÞ½{ùò峯=&&¦vlHLLZpžlغuëÑ£G)(èxƒ4±ÞÔè.TÕÝ]D…`øVƒ»À¢B«P6ˆ¤?&$$\¸p0`kdCPPPttt­ÙÀÎ'''ggg777œ®ÑMÙ ílxRüQìzféifC­;‘Äå.¼¬Ñ]xÍ×]( l¨!`»á\½z•,  “ 0œ8q‚› 8ÃÉÉɵcCTTèâçççëë‹¿àY »°°ðóçÏ̪” ” ÒȆ½ üÂâÅþ¨‡ÅgêëëK'¾ƒ¡ncÎå5Ž9¿'RÁÏ¥Ǫ̈76<Í«ïFô—/_JJJ`ÊcccÃÃÃA Øñ@–°ÂYg ô…¹ÏÈÈà`»ð³t+YÕ•¬ÛŠí/^TVV~ûö­a‘@ã(„•Í.‡1§‰ýQ7›9ÛÖÖVZÙÐ`ST¾xøØQÇœ¹©ð¢¼ºÈ¾þÑÀ yœC¸dÑ‹¤¤¤–âââ`ß3X‚­'ï2l@}}ò—¨   ¸¸Ô©¨¨xÿþ=`ð·TŠÆ7P6Ô ªª*ÃFÀ¼‡ŒÒ/u|Âq„ØÌéæ³zöì‰#K)êà.d¿¨PRV5f|íÆœgü:oâ”éuqê ¶võÆλü- ò÷÷wqqù[öEãè<%¡ð°ÛÞ¡Gžb‰ÔÕÓÓ³¶¶–B00l¨‹»pÐùŽ   ð$§X*\ö¾µ÷àqÆQ luÌ™'ê‡ ï>ÿ%gã uÔøñã ¥Ö ã ” b‹oõ~’éçƒ µsh4dˆñpÄq¿“0î‚ÅÒUÃFŒæbƒhcÎÜTÈGyõEžü™¸µÒÓÓ•””ðS###)(äßoh„làp„sŽMÎÆÇoßyÔÛ _ß~0,\¼b«ÍnB…â7_{ö6ðº 0ì9pL[»¹º†ö,_mwaƯsÁ†›AŒ†oßAw‡ýQÝBRj¼!ñù[!óÂÊ6oÞLœãÙ³gS6P6ÈùxC#dC­Çœ7oÛÕ¹KW0Àaïaça|:ñLF[°hyÉ6à­³ÞðâRžM˜lÚ§¯á á1Oà(€ Ýôº÷èÙgß—-6ö¨›òLTw¡ÁÙ`wðd§®zÖö‡b²Ëpzµµµ ÔÔÔŠŠŠ(((ä’ ¢-‹$tìÔþ¶S³‹6l±%Hÿ²áíůÿeŠÅ’ê>%¦Éì×¹ÚÍ[<ŒË€£_öIIIùðq7QÝRž7’òß99Ý£O?ee•)3æxø‡ÕÝþ[Ë××—}\ >eeeƒü±Aä)ª!øìq9…or ÞÙA·SÉÛo% X0(bcCñ†Q̘³Ù̹'Og:‘À 
85Ž9s€á9)e_|«wHôŒ¹‹UTÕàFls8\7¢þãD•‰‰ ;´´´*++e÷Y ñ ” ┃ƒƒ°¡v«¨.\²‚{RÖÍà ^²±ÁÍÝ›LF²X²l`&#60HÕl8|Rx*äÿ )Òß›óÊÒÚ^j˜j>ÀÅøÁÂI&£Ðì’éɬ4¾²Š‹ ¢/‹TðêcÓfÍö9 %»àujN±¦–L?Ü…QcƛϚGØp#0üª;‘,–®22Œp&6°:‘À†½ll¨™ l`È+kÈyJ™e_ݼ'šÎTRRÖÔj:wñªëwã<Ž›b†"ñ VVßÓÞ1C¾¾¾ìNf¥ñ tžõx³A¤8ç³îÞjjêO +Ù§¨ÂWhÖL;¿ìãæmv;w‰yœ{îÒõî={3l€¯àtò>˜œUD:‘¾³áÇèÃ!;‘*äÕxÆ7\ð mÕ¦¾ÝÈØäà)÷”ªŸG#*ål¼¡²²RKK ¿péÒ¥¥¥¥Ø8vìDXXo l ñ r2ÞP‹e‘&Lž6kîBŽ8gÿ‡8Úù˾QIÙ½z÷­nHöèõ æ 6θ{“N¤ôg¯ 6VVVY´lõ6˜2ŽaƒðîBÞ¿ås^éç†òŽž÷Zµa{h\v#™ÃêââbhhËþvvv” ” Ôo#6H sgjÎK~qÎI™…O_V‰:E•Ÿ»0<+­. 5ÞpÉÿ¾›W`ã‰oˆˆˆÀ¹åùÉh·eo làbCýfî¬õU~î³Rñ°vmþüù‚{Ex²aǾ£b_ŸQ†n-ùÛ£l l¨“***ø=¥¥¥2ʆzÎÜYë)ª?ƒá'*ˆËoppp cª...3w¢þ|<Èâa?*Lš2][»yF^†»Q?¯¤J¤Ì“‘ž—ýÞL»¹Ýž#×o…„FMŸ9wüdS!ÇœóøPAT66“'OæëkÛ¶­Õfëº%ãgÈJþ¹ao ó”ê*ŽedˆŽ;&»cÑüܰï’p6â.0l܉äq-ÀñÀ1Æ]8évùè©‹ n?TPPÈ.|OÜ…ãg.;¹\iÌù;ب[]~‰ üÎ OAnï—þõ‡¯Ü¶þ¬W £óŽ W¬ã—x\?Cr—ú t¼Æ7HD°8ºººìVCMMMFW’ag÷èØ ¬¬;ÿ’ Ï^V s^¸d¥ñˆQüF8»ªkhˆ0E•'Ò0ü^]^Ї NNNìW[[ÛÊÊ*;;›¼Ë3¾ÁîàÉùËÖ²ï‰{Z¡¢ª™Q,…ó”Ä~dÊÊê7|—»ùX»v­Œ> xŽ9ƒ pZê´0h06܉ lxOÀPPñÅÖ~W½îZZM‡›Ý÷mÆJÒУWŸe«¬€„5V[—®ZOÀàìr±MÛvà Þj6 HX½~ë’•ëž•ü¾n“ *´jÝf‹­#ã.¬Z·eñŠuÇÏ\é¦ßcמ#îƒXØPTTD"~I?á™3g¸g¦qqÓY :wÓoݶý£I126ÑÐÔ4l,sX)(èxƒ¤TZZʾÐŒ‘L³ßÕ‰S¦¯Û¸íÄÔ9zê|† Ä]ز}·võÈÁ¡ °X³™sUTT£?NÎ?iZŸ¾ý¯ß‹J¦³ÆbSóaèáiùܺ6L7Ÿ3~²)ñ¬6ïèª×ÃÓïŽËùkššZn¾ÄQ05ŸÓ©s·þ9y1,:› (9â`@åÀ͆ͻö 2\¯G﹋W1ÅÑùL\n¹Ó›WcððQŽN®xy;òI½õ)Q64þù矯,YZZrtVž;wîóçÏ‹/¦ûkÜß¡C‡Ý»w———ÿ÷¿ÿ•ÚØ·Ù³g3Ýвû<°±Çd$Âl 1Z§U묯 ž²Øð¼ô#œƒk6çüWŸ5µ´/_ ,X\ݧÄt"1l HûN£YÍt"U³aR5² ß〻÷:waô¸ÉÓfÌ&l˜f>§uÛvqi…<Ý…œEx6ðìsÏÎÎÆ~Áq*<ç)y‡DŸ¾â/¼O0h5mÇ¢O¿ë·ÛcOJaþ÷ŒÒ/” òɆOŸ>>}úãǯ^½ú(ÅÂY柗––¶téÒ¾ÿ^jÙ€v%aƒ———¬³ßU¸ ±iÊÊ*À@(aCñû¢×ß"bÓ«JºèÍŒ93aè0† LÔÇL$Âft°!¯ìËÝÈê•— ú 01¥m»†ƒ†Ñ…i3挛dÊÏ]ø^Š—†5º…)l'Nÿ–Öv„ 饟A‹ûŸ×C|eOI6¾aÏž=æææ¥^RÎ"àÁÑÑQjÙ´mÛ–}aÙfWàØ`Y͆ꡅ [l•”ŽŸvgØpça2¶/û2cΓ¦š 4ä;†×ĆcΦ?ØÂ:àk»S¼I¹t=„8 „ ßÁÀM²‹ëê7ˆ‹ ™¯¾ ®°ÑvÏÜÅ«ØÙàêú‚õ¿ÎSâ)ÉÆ7èêê>~ü˜²A,Æ+++¥™ ...vvv2ý<|gŸ8g† Eo¾=}ù¾c§.mÚ¶GýÜâ÷ðr ß)((lܺ“™†ÔR§Õ¯s,ƒ…KV <ŒéD"# Ìd¤j6hh23‘¾³¡ôsfÁ»êõY—[rOQe±a?w!ûGË<%Áâ9OÉ/,þÈ™+dÛùìÕÖmÛ÷0Ø3ð!?6F¥5Ón¾z“­é¬³.[µa{ V‹Vm ñ (ÉÆ7Ȅ͕6@¸`õ?OéIJ:ÙÀoY$ÂfŠêŸ@ò_6ÀW˜»pi—nú~AÙ•6»÷Uç} y$>qNMM-!£ðàûhóN$†¼ŸÙ@x°ÚÊFóØ™Ëéùï缺M¦¨V³aâ4~î«|B©6ðÌß`µÝÁb•6¥ij5e±|ÕFC#®ÃyŸàŽ»’ó© ¨¸xõFŽ`ß Wã 
” ╨$K|ƒX:”eeõ46pD´}g[àÂÔé3«ÙPôžt"¥?¯01ÞƒšººvóN'/N¤”ܲFCáÀà˜‰„²ï_6T9³³!£àÝò5UTTÕ54TÕÔ€2º@ØÀ‡ ßÁU/làé7XnݵÁƑĻuî¦O†”›·Ô‰Ê*áy'$>›UþGæ«oaÉyþ’TÑ9¬” õ¡åË—÷èуy­ªªêææ&sl¨Ë\Ûºø “ µ^E5»à]lÊs0àûÊH?ÆâÓ ² ß×bY¤ì¢âs²Šªø.pS!«¨ºÔxÞWƒµ×í¼jÃvE%¥c¼ÉŠIJJÊüâ¢ížìÔUÏÚþPLvo lh6vêÔ {Äî7Ü¿߯ÆFjÙ Þ‡_¾Ù ¡UTk½,¿)ªÜîC…†eCfÙ×ûõ4dëîd1pÈp~wBRþ;‡#§{ôé×jÊŒ9þap#$w·TUUòüß+++ɪ‘” ‹ ¸'Ægddôöí[±³a×®]M›6¥l 6ˆ¶Šª¸RñpƒAðU~îJ&«HÉ<%á‹wHôŒ¹‹UTÕàFls8ÌîFˆ÷n155v6oÞ®®^½Ú©S§ 0¿G[[›ã÷È4ø-‡ µl¨çÌ5v"‰ê.T—ÂêRÏ~ÌúØÉÓCb3ݼÙ—Ú®1·é†Â§&šÎTRRÖÔj:wñªëwã<Ž›b†"‰ø`€L‹B#LMMl㙥ã D’oÀH `|aš™f>»ÐÞgü›k×®a;==ýÍ›70Ðk×®%û±æ?ÙFKßÐବ¬Æ>%™ý&Mšddd6(**úøø·,--ñó˜O-[¶Œã÷—ùùù2ÇË!H/$™¹óðñsûŽœ®M'’Ðî 3 ?Öó<¥¸ÜrP?4.[ÔÜ>¨ßªM;ü#c“ƒ§Üq‡ü<Q)‰øŽõeIb;±¬LãdÌo8þ<®\Hö·ž={Fn‹á,Áèc;((ˆðM{@¢¼¼F f̹E‹­[·†+Àôq³!//ãënß¾=p_tuu™·P ÷%ó{ôôô¸OÆ7Hh9)gƒ0H&›’û´ëÞ³÷g×3w²OQfþ}é QÇœ»  õÞñ ¢–£ç½VmبÔg$^f‰Y¢•+WÒyJt¼ÁÞÞ?‰,ñD”››Kn‹ëlBóoÅÇÇc¿‡‡¨€{ˆ}I(T9TTTz÷î]ZZÊ“ YYYøøþýû™=÷îÝÃOOOl ¿?˜û÷Èh|ƒ€å¤™ Bv"Ÿlj8pÈÍHû½Î½z÷…_ü QðU† pŽž¾tøÄùÝ…laÜ…Ÿ©Ñ~Sn„'nµ?ÈkûC6{Ž8:¹^ò¿Ï=‰_ñ¸yOÒ£S€;8RaS64®9¬K—.UTTôõõeÞmÙ²%¿ß9hР1cÆôíÛ÷÷»111°gÏže컚šZUUyùáÃMMÍ)S¦0õwïÞ/JHHÀò{†*7ñ –CZ6?æ 6˜ŒO|…K¬0齇O ]0ýÙW¨ËÕL`(ø˜^ð±¡æ°pqWS×ÐnѲm{]M--íæ-p rï¾âžVð\séjУ“~¤89#ëxà8xð´ä÷m»öuí¦¯¥ÕÔhèÿ;Ñ Øû‘VZnY´Ü’P!»èã‡CzÝ{©ª©uìÔõä9¯Yó—Î[´Šq‚"÷èÕ7.ýe Tø†dÔüþFC“_¼'/Ã’ó q'>Ç'4¦O¿KÖlâ¨ñvÍY¼²fµ1ÙÎáÐS64j6Ãû÷ïæ9žÆæjkkà PQQ騱#™•••Á¸2„ýhfffªªx~Õ:uêdccC¦$‘à‰3f(++7oޜ쩬¬Üºu« KpVfΜYXXX# cÇŽñü=²;‡•çrÒÌ!Çœ H'ÒúÍ¶Õ (…F ­íÐL¶µ?x34Ž‚ŠŠjD|6sf_^›}Y¤õ[vœ6öïF¥»{†';rQVV‰ÏxIÜ…e«7Ž=ç˜37Deƒxã¶î>`imǾgݶݳ­ÀÆþ“ ÆQßÑùŒ‘±IÄ“¼»cßÑ´’ß7ïÚ³Ø$àÈmÛ¶­ûÔUÊù\O &>++«¤¤„}QQ‘ºº:ÓkÄèýû÷¨ÌóPøÆS!ŸDúI<,²á’ÿ}7¯@ÙŠo~Š*ØÐ½gï]އ‡˾`ñ*€!»¨:'ÏÒUVÄQÈ-þ¨©©µp隟—×þ}ɼ´üJ47†ËÞ‰* q°çÐ)ð -ÿ]KVG]¯îDb/i¢°¡.„ç –¯·&qÑWƒÁuà^›ð –fÌ]L¦Ãªª©§}tþ AGGG¼ëÓø™™§Tëµöáà¡’ƒõ”Š hŽ™8MÙ ÌÕñ“L›6m0Ì^°ôìådhääq9w™¢ ßb°±É¿¾Â¤ï¾³d^ÈÃ|ÄÖþǘó¤i3 6¸\ðÑnÞ2õy¥àN$† i/P>4”ßpëQ*\¥©æs¸¸_ô»³ÁÆQCSsÕF›j—Âþ é¬Ü½Žƒ‡ªžû´¢U›v§¯øv½„#$佩Ÿü ùùù” jñ µcCUUÕ Aƒ¶mÛ&=¿Sã¢2_ö4èÖ¢ ±A0ا¨‚ #Fçs¼Ÿ€ƒ\¼z‹ \˜0Ù¬ÿÀÁ¹?/¯ÃƆÀûÕÁ•pþk´ÙÕݯI“&a±Ùc&L]´b½îƒHlï<¥êÙhWoµ×íLºò;vé¶ÌrË“—Ÿ°ÿøEît¡ñÏ^›ÍYDRmÞU½¶9þk²Ê·HùðÌÚ;8ôîcðKãV×núÛllq6êò,4Šøš¿¡ç)¹y›  
8v’)Gˆ¬ô³Apœ3~š‰”–÷FAAaÝæÌL¤-[͘½€Œ9³/¯M¶Áƒ'y•ÊÊ*c'L嘌”–ÿ®EK…K×***ùßÒ] åI½°A@|CfÙ×{IÏG-ð,wrQDmIÀŽa2vÒ4¿°x~ ¾6†’Qúåvä“Éf³úõ7¬ $>ÞðI$+¿Sã„ÈJ9„‰sf±a÷³ç/éÜUß; üɳ7Ö;÷â˜>·#8ú‘Ʋ؀²lÍFUÕS®g¼¨º–XMÖЂÅòu¬¬Ñ…wžü( è7Ô±¤—~榪ñ¢%:~²i£Ewllm¥— ²2"/—3hþ†Ú±Aøe‘~ËQxœójªÙ,&ÉÏÁcç˜Ø…ïl`ñ`êw6Tw"¥æ½³p9ü M-ûý'Șóõ (ü¤Ýû ï.T—üêÒPsX,¢%d‰{ZÁqä¡Áûñ l— ³›žeeƒ,­ÃJB^¥— ¬¢ZZã*ª¿?É{ó0ñiNñ'á—EJ}ö6<>`¦¨^ò½«ª¦—QÂÛ]à¦Â0¤6,¢%96 BcîJâÙ¹T—GŒ²²Aâlàò*ýlK*žÜº-‹4v¢©©ù<ÁHi\TH‘ â§$`-‰²ò@Œ®y#eCzzº­í¿ãøÌïÄÇ$""‚²A\œ€WifC-2wŠgÕŸg"Åg–‚ ×ÔL.0ˆÄ†ºÜ“"-¢Uk6Ôß@Ù ^6H6¾¡Ö}Üõ ccc]]]///† ]ºtéÙ³§#$)„¼J-$¹3[Äe‘jsæ¦JÊóª†ò,¢Å^¢³KMÆò,¸g8Ž\c|eƒxÙ Ùø]jÙ@bå¡Ñ£G㯩é÷Å–œœ¤ö7‹Jri`ƒ€Wéeƒˆî“¼·!g¼µ)K¸e‘jt8¨ú¼*…Upž¿E´8R¼ñLÄ3Pñ ÒÆ†.î{ŽºÉ.$ß Í~ž“¶mÛrôu¨©©UTTHío•äâš§•ù²wß»ç^`ùAJ>޼пWifƒîÂÃħC†„íÓÐÐTVV?É4>£¸v«¨fÔì.Ô܉ÄP¡ÞØ (¾×"Z’3s¢²ì!O}“&MÚuè8dÄèî×Åh—§Îœ'8Õo^ÙÚÚr°A\É=$¤†Šoxú‚œã‘ã°Í~ó݉ÏüL y•Z6rØ:‘¢ç7oÞ²ÿÀÁ÷c3s«c›;wÕëØ©kê³7µKŃrÚÝÏÖÑIÔ1gž`xœW%Uñ NnžÒÆC£÷àÑ::Ÿ1lŒ;Óæ7§?xæÚm´“(ä™ ùùùJJJìlHLL”×yJuñÀu ³^:­Ûj5m††¿ðlð¸yoßñó±9¯¸C^¥š BLQ]²r=ÎFRV)Ó‰› ‚{Y$Á™;ÙÝ…Ù –6)ê˜3ê‡ ÜW?)ÿnö¥’ò+ÍæXÔ±ÏGl06˼|òòÓ¨ SáüE<)üÁ¹‹WÁÉ ló9¬æææ ŒŒŒ¤ü×6ÔX4aI<Ùl6öL4IÌ}lð‰4t9øGNGe¾”þØ7aÆœ³‹>¨ª©-^±ŽcÌyüäéðf/X¶nóN‚„Œ‚ú=ûœ¿z›P!ýŇ-¶{:wÕ×Ôj:hÈpŸÀHÀÀÖáH³fÍÕÕ5ô{ôY¸Ì’€áøY¯ñ“ÍtZµéÕ§ÿ®½Ç…q˜RÿlØw₆¦&v**)a{¼B¢t;uÁ¿¹÷Ø9if Ë€ë°ÁÆ‘¼t¹t÷9ÚC½ûÀ­Kvîܬ™vóê°½ Èè:Ïj N]¾‰ýÍ[êÌ_¶6 · ÿúá)ÏgÌ]Œšx‹ ’~¹5›w´nÛ;7îø²¡þʰAŒÉ=ä• Lç@ÓfÚ°V®WoÕÈfÅ=<À adl‚LšÙ ÌÕqÙ¨yøäEŽÌ›¶;ÂuHñî‚ñˆ1³æ/%lH/¨{?vÆ“¸ `0`½k¿Oà£)f³UTTîDe¢Œ?µgŸ~×ïúßK$î‚ýá{Ý ¼nó.T‹I/­Ñ] %9¯æ)õ0­Ÿ;±k·ìÔnÞÂb•ÎÆ°Qã¤äKáxPÚuèÈÄoã.µ?| À-TTTá‘IîQÇe Pàw=oánò™Ì*ÿãÖÔ՛lUTÕÈG¤œ ‚§¨^ñ»[NÜ;˜cÌyŸóì¿“ 6$66|LÉ{ ç`ÑòudÌùÉó÷šZó¯Nÿѧij TÀN¹ß`¦¨ò£Cò³÷õßÿì5ÚÝ!±™ŒñÅmƒ(|ŽhEìñ <ÙÐßh(÷N˜oüìõøõ)qTzôO)ü>¢L˜f. 
þÀ!ÃMg- #sL7N iH¡Œš0uªù\ßP:vìΠµµµôÿTéa±õ»¹¨©k~&cs^pqÇM?CQI šK>¡1RΆ#ÚëWÕ>qöǘ³ýA´Só*b« õžñÌ(üøàqõ¶«'3à<|äøACG¤½ø— ì`¸èsÇró®Y –³Žp-µF*°À Äâ7à>ø™—0j&c'‰«ë\ìñ <ÙбK·™ó—0/áàv pð“~üØÀ³ÇxÃFÛ=ZM›ÓϳþIw_¸½û€—@öÇd BŸ~‡šŒAiÛ^è¢ñ õ§ŠŠ ---ñ&÷0¾!³ì+¿å”Ï^pwÚ<‰#tëÞ NqR~¥L¬µÇÃ]à \x’÷mäÍ6{8¦¨Î[´²O¿dÌù;X“‘Ò^T1l¸‡í3—ü6Œ4½Ÿ¡6ü7páyÕÔsu;uÙdóÛYÏ@|ÊùÌUH P’DaƒXæ) ŠŠp(IéÑ»¯v‹–ÌK”º°Aìñ ÜlˆË-×·Øí'M4h:vîºÕþ û»üØ  6íÜ Ó6ð«_éaŠÉ¸É¸¯P™,Pˆ ë·Û¤à#4¾¡ªª íùOÍ¡¯¯Ï¾h‡„H. ë°>HÉ·ÚOÓfÚÓ~ô¼C)gCË"MœjÞ±s·Ì ’rÊño.\º–ô# 9nÚŒ9d2Ò¥ëw@‚„ìr…5l˜ÉH-ZêLŸ9¿š WÊÌDºq7Ÿ:-ÛI¹oØÙ À]HúQê9¾A¤X6)oX³y‡¦–™.•†^ò¿Oƒb7âó–¬f2ž ¨ÆÁ†qS̺÷2PŸ)«7Ùj7oŠO †ÒÄÄdòTÓ[á ò½žbFé—è'f3gŠ–|Cò7$¿xì‚·ýáSìE˜ÿ:âIÑÓ&c'ihjŽ3QÊ٠̲H!RÕÔÕaý³Ê†ÈÇ/à(ôí?(>³”ô#­ÝhÛ¡cçÐÈôãn×ôº÷ª¶ì?ú‘fÎ]Ü©‹Þ%ß°ø¬W›löà­+7Ãá+ì9쪪¦žGB"3ъܵ÷ø½ØÜek¶Tç utªÑ]¨O6H(Cý°aàáÒ‹Ð|ñð›¿l­¢’’£óòîýÇÏqæ\QaÕ†í88<`f.–ªš:Aˆ€j`C V|CI<4ð–àÆ§<–¬Ù4hè²uši7‡k“˜øü­_X|cd<†ISWjSóY¶¢$ßhðü ¸wÕÔ5psã~e/5/ÅZöÕ+$ÊÒÚÎÀÐ×nÑRšÙ ü²H·Âº÷ì' MÛöÕ3ÚëÆ¤1mÁžàÝê.5ýž·Â’ØÙ“^2Éô×êjêÍ´[üæäF:‘"S û‚Ö✅+,µTVQQQU]¹~¼ UUµ3—o vªËÓêÒ€ùd‚ ß—ôPPèØ¹+Úõ¾÷âØ+,Zµç]EUmí–ó–®H—NÜÓ ÃÁƸFØ) Ø0jÂÔ¡&c4µš*(*â-²D9¿úà„àarèÂuXf¹õÕ54PS¤òÃVjŽ„Æ–|CO”ä ¾ÖÞæ]ûú Œö‹ðÿãÍISfÌ?€Üôë¶íö !#rÒË^î‚€8稔‚«þá§Ýý†ݼ…ÎŒY ¹]MÉ{K"Ú%¿àçœ]~76;5¿ŠcY¤°øg Ù¯™1çÈÔâø¬ â.ÜÉ­Á]`Q!‘U(êXp«3}h鳿™Q̼ųZRþ;²XHLv¼ma‹QY%<{àpŸýBÞØÐSsˆš|£ÁÙpè´YIørØõ2;‡]/Eg—ÊJÞ7Áî‚àe‘|ƒ£&›þ:nÒôô‚µ_EU`àÂcþHì`‰ â]‡UnØ@ó7H èÅ“r6 4hèÁ“ä 'h]RñÔ"s§ðqÎBº ?Ê»†Êß ‰Bó74Þü ” Rˆeë¶2SM†Œ­  0rüáçŸðK,ílfmí¢š©^çe‘jsæû„Üê"O~ÍßÐxó7P6Hm|ÃÚ-;aÍÁ`à—:XzÙPkw¡V™;Eu’jrDbƒôÏS’PþiȲ@ó7P6ÔõâÉb|ƒ€ÔÁRËI§â£»À õÆù¤m¼„¶ 5^ÚVK•Ü¡ã … ßpÖ+™Î”…+Ö ˜D u°t³áßÌc'N:|´„2wr€a’é¬Ñã§Šî.pR!žUh|{Ù}È…ÌXÌ(®µ92yCíŽæäæyð”;eeƒ°É_Ýàñ vOÎ_¶–#绊ªûÃ&|ê`)fÃOî¿l@æÎ“|·ÛfÜ…ýÇ.î9â&ê˜37âsª ÃúÓ:zƒ† 2ÕØøˆÊ!“74ÈÑ(Äà¸i·hIÖRÿwm^ÝN°Y°b­Û¶_»e'Y­÷èy¯†½ã-,,*++¥ažŒ{çnú8?¸›I126ÑÐÔ4l,øä—:XjÙÀщô btØ…Yó— 5©ÝU~îCe[ e­©~5èQ>ýسØò˲À3¡wò†VÖ«¬Žž»ÖU¿çŽ}G¤gÈ|õ LÒëÑ[UMÃIw_~GcêoÝ} ‹^wM­¦xÐØ#¢I5ÏÀ‡@ÌÔVûƒ” b(ñÏ^k5mÖ´™6Ç2&`ñ ÞYåxܼפI“à˜ŒdCii©““S—.]𮋋‹”ÌaݼkîEÜÜhì0ÅÑùL\nyÍqѼRK9˜N¤±~rÒò«6Ùü›œÇ;0’¡Bêó÷Övºé÷RUSÓíÔÅÙõ*xpÔÍk+EOÏ>ýwþvŒ€aÛîÃMYÉ|pVç/±¯Ú4oÉZB…ÄÜJ«mŽºTÅ£aWn>d¨°h对‹×œ»ÚàÐ6í:¬·vd§C\Ne=ÌSêÐA—Y”[r%âI®®n] 
ˆåÖ]°ÈxÀ·;FÍ ètÁYx&TàNÞ€wìÒ  >~'>G@z†uÛv£ad³çª]ô»ƒó<ãĬßnrà×úÞ‹Ã~ õ™ß fè÷ìƒ/²Úî€çþãç” u-8›SfÌAëW…› d Àî|öjƒ°!11Ž‚––ie2JÒßà-jv‘†Šo¨ËJˆ“‘8ÜËÉy¼o?š2½:9OhdqÖlÜ+°uçþÀ©nWn߸›6l·?¼sÏÑ€°dËÍ»”UT¢ÓJÀ†À‡é£ÆMéѻ߯Pßи “Lg?•¸ «7ì@#f“íÞË7"&N›…OÝzFܼìÔEøÙîà¼jÃüæ€ð4v*Rñ ö“ÍfIú1)¨ü«¢¢¢ÖHèб3¬3‰CVPPX½É¶Æ, Ü ¸{ðqxÒÜqéž¼ü«Â1 Îóh„ ð]àOÀÃf–œÑÔÒ«þíùhÞø!3;À3~9õh|ƒ¥OÿA缃|îÄ**)±‡§6à>°Ýë DƒÃõÉ<´çÏŸ722â0R‘‘‘Rû&|a‰à·$§¤ÙPk¿cÌy Æ}GŸU'ç±X¾Žt"¥æ½«NγhÀ˜ûî‚ñˆ1üÆœA…jwð‚é>ú•Õ§ÄŒ.6 ±Y¯ÕÔ5àLN¤øœ·øŠÙ W’¾#°Jß;Ið¢3+`vî=ù²+c³ëÃo¨ªª6|ÄîC'%ô€ Ù¾r½õË²ŠºÜ¨h˜ã]4ÀòÞ  ‰çde;¡‚k.8ÉÄÀ>Î!øhpÅPÿøEæ­c&âgóüRp‚ßz—4¾AØrëa ORtÕï ÿŽ }ú ÔiÝwÆa×Ëõ9Þ`mm­££#Í+г?¨¸_¹ ÷ÿ%LH„ô²áç¡…ïl`W¸^œÇù´'3æZ·î½–YnAAs {/v6¬°²ö aúë žžžÆÆÆÜ÷®‰‰‰´Å7D=à!U}JucÿC „ ìcÎ~¬ä<®7™1ç±Mû=Éÿàw'oíÞ’=sçã¼÷SÌæèvì²qûž3—o3lHfcmþΆ§ï½nGWç•»pƒs=ašA£ŸØÃÎ'v*Ô'¤DüîÃ'ÅÑàíKœ”¨Ì—šZZs¯ä—e_BŽä Üç—ž¿AYY…£[ÀÑT)((XnÝżÕB§•ÙœEõɆF4ÞVò{ó–:ðwì;вÍá0Î>3úÏŒ7°g¯Ïñ<À+W®dFˆ<<<¤*¾Í"‰ãÜÕ`ð !ç aéDš5yÿC™™H“¦Í5n*xù¤_±bÝvf2Ró:SgÌ#cÎ H'Òw6ü ÊRð\«ª©säœe±¼i3í'/?ñ̲ ¡{òn3- =òu[UTÕN_ñÏ,û 7ÏŽà£ÍZ¸¬s7ý«AÀ‰­»àPÞ!Ñ” )'Ý}ûôȾÇx丅+ÖI ˆ***Ž;¦¯¯OØ ££Ãžü§ÁãxÌO½ÒE¯;÷þC§=àV (2À†^Ø0ÀÈ8<áÙ}”ø§(I¹oÌ¿'ç¹—ù Þ>uùF8ñ¯Ü ¢ªzì¬wòÓJ¿ÐxŸà˜ ‡0;÷¿“»”•¢ÇÆÁ‰t"Ù<­ªªv7ö)™¢Ê°Ålö¢ŽõÎyÝy”Z²Þºz:ÊŸ{¤‰°éDªfƒ½;b²ª eʘ‰ÓÌæXpì„Í%z~Yø%TàHÞÀ=ÞÀ/=\‡¹‹Wá€(ðZÈôVGK|þvÊŒ9Õ>Ô5`ý¸¸ósV(êZpØ8Áa¿pZÑv60 577WRRÚ¼y³4E?L+Dîy£'LmÛ^—”ê›»EK²­Óª žlH9Ø{ÀŽ>´Sî~Ñi/'McKÎsÄ]ˆÏª˜5™¢¢І¦ÖŽßŽÁ]˜¿ä{Šžë¶ÍZP¢Çõò-ð <© ß€!ÊÊÊØ Lü— ï$ŽŸ2óÇW4ß}ЕéDšP͆)L'RÓŸÙ@À@Ù LeA@Böä "e}Àþ©/˜ñŒ–RXÅ]ŸÆ¾Ñ¸è¿óóóíììJKK¥„ þapÆ™‚æX§®z=z÷œ/ºwßÌ(ê‰Ó=á~]šÙ ü*ªqYåwb²SžWq¯~B„DeÃ3`ÂÙ¦Çf”w!4:‡=¢-4:7:½œgDÛ£'¥·f‚ÜST¹;‘*Äd½E©‡yJ²Îºìe]£[ l8}Åßxä8¦ 5~ÕF›Ð¸lÿàVûƒ$Μ)·#Ÿ‘=égCý¯¢* Ι{Š*0TS!šUê!¾²æo ë°R6ÔrÖ¡&cØ÷x‡TÏÀ‰xR Ílø*ª¢,‹$ª»@ÀIýÊß@ó742J¿¸zŽ)QQU3Ÿ·dßñó—Âwì;Ú½—A{ÝÎYåH-„_EU,™;k\Ixº  )t¼²¡Öç“æo ùjy]î%>]½É¶U›v5^¸ó×CºuïEFqµš6›4}™À'½lzU±dîÒ]ˆÆ]øA…(V¡l l ã 4C}Ä7‰NnžCMÆ4iÒÙ‹+·ÔøoÂKx˜Vèÿ ™™ (ålK'Ò¨ñS q®Ú´ë0h¨É!—+ìT9î{ÝÎÝÌç.ñ ŠÒ]à׉Ä9cCbbbC±“§›Œ¥lhH6˜&—EÔ¼˜ ßpëaŠÅ*«fÚͱ§§AM--&GÎbßÄ5æ 6ôî;À3àÑ)wÿ¿ï7` ±yÇ~Æ] úòXºÖZEUU¿§(cΕ¨P]2äŠ †††çÏŸgø¡lhlèÓÇ€}òÆPBb3»ééIÿXtü³×} Ɔ†¦æìE+®ßÃ×íÔåFx"¿M¦cßÄ5æ 6 6ŠéDŠÉ¨1z¢²²rУ,Òƒ6 1–ø ÓÌ(((DeTÔÎ]à 
[binary data: PNG image file, contents not recoverable]
glance-16.0.0/doc/source/images/instance-life-1.png
[binary data: PNG image file, contents not recoverable]
†=®Qu®±ëBÑYýZZ×]ÕÒÏ}½QX\Æâ%Ë0Þ‡FŸH“K"ˆØgol@ŠHÈÕ¶?Cn? h~á -!½m§¦.ÉÍ'ªDЇژ|HOFs"˜”¬"QYÞ!)#¯¼u›8#:¥ Üô%¤åå•Ôö<¾—¹‚•Èz8øí·ßݸg[ÏÉ™H AÇF&IÉ)¯^/.%OÔÀqž÷m+gøß>sé6éÒŠmÙ&!)«ô³¨G ù„$-Z¼lÃÆÍ?þ,*(¼nHM#€6(ݽMBBFiÓV1Ž@rðÝ´EŒÜåˆwμ™³fCô·)SÔ´òjz ‡†‘ÖýðóŠ•‚|ü«6ü¸%)¯ €”˜×ôý¬Ù9U=$¾l Íœ9^â¿þë¿deeYɲ Ókx 4š } Aœ±¤¶«ªù)oÛ¼uÇC'/Ò½Jë{ «:Ç.TÙú,³¸ù}$‚²ËÚ kžŒE"(£¸9¯êñ[W=*iÍ©|Ì]°N`µ#>Ÿ,Öe•wÅfTå×>e—J)h&Õ =:|úŠÙ±sÔôè RvvöìÙ³­­­¹+5|Á4ªoë'g1¼+Zº_Õw<û€4ªjêûà4*oè­kAíŒF£²úžÚ¶ç,4‚ÛEMÛó¯ŠFŸ HŸ÷îÜ ]¸vÿäÙ+6n­\½¦²¹¿aÂëÚ8oL>šDЬC@ò I¾eíöN‚uhZÝî}Ç2˺©4ÊûÒÇÞ*ôçFÓgÌ`D§Œ/72?|òþ'ˆå–'oŽŸ¾°i³è÷ßÏÝ.žHøésW%e)(«¡Hg9H»´œRHdʇÍfÍæõ¦GS;£åFhÃh–ÜhÖ,^ÏÀ¨¯ŠF¤ÏwcòñI%f–ZÙÑn[:Ø:ûp¤r¬ò¦~Ñí’Bȶ2mL‚uíï%XWÁ‰FåŽFãPP¥®:b¡Ò\©#€•2T¶,X¸P±ÜØõb¯ùѰØôÚ¶›÷N™òMz^ pyæ‚ ÿ00z s9ò _˜7=ŠŒw ÑO?‹~ØJ ¸TêH Q+u,@úhôI€ôUî.1Ú^G»»'MÄîcI8Ò‰…FûÌÇ&f˜8ª¬ª1B´[ÏPQ…í®¨k'iDêvÃÉ>á}¿=UQYíâÕ[$®ß¶TQÓÔØ¥—NÆcF^Ezn \<l=ÈØô ŠBÛ„D3E¾wî3?RRÝŠÓtÿ¡°Ø´½æGøØÕ[;5´ÁîXÚu ÉÕ“¾×ìˆ¢ŠºÅCgÂ~ÁÑš:òJ;÷˜.¬laÑðFrÝ¿œ¿ Ž•ºõ?üäàæ‡€dçâÃær̃,\”’]Î…F» —T“}‡ä•Õ¨zÞÆûë;+r–¶n<ÙäÅ’G@„º–¾¬‚ªñ¾CÙ%MT ÁïWýÝûä”ÔîÛ¸¡Hp»8õž‚²†Šºv@Xò—J#¸M0ðî“qw‰œÊ®â†§cO>XTƒ4"7BJÛÞ¡`sçÍ~ì=}éáÁáñç. ‹v# µë6ŠœÜ|¦M›®¦©óÐÞÍ/(båjA;gC5 meUõ °8wï ¸Z\J6ŠG}£=À–¨ŒˆM_²t9™ýôó'w_ˆMGšW@X`hìÙ ×øøÈr!òMóGBAÀ!iYß Hÿès—o¡È…s„„×Ñ|`ðê€"á4¿PF P‡o…UÛ”ëÝ.áêÄÝeõyx¦F%æ Ð ý¸i‹¤Œ‚¶¾qbF1 ¤Áƒz̃”¨ß©¡sù¦%—ÜÞÃáµ.^Á`sæÎ'×ÀBZVÓö ݘKÍ9©ëµ…›‹WˆW`Ôɳ„¼ H+øW9yÁ£K–ñ!&‘@‚{…ÒÎ]À*À(;WBê%$æÑI£O¤O4tÄžMüî6tTù.CG3gÍvñ ÿPź;iñY5EïN£[Öî1ÕH#4¢RŒ¡ÄÈüÐq“½æ¨ÿxàæ®0R´ÉÛ?õ!1:rüT©»uÿ!ð&3¯nßí=¯P$Þ³²ƒ< EbjvIrf{¥n¿@hL*ÄfVaÍ÷ßÏ$÷ÙƒN}{}[ÿÊU‚Q ™H®^tªé¹åpˆGYæÔÁ9€"Ôßw਑©ŠßÖž7µ­OÁàjñ$H¹î©Ó¦†Å³‡6`F×À”Œn8®ïC:pä<=-¯"݉èîÃð&æV’Qovèä.#.•:8ŸÜWbÙHŒÆ¤¤À(tO¨ëxYV߯¿ruHt $K;ºWXÛ{ ‰¬'÷Џô’oxx*[ЭãÚ‡'}‘4š` áÉGݘ|4š˜¡£A ½ëÆä£¤Gp5Gϰq }?k¶G(•FH,ãF„ò)s# 0k;ƒÝ{†÷â%K”FˆvSt»‰>¤G¡„äF¾ÁýÛßæ/Xˆlöl^e5îãFé˜@çèɳ{öB ýE‹—Ê)ª((bØ(%"ƒ†T½Ý}‚7oÝÎ>Û2†ô»ÅC'=CSÞ#'ÎÒÚ *òÌ«y„³kx¨K_QhC~@j~üšãœ:1 ™c§Ï³D:qðÔy2äÏ]¹­¶K˸¼‡Àð$rᑎ ºp’×H 8úËÂEK¤å”eTgóÎuó E7 ’h"ºK$e•óððÔïDóÚÞ}¤ÔË,^Õ/’F $¼1ùÄmLnéàuÛÊ…ô'OÆÕ»vðÅ;{‡©jèIÉ©˜LÉ«+£É›’¶ÁÞ¢ú§¥CRª·º“éÑé‹·eÕTµhôxD#F|~H\. 
.Üxð·)SDÅdÔ´<è E§Îß–VP“WÙ…¶; O.Ñ12Ë,Œh´çàiWÿØs×­á‰[wHïÔ2r ŒG4ÊÅ@9‹H‰é„hwg/«h7´Q™Ž’$_zÄ’¥ËßiCaE$U­O^.\¸8.5Âþµ<~‰“_€H¾A‘ .f_o4$¢Ø~Ÿ ¤˜d¸Ú’¦î(ùV ±è/,]·öÙ#C[}—Ð¥¡óùh3¼á=£=,ÁŽ’!¯c` I—Y ì@BwRUK?êkéîfR×k¤°¸L ©i¹|…©~HþèFY ‡ $oBêå 7¢ŽgO &tw‰‰:jüÔCGçy{‡$,Z¼Œt‚ ?n¾oçßý-+;· Ÿˆ#§.-[ÎÏ$€Ð”)S8˜†În£ýˆFòÊšÒòªNÞá>¼sæù„¦@b¤®½[Y]—Hþ¦MŸqðÄŇ®ôØŒ* ‘œ²†¤œª½GØ=;ox®WH 0IYCÀ4:þëõ?nŽOX:<ÑüøkzTz% zFüÉI´›Hi`H½o 1r÷"ƒ1»¨uÂbRQÉçÔmÛ!¡©M¨†£ Iaî:ñ §¢^Y€„DThþ¤BHÉ(~B¢’H„|jïï¾AÄÕ¼†ÀC’–®ÑÕ[Vdhë˜n“ªk ¢R¥Ì¢:æÉY¥ß?Ó݇ÑÔý:³°{r&qÐ͇A†üfÑΞtøôð¤€Ð„&®@‚dÝVð¯rp€NaU‰š¡N@h" ­7òˆ ö|¢ICÛÝ+´õLôwï£ìÕùºš)õbçêGÒ(!³ì‹¤ÑÄéSï.ñU -ããG‚uÑi¥ß}?³´éúúKŸåVvåTv­XíšLÉ™ Hê$CҳЄüoxxòkzPbtþº5äI¤ ˜œÀ¨,öÉájC%»ø¼o¾áÉ®ìAeº_¯YAž‰Ñ£²®e|+ ÷™5{NìиÑpÉnˆFH,3¼9–ìØD»Y€ôç0†„Ô\•«DÖm–Süö»ïî?pè¦Njà4Ãû¡ƒ¼«+7î‘A*#¯‰ÑvBä[pÅ$ˆÙð¸GK–.ûqÓ–Í[·‹¬Ý@R;HБ‘S‚Äh»˜”¬‚Šèv ON@VÁC(®«›Ÿ²ˆZÛ»CŒ7v½„§ €pÁÿå"Äxc'ëA2ä‹k:áxCÇ û»t Mö#œhô©Ó¦‰KÊ­Z+%«Hì8à HÐAò.¢;$¥å”·lR'$õ]ú"ë~X·á'A¡µ¹å­T …Æfðñ¯!D[f|ûÝõ»¶_$Ê&H“ew‰/zèˆufÝÑÓ—Ôµ ³ïÐi}csôõCŸÐÿ–Q’’#ô¿íiÁh";JHÚƒ@²vòûë_ÿ6gÞdp²¤¬2—™uT Y:ø²ŽÊ@U-E£­7*©{’_Ùñ¥æFe¤O¾1ùg7tôA6&OÊ®†´¦¨¾oþ‚E‘à[‡FEu½è‹_¶œŸ#ò«{Ô´ u ÷l݃.^:öyÞT =t¡“JÞÔ¹ ¹Õ}"ëÚ²]J`•PfÅ“a Ñ0F ëÔq´ð¸ ['ï¨Sg~øTMk?û~!qX§nâiô €4ñ“OØÐQç:bñ‰Ÿ·Š)«ë¬\-Œ<À“Nè£õFN^„\7$fuŽÒò+­}€Fi…ͤº>Hn,|H…'Üb]ý¢ÙW ¬²ròGÓêr«{á¹÷ì¼I1‹òkûMÌOn—…Žœò.mÃýH' ?*00°jê×C£ :ÚßwÕÑMK'øÿ?}ñùÝ‹K+,’ë\Ù»—Ž’µ“ßÔ©Ó¶‰Ë¬Z#"&¥€€ò OV ­Û.!7ãÛï.ÜxR#'50‡ŽH ݰt™5{δé3nZ¹„¼CS—ñ­\-´n›¸ÜŒß»þÀÎ#tÞü…‰¹¡´âö%Ëù Õö_»ïŒžxÝÒ‰;00¾<M4¾Þ¡£w˜çýò£Je”¶ªz"TX÷41§Žãª£¬²Îô¢–ñ‰2<*éH.hæ"ÄR©²§HÜiÔÔ9€¦Ø½?ÚÚ{ްШ¾½¿­ç5—­oëG³ìÆN£–ǯ†·Ú{U7?ý°4ªlì%'2T4ö¢ÎF£U´›EÒÓèƒÐhBô©6&Ÿ¡£Ï^"¨ù½ô¼'R"ˆžæTc ½%7š=›7(<þƒäFÄÔ8¦50áú¡±\ÂsœÀˆåB#7ߢÊVjäš9yÿ¡e`×êkî~e~ø$¤ÑÂk7ÐüBÉ &4¹£R?`noÞ'(>!¬Í%7š>}F`X»‚*ZýŠiôþ4š8 }1»KLþ¡£·»ÄÇ¢çôè);Ø«s@£šæ'¤R7HƒQ 4ªlìæR©U4tsÉà–Àˆ!#·©Õ (‚~S׋}æGÃcÓ놴ºåWµ1w—Ø.&UÕÔ—šï*>½E4Ðèõ+uHM\DÆ>7 a}} Møî_ÕÐQÅGÞëèƒï.Á¥X4Â@b¡QXtò.m}eU #“}Å•ÍpÄüÐñÌüÊäŒBƒÝ{H»yÇš„©ç—ŽbpXÌ{ä¸S/.`÷3%UuG Ìý=Ê+'§w›:¡¤¢~ý¶•½‹7ÜàXzn9ŠVÓý‡ÂãQ¼ïZ9f$¥åô MÃãÒ!l‘V7ÇJ݆~rt÷C«ŽÂbÓÑA#³Ý{A½`á¢Ôœr.42Þ{fºÿ°‚²Ú]kG2Ø Œ÷7t¾D}+;·‡ŽžÍ#!Dö½#5´ôåUMö"'s# ÙºøÂuä•Ô,mÝXV¿‚]¸vOAECU];0<Óh4‚[ÓD ï.1n‰ 
âú>°½»D^uØ,Ö=*éÈ­îËîÜi„D¥QvaÕœ9s½ýC!+rõ ‘%»Êú¿ _¦ý°q“¡ñ>D#=ïØ”l–u¯Ôx ð ¬òðcx„-[·˜4›R‘ƒìD[ÏÈ?$úê- 8Ùdï²dGî(Á"àŸF¤8!¯€ð‚Š&ˆ\ÑnÞAì4*o ´º£“rŠªÚàƒ£íŽÀ€RrdPïÔйzË’Kn„4¹]½ƒÁæÎïEB‘\¬ëxŽB^ÇÀÄÈÔ|4 YÚ¹¹ù„@irS´‚•³'\9dé2> [7*”wîPAßÞäy„iô®4š }v“Ô¡#k¯GEc/ÖéíÞöN4ºoç‘’_ÿN鑎Ñ~´v4ÞÙµÜitû-.³Ahæ¬Ùöaï3t„DYY×Ô9À6†GVê 7Z·acs×3 ©ç¢ï®•:SÏ›óÎ(bŸS·r;gO’®Þ"ë6@‡"ŠMÉ™6mžÒ² $X¼™šAØÂ þŒ2xá"ô°xöz»”Œ‚®¡)tüBbdD;{2K|ƒmvøä.]#.•:BhÜ;õ÷˜…„f˜ñŽ€Ô4 Àº^•7ô‚ ¬dD§“@²²§¡¾µ¡ÉMª0Ä?*ááá!uê®ßµ< Óè]iT:¡@ÂCG¯fÎâ¥ùG}èˆÞ2t/áâñNÅ: šá=Z±Žðö çž‘ƒNR^Ó¸†Ž0XDÅÁî=pcÝ.&qíæ}4¹ ÑÈ'0lþ‚…u(=UÏ{”ÞÌ1¤•Y…Õ3êœwŸ`Iiy2BõŒLG‰UÀûRÀ›HðþC"“YBXMS€„²¢´ÜŠ¿üå/dDÛ:yoÝAõù+·Õwéq7‚÷‘„"ý®µ£Žé0ÚŸHY¸h‰Œ<¡ÉÍË;×Ý7”e îÉÙ{HŽ´‘‚ܳyṘFïJ£ Ò缻ąk÷!ƒ>à‚F{†'ï;´SSy‡Í;^Ñü"ÔvéË(¨îÞ{(£¨œéÚ][ˆŠ²»ôŒÂSýz宼²º²š¶#yLNy»Éþ£²Š;Ï\º€4ÚЩÞmhzÈŠZ^\¾m3eÊ”íâ²:»}BáËO*MÈc§Ñ£’ÖÝ{HË«ž: h„€ä説®+)«¬gr !»ш"à½Û#(€Dªw»Æ!'ÏÑ2¢ÑôMÅå! iîwõÕÝm.)§jå]Õcv윸ŒòáSWH?wKJ~§œò.¿X*0Øg1‡è¡±?ü¸éÄ/ç!1"”š Ôy”[FŽ!=ﱯ7‚[y@H49—îªT ùE.Z¼” R -}RǨ@ú3˜Zݤ†72u-=1I™Æ®èOÀ‰Ü‰üØéó;5tÈ Ö50…$‰Ë,¤fN@"µ´ôv¤ˆ„, ¹ÄòT yD¢b @ˆÐ¾’›!ÈÇÞ“F¤ÏyèHYMKBZžæîñËùëh7­•«×Ø8ûÚˆDí˜ãfŽ÷º:y{D?sy9ŸøSh|6<÷ø/—½éõà=Šª»<îþ‘6Î~ðÜ ÈtpŽÍ¢bjZ®¾á$8ŸH¤gܶr±wrõ8rúÒ2>~ðƒ è¬iÓg9uÉœ˜Sß7IEC}èèç­bªšúŽ^¡$x Ò K燮ŽÞá‡N^\ºœ ùGfÀ pÄÆ-(.³ZNYCJNÕÁ3ì¾=¡ÞíÍH ù…"ßN\|èBɨšI¨Ò…¡ôޝZ(ºaéÊ3u𬒯ÅÛvÖÎt>þUWî:d5$dUmi¡wl¼fÏ™G JÂ@â¤ÊúÎŽžÁm&Ξ¿ª­kH©¬¦ hȈ¡Î©kÒó&i”]XaˆÄ¼ÙW¿Â­\[Ïõ!ÚmjFR{ïop5G7_è§f—ÂÉÆ£ixG ˆYA!wß2xµt® iuƒéšn“ªïxF g%UÍý3µSû–.ã£ù2ȸ†lÉœޢÀ–À$:;¿Y!°ÊÑ=:EÕ„&÷h@‚€„Ö~à+ ISÇK´õ MnHµLAn{7’FIYå˜FïJ£ Òç¸1y|F ü°*«ï¡VêÀ/ÁóÐÏŸøt¢vLÙÌÑmæˆ|¨ºíEqÝ0þ•«é‘ià7Ô’]tjñ7<'86›å[§GeJw h"ÃÖR¤(Ca}VygVY'ÿj¯äá=÷˜%;F|<1§ª%Fç˜êÝ,ãF`ß# 1Ët$K¦$FFûŽ¡JÝé‹÷wêÆràåOÐ<ï3W,e•w‘4ÊÆ@¢),:yÚ´éR2ò¢ÛÅVð dV=F@ ‹³²uG…>25 mD øÔ\~ªž·µD"šÔÀ^<‡[9_·~ã†? 
‰¬+¯ëdYf•¹r• œ&-« ¨¢~àÈÉ¡Yݱ£í('OÞ9s™TÞí­n°šV­îö4䂊U‚ëøiîÜù¦û“q]RKhr7v½DѽKÇ};˜ÁîìÁÔä–’[#LhrssÇqR“[F^yë6q*Ôµ†5¹ó*Z©JÞaq™+øW ‰¬‡ß¯ß~ûÝ{¶˜FïJ£ Òç,äH£oÚ¼e舺½£§Í‘™!v‡”’S–aî YHà@¶nðܹó ƒ‡¤åUl]B¤¯hê‰Ã<ïýH½[vP½ÛÂÒl*8Y;ù„ÈiuLaï}@£=O-X¸D\ZQRVyÖì9Ò(êÝìX€ä샀éÑÕ{ŽHAntÏ·¸àÜȾŸ9[\Z ‰KÉ®®¥—9¯áÖ êyM¨åñK„"îÊ@[·‰9¸ùŽOˆÐêÎ.{«2Pim'uÏ=ˆè³—nž<{™ íU«…åW]¨¾ãEnYÓX”Škº*›žrœÖTÝ:€PÄqP¹´¾§°ºçFã£ÑÄé3“¢ù…Ï_¸ˆÅç¨@rÞÌqÄîŒØ  QEs?ò§e|T !rò Y´d‹Ó8y†@®CÎóVÕÔc‰èH½»¾¹Â²åüL =g§oÝÖ=˜O`5 $eu]mÃ}~ái@£¼š4t´t9?;ºêÝW! å)”3î:¢¡#$+gú‚EKY&24Â@šT:uWoYœ¹pí¡# Ò£U‚BÏǧSŸaëì=ºGNÕ¶AíψÇ:u_&HŸïª£Ú¶çsæÎ³uöA>Güö ¤¡ÍýI ¡Íý ¤ªÖgÐwó#v‡D@Z%(lïNG>TÑ2‰‘³é4qÊJŸÎøö;ÿÐdpŽŒâf $çð "Ô»Kû‰½É½Âà%V®²q ¿{À8÷€–iuùµ½ð*^Á‰@£”üF` É=0€TPÛ4²÷ äÀI ¬²f xçÕêÝ÷í½I…&! ¬²tô  åR€”UÙ‰Ñ/’CÁq$²«0&‘jjF~¥­³§ÅCg'wÿqÓ«¦bF£‰Òç¾»DPd*ä@Ú¼ió6!‘õT mæ˜ÉǬ³læ()£¸œO`ëv )YåÍ¢âH÷mÜgóÎ…+XØÑÀ]‚£‰ ׯc>÷êð;7:ô%e•ˆã’òHçyêÝ|ü[¶IH2Õ»˜ûÝ~à:›—Ð̾ûн¬ipR»(Ãgx E¤á­Í,Ù‰I)@bô³¨¸¸´â¦-b$n xß²vó M]¶bX½ûüõH×-ϹaéJ) ¤@‚ÜÈ3$2ÈUkÖŠŠÉNŸñÝÙ«ÖÙHœ„5¼1¾d} }¦»KT¶—£}Geนc^eGI]ï[%‚ jç”·SÝ¥¬i ½°‰ûîÈ2KÛò«žŒO"¨°®/9¯žeÕQzQKVE7ûÆä,«Ž•t¤4@‰ ”¶„Ü&j±h„ôÅÓ¨¦åiS׫qÓ¨±óeMÛÀ8hTÑØ‹i4Ùh4a@“ìÝ%ž—¿>¼žwþ{K± !a ½•FÎ4¿²š6î4rr÷+©n›œ¹Ñô3‚#’Ç™:y×ÚÅø/篋¬ûá»ïgnüi‹…+ïõ/ö:1oþa‘õn>!è ”¬"="ÓhRѨdB€4y‡Ž8KMàÆäcÜ]¢¨¶wË6 +¬íýv—àN# ¤·æF³góÒÃâ¸çFÌ-$â&g¥nHã¡Q^yó‚…‹»_£0KˆO+¬j~Jó %ù†& mÛ!YÖð$*)ŽÇ¤äÃA ÑO?oÅ4šT4šP ½g±®¾ãZ=÷¡$‚ª[û«Zú?êî¥ OÁ>ìî¹UÝh"çÝ]‚*Ÿš^Ò‘SÝË…FiEíÙ•=,4J-jˬxÂF ea ¤QhRûV74ÞWTÙdñ€©¥-#¯o´'*á (€£¥k¨ ¬¶ÏüH)3+ºgô¶áÓÈøG‰»÷˜¡=÷ miH´;$2ICK_IEÝ`÷Þ‚òFžé¹å)Ù¥Ô ur÷è@#iäCD;ùGkêÈ+íÜcv¸°²E®É¾ƒŒèT8¢®¥‡ùò Uum°[÷mœ=‘8÷=kG2¢½éQÚrŠ;á¡üòfRÉ;$2ލïÒƒ¸>uîªÉþÃ+u2òÊ÷8£H‡—‰JE}ãýF¦æ(ØfI™e˜F“‡Fpƒš ½¿DÐî=À¸ Ù¹øŽ;–¡#Cs°w¢ÑG¯¬â¦±çFúÆf`\Šu–öžé oÝ]ÂÂÎ#5¿¾‚”ªóøT»K°Lòf_uıX7mú 'ß }?s¶;ƒ{z„D¥QVA^þŒ °8W€¤G`p·={ñš/=¢¸ª€dëèáF;{áÚ ~Rb:óœ ×|èáE•ÍŒÀ§Ö'/©’tù•pe_F #֙柞O>ºKǤa1iK–.#s£7mqpóAûÀæáêóËù«|+ÈŠÜ*A!Gw?Ï€pø8$%£àéCþõÒM¤5ÂkݼƒÝ˜âÜ>ô(ÎVöîî¾ ß ˜Ó爫‘‹^W­²wõ£ù…AhoÝ.îìI§†yecojN…¥ÛÂÅKÒòª Ìó+Z࿱¡ëŠzk{š¸¤ŠwUuK7,0&&Hdè‰K±nÖl^pß±±i,¹Ñ¬Y¼Qcß]‰ËÐÐÅÝ/ò­CGpš«oò  QFIË;í.ñ1†ŽØ€Êeèˆ H~X€Ä‘FYU}_9¨•:à°ÈºÆŽê,†%;f®³ï÷†öúö•«£3‡Jv±d­€Ôòx€CB"ëêÛúY"49³8ñQ!K¥ðÀˆJ† Í(¨úþû™h«=³­OÞÔ¶<X%Ÿ€  
á”ì2BP®å)µÈ'ŠP,ï5?jh²Ÿü‰ÙÔýºª©L`¥`Xì#$'Z ÝS§MógÄSs£ Wïþ¼eÛ_ÿú7óçP¤{EË’ïàÀ¬ò!¿ÿà MCL£ÉC#¸5M4v›šµ=yÜr{opqÓ}ÃbRLÍUÔáwI£ò†Çû‡ƒ—®ß£é7ø¥©m Òù "+ºman'.%§c`†üõÒõûŠ*;5t‚"“‘;–Ôví=pL^Yíü•»T ±;¢G@„º–¾¬‚ªñ¾C·Ý¸GH£ŠIÊjé™Ð#RÁÏâÒŠ£S ÙgÖ™šÒ¨g/Ý¡ÉÍ7|§¦ž´¼ŠÑžƒ˜YÑÕ;6ƒZ«ºÆ~Ì…G`g/ß•SRWÚ©å’€âʰ^ª±#ÉhïሤBä pq¯à“Ò ª¶nô↧‡N\”S9væ*ù}ÿrñŽŒ¢š‚ª–=ž;tŒÌ<‚õŒHÉ«^ºeKBh—þž¼š^D£k÷nZ¹¶ ÖÖ=DQMW\FYg·yLF59tDÌà¡¡»GBVõÊ]GD *ŽýzKR~§¬ò.'ßX*0XÆ Œµïm;$®Ü¼ßÞûf˜4:vò×E‹—Ê)ª*(«AÒãC`Ù†œ#P]ùò{¡\ÆÎœ¿ª£o :râ¬É¾ƒ$àÏE‹—È*¨È+í„—F)1D™Œ€äêôóÖí,%wæRê#¥D£CÇÏW“W‘S$®æáÆ" „ôR‰?ÙæÔU4ö ‹¬?õ.ô“³Êþò—¿?C8xnÞºEý¯—o«iêaMMX# È×õ M }F¦¿À*wßHù—.ç{`OC@Ý.¡¥kä @‚sH YÛ»Ó Nçc’‰}ÀàOÈåsJÁYUÔ´ &}‚¢ÝæÌ—¹u›8ü&ò ˆ MŸ>cFzž…›‹Wˆg`Ô‰3W–ó €Ã…'äÀSàOïÐŒÂzp2-=cµ]ú,޵YT\]ËÐÍ7€ç3DøÐÝ®Ž´ ȇŽýry?ø #. N8vú²“GHJ^8Š‚ê.…n~œ}yçÎ ˆHŸŽ!N;rú’ƒGHRn-uw ¸Ñ ¯Ý}è>uê4yÍk÷ì܃W¬¾ié ß7‘–Wuò·tðá3Ï74…ËÐ\m• ˆµ³¿µ“?œlçÁ@@úÛ”)9U=(1RÓ2Ò2؇€Dʧ’@ºzÏÑÊ)úæÇ/,YÎONd€+/ã¸ïàgé°hÉòËL&‘@bŠ«ª@ÿöCB\ÕžDÒ‰}CGï›FÌ7?}~HLŧæZŸ¼BêVð øÐÃY€„b°©ë9ŠG]c$0àHô†?;uŽË,†¼²ž©MÝ/,\“œƒh“”ühR솨d’w`$sg£EH- E&fÃÕê;ž£‚\#˜º«¡§,í1;¢¶Kíl@JÍ©@‘~ääyUuÔ×Ö7$ ÓhòÐèÓ©€ÔGáµ6Nžè§–³·ðÚ à¾q©yà÷䯑b2L úqóã×UÍ}`d:?k6¯OP òÔ¤ÌRžºögÈ;oܳQÕЉJÊ…ƒõ¯ îA@­XWÛñ²´¾Œåê਴ºá’Ý ŸE%çG$æR+4.ûžªÖÈ™¶‰Këï'ݨ²åyaÍ゚nj€ˆT²d‡%*¥ž[ÒЇåÒÍ'‘%»Áq£á펞# ‘¢ Rr*¦æ'Ð7ýëÕû*za‰pÁüš^ôM_¸n y—¡#¸ÐëôMBbÄHß³ ÕèrªûÒŠÛÁøøWÑ艹C@ºzßë®Y¸¬Z³–R`tî7ßð¤—=F‰Ñ/—-e•va qRE]¹ÛÞ™óWˆ]_þ\#$âéÇ@@ ‹I µõ¼„ÄÀÓà?Y«€®^tèT4t/Yº €TZÓNæL§Ï]!ÇQ)@ ö9u¢;$4´ô…DÈêzHd2 ¤¹ûUS>^šHM]¯æÎïàæ‡B¸¬¾›R HOo¸ZcçKè{Ó£àjÌA# Áû¼|ÓÑÀ“QX‹â½¨ºcðÚs—o£?U4öšƒH/o膹z‡ ;¤JN4:¦Ñä¡Ñ¤R;H!̪4xvF~p%øb’2äôP]ÓÝ{Ì‘+Fé¼Âˆtž $g:Ë~YrŠªNt€™¡Ãï#Òh4:p”F•–#6éšÍ;²" qô-;·€éL»ôŒÔöÒìðé…L!TiyBÕÉ3ÉmH6.¬Z«ÀV 5©Œ $zòH†PVøÓÚÉoägKÊ*s:‚«¹Ä¢¡£K·lÕ´H eWŽ$æ¸ $ó“ó.“R—!´Y­œé$˜cHƒÊ@@ HwlYÅUŤ•HeUb )4ŠPû–”‘ߺPûÎ,¨ Ù;{!-mW଼2<´C\JNQeÛ $;çA½m4¡ÎÝ'xÚ´i’ÒòB"ëdä•H!‘Iè\™_àQ^%¡Cp1ÃÛÚÎ ÞÛ¥ë÷¨3¼¥å” •Ù.&)º]‚H(1 I_¼tÙ›¶ü¼e»ðÚõHAl@CWÛ&&)+¯²u»8HßxFnÝ&Þ<´ØhÎÜùé‹—,ƒ›ŒŽ A@Ê)m‚Ÿ­ëø N0Ùw…aUœ\×þÓhòÐèÓ‰TÁÒÑ7&äÜÚ/$<Z`• éÊšÚHQ(ïdMç‡ôø $òà—,Ü}CÁ/I ©kés¸]h\&¡D×2€|n9ŸÀ >‘ÁÙ‹ééO;5õ 1%‚à‚eO‘»,ãã'€Ô:HŽ!‹/㸖 HƒÚ©Üdë¼pñÒ±Od €äSÈ HYåT5 H,Pß‹‘ 4ʬx‚*uK–ñSôÐ- „€:$,,ZÊ2‘‰#PnTÓÜÓÐ1ÀE‹¡º¹§±s€ûz£¶'¯‹«ZYV6=©këŸõFå 
kÛÞª TZÛYÓÚÿÖõF¥uÝÕ-ýÜ•V¯NÌ,!c¼ªùivI#9§ŽºÐDn»væÂg.aM*M4:P¹À“^^^O” H ié!ÏÖ54521'nìzñí·ßÁO*ðãâjâç Ò`:ߌί^#âꌜµ¡ã$FŽî¤_¦æT€;ƒ£RÁó+Úà‚8ùŸh"s×Èçàs4Bi• °#Nº”#Á'8Ž:­®¼©Æ·ß„§€3e•Ò¨ú» ùÇÃË›À]\}Ãá‚HpA;7:r—²¦~Èc8û’ž“VŠbåja×@äT1ïA 1¿fv Ö=… Z:ø@ŠH)F4rˆuõ‹ BËøVÞ³ó†NR^n4 9ûŲ«z€F6n!ð©@RR×C@RÝe¤¡kJ)£â $F·z Ñè)=®€¤Q&Ö©ãªSLjIèè9:³Ã§*[ú1&Š'HÈÝݼ‡ÊÂëdä”H A´vý~ø Ž—Ôt Wv÷ „È*¨@v/)£€€¾Ë’Î# =pð@u ä  Y+V ‹¬—”V€‹Ü¼o^èì }yeæ>Z †&f£ÍóFÒ¨¢Û%¥å”·lgé•…í 4ª¥½GíФßrp'¤Q¥4ª”<˜„´"!„º]BRVi³¨˜“G8Ê݇n³yçÀïÛÒÀWèQøV¬\#¼NLR.rå¶ ò‰;ÜH½TV Æ•²éIY¾rÿð´å+V"ùT¸àÅ›Pn¤ª©¯g|€H®œ€daï;uê4Q1™•‚"Û%åY€”G)ÙÁ£mÚ*¾CJñÇÍ;¨@RÜ©#(¼^híF¸HlV $à-hP\u+S\õÌU+’FH˜FX5õë¡Ñ'Xëã×E•-Tÿ&Æ"“!%"QDzsS׫ÂÊVöUGd:Ï]"¨¢±·¤¶‹êˆuí/òÊ[Æ"TXÕYÖÐ÷®AàL-Ï2Š›ØWå”·Õö¼uÕQ^UwVi[g·Ûª£‘Ë`³Ê;Ó‹[¨Å:þ•‚)¥c_u”[Ý—YS0Áº¤¼æ´’NŽAÊ#q\u”TЗÓH-Öa ! aa}%4š8 qߘœ:gv—à莟\°Ž›|h‰ gŸÈ ¬c׬ã(Ä2t”‰ÄûÞ¯í=o>êZûá—"{Ö¶>myüj¸ó~4ªnîkB¿ *›ú:_¾àw'»05üËê{0&&Ho¬³¶s+¨lù$»KT6÷‹î¶í„U4õ×$q•šsRd(;>‚žw (4Â@b±ê(4îcäFð£•Âþ“qÖl^FÌ`'$æ]idçâSPÑLF1!§ó®¹s¾R4Ù8{“{““* w¬¡CH޽研¬"R "ŒÂ‘ÀðdL£IE£‰Þ]ï.ñVÁºÑh„ÄR©U5>þ•: F0H ÊëºÞ57‚§ûÇQ 4*©í|×JШ¸ºƒKn¯âMÆ{NiÓ‚…‹ÐŠÃ‹×î;}aë6qm}jihôãÏ[1&à5!@H1ïÑhTÕ܇ʽÿÆädnöQ‹uÅõ}`ïM£éQNeWqÃÓO¾»DZq;1¡ŽI#$æÍew‰”ÂAao*’ Ú2ÊŸp)Öa 3+2;x,#¯ ĈJÖÔÖW"d¿÷V4r¤‘éþCa±i»MÍ”TÔmiTù®V$Ô7,ßEÈĹûËw±©÷÷}æGS³ËÈ€í¶°E:uÉ™%,4ºmIzIHÉ阆Ƥµ0Õê’³J¡c¼÷ #: ‰|ßµvDíâI'ö1Škš_±×Sv!)³:»÷ ‰J5ÙwH^Y  æ·,ìl Í¡=yö ¹Þˆ\t8$ {æ/\”˜YŠi4yh4!@ÛîlbÞ¬Å:ªv*Çb5m MÌÐ,»±ÓÈÚÁ+£¨qìÅ:½ÝûÁ¸ì.AÈxÔ¿µXwÏÖ#9¯ù1ËÎ;ücï.ñÖ¡£au†šþïgζ£…r:"ÆúD³ Á³º3¸Ó‰eÚÙ(3¿Š¹œëì៘žÏ17BŠ\¾ /ÿ°eËùH&qRK<™FÊw 1‰¬Ô¡ŠVR´Û7ˆínc®ÖÐÔ6` ÞØ”AA/ÿ°Ü²Æ²dÇ\âºFx­«w°+SäÛ›cS ¨û ¯]ïêÜÌL€|˜ z–‹W0Øœ¹ó½#á Úå äîšUÜaù’``ÒÈd>Å¥ë˜F“‡F $n“¿uw ÎbÞ”b™¶±X‡€ôÖbÕ·ˆ…±QÜh4²XÇ$v§!e¼¹Q§zÒ‹š?ùÐ lÙ€D-ÓRÕ(@âD# $–9uH`„8w{?÷Jܦm=QÒ ‘µP¥ŽR7 Ù’ò].„|;ü˜@"E»©eºÄGEqiìµ T²cbx9@ gBäÛx? 
ç~ü­&ŒLÌ8¡è&Cž("Õê Œ÷—ìèÑd¤O6Í7$Žø´ïà MmCL£ÉC£‰‡bÝ(bÞ„C_¾q_I•ê‰Jaß]‚ñf>™Ü<2m×10 a¦í¨|¬¨¢¡ª®CHF4*ªîÜk~L^IíÜ•;@#$NÎGóPÛ¥/£ º{ï!”]¿K¨}ï”Ý¥gžþ“Z•RÈB£Üòv“ý„Ú÷™K·‡Ôúˆ¢ÊTû64=˜VPCÊxkêû2’—œ¹tGVI]q§–wpòŒK·ÂiÛÄe4tv{‡$î9–X€¾u½ÝfžA ú&¤åUm\é…u}Oœ—”U>zæ*ùeŸ¾p[FPû¦ÆsOtŒÌhôÝÝæ’rªoÙ’@ÒÔÛ“SÕƒhtõžã K×a ÕŒÒC·źâ2JÚFæQª¨@ºiMS×5•U¹|Ç‘HÇÎÞ””Û)£¤éèƒ4H£ìýª?$Î}åÆ=r·=v …F§ ~VA5ÏÔŽQD‘ïbî+'£8eíã o–Y T ŠAÌpFŠAH÷:CCGÏpÏÉ3—Øž„ tÇ že‚~z² >``XuNG ýzéÖNM]L£ÉC£¢‰ÒŽFóVU'„ºá‡•“G ¤óñ™T ÉxÇ ïp¦Œ7JÛOýJ¤íÙ%DÚ®¬¦à7u`ž›ž·e›8ü&òˆ MŸ>ƒHT·»oãêì ùÐñ3——ó €{…ÅgÃSàOg/FzA} Sªn§¦>K±n³¨˜º–«o8 Î'€Ät—;Ö®ö´ 7¿È£§/-ããwA2ÞG™2Þɹµà *š2 ;]|"¬}yçÌóOŠÎ„ú‘S—ìiÁ‰Ù5-9•ìÚ·ýö¦Ú·²æÕ»ö¶îA|«oX:×-§¬)%¯êènaO¨}{‡¦p)ÖÁÕV ŠX:úƒjß4Êé Š'H;µŒvìHWî:ÂsmÜfÇ.,YÆOÒ2>{ö¾÷ü µï;ŽY I+ª‹Ë¨tTÝö¢¸î ÿÊÕôÈ4p²™³xiþQ¤WE&å‡%äRý)4.랊–çÈu¶‰I“@ƒo:¿ºl…Àjÿð–÷"’ á¹Eõ}È3.Þ°†<©ŒZ²c~ß,@zà€¾xHŒLÌŽ£¯üì•ûÊ꺌ø|¸`nuú¦Ï1Õ¾¹ ÁÕ'(1Ò79‰ªÔ‘@Êã¤Ü%;H}zS ÛÀ–ó¯r L@àÊWî9£ÄèÚ}Bí8„€ä•óÍ7éôrpó‡—þ¡ »öŠ"éý/Êmaýp[ÂWIÓhÆÐˆ@â6Ï˼QÈeÞŒØå+ØW* ùÇ._!Àqž75l÷%mß,|c ÖÁ]Æ*êºH,Å.&…s7t FÚ¾YÔÌif‡D?¸)©é E$ð^G«ÙäFÚ¾9.ƒå¤šÉ€äì±tùʩϬã$²›€TP÷IAMÒĉ (ï• 4ʯ{ˆºéÛ·GXñ¼£IÛ÷(|¢mÜC—,[É>—õFÍÝC,›ŽsCê~ø²®m€E „ô]ìf ®/0Š&]oTÛ:ØÒûô›âR ØUý¯k‚˜)-¿†‡‹áÂeó3®Ñ4šQ4â/&q$ó†¯Z,2ï®Á—y0ÂpÁÍ-mÄ@êx‰4ÞÝ7ÊPÃö¶~ÂöíꊔUTßÜû>1ß'z€%ÁI{WŸª¦ÞËšjiçîÁ§­©Ÿ&ø $ö’Mʼû8*‚šº×¶=à¶êi¼']uT×þ¨ªy€Z[úž—ÔvOETÞx¿¶}x꫎pªï~š_ÙÅ^tŠêú*š‡&U•4Ôö¾Ca]aÝ@nUµ³Np톸용+‚Š›%4OE”^Ú•S}Ÿ£"(·ö!BÇUGå½)Å4¢D;¼i}:4âÞ©°®{š„uSÞ]‚}èè ëÞpw‰7Py&LEÊ0}Â:nË`Yh”Oi&Ѩýþ³¶þgÓM£æÞHïFð­´µï9M£™O#¾i½çÝ%ê»FH÷¤Ýc©®óɇ²»Dõ4+‚&Ý]‚+¦¶»Ñ@bÒ»¥‘›O`ecÏÔc#Êêõ©ÒÈÅ+°¼¾ûµb#cSH<긓G@Imï*ïèÁ,®éBù¹óæ3ÃiÍ|Í MïîS÷yÓ»Kð¢Ñ´í.Á‰Fãá ¤i^wG‰1 ½Fl„çyO½§ŽH,œe,Ç/ B(™òÆ~šF3ŸFÐRñHŸêîD¹©n®jæÑYWÚø ¦ãÉ;Ü]¢¤qÒÔãüšû%ÍïÛY—SÕ_Ô8ôZuùõsªxÐ(½¬—…F4ØitÓÂFIUÒ=[¬ô>lzRUCÑèÆy%5e5íØä\L£àÈ$5-=i9¥CGOT’û5[Ú‘*nq)}㸔<ìð–S„÷jE'å 3Èï%#¯rõò{™r£Q@x¢ª¦ž”¬’±É ݵqzG%åBEÎ,¬M/¨f§QMëàa³ÓÒòÊ—oZb 1Ãà’”Q4:r‡Dw¬IIØþQ·7Ô÷+·¬eC˜&Þßè¶aùÙ/ ×D$䙜L˯A-€±YXl–ááãRrÊ^̨æ¾ç§/\“Q<Ù7ð;Àß« ¬›IÓˆŸ4âfÊÐÑÛí.ñÆCGÚM´ Lx };w>§r,VNŒÌÒ¶©iêăF~h?r”¾%¬t±¯;tôÍ·óœ|£yÐÈÜÖ'!¯‰éŸ¼hîˆä˜¬®{d¯˜Ô™Kw1öˆJ¹$Qi”_Oi€Câ’2A !QI—¯[ ¹Ýë7lòô ‹)©hÊ*¨„F§xû‡/\´8)£ÅFn~Œà˜èä —o®^#@JÍ!TÜð’W^ß…ÞðÞà¨dOF¼7!½ù½Ôµ ÓH¤ßË”[ldçâë 
ñÐùKÄG@NÎ">âÜ¥›~Á±%µP—L-öØh÷^Q5-} ÀÞ‚€dëâãÁÐÙßn¬Z#„ê8! ûrÔí]X݉ a¹ú„,X¸(&9ª|BF ºÆ'0¦ ª¢%ÿ°DT÷áü¦-ÛEö®þ_Ì™#§¤~×ÎÃ+€X¡híä­œ¦2B½ƒá†‘Iù4øF£Jþiú†Ž:ß÷ÆäMœ†Ž€FT ÕsÒhFš&vÙ5>žýÅ'¿Xî@ç­c|BFYÓ(…=}C¿/\´ØÓ/½l꺀”˜ u‘&ïQT4Á¿Q‰Y@‹ž‡¯BAI„º44ªä†[|£Põì~ð #OF®­yeM„ß+&9jkus?ò{qœSŸ Ñ9ðòá‰ðHÈö«sx\fhL:Knퟕ˜ùŠÆ>øRXl&©­ÿ’¥Â ñŽ„$Œu¼µï9aó Áµ>«¨Õ}¸uÍMHÄœ:kGHM½Ïà†ÎÞÁ¸AH/¬£iÄ7ñH|:êxCGN^!!\zÔ´ HÌ¡£×™çíÎ „߸¸(ªê q“y㽎\|#—._É>Ï› $g_BàͲêÈÉ'bµàzü˜åU´Y€Ä²äh|ZHeœ€¤H©¨ñÑÜy Â’Ë>ÿ|v|n#g í.q‘HcøYºb•{(ËwÖñyÕ.@µ#¹AdÞœ¤ Å ƒù5½¼W ¼)Oº¼e8½¤mzAOü"2—,[ù–Â:ˆ™‚K¸)‚ *2=síЉ‹ÔðˆhDAÑ4zo4â>þ¡£©ù¼§MX75ƒjùù¼oXº¯Û¸õÜË×òy³û¼BÒnXyq£$ÝC§2+ïçÑ@⤷¡QûýgÞ! $‚4­4jêiìy·4ªijî{öÞiTÑü°®ë)j@ÊGó\iTÖô¾à²´*¥Mj:G>2AÅ Ñ»Kð{w‰)†GSÔ§^¾íhé8M»KPÄ©ŽÒ»‰•\¼+º§!ãÜkÑÈÉ3 ´®k걑¾‘)$(rô`ÕtMJ#wfaujæÎÏM|ï±Ñ·sçû' „ÌÇóˆ¾üò+fDK«B(Ç‚â?2ñHýн»Äì.Á›F¹4ÞQOG Qc#b‘ldòÔ{ê^+6"”?ISï©#t”¨«_yÄF„ÂÑ·R QY}ß{ï©ã$Ž=uHõì@ú¸hÄ? }$CG}“ Aªj®læçî•ïcw‰¬Š¾‚ú!D£ÌŠÞüú‡UüË7­däUT4Cc³Põ7»LCÇ0<.Ûèȉ”ÜjÔè›…Äf<|\JVÉÃ?²©÷ÙéóW%dÏ]2ÇmüÒò*òÊšÁ1™¼i¤gd›ep蘤¬’¹µ+n1´ ŽÔw?EMÇ={okg¿‰@z䫨¦s@JAßøXNy;H¶®LMýÃ2Jv^ã@ !$“U–UÒ`F¦Ð4â;f¼"ÈÎÍ?¿²ãm†Ž¼Ž}cr\8¬œY¥moF£{Ž~iÅ­o0tdáà—RØò÷:eG‰e ,{šóåW®ÌD–Î:x—½wïð(·îÑ'$–ØÚÖ1wê:úƆ‡ÍH_~õ• Ð:ß (FpÌÊU«\G™´gŸ˜†¶à€DŠºGøÅ@<„´ÜP=S² -7¼d„Ä–ÕuBUUPÑðE&{ø…-\¸8>­*ìî}¢êZá‰$ääæÙºøøFA‚hnu;v“zï€XÒRï %"kŸp¥Ož»&°J‘dï>wÍÅ72£¸ Ä[O'Ÿp€¸g¯®\%ˆÊŠæAym–¡£„xïñ³W|"R š¤äUH)ºùÇZ»@¨ž¨‚ª®²†dÎ\¼³íÇ]Åîm³3W¼Âó¹…GHÚmílíFH»ñ2X€Y~ÝC$Eu5Àä ì¹zÏÍÊ5ØÁ'ÚäÔåkðD¸óÊÕB÷œ-]ƒ—._u˜T?¤2*" v^Q·íóæ/ò IÇ4¢Ä¤Îghæ‚–®!RPDRHT2´ûÈÞ Ñ)ê6%fæïލ/}õ! 
¤ 1 Á™å+X¾D2‚c!ÖÁ5WUSoHl4ŠO/µö?C/‘¨{R A÷ "6Êõ[…Ð~@ŠM-5÷>EõnèÀ $¯ÂüÍq½Ñ¤@jA@r˜$O&aþžú,Î@ê%€TÝö5ªš$ÆÇ‚âÃòFÕíQ“­ ©~ Hж <@®Ç{ºùE.[¾òC7â;ø>t…¾mµQ‚qT¯Þ¶URŸŒ7Á¸Ãh0Þ7ÚeGí©kèy^Ñü°¼ù!DÇå°Ó(2™Ô{w=C%é½QO]mçÓ’ÆAHðÞ ˜¬:ʆ{Ô2¯¨~°¨~`µÐú€è,xö)%áIEì3ëp—]tZ|hIÓ#ôt/ݲƒ8 =Ñ‚ºAÕk Ÿš;o1nÄ&¬ãÖYåž´zÒ64ƒÀ)©‰+H ª”QÞ iÕšuž!iH×κëVžk7nÅ@ Œ/þüóÙYUƒ¨³îìUˆ“0h ±Ì©Zçåªo\±RICÛ "!££@ ÎABÔKîM^ÓrŸu@ŠŒ'Ìß]ƒ/B¤– i¹:^B`äá†ëlnicûýç„“›Ü‰¼²‰0@â4§.,.s)±Å ´ 9u–nÿH\©Ãb3C¢Ó©ý€øˆˆ„¨Ýe ½¤ö›RhL¹ÅóvrvÜiý†Íx·½fÒüíâ‚i”QT‡Æ×mØŒ»æØ„†“@,@jìy|aÅ4J-¨C­D`d3"…e¤™Èq£ÕkÖ:xŠj{—- ©$fD ­7üÀßH’’ºPÕ2ÔÔ;Œ7ö¬îõa!!ÙJ#¾é}ì.A Æ!<²rôFy¸ã}€tôÄh @íΈb:r$õÞxZ*Ò{w¿8rü¼WLb4wõ‹dˆÃÇÎ-Y —Éî—”Ÿ;³oyÞ£@jjëÄ¢ý“ßÖ/þç/™;pÖq:‚r tù¶#°gšÆTÄH&¿,Z²|ß~q¹oç-°v Å@re& …$— *ºãÈüì³Ïæ/\ŒœÜw@–' ãF>sæÌ—Ú´ù; )9Hà ںíûíßÿç?(*ò Œuo$HXËMš¿EH!÷Q-·“ío´†0o5Û¸Bõ@Nî1ó7H,µ‰º÷RDÝP‘í\ýæ“n7ê¤jo¼'#>B\ i¿¥q—Ýþ1ó7üh×^Q$[çQ¥·«?Ôú˜”ÂÕ¤ù[T|TÔ¾’Ú8û¢Ël],@ B@êç$ÈD&æï›·‰æï›–N¨•PÑÐ38tŒe¤™#€CN^¡_|1gŸ˜äú[ÅÄe0ð¸dDÅe 0ÚµW œ»E¨@RTÕÙ´eû–m?Â÷†¼Ênê>j¡ y¬B²–Füßæys”J£Ôü:TÔ ºÇSeàû@>àËü-ã@¢”¡ªvBï åJ Ò{3ÂS‰áŽ'PD<˜1ð^—1 ­]¿ÉÑ+ ¿°TReÛc(îþÄeH¾¡)ÞÁIì%@hý&{x®e-ÃY»àg“Q…2Ʀg÷ŠJÁ£•VP×Ô7A@8á÷ y%{$–²)V ÝudR‹;€D²É•™@‚X ÎØyFÂßB±í¹V^U_YÓ)»æF·íxf]pb ¤‰@šdõ+)2!‹öÔñÍSç–L{ꦃFï HÓ¿êˆH–Ä!¿ç„J˜•£Ï¼ù àíÖÎ~PªÄÄeq½s‰£”Á é½%ä6Œé½¡pø*Á{ÅÄ$ˆ鮽÷¼Q{·/” ‘2íÜ#**!ûón$4©ý©ß±u[ØûÆä¬º}øùꫯ/›;ÀCueÄ.Z¼,£¬ m^u?ÜÙ–äÐ-Olø†ÀH^EGËÀ”ež7;ÐD†{γ¿˜³KX|íú-{ŤÇä3HÙ+&ÑO»Döí—ùa‡0 ¤a$iEÍõ›¶mÜò½ÐúÍñy-yc@‚ŒWXæÊUBk7lÝ%,ñåW=wÍÓ(§–Òä.;ŸŠÆž÷B£æÞ‘½ÂûYœ¤­©4Þ€FüÒ*‚¦ycr"¯ïç­*®ë¯ly4©"¨¦c$·¼“eÕÑeM'•2äW÷7>x3EP~ÍýìŠî×U­ÚQ=õUGõC yMSQ¥wfVôsTeW Џ­:J)éN,h§ÆF@#H´Ã›¦Ñ§C#>éãÛ]âÝ ëjø.¬sõ§ÒˆHïBX÷˜£°.ŸóXÖ¡#D#HVñHòÆäïrw‰Î™µ»ÄÐèî.Á›F4ÞfG Ž@±6"yê=uHSØWÂNÚS4ÒÃ@â©»yÅFÔ¹Ý _Zß÷Þ{긩~ ÷ÔqÒÇE#>écÝ]¢û#Ü]"«²¯ aè;ërkdUÝ/x}¥”t³Ðˆ :å$ð¾~ÛFNQMYM+*1‡#®cuwb6Ðé½I÷¶qtR.âÐ5óÑk"²QµÅzï+7GõÞÜ8„ôÞ’2„ÞEEw¬I½÷~)M]£HRïVP“šWÅ¡ªæ1Ã÷== ¤ß!ñ*ê¤áûðñ"2*¢¨» Ã7¢Ñå–2¤,$6j½¹%q0©÷‹Ë†3FGN$çVCk od“ip踤¬’;#²±çÙ©óW%¤Ï^2§Òèâ Ki9R*Á›F„á;&ËÀø˜„Œ’¹•+j4´ôÔu ¦ãž½—•³Ëz#*<ÆcÆÇ²ËÛQ“2é½õKÈ(ZØyÖqÒdøVÔðHÿhhÄo }"CGSñy[:ùe”´ò‰FSPlõÁ= 野tŒOüvËÑÈäô•õ›¶ýõëo¶lÿùÒL£ìêÚF'æ/\¼vÃÖ»ÎAH»E¥œýi qÒX`ÄQà­¨¢ ˆ ŽJödfîô"4n„¤ˆÕÝ£&g齃cKk;F ÊÄ5Éî~a .ŽK+ìÕ{ëlHHïÍ 
H¶.>ÞQ!72¥~ùåWðé½;ÆV²×î]{EÕ4õ¡ñÈðM‰¬æÖ¤áÛ?,ñ—_o¬Z-DQwßð" ßíP÷å”Ô¥d• rö&`‘Iù±iÅpÍRï_ÑMÀ/4Z8¿qó6@‘­ a“UT·°u÷`Žê½Q〤b¾Á Nž¤T,!Gl7\¿q«‹O8¤ù - WЬšŽÇ¨õPÓ6Ô>h‚Ú v ABÆ|#¼‚âO’†oÔžÌÕ{‡ÂmI½·' ¤ÔÄ¥½F ßÁ±9ø¤÷²»7&g÷y{rôySбÄ5 î=nLΣ³Ž+¦@£ØœÆ…‹—æÕ ¡ØÈÉ/Î?&?£¼ÏÒ-tΜ¯à%ÐèÇ"ÉÅ>ÙpÞ72€äÂLÜúýNH¬@£Gwvqœì|¢" kÂÌ¢"¤l¬î&{êî¨]vY…Ä5m÷Ÿ£¯·I½w¡÷žÝ9ø Õ_¤÷æÑS×>ðª¾c¡÷NÊ£vÙ¡näìŠÄ¬2–ÚQB¾ï¿DÕ|Ÿ¨¤Öþ—µíjHÃwDb.Ën{©¹„¬¡{Õú›¤›¥Ë®)dHn¾á¨A€Àèȱ³¨§î ¡÷Ö–!)§ŠÐ€u>Aíà B*Æ£§Ž0|û„£ãàáA‹Ô€€dÀH¸©ízVÚô ”4|Çd# ÝsðF­Š¥£ï†ÍßQ—Y¿deë0jX®Ü¶‡8éã _€ô±oLîš,¯¬%.£¨®c”^ÔÄÑç-?Ñç}õŽºFEó d ÝCå­Ñ#7·ñ´°÷ŒwP¢¬’æ)EU-äüFxÒQieáÉ¥,ø®#㦵~¢ŽÞWî8‡œ|£d•µE%äµ L“ò›1Š4ôM¼‚S4 L¥4‡Î\´U…tá†-@ˆX#åÈ„ D%.š;bq”y«ëq H†3Rò ÃGN]V×?ʱ³nï~™ßÌPg@È5 FJšFÊÚ‡PgÝ‚EKâK0²i QÆ8 ¼á$‹™[ZV‰ $Žên q¼Æ“A¸·qåÕÒ3â$¨¼„Þ{¡÷–”Õ{c ñ7‚€LXT÷Ékêa a÷iø†¨hî¿r%`,†oöY T E¥£6¹‚P³€–'BÆÙ;t¢<›Šñ7"örLC-†¹•+à õ¬¨iq’×ް¾ý¢üÃS±›Ž`$H¬†oh[>•ñ HëÆä ÙUPŒ ¸†‚bãÊŒH*œŠÏ;4qÔÛíì‘VØ åà/³f•6?BO]Yó ¦þ‘˜ôйóØy„¸ùÇZ:1Bâóáa+kÈ)k³<`ïÐÔ¥ËVâ‡úÝ÷;îØû£½iånçêâczæÊ Aª‘aІ;ö~va@#àÐI{¯Ÿ¨cg¯# ­]¿ÅÒ%Ò¼‹ì½#ØeÞh¯£ÕBÌm}Ñ2Øvì³pdRi”ZÚœT~ÙÂuÑ’å¡)@£˜œ&(95®Üuß¹÷"¸¬êÉ_-0h Qg1pxÃÉå+8®7B@"ÔÝ+ØgxSäLè½Yf1øÅzï± «ª¡ËHPyãÒŠHùé3T—‘Þ‰‰€Ä}ƒO aøÆÕ\E])&…0|ãýV­š$²_Ä+€p€±·HxO ½RßKÿ(¸áÔg1L’ RUÛ0j=T4 x)Œ4|Wµ?F‰À*AHω%êÌ!hX9Ô­Ô\}#—._ù1Qûuø¤qcòÆ© AqY¿qKIã u"Ÿ÷Õç=Úe7VX¤wÄÝ?v݆-µ÷©;,±8$¡}è8çœ ™¨ôÊ¿~ýMIócôtKšçVõçTõ¯\çžtω‰b£°¤RøèìÊ>êÌ:¸P„:ë´š©hãÎ:ªÌÛ‹”yÃÅwüñ̺Ù_Ìqô¥Îó>~Þ|Û»?ûì3ÝC§ÐÆä¶^Q„ol.ƒ¹cᢥBZ†'¤µ1h QW¿rxw ¾‚˜Æ“†i”WÖˆ™Ü9Hª»ÔÝðï¨Þ{ÌP×@°‚i”SÒÐÖÿŒÐ{'æ@…­hìCzoŽ9,6àÑÖÿòþ¡¬zo\Cc2‚£ÓXêxiøO ö /­ï– …†ïæ¾çPÙýBÃ7 $BÝEþM½Ï 0röÆÕ?½°®uLï›v 5sRC÷Sˆ9œ<ƒq‘š_‡,b̈ö‘f ^` ‘zï h1 kz–-`R=Hþ¤á»¦ƒXräÉ$ ß~‘HJ꺨IQÕ‚Öà0HUí£†oÜÂ$äÔ|4âfÂÆäÓ:t%¾íØ-rþ²”•)ú¼‘·v ñ¦œÿy—ð/— ó:2;sEQM§‘éYM}ôhŽž]¼t¹ðYQ y"Øò gWY:nÿi7Ë0&ðNÌ(™`æ¶u¥ #vu7pÈÁmTïíèÁì¦ê½Å‰k,l\K^L¤÷–ß¼…8Ï$j>0fãÆzïRï4ÛönþhÃ=–I ¨Ž{L4|# A5ß/! 
Ñž}£†oˆ‡ âSÕÝPå£Saø>0jø†“Öø;š¹ Rd«T šeRïÍ>ïi H/0 Ýpô !ôÞ¢’€|Qq*¼&©¾W‘r2R0@ª#¤@Ñ{çVv¡/»¸a ‰g5|4z@z/“O÷ª£êŽ'bCé19q ¤î>o ¤ª1 •4¡¼’º>’¨º0¢7÷ácçy̬KÌmøüóÙ… Cð=—Ï••4*¨BQÑ AH„¨@²÷ŠX¸x)˪#H‰,@â&ó¦n-iéòUVn¡çykè›JÈkcE5HX¢zðèyqYU$9U=’0²k†>q ±/3Bov3Ps÷ãºö<\ XÝÍc½Qcçpmë µ›½ýþ‹ò†ž©|­¬nlìzòff –¾ç%µÝ약¢i ®ãñT\ å÷ß¡‹¡¢åaI}?n%„ÖmL+¬Ÿº¨¶s$§¼}Šf ÂÚ¾ò¦!ö¹Q•mã(â²Þ¨¸a0¿¦÷£‰f>С#\trÊ;F÷˜èz~ì—+ð½fŠ>o¡õ›Јß@z?Š i:ÂeÅ/4Âó½¢?íÜÑO|V›ÏûŸwdz;vãÞnxð¶îÁpŸ="ë6lÞ/ @ò JDg~ܱÞ•VOMjà8Ïûú=7øß>}ñ6žç½OL£Ÿw‹ ýq§0 Aˆ $ˆ¼BR—,[¹uûŽí?í^·që 5Nè²ã$óžVgëñÃŽ}(<Ê®y0oþ¢o¾÷‡PNU?«z):«A`ͺ[¾Ÿ7¡š® ¢Q\^+\œQ5ˆÃ#H3Fͽ#{„÷³¤¦žÚšJÓèíiÄW Íì¡£—o¼êŠ Î>óPMÅç]Þú8­¨…åyçTvç×ÜãUGeݹ5SQ¥wdWÝŸtÕÑD™7gEКµã‹qO]zy_TF}NÍ»"”Q1€'y›œ¾ftì7*²h ѱM£O†FüÒ;Ù˜üm£&ZôZº¦© ëØAîÁi×,=_KX‡’¶ñÉÔò~L#Hì@¢Ò¨­ÿ)$4jêzÜ9ðòÆFÔHh*4ªkÔÚ÷üuiTÓþM®›"šzŸÕv>áM£ÊÖ!šF3ŸFÐpÍH Ñ»K¼SMÏîÃïdw êXê\D#Hdv涪ûÁ1™úFfb²—oYSc#8“AÓh†ÓˆO@úPv—ધVø¹»DÑûÛ]‚7h ñè©£‰cO X#øi`DÒÔ{êx ÇF,‚|EcÿëöÔÊúxÄFT3P~Uçâ%Ëšz_ êé†å‰³—wîU×1¤öÔ~øy7M£N#~iúA-ï{w‰Zzw‰·Þ]b&vÖÑ@Â@Â(ªkhrŒÐo_½e‰º†eÞ ÙtÍÜZ–bò¶°Á¶o#´ôÒU¸FAUQU+"! ©iË ohÆDi~ÞµÏÙ;”¦ÑL¦Ñ´iÓæÍ±©…ŸøÐQõûÞ˜¼„CGo?Ï›ùE.[¾©»»{ddäÏ?ÿüÄ€4AÄà9fû&ÔÝƦ=CdÞ”‰ÝÉã&ï-ã&o Ž½«ßüù¤íÛÝ^Ƨ¡kÄÈkîX»@µõôµ}“núfÉm3r;—q½7¤¹¬@jcRFQü™~Áñml³ºY€¤Ž€ÔÿÊÚi\æí¿s(¦‘®¡ÉQ݃&¨A(®ë…¸ª¾û)M£™L£é4ׯ_—”ù86&ŸÖ¡#>lLþa e’IX\^G× h]]]OOÏÓ§O?q AlÔ9ø²¢±—Ý „eÞÜ\ ÈäÍ{½Q}ÇpuË uÜ·­ÿEY}Ïô™¢’ó—¯x3LI9Õ¼Í@g/Þ:yî*M£N£iÐp<þüûï€/n1)T‰=tDñ:J«€Øh´R`Ujjjiiiccc?§þóŸŸ,>>Oµ“7Ä^×nÛ¾§.,>ׯ…ÁÛ tøØÙêöÇ4f8 ›F ýç?ÿùßÿýßÇ_¸pAPPè¿èƒ>¦|,[¾BK[hT\\\]]ÝÖÖöðáÃW¯^ýë_ÿ‚rõ 飴¦Zظ¹ú„ÒÖTšFx¥Ê4 Uª?ÿü¾Ø @›R[[[^^^RR­L}Ð÷J”“²²2 Qsss__ßÈÈÈüÁ»t~¬@¢Þ4>•4O'P¯IpShM€Ið--- uäQKôÁé@ÅÊISSS{{;ÐâìßÿŸýu3HÜhÔÚ÷=êxÉFÃí÷_L…Fí¯ê;†Ûú_||4ªhy8aFÃkÒ¨¶s¤ªuø hTÒ8øniTT?PÑöqh4Ï…F…uå­ÙiTP{¿¬exfÒhÚ„æÚýë_ÿ‚ûBœ088ØßßßÛÛÛCôÁý€‚/1CCCðmhôüÚh¾…G3Hzú–¥K]×3³Ó¿ÿi÷×ß|»c·ˆ_X ¦QMLjÑÑÓ -Ù°ù;ŸpŒQqfdú;Œ¾;Ï= ‰È3ã¸ÅFs¾üÊ'4…=6‚w¹ùÇÎLM;ðt;ˆ“þüóÏ¿ýío/_¾2=¥ú˜ìxöìÙ‹/Eüñ|§šo£G3HCœøU· pë©ã$j$¤¬¦cnéć؈êêæ[OÝ8^ŸF9åí‹—,U·?68|"0:³¼y芹ݬY³’rkQT4ÚµW¬¤þ~DR! 
2¥hôýO»ÞaO7 Uñ¥§Žfø$Ì$¨`pkhYþùÏþIô1Ù!ø*%ÊŸi4³€4¡Ú¶GÌFmßH‡LOeÖ"…Ƥ+«ëÀ:ú‡ «Ú©@j¿ÿâÄ/±Êž[wV ¬Þ½WTS×(»¸¡±ëÉ©óW¤å”áejnbÁ!³°¸,Ã#ǕԴ©/¥ä”½˜Q-}ÏÏ\¸&)£xá²9¦‘_H¼²º®„ŒâÁÃÇ «;FTWwX\6°'%·:)»’…F–Þ˜@Þ±·­‰|‚â•ÔtÅ¥ ˯ì@4Ò32 ŽÎ€3Ø tñº¥¬¢:¤ëHŽžÁÆÇ$e”Ì­\1мãÕtÄ¥ôåV´#éšD¥Ãy-Ò©ó× ÏÞS·eÛ¶®L$hý£2PO¦Þam-Z²,!»Št ˆÊÐ52ƒ¿ëÆ=L# ½C•mŠnÛzÞu𙤎q ¹2bäU´÷KÊ뚥·b Y:û«ë: ¥hníQÉ Hç.ß—Q–VP÷ M 4âPƒã?äË¿éƒ>&;þ< À ÂÃ*Ì Q¢¢ÝûDÕµ ÆlßH¸Ë.»¨nþü…ÞÌÈÀˆ$ï ÄÌR ¤ºŽ!à ‰ã,†¸´¢Ÿwí=zâœoPLuË °˜„¼²#4þú;hʳK;HÝÚõ=}cÐËM[¶Šì]ý¿˜3G^Iýž‡w@ŒàÚõÖN>HñdF1ÂùõúªÕBm÷_Źº½¢ó+Ú?@&€Kl“¹l…Ò÷?î´scB(åΈô I8}áºÀjAÁ …Öm´wðð‚—À!‘Òž1ÞAqg/ÞÂ@Z¿q«‹O8¤ù ‡îÙ{¹úEøÇuà†(6B7ظ1¢àåŽÝ"Ž^!,4*¬íû|öì°„| Qny'<©ÚΧˆ@w¼÷ŠJ` É*iüvÝŠGläX·q‹ƒW(¤ù ¹ûÇ AVÞòåU4jêRÇ8€=æ6ž^a?~öêÊU‚H«Ö¬µ÷-_±êÖ“0¤äUU·r €ÏeFe¿wñHìp¢ú˜ôøÿÞë1#€D¡Qraûîzð;šSؘ¤‡mܼµ±s˜Ú/@rò X¿qó=;wsê€@NI %ßÝ#(0’QP=rìD WŸÜSGxC`drüÊ_»c€ÁÝt-ý/kÚ‡ ¨"rY¶×ƒ”˜UŸQÊÞS'°ZYSÓ ê¾þæÛ¦Þ¨§®±÷yeëCHpðø$'Ï`%fWAVÑò½ËP„ tðð -ý#¸§®¾ûYyóƒ²æk„Ö‡Äf# 9xá.»/¾˜ã’Ä$áýRªZ†(<¼<0ìÜ!*Â/LNCTWËH€"ÔS§o|# U³‰É $Hm#…uV ®gFe! ݱõB=uö>ë7}GRTZµ¤é¢ÎÅ[v'½w½ Ñ}|ÇL²}ãÞ„í{H£{¿jéA‹ ÁÐå›–hr "˜Sç.óžá€Ôñàoî~á{„÷ãQ¢ÛVÎý …Ædàq#ªž.€`¨},*’SRG42=uaé²âRò2Šóæ/„¨ˆHÜÆÎ\¸®¦e™£'/è™!=ynx@RN\ZaÞü($‚ß$02 9{‡þ´c/û o¸†™†€dn媦mˆhdrâü’e+öKÊ oˆB"ââˆ4 $ƒD•Fôj;Ÿ¡9u ÙUÿý?ÿƒ déèûÓÎ}xÜèì¥ÛrÊZ¼ä–ŠzênÜsöPT= 2;·dé QqÙý’òsç-pò‰C IFŠÍ¬üPdëôÙgY°h Jp¢â÷N#HôA Á±Bk7àÉ „í› Hh¬ˆ–°íûŸPé¶•Ó¢ÅK<á<I ùƬXdvêÒÖ! 
µsRÛD Ť<{FÐ ïU«…x‰}CNY+´¡õ]#‹—,‹L.Á¿pú®'¨§N`µ Hˆ=ž1ÄŽælë8)"1hTÝñõÔÁ ÙøY¾rœÇ4’WÖÚ#,^ÕöÏð®é| @JÌ©A2=õ ÏbPÕ:hhršÇ, ž±©´iIICŸeܨj,› 4*m~„B¥•« µ?…Ûº0¢„€@p1HŽÞáK—¯œ!ãF4èƒ>>< µß'lßQI¹¡Ê&Òö @Ûj2å =x“½_~½¦ª¡‹ÇrJ€I¨SŽoî}úõ7ßF¦@¾ªyà„Þ£›S7)‚cÒͽπF¾Áñð· ®î±m÷ G§D¦rœS·s¨¢ª6\æÔF¥Á zž„|‚âà†HxJw}÷Ó Ù» —%uýhNÝD ¹ ÁRm×€Ç3 nÈ$D %u½‹7¬P^UËp×^±Ê–a–õF’²ÊœD `Î>áxDKöÁêø†&³Ï©Ò3 ­Z³ÖÆ-h”SÙ ð ©Š$ŸÐZräʈ¿ÂÉ;IAUAHYÓ@]÷H¥ÍÃY¹`ÅdT½wÑ@¢úø`€ò@úíqÛ7uï×Иô/æÌ= µsðª5BYEuH€œ¢êÎå+,í=9ºH 1QTćf}ð~ø÷°éitrH¬@’ã$b†÷~ YŒvï; %¿k(’ó˜«Û•/Õu •Ôu9Nï¾gï ÿ¿]»‡§w‹IÈBCÜPRnçv A ‰ÍZ¶B`û;Ú±wãæm¬@z‘¨¸ ܳ_RnÇn*pH¬‚A¦´ñ‹àÊÂÎ)«¬mÐú-Û~œ¿`‘ž‘¦Q^U7 ªý äÕtuÌØgx“@JapÈÎ#ø‹/æì•X·q‹Èn@‚ŒðŒvì—ýy—0©’’œŠöÆ-Û7÷ú [2K;0\±@€˜Õk×oún¯¨ÔW_}}ÙÜá½Ó¨˜}ÐÇ$ÜS×1ð²¼±—‡¨¦u°a⼆70µÝYZ× ÑÒ[ºÊï×¶¿ÃõF¥õýÕ­&5×öVµ<šÊz£¢Ú¾Šæ!Þf µë7ÇgVNjʯî®l¦Îð>ýëÍc¿\AyÁµ“rë^Ë TÑö8½¤u*f Üªž¢†ìf ’¦GE×å×ÜÏ®èž ±Q1 $ú H´§îýzê‚c²-ýÞÀSglúKyó#”÷NüÄ=u“Òˆ}ÐÇLÒ»¥QSÏÈáý, Oò¦iD[Sß#h Ñ}Ìh ѱM£O‡FÅM4èƒ>> M?êÚ¡½4§ƒFÍ}/ªÚ†{Ÿ}|4*iD“ÞŒFmñ$ï©Ó¨°nàÓ(¯º¿¤éÎð QnUqã#våTõ5>zÑ@¢úøp€ôÖ4rò(©íâÍ7Ÿ–84 ‹Ëþvî|ÑÒ>AñÓJ#ÿÜŠ>ÇFhööÇFMN]¿ç\Ù>brò·í?îúë×ßþ¼KÄ;8 ³§¼õñÁ#§,Z²~Ów^aè¤ð~iøÐw}3wž #gxÄFð'{%³Fß|;ÏÙ/ú hÄo Q­0ÿ¡úà~ÌÐÌÒ»ˆØ„'ñî©•7öOGO¢ªö»Ž|ˆ{>Á |î©ã¤Ii”^ÒºhÉ2 QYË#=ããþÅ .Þ´ýˬYqYÕˆ=@£{DókúC à³Â‹ 0‚Oz½ÛžºÉÔ2$î@zñHˆ@H«úÿ’ÇÿCôÁý@…YVŸ>i QhtÕÜZAYÒm+'ÀOS÷“Óç¯HË+ké¥åWaöhº_Jñª…3:©¦s¨¤yÑ覕ûm[o*Ê&ÉÉ7JVY[TB^ËÀ49¿É¡ª}HLRñ†¥;7 9ùŽéÌE‹ÒJ’òêžÁ)“Òˆ@úã?nݺµqÓæÿ¢ú˜ò±jõš³çÎ?}úôŸÿü'”È÷‚¥™$ €CûÅe!q̰„_¯Þ ‹IÀIxyÓ‚0sç”6! A~㦭žþ‘,\ìšY 'ÏþvÃ'0¦°ªØcãäãÅŒ‚N®Z#„Æ ŠòKÄ«_7lÚêÁˆ€´`á"¿ø6r,ð‰–èâ '£’òQlDèÀ×mtôD `Yf1D¥ü´sï‘cg=™Ñe ÷÷ŠŠË*ªûÅ_5·…7¦5@HÄâí&þ„ÍÛE6Î~_|1®¿cã?Z#´þžƒ7Ò={o7B×pê<áÿ E¥ÁO¿æÎˆÊ.k…ÀÈŒaR@Tú²å8$ÚþãNkd,ì¼\|"¼ãOž»&°JÅFÐò ®ÛhãÊtõ‹„—À!áýRnŒhfÌé_oR¥ÝŽ^¡ŽHÚÍŒ“·m=|Â=âNœ½¶r• qõ¬œý}#€C?ï±óf¡QnUÏç³gÅæB>£´ CEÛÒ[¯="ˆ:Ò êç¯ÞãÁǭݰÅÖ=ün.~Ñ!¿Š¡ØHIÃ@Cï0 ±ìûË¢ÅKP‚‹%d1‚£ÓyÌðÆ@rñ ݽO Ý´t‚! !o7¶¦ÂãQ‘ƒ7¼D4ºnäÿ†¨iÒq£“篩h@æÈñ󺆦ˆF'n(!w@Џ! 
‰ˆî²ðT$Ïàwì©ã"íFù÷\T´#<Üb¤‡{þˆ‡0|ÃR0€p¾¡)TÉ(jpH“Qùßÿó?x†·…½Ï;ö"öœþÍ\VI“÷¸|œwp2Ê_µpüŒ©aˆHe\€dtôìâ¥ËEÈŠJBq;ÏðÒ‰cH‘i€Ä$Kç@B(¾p JpRD\Ž7¦Hp—Í›·@lD7¬tz³ä]¸båªÞÞ^(šÀ$ˆ“þýïó-Hš9@Bøa„Ä-YºŒ $ß ÂÌÇŽþUEC—H\€›ZHÊOŸ¢ù «Ö@`RðD yÄ,_!Àq;XÖa y2£W¬\g1=y"Ž@ à ¤ˆÄ€GMÇ4—" $o$ž³2Š›!P«j¼xÉ2´Õü 7¬jF=u«ÙäÆˆ^´dY퀗GhOÇ6Ù[¹J©šHÏHk¦f·ð,ðFÛâ2«Ð¸Ñ‘¿J+¨cq*I¼g1` •ORaÝ$E5=R 1£r€F…õC¨§n…€ ’£O7rö‹äPdç¶dÙÊ©Œa5=ž^ ÁÝéž::½Mß¡¦¦&`ÄIüñïÒùQ ã§ýþ‹ »xag]kÿ³¯¿ù6(*ÀSÝ2H˜¹ÇºéÐ ï»¶nHë7nFênH¡1¤–¾çíäìø,ï€ÉÔÜ÷ #ï`L£ôÂ:Ž@b_ýŠTß5¿¶x2@¨´¡à„ºæ¦ ${H@ ¤ë®ëzJnFNèºÖnØìì†ñÃŒHc„¥pœS·c·ˆ¼ŠÖÚõ›Qx䞆àñ`ælV u=¯n2Á"k&bO~M/ j4ÂKÚ†ÜÝŸ¸1š$2*RTÓ½pͱGYóàŽ=¢%C,KŽ$d”õäÔÞ_¾b•ƒW"„J¶nA(Ôñ Jä¤r*ZGV Y93!“QÚ¹tÙJun@jñ J K‹F€@ð‡` É©è q#EuUmc* † 0ºçÄÄ4ŠH­àM£éÒþó¸;ݪÒém¡ªªªÖÖÖ¼|ùò_ÿúßzífX¦tG%æ@€òÃO»vìÚ»yË6hPŠ4s/>lzGKÄöEHƒ¿c Ù¹ø!Ó¶½›?¼Ü/) ¼W\J~×^Q µq’!ÄF1)«×nÚ²Mô€ôWýÚÜÒ™H] HùBñÝûÄà_££§¨}tx†7g ûŽ{Ë@‚$&.‹uÝ;÷ˆ 0ºçè3oþx»•“HMk|RËôî;¶ðŸ|þŠžáMø¿W îÜ+&&Aø¿'il†w@TÆÒå+·ý°ã‡{6lþnÂ>{ ‡DH÷Î=¢b²?ïaÒØ ‘#æç]"h¡+ËìsOÔS—VÔ²ZpÝæï~êš"e•w};w^YË0ŠTuµšr’¬]g1g·°øÚ [öí—æ $¿4¡nŸ˜4F?ï> ûãNa $Y%­ ›·mÚúÜ$¥¨ $È0"³{ë6nÝ#"ùåW_ÿvÓž7¦Hÿ÷ÿG‰No¤’’’†††¾¾¾gÏžýãÿ€rõÉ ¥ªæûMÝO0{Ú^•Õ÷´ô={-3ê©«l¨ï|üëªÛ†Ê│±÷yAu±áÞÛ¹Šëú*[†Þáz£‚ÚÞ²¦‡“šòªºK›NºÞ(¿º§¤ñÁ¤f ¡õ›¢ÓÊ'5Jšáº“n˜ž¹Œ{ê֬ݗU3u3Ü*¥ yŠf Œ²îÜšv3P~ÝCŒ"Žë²*ûÒJ»&¦Hp Htz'@‚Z[[ÛÓÓ322òçŸÒ@z{O];í©›až:fT–…½Ïëzê MÎ7<ÄàñH˜ÉžºIi4½@ú÷¿ÿ=Ó€”PÔ•V1ð±¶Ýå_E¨°° hww÷“'Oþþ÷¿Ó@šá4ªíx²{ŸK‚“4>ekêThôÉéëoçY¹G|¬@úbÎWö¾ñ+ºººh ѱM£˜FEŸ€FÑ9-4h } @b'P}Çp[ÿ‹·¡Qmû#b®ÝN£²¦µ#oL£êŽ'-Þ€FE ƒïœF…uå-ÃUä„:4aòkî—6³o[:æú°hôÉIMÏÔ/º2ÊÚ‡ýTuL„Èßv L«04ûmï~Ù#§®â‹-ÝÂ%ä5á$\ž^NÆæµi˜Á»Ž_¸sù®$|ý±óæ"ŠdTáÎÜ~©|.Ç[Må¤kVÞ †pÍo·]¦xCq9uxiç x£œªAHJ5 ¤$Ž!u½Ñ›ÅFÔÕ¯nl3"íc#££§oZº ü© ×7mÙþõ×ßnûa‡¹;P§ºãéÑS¿}ÿÓ.8¹c·ˆoh2ŽDÈ0ÂÓßmlôíÜyîþ±€ȸ‘n±uÅ+NßÓäb?D6~ª]vÐv mØ -;eösĤ”Ïßt´p^¹z-nÊ!ÀõÆÇ/-_¹ܲ}‡”¢° šx¸‰’¦1:w€Ö.¾aã7wþB×À4nA̤ŸËñVSy#\³b•¹C\¶d¹À¤7\%¸7ñ-þvîx#\gC²rô¶ue é›Çd:N8¹‘=ÏN¿*!­xî’9¢QRNUbv ìÝ,¼1<bÌ­]©önãcy£{I×È40*Cßø˜‚Š¢Ño×ïÉ(ªÉ(¨]»ã€€äàHÈ(Ý"½Ý(±Ù» é  Ø»Ož»yŽ=ubr·¬ÜXºé¶lûÑÚ…‰{êfq™•ûlþ‚Å(Á§À[&æáø¹Ün5•_˜z 3¾84Å¢¿å/³fmÿyŸéÙ[3jª $*€Cbâ2~Áqþ¡ ¿^¹€´aÓVOF„')äf„Æ#rQ̰DWŸ8œßNž—ö Šc„Ä_¸lN]ýZÕúp×^QÐ(­ 
vÞü…î~áŒÐDGÀØ´b`áuíA@ÒÐ1Ô3<Šb£Q'7#ÒÖ…ñÅÂÉmaëîÉŒ\»ž@NÿKuCd¢FEÁÑËVà`hû;m]ý‘ÈÍ/Â78áô˜½%$·sc" Hd¿”‡4ðæ—ßn! ­ß¸ÅÙ; Òü…‹à<aïö¥Ú»‰ØˆÅÞ½c·ˆƒg0•FÅ ƒݱõ\²tERn•Fù5½ŸÏžŸ‡Çd5~½fÉ#6"Dà¶8x†Ø{²mWF ‚Ô8ìRÖ<ˆw”`ÀÆÁ+ òf¿\]¹J $ÕkmÝáÎaËV¬ºií1$’F’rªÀ*FŒ•3áÛfDfÍdÑ@âÚ¾C04ÂÅò•kHw]Bá$¾›¸œ:ühñÒ¯;ï€ãçr»Õd鎃øc§xCìé·oØò½Þ‘_h Í Qi”^@ؾë;†©sê«÷˜ ÈØä¤îÁ#I˯!-ࣞº[–N *šiyÄÛk۱̩#b wæº ›Éˆž:à@®ºmˆÚ5Ç$]¤~Hnÿ?{ïE¶íÿÿÖï/£Ž:cV̘1Í8&”$9çœTTÌÙ1€™œ3M“S“ƒä$9ç#bž{ï{óî»ë¾ß[ë¿«]UÝ-"‚hÕ:ËU]tWÃÌ>çSû콿›ÉBtà™?vê&7p()»©ctw’8êÝÏà†áñ9 #Õ»aLŸ>Ã?"…è]¸vÿ×]"ùË_;GröKÊb©„,Ó#§U4 øìÔÍÀ~ùÃ'À1â$cR Á¨h{UPûÆ*¡uÑÙ8î8ø¡ó»Nþë7må ë°VþÃÓ‚*€çÊ-GY%­¯™F4x®ïðS`ò`™†?Vö¬ªgàdܰñƒ‹ÌØB<†oƒëørPÆ$^·%dUtqGYÛt”7ŒÎnÂÑkjq…¸iIéë’Okçž}¤ oL"(>  —ò©É=wžŒ¼Š3jçî}Ô o¸x6'Ï]%Æ9@ ="â¿[Ù :"´ú‡€›‰‹±½" H¶l T>q£3—¬4táäèÉ‹†fÇŽrQïÆâFXˆ(&Å\ý"vì!å/ "Ð-[ ]StNRïö¢ˆ¥Â¼•ð~Òá­'ü &Â-) ~ 1‹áÜïwÕtøÄàëü#Óx,­Ýñ¦Ã@bk¤òÒA ¶Ø¶”‚[lÛÕEUM˪üdïÉÖÛ^¸ ¸..­ô5Óˆ¿õ}˜,8F¿î—Û¾s? .z„¤-_µ>¾{¿´è%“èÍàQ-[¹f͆-pýÇ™³ÎÝpxÝj”@–¬Û¸¼¡u±9Í£¼¡#aÚô;÷ضCþä „H_!Â1µoR†7W ù‡bšÜ¤,fXâ¢Å‚Ôz#¸Ãm7à–'3’˜Åb†'mÝ¾ãø™+Hõ]¯PÜHS×øc@ú0 $Y 9e­XW¤®Wð[E?(„+¼Ô»q ¡,Ÿ 8øHA qÔ»ÑNWõnK—¯ôdÆpÍð6:tBQMÃÉ^Ñ­/I9uê:&à$ñÉbà$LD•7RÑ2â¤Ðø<¤mŠ ´|¥H¨Û À@±¹‹k‰àò¯?nôýéSGÂÃö%½¼~ Ë7ò–ˆZ ð‘ñ’]Û­À×ÁQôI7„Ÿ~…y 4ðÑÞ©}»û…!U·<á¤Ö¾w`ü#pe7ÀÅù ºù†"U6?iŽ!ef×ÃG\¼‚9ŵ=¸3tæâ 5-8Y%´Ö4*mè\¶‚/> ©ïChLFptלºÝ"bÊêzë6£œºê݉¸z÷ a[p¯ç-XèèŒPT\×ÿΠiX½û5ÐÈ—«z7{¨j\±²CÔ©é|“V؈«Õ­Û¸ùüÕ»p®®k²{ŸDyó5Ãû·Ýû±~¯l1£ÒH9u¼€´rõZÏj¶BÝ’¥Ëy | ª7üÀ_A’’†’ª¶1Öy¤ÒæpŒl݃qÅfV}Í4¢ôÉãøÅ;O\¿Ü£•Bëéæ4& H0b8jß;wïÛ´yW !¯(>½hµÐ:x„”ܬY³ïÚ¹–XIØÇùm7ú8HàVvi¬¼CcҧϘ!*)³k¯èÊUk2 êHžþ‘èâ†M[$¤¨@já$-=vR· ok'_ø/[ZãÞTõî@âøCaqÙ‚KWlß±{Ç.‘Â[x RïÞÃU½›“Åà?Bªfw—!4tMàJqãS’÷]G_D£üªxseû ’Ц¾©)۽¦OŸ!"&½nÃfQI9®@ò LðÀOÁ1Ú%".&¥ðÛQ"Õô6oÞŠémg–v„µ2ŠË[±jÍúM[÷‰ËÌœ9ûê篙F4>y%”\½çuÁÊÙÒŽÁ‡Fð£_w‹‘M/HŸ $äU6õ7v Ž¦Þ¨®c ªù )nTÞØ_ß9øÑz£Š¦Çļðš½-¨êúÒõF%õýU­£Q*ªí­lyñÑz£"®êÝ#3¼×®ß”U‰ïÔ•5?Ï*ioé£Ê@§/ß²8w ß©Z»!9·nôÊ@å­/Ó‹[G£ ”[ÙSXÿ”«2PqÓB¯z£üšþ슞¯Ü7¢DHS H´NÝ—Ó© ˱qeŽA§ÎìèÙ²æxÜÈ/4™Ö©3h уÒÔM#Z5õ›§ $zÐ@š@¢iDÓè{ Qa# $zÐ@úºôÍÓ¨¢å9ʬj»^Wµ½G•4>waêÝl‰ ¢ºÇ¼Õ»«ØòÞ¼Ô»¿Ñ@¢ ¤¯H߃o„½ŽÁ7B*Ý¡º®·ÇNÿþËo{‘ w@d*N S,E%eÑ’SA4:9ò"$L½;:s|}#L½;8±š¯z7¼Ô»Q®Ý7O£‰}ÐÇgß;¾õ:FO£ÜŠŽE‹E¤êöAãC'Cc³*šŸ_»ƒ r§äÕ" 
glance-16.0.0/doc/source/index.rst0000666000175100017510000000301713245511421016762 0ustar zuulzuul00000000000000.. 
   Copyright 2010 OpenStack Foundation
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
   implied. See the License for the specific language governing
   permissions and limitations under the License.

==================================
Welcome to Glance's documentation!
==================================

The Image service (glance) project provides a service where users can upload
and discover data assets that are meant to be used with other services. This
currently includes images and metadata definitions.

Glance image services include discovering, registering, and retrieving
virtual machine (VM) images. Glance has a RESTful API that allows querying
of VM image metadata as well as retrieval of the actual image.

.. include:: deprecation-note.inc

VM images made available through Glance can be stored in a variety of
locations from simple filesystems to object-storage systems like the
OpenStack Swift project.

.. toctree::
   :maxdepth: 2

   user/index
   admin/index
   install/index
   configuration/index
   cli/index
   contributor/index

.. toctree::
   :maxdepth: 1

   glossary

glance-16.0.0/doc/source/install/0000775000175100017510000000000013245511661016572 5ustar zuulzuul00000000000000
glance-16.0.0/doc/source/install/install-rdo.rst0000777000175100017510000002267113245511421021563 0ustar zuulzuul00000000000000
Install and configure (Red Hat)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.
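A recurring piece of configuration in the sections that follow is the
SQLAlchemy-style database ``connection`` URL,
``mysql+pymysql://glance:GLANCE_DBPASS@controller/glance``. The sketch below
(Python standard library only; not part of the official guide) pulls that URL
apart so each segment of the option is easy to map back to the prerequisite
steps — ``GLANCE_DBPASS`` and ``controller`` remain the guide's placeholders,
not real values:

```python
# Sketch: anatomy of the [database] ``connection`` option used in
# glance-api.conf and glance-registry.conf. GLANCE_DBPASS and
# ``controller`` are placeholders taken from this guide.
from urllib.parse import urlsplit

def build_connection(user, password, host, database):
    """Assemble a mysql+pymysql:// URL of the form this guide uses."""
    return "mysql+pymysql://{0}:{1}@{2}/{3}".format(
        user, password, host, database)

url = build_connection("glance", "GLANCE_DBPASS", "controller", "glance")

# urlsplit understands arbitrary schemes, so it can take the URL apart again.
parts = urlsplit(url)
assert parts.scheme == "mysql+pymysql"     # DB API driver (PyMySQL)
assert parts.username == "glance"          # user granted privileges below
assert parts.password == "GLANCE_DBPASS"   # placeholder password
assert parts.hostname == "controller"      # host running the database server
assert parts.path.lstrip("/") == "glance"  # database created below
```

If any assertion fails after you substitute your own values, the resulting
``connection`` option would not match the database, user, and password
created in the Prerequisites steps.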
Prerequisites ------------- Before you install and configure the Image service, you must create a database, service credentials, and API endpoints. #. To create the database, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p .. end * Create the ``glance`` database: .. code-block:: console MariaDB [(none)]> CREATE DATABASE glance; .. end * Grant proper access to the ``glance`` database: .. code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ IDENTIFIED BY 'GLANCE_DBPASS'; .. end Replace ``GLANCE_DBPASS`` with a suitable password. * Exit the database access client. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc .. end #. To create the service credentials, complete these steps: * Create the ``glance`` user: .. code-block:: console $ openstack user create --domain default --password-prompt glance User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 3f4e777c4062483ab8d9edd7dff829df | | name | glance | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ .. end * Add the ``admin`` role to the ``glance`` user and ``service`` project: .. code-block:: console $ openstack role add --project service --user glance admin .. end .. note:: This command provides no output. * Create the ``glance`` service entity: .. 
code-block:: console $ openstack service create --name glance \ --description "OpenStack Image" image +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Image | | enabled | True | | id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | name | glance | | type | image | +-------------+----------------------------------+ .. end #. Create the Image service API endpoints: .. code-block:: console $ openstack endpoint create --region RegionOne \ image public http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 340be3625e9b4239a6415d034e98aace | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ image internal http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | a6e4b153c2ae4c919eccfdbb7dceb5d2 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ image admin http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 0c37ed58103f4300a84ff125a539032d | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | 
http://controller:9292 | +--------------+----------------------------------+ .. end Install and configure components -------------------------------- .. include:: note_configuration_vary_by_distribution.txt #. Install the packages: .. code-block:: console # yum install openstack-glance .. end 2. Edit the ``/etc/glance/glance-api.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. path /etc/glance/glance.conf .. code-block:: ini [database] # ... connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance .. end Replace ``GLANCE_DBPASS`` with the password you chose for the Image service database. * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, configure Identity service access: .. path /etc/glance/glance.conf .. code-block:: ini [keystone_authtoken] # ... auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = glance password = GLANCE_PASS [paste_deploy] # ... flavor = keystone .. end Replace ``GLANCE_PASS`` with the password you chose for the ``glance`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. * In the ``[glance_store]`` section, configure the local file system store and location of image files: .. path /etc/glance/glance.conf .. code-block:: ini [glance_store] # ... stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ .. end 3. Edit the ``/etc/glance/glance-registry.conf`` file and complete the following actions: .. include:: ../deprecate-registry.inc * In the ``[database]`` section, configure database access: .. path /etc/glance/glance-registry.conf .. code-block:: ini [database] # ... connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance .. 
end Replace ``GLANCE_DBPASS`` with the password you chose for the Image service database. * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, configure Identity service access: .. path /etc/glance/glance-registry.conf .. code-block:: ini [keystone_authtoken] # ... auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = glance password = GLANCE_PASS [paste_deploy] # ... flavor = keystone .. end Replace ``GLANCE_PASS`` with the password you chose for the ``glance`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. 4. Populate the Image service database: .. code-block:: console # su -s /bin/sh -c "glance-manage db_sync" glance .. end .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- * Start the Image services and configure them to start when the system boots: .. code-block:: console # systemctl enable openstack-glance-api.service \ openstack-glance-registry.service # systemctl start openstack-glance-api.service \ openstack-glance-registry.service .. end glance-16.0.0/doc/source/install/note_configuration_vary_by_distribution.txt0000666000175100017510000000046613245511421027563 0ustar zuulzuul00000000000000.. note:: Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (``...``) in the configuration snippets indicates potential default configuration options that you should retain. glance-16.0.0/doc/source/install/install.rst0000666000175100017510000000042013245511421020762 0ustar zuulzuul00000000000000Install and configure ~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Image service, code-named glance, on the controller node. 
For simplicity, this configuration stores images on the local file system. .. toctree:: :glob: install-* glance-16.0.0/doc/source/install/install-ubuntu.rst0000777000175100017510000002241213245511421022312 0ustar zuulzuul00000000000000Install and configure (Ubuntu) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Image service, code-named glance, on the controller node. For simplicity, this configuration stores images on the local file system. Prerequisites ------------- Before you install and configure the Image service, you must create a database, service credentials, and API endpoints. #. To create the database, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p .. end * Create the ``glance`` database: .. code-block:: console MariaDB [(none)]> CREATE DATABASE glance; .. end * Grant proper access to the ``glance`` database: .. code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ IDENTIFIED BY 'GLANCE_DBPASS'; .. end Replace ``GLANCE_DBPASS`` with a suitable password. * Exit the database access client. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc .. end #. To create the service credentials, complete these steps: * Create the ``glance`` user: .. code-block:: console $ openstack user create --domain default --password-prompt glance User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 3f4e777c4062483ab8d9edd7dff829df | | name | glance | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ .. 
end * Add the ``admin`` role to the ``glance`` user and ``service`` project: .. code-block:: console $ openstack role add --project service --user glance admin .. end .. note:: This command provides no output. * Create the ``glance`` service entity: .. code-block:: console $ openstack service create --name glance \ --description "OpenStack Image" image +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Image | | enabled | True | | id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | name | glance | | type | image | +-------------+----------------------------------+ .. end #. Create the Image service API endpoints: .. code-block:: console $ openstack endpoint create --region RegionOne \ image public http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 340be3625e9b4239a6415d034e98aace | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ image internal http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | a6e4b153c2ae4c919eccfdbb7dceb5d2 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ image admin http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | 
enabled | True | | id | 0c37ed58103f4300a84ff125a539032d | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ .. end Install and configure components -------------------------------- .. include:: note_configuration_vary_by_distribution.txt #. Install the packages: .. code-block:: console # apt install glance .. end 2. Edit the ``/etc/glance/glance-api.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. path /etc/glance/glance.conf .. code-block:: ini [database] # ... connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance .. end Replace ``GLANCE_DBPASS`` with the password you chose for the Image service database. * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, configure Identity service access: .. path /etc/glance/glance.conf .. code-block:: ini [keystone_authtoken] # ... auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = glance password = GLANCE_PASS [paste_deploy] # ... flavor = keystone .. end Replace ``GLANCE_PASS`` with the password you chose for the ``glance`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. * In the ``[glance_store]`` section, configure the local file system store and location of image files: .. path /etc/glance/glance.conf .. code-block:: ini [glance_store] # ... stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ .. end 3. Edit the ``/etc/glance/glance-registry.conf`` file and complete the following actions: .. 
include:: ../deprecate-registry.inc * In the ``[database]`` section, configure database access: .. path /etc/glance/glance-registry.conf .. code-block:: ini [database] # ... connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance .. end Replace ``GLANCE_DBPASS`` with the password you chose for the Image service database. * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, configure Identity service access: .. path /etc/glance/glance-registry.conf .. code-block:: ini [keystone_authtoken] # ... auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = glance password = GLANCE_PASS [paste_deploy] # ... flavor = keystone .. end Replace ``GLANCE_PASS`` with the password you chose for the ``glance`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. 4. Populate the Image service database: .. code-block:: console # su -s /bin/sh -c "glance-manage db_sync" glance .. end .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- #. Restart the Image services: .. code-block:: console # service glance-registry restart # service glance-api restart .. end glance-16.0.0/doc/source/install/verify.rst0000666000175100017510000001016113245511426020630 0ustar zuulzuul00000000000000Verify operation ~~~~~~~~~~~~~~~~ Verify operation of the Image service using `CirrOS `__, a small Linux image that helps you test your OpenStack deployment. For more information about how to download and build images, see `OpenStack Virtual Machine Image Guide `__. For information about how to manage images, see the `OpenStack End User Guide `__. .. note:: Perform these commands on the controller node. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . 
admin-openrc .. end #. Download the source image: .. code-block:: console $ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img .. end .. note:: Install ``wget`` if your distribution does not include it. #. Upload the image to the Image service using the :term:`QCOW2 ` disk format, :term:`bare` container format, and public visibility so all projects can access it: .. code-block:: console $ openstack image create "cirros" \ --file cirros-0.3.5-x86_64-disk.img \ --disk-format qcow2 --container-format bare \ --public +------------------+------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------+ | checksum | 133eae9fb1c98f45894a4e60d8736619 | | container_format | bare | | created_at | 2015-03-26T16:52:10Z | | disk_format | qcow2 | | file | /v2/images/cc5c6982-4910-471e-b864-1098015901b5/file | | id | cc5c6982-4910-471e-b864-1098015901b5 | | min_disk | 0 | | min_ram | 0 | | name | cirros | | owner | ae7a98326b9c455588edd2656d723b9d | | protected | False | | schema | /v2/schemas/image | | size | 13200896 | | status | active | | tags | | | updated_at | 2015-03-26T16:52:10Z | | virtual_size | None | | visibility | public | +------------------+------------------------------------------------------+ .. end For information about the :command:`openstack image create` parameters, see `Create or update an image (glance) `__ in the ``OpenStack User Guide``. For information about disk and container formats for images, see `Disk and container formats for images `__ in the ``OpenStack Virtual Machine Image Guide``. .. note:: OpenStack generates IDs dynamically, so you will see different values in the example command output. #. Confirm upload of the image and validate attributes: .. 
code-block:: console $ openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active | +--------------------------------------+--------+--------+ .. end glance-16.0.0/doc/source/install/get-started.rst0000666000175100017510000000570313245511421021550 0ustar zuulzuul00000000000000====================== Image service overview ====================== The Image service (glance) enables users to discover, register, and retrieve virtual machine images. It offers a :term:`REST ` API that enables you to query virtual machine image metadata and retrieve an actual image. You can store virtual machine images made available through the Image service in a variety of locations, from simple file systems to object-storage systems like OpenStack Object Storage. .. important:: For simplicity, this guide describes configuring the Image service to use the ``file`` back end, which uploads and stores in a directory on the controller node hosting the Image service. By default, this directory is ``/var/lib/glance/images/``. Before you proceed, ensure that the controller node has at least several gigabytes of space available in this directory. Keep in mind that since the ``file`` back end is often local to a controller node, it is not typically suitable for a multi-node glance deployment. For information on requirements for other back ends, see `Configuration Reference <../configuration/index.html>`__. The OpenStack Image service is central to Infrastructure-as-a-Service (IaaS). It accepts API requests for disk or server images, and metadata definitions from end users or OpenStack Compute components. It also supports the storage of disk or server images on various repository types, including OpenStack Object Storage. A number of periodic processes run on the OpenStack Image service to support caching. 
Replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers. The OpenStack Image service includes the following components: glance-api Accepts Image API calls for image discovery, retrieval, and storage. glance-registry Stores, processes, and retrieves metadata about images. Metadata includes items such as size and type. .. warning:: The registry is a private internal service meant for use by OpenStack Image service. Do not expose this service to users. .. include:: ../deprecate-registry.inc Database Stores image metadata and you can choose your database depending on your preference. Most deployments use MySQL or SQLite. Storage repository for image files Various repository types are supported including normal file systems (or any filesystem mounted on the glance-api controller node), Object Storage, RADOS block devices, VMware datastore, and HTTP. Note that some repositories will only support read-only usage. Metadata definition service A common API for vendors, admins, services, and users to meaningfully define their own custom metadata. This metadata can be used on different types of resources like images, artifacts, volumes, flavors, and aggregates. A definition includes the new property's key, description, constraints, and the resource types which it can be associated with. glance-16.0.0/doc/source/install/install-obs.rst0000777000175100017510000002350513245511421021557 0ustar zuulzuul00000000000000Install and configure (SUSE) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Image service, code-named glance, on the controller node. For simplicity, this configuration stores images on the local file system. Prerequisites ------------- Before you install and configure the Image service, you must create a database, service credentials, and API endpoints. #. 
To create the database, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p .. end * Create the ``glance`` database: .. code-block:: console MariaDB [(none)]> CREATE DATABASE glance; .. end * Grant proper access to the ``glance`` database: .. code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ IDENTIFIED BY 'GLANCE_DBPASS'; .. end Replace ``GLANCE_DBPASS`` with a suitable password. * Exit the database access client. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc .. end #. To create the service credentials, complete these steps: * Create the ``glance`` user: .. code-block:: console $ openstack user create --domain default --password-prompt glance User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 3f4e777c4062483ab8d9edd7dff829df | | name | glance | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ .. end * Add the ``admin`` role to the ``glance`` user and ``service`` project: .. code-block:: console $ openstack role add --project service --user glance admin .. end .. note:: This command provides no output. * Create the ``glance`` service entity: .. 
code-block:: console $ openstack service create --name glance \ --description "OpenStack Image" image +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Image | | enabled | True | | id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | name | glance | | type | image | +-------------+----------------------------------+ .. end #. Create the Image service API endpoints: .. code-block:: console $ openstack endpoint create --region RegionOne \ image public http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 340be3625e9b4239a6415d034e98aace | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ image internal http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | a6e4b153c2ae4c919eccfdbb7dceb5d2 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ image admin http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 0c37ed58103f4300a84ff125a539032d | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | 
http://controller:9292 | +--------------+----------------------------------+ .. end Install and configure components -------------------------------- .. include:: note_configuration_vary_by_distribution.txt .. note:: Starting with the Newton release, SUSE OpenStack packages are shipping with the upstream default configuration files. For example ``/etc/glance/glance-api.conf`` or ``/etc/glance/glance-registry.conf``, with customizations in ``/etc/glance/glance-api.conf.d/`` or ``/etc/glance/glance-registry.conf.d/``. While the following instructions modify the default configuration files, adding new files in ``/etc/glance/glance-api.conf.d`` or ``/etc/glance/glance-registry.conf.d`` achieves the same result. #. Install the packages: .. code-block:: console # zypper install openstack-glance \ openstack-glance-api openstack-glance-registry .. end 2. Edit the ``/etc/glance/glance-api.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. path /etc/glance/glance.conf .. code-block:: ini [database] # ... connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance .. end Replace ``GLANCE_DBPASS`` with the password you chose for the Image service database. * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, configure Identity service access: .. path /etc/glance/glance.conf .. code-block:: ini [keystone_authtoken] # ... auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = glance password = GLANCE_PASS [paste_deploy] # ... flavor = keystone .. end Replace ``GLANCE_PASS`` with the password you chose for the ``glance`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. 
* In the ``[glance_store]`` section, configure the local file system store and location of image files: .. path /etc/glance/glance.conf .. code-block:: ini [glance_store] # ... stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ .. end 3. Edit the ``/etc/glance/glance-registry.conf`` file and complete the following actions: .. include:: ../deprecate-registry.inc * In the ``[database]`` section, configure database access: .. path /etc/glance/glance-registry.conf .. code-block:: ini [database] # ... connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance .. end Replace ``GLANCE_DBPASS`` with the password you chose for the Image service database. * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, configure Identity service access: .. path /etc/glance/glance-registry.conf .. code-block:: ini [keystone_authtoken] # ... auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = glance password = GLANCE_PASS [paste_deploy] # ... flavor = keystone .. end Replace ``GLANCE_PASS`` with the password you chose for the ``glance`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. 4. Populate the Image service database: .. code-block:: console # su -s /bin/sh -c "glance-manage db_sync" glance .. end .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- * Start the Image services and configure them to start when the system boots: .. code-block:: console # systemctl enable openstack-glance-api.service \ openstack-glance-registry.service # systemctl start openstack-glance-api.service \ openstack-glance-registry.service .. end glance-16.0.0/doc/source/install/index.rst0000666000175100017510000000274313245511421020435 0ustar zuulzuul00000000000000.. Copyright 2011 OpenStack Foundation All Rights Reserved. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============== Installation ============== .. toctree:: get-started install.rst verify.rst Ocata ~~~~~ To install Glance, see the Ocata Image service install guide for each distribution: - `Ubuntu `__ - `CentOS and RHEL `__ - `openSUSE and SUSE Linux Enterprise `__ Newton ~~~~~~ To install Glance, see the Newton Image service install guide for each distribution: - `Ubuntu `__ - `CentOS and RHEL `__ - `openSUSE and SUSE Linux Enterprise `__ glance-16.0.0/doc/source/install/install-debian.rst0000666000175100017510000002241213245511421022207 0ustar zuulzuul00000000000000Install and configure (Debian) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This section describes how to install and configure the Image service, code-named glance, on the controller node. For simplicity, this configuration stores images on the local file system. Prerequisites ------------- Before you install and configure the Image service, you must create a database, service credentials, and API endpoints. #. To create the database, complete these steps: * Use the database access client to connect to the database server as the ``root`` user: .. code-block:: console $ mysql -u root -p .. end * Create the ``glance`` database: .. code-block:: console MariaDB [(none)]> CREATE DATABASE glance; .. end * Grant proper access to the ``glance`` database: .. 
code-block:: console MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ IDENTIFIED BY 'GLANCE_DBPASS'; MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ IDENTIFIED BY 'GLANCE_DBPASS'; .. end Replace ``GLANCE_DBPASS`` with a suitable password. * Exit the database access client. #. Source the ``admin`` credentials to gain access to admin-only CLI commands: .. code-block:: console $ . admin-openrc .. end #. To create the service credentials, complete these steps: * Create the ``glance`` user: .. code-block:: console $ openstack user create --domain default --password-prompt glance User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 3f4e777c4062483ab8d9edd7dff829df | | name | glance | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ .. end * Add the ``admin`` role to the ``glance`` user and ``service`` project: .. code-block:: console $ openstack role add --project service --user glance admin .. end .. note:: This command provides no output. * Create the ``glance`` service entity: .. code-block:: console $ openstack service create --name glance \ --description "OpenStack Image" image +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Image | | enabled | True | | id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | name | glance | | type | image | +-------------+----------------------------------+ .. end #. Create the Image service API endpoints: .. 
code-block:: console $ openstack endpoint create --region RegionOne \ image public http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 340be3625e9b4239a6415d034e98aace | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ image internal http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | a6e4b153c2ae4c919eccfdbb7dceb5d2 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ $ openstack endpoint create --region RegionOne \ image admin http://controller:9292 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 0c37ed58103f4300a84ff125a539032d | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | | service_name | glance | | service_type | image | | url | http://controller:9292 | +--------------+----------------------------------+ .. end Install and configure components -------------------------------- .. include:: note_configuration_vary_by_distribution.txt #. Install the packages: .. code-block:: console # apt install glance .. end 2. Edit the ``/etc/glance/glance-api.conf`` file and complete the following actions: * In the ``[database]`` section, configure database access: .. path /etc/glance/glance.conf .. 
code-block:: ini [database] # ... connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance .. end Replace ``GLANCE_DBPASS`` with the password you chose for the Image service database. * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, configure Identity service access: .. path /etc/glance/glance.conf .. code-block:: ini [keystone_authtoken] # ... auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = glance password = GLANCE_PASS [paste_deploy] # ... flavor = keystone .. end Replace ``GLANCE_PASS`` with the password you chose for the ``glance`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. * In the ``[glance_store]`` section, configure the local file system store and location of image files: .. path /etc/glance/glance.conf .. code-block:: ini [glance_store] # ... stores = file,http default_store = file filesystem_store_datadir = /var/lib/glance/images/ .. end 3. Edit the ``/etc/glance/glance-registry.conf`` file and complete the following actions: .. include:: ../deprecate-registry.inc * In the ``[database]`` section, configure database access: .. path /etc/glance/glance-registry.conf .. code-block:: ini [database] # ... connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance .. end Replace ``GLANCE_DBPASS`` with the password you chose for the Image service database. * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, configure Identity service access: .. path /etc/glance/glance-registry.conf .. code-block:: ini [keystone_authtoken] # ... 
auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = glance password = GLANCE_PASS [paste_deploy] # ... flavor = keystone .. end Replace ``GLANCE_PASS`` with the password you chose for the ``glance`` user in the Identity service. .. note:: Comment out or remove any other options in the ``[keystone_authtoken]`` section. 4. Populate the Image service database: .. code-block:: console # su -s /bin/sh -c "glance-manage db_sync" glance .. end .. note:: Ignore any deprecation messages in this output. Finalize installation --------------------- #. Restart the Image services: .. code-block:: console # service glance-registry restart # service glance-api restart .. end glance-16.0.0/doc/source/deprecation-note.inc0000666000175100017510000000064413245511421021057 0ustar zuulzuul00000000000000.. note:: The Images API v1 has been DEPRECATED in the Newton release. The migration path is to use the `Images API v2 `_ instead of version 1 of the API. The Images API v1 will ultimately be removed, following the `OpenStack standard deprecation policy `_. glance-16.0.0/doc/source/images_src/0000775000175100017510000000000013245511661017240 5ustar zuulzuul00000000000000glance-16.0.0/doc/source/images_src/glance_layers.graphml0000666000175100017510000005162613245511421023432 0ustar zuulzuul00000000000000 Domain Router api/v2/router.py REST API api/v2/* Auth api/authorization.py Notifier notifier.py Policy api/policy.py Quota quota/__init__.py Location location.py DB db/__init__.py Registry (optional) registry/v2/* Data Access db/sqlalchemy/api.py A Client Glance Store DBMS Property protection (optional) api/property_protections.py glance-16.0.0/doc/source/images_src/image_status_transition.dot0000666000175100017510000000377013245511421024712 0ustar zuulzuul00000000000000/* # All Rights Reserved. # Copyright 2013 IBM Corp. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. */ /* This file can be compiled with graphviz by issuing the following command: dot -Tpng -oimage_status_transition.png image_status_transition.dot See http://www.graphviz.org for more info. */ digraph { node [shape="doublecircle" color="#006699" style="filled" fillcolor="#33CCFF" fixedsize="True" width="1.5" height="1.5"]; "" -> "queued" [label="create image"]; "queued" -> "active" [label="add location*"]; "queued" -> "saving" [label="upload"]; "queued" -> "uploading" [label="stage upload"]; "queued" -> "deleted" [label="delete"]; "saving" -> "active" [label="upload succeed"]; "saving" -> "killed" [label="[v1] upload fail"]; "saving" -> "queued" [label="[v2] upload fail"]; "saving" -> "deleted" [label="delete"]; "uploading" -> "importing" [label="import"]; "uploading" -> "queued" [label="stage upload fail"]; "uploading" -> "deleted" [label="delete"]; "importing" -> "active" [label="import succeed"]; "importing" -> "queued" [label="import fail"]; "importing" -> "deleted" [label="delete"]; "active" -> "pending_delete" [label="delayed delete"]; "active" -> "deleted" [label="delete"]; "active" -> "deactivated" [label="deactivate"]; "deactivated" -> "active" [label="reactivate"]; "deactivated" -> "deleted" [label="delete"]; "killed" -> "deleted" [label="delete"]; "pending_delete" -> "deleted" [label="after scrub time"]; } glance-16.0.0/doc/source/images_src/architecture.graphml0000666000175100017510000016112213245511421023275 0ustar 
zuulzuul00000000000000 Keystone Folder 2 API Glance Folder 3 REST API Glance DB Database Abstraction Layer Glance Domain Controller Auth Notifier Policy Quota Location DB AuthZ Middleware Registry Layer Glance Store Folder 4 Glance Store Drivers AuthN Supported Storages Folder 5 Swift Ceph Sheepdog ... Filesystem A client Folder 7 AuthN <?xml version="1.0" encoding="utf-8"?> <svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" width="40px" height="48px" viewBox="0 0 40 48" enable-background="new 0 0 40 48" xml:space="preserve"> <defs> </defs> <linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="370.2002" y1="655.0938" x2="409.4502" y2="655.0938" gradientTransform="matrix(1 0 0 1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#4D4D4D"/> <stop offset="0.0558" style="stop-color:#5F5F5F"/> <stop offset="0.2103" style="stop-color:#8D8D8D"/> <stop offset="0.3479" style="stop-color:#AEAEAE"/> <stop offset="0.4623" style="stop-color:#C2C2C2"/> <stop offset="0.5394" style="stop-color:#C9C9C9"/> <stop offset="0.6247" style="stop-color:#C5C5C5"/> <stop offset="0.7072" style="stop-color:#BABABA"/> <stop offset="0.7885" style="stop-color:#A6A6A6"/> <stop offset="0.869" style="stop-color:#8B8B8B"/> <stop offset="0.9484" style="stop-color:#686868"/> <stop offset="1" style="stop-color:#4D4D4D"/> </linearGradient> <path fill="url(#SVGID_1_)" d="M19.625,37.613C8.787,37.613,0,35.738,0,33.425v10c0,2.313,8.787,4.188,19.625,4.188 c10.839,0,19.625-1.875,19.625-4.188v-10C39.25,35.738,30.464,37.613,19.625,37.613z"/> <linearGradient id="SVGID_2_" gradientUnits="userSpaceOnUse" x1="370.2002" y1="649.0938" x2="409.4502" y2="649.0938" gradientTransform="matrix(1 0 0 1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#B3B3B3"/> <stop offset="0.0171" style="stop-color:#B6B6B6"/> <stop offset="0.235" style="stop-color:#D7D7D7"/> <stop offset="0.4168" style="stop-color:#EBEBEB"/> <stop offset="0.5394" 
style="stop-color:#F2F2F2"/> <stop offset="0.6579" style="stop-color:#EEEEEE"/> <stop offset="0.7724" style="stop-color:#E3E3E3"/> <stop offset="0.8853" style="stop-color:#CFCFCF"/> <stop offset="0.9965" style="stop-color:#B4B4B4"/> <stop offset="1" style="stop-color:#B3B3B3"/> </linearGradient> <path fill="url(#SVGID_2_)" d="M19.625,37.613c10.839,0,19.625-1.875,19.625-4.188l-1.229-2c0,2.168-8.235,3.927-18.396,3.927 c-9.481,0-17.396-1.959-18.396-3.927l-1.229,2C0,35.738,8.787,37.613,19.625,37.613z"/> <linearGradient id="SVGID_3_" gradientUnits="userSpaceOnUse" x1="371.4297" y1="646" x2="408.2217" y2="646" gradientTransform="matrix(1 0 0 1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#C9C9C9"/> <stop offset="1" style="stop-color:#808080"/> </linearGradient> <ellipse fill="url(#SVGID_3_)" cx="19.625" cy="31.425" rx="18.396" ry="3.926"/> <linearGradient id="SVGID_4_" gradientUnits="userSpaceOnUse" x1="370.2002" y1="641.0938" x2="409.4502" y2="641.0938" gradientTransform="matrix(1 0 0 1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#4D4D4D"/> <stop offset="0.0558" style="stop-color:#5F5F5F"/> <stop offset="0.2103" style="stop-color:#8D8D8D"/> <stop offset="0.3479" style="stop-color:#AEAEAE"/> <stop offset="0.4623" style="stop-color:#C2C2C2"/> <stop offset="0.5394" style="stop-color:#C9C9C9"/> <stop offset="0.6247" style="stop-color:#C5C5C5"/> <stop offset="0.7072" style="stop-color:#BABABA"/> <stop offset="0.7885" style="stop-color:#A6A6A6"/> <stop offset="0.869" style="stop-color:#8B8B8B"/> <stop offset="0.9484" style="stop-color:#686868"/> <stop offset="1" style="stop-color:#4D4D4D"/> </linearGradient> <path fill="url(#SVGID_4_)" d="M19.625,23.613C8.787,23.613,0,21.738,0,19.425v10c0,2.313,8.787,4.188,19.625,4.188 c10.839,0,19.625-1.875,19.625-4.188v-10C39.25,21.738,30.464,23.613,19.625,23.613z"/> <linearGradient id="SVGID_5_" gradientUnits="userSpaceOnUse" x1="370.2002" y1="635.0938" x2="409.4502" y2="635.0938" gradientTransform="matrix(1 0 0 
1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#B3B3B3"/> <stop offset="0.0171" style="stop-color:#B6B6B6"/> <stop offset="0.235" style="stop-color:#D7D7D7"/> <stop offset="0.4168" style="stop-color:#EBEBEB"/> <stop offset="0.5394" style="stop-color:#F2F2F2"/> <stop offset="0.6579" style="stop-color:#EEEEEE"/> <stop offset="0.7724" style="stop-color:#E3E3E3"/> <stop offset="0.8853" style="stop-color:#CFCFCF"/> <stop offset="0.9965" style="stop-color:#B4B4B4"/> <stop offset="1" style="stop-color:#B3B3B3"/> </linearGradient> <path fill="url(#SVGID_5_)" d="M19.625,23.613c10.839,0,19.625-1.875,19.625-4.188l-1.229-2c0,2.168-8.235,3.926-18.396,3.926 c-9.481,0-17.396-1.959-18.396-3.926l-1.229,2C0,21.738,8.787,23.613,19.625,23.613z"/> <linearGradient id="SVGID_6_" gradientUnits="userSpaceOnUse" x1="371.4297" y1="632" x2="408.2217" y2="632" gradientTransform="matrix(1 0 0 1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#C9C9C9"/> <stop offset="1" style="stop-color:#808080"/> </linearGradient> <ellipse fill="url(#SVGID_6_)" cx="19.625" cy="17.426" rx="18.396" ry="3.926"/> <linearGradient id="SVGID_7_" gradientUnits="userSpaceOnUse" x1="370.2002" y1="627.5938" x2="409.4502" y2="627.5938" gradientTransform="matrix(1 0 0 1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#4D4D4D"/> <stop offset="0.0558" style="stop-color:#5F5F5F"/> <stop offset="0.2103" style="stop-color:#8D8D8D"/> <stop offset="0.3479" style="stop-color:#AEAEAE"/> <stop offset="0.4623" style="stop-color:#C2C2C2"/> <stop offset="0.5394" style="stop-color:#C9C9C9"/> <stop offset="0.6247" style="stop-color:#C5C5C5"/> <stop offset="0.7072" style="stop-color:#BABABA"/> <stop offset="0.7885" style="stop-color:#A6A6A6"/> <stop offset="0.869" style="stop-color:#8B8B8B"/> <stop offset="0.9484" style="stop-color:#686868"/> <stop offset="1" style="stop-color:#4D4D4D"/> </linearGradient> <path fill="url(#SVGID_7_)" 
d="M19.625,10.113C8.787,10.113,0,8.238,0,5.925v10c0,2.313,8.787,4.188,19.625,4.188 c10.839,0,19.625-1.875,19.625-4.188v-10C39.25,8.238,30.464,10.113,19.625,10.113z"/> <linearGradient id="SVGID_8_" gradientUnits="userSpaceOnUse" x1="370.2002" y1="621.5938" x2="409.4502" y2="621.5938" gradientTransform="matrix(1 0 0 1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#B3B3B3"/> <stop offset="0.0171" style="stop-color:#B6B6B6"/> <stop offset="0.235" style="stop-color:#D7D7D7"/> <stop offset="0.4168" style="stop-color:#EBEBEB"/> <stop offset="0.5394" style="stop-color:#F2F2F2"/> <stop offset="0.6579" style="stop-color:#EEEEEE"/> <stop offset="0.7724" style="stop-color:#E3E3E3"/> <stop offset="0.8853" style="stop-color:#CFCFCF"/> <stop offset="0.9965" style="stop-color:#B4B4B4"/> <stop offset="1" style="stop-color:#B3B3B3"/> </linearGradient> <path fill="url(#SVGID_8_)" d="M19.625,10.113c10.839,0,19.625-1.875,19.625-4.188l-1.229-2c0,2.168-8.235,3.926-18.396,3.926 c-9.481,0-17.396-1.959-18.396-3.926L0,5.925C0,8.238,8.787,10.113,19.625,10.113z"/> <linearGradient id="SVGID_9_" gradientUnits="userSpaceOnUse" x1="371.4297" y1="618.5" x2="408.2217" y2="618.5" gradientTransform="matrix(1 0 0 1 -370.2002 -614.5742)"> <stop offset="0" style="stop-color:#C9C9C9"/> <stop offset="1" style="stop-color:#808080"/> </linearGradient> <ellipse fill="url(#SVGID_9_)" cx="19.625" cy="3.926" rx="18.396" ry="3.926"/> <path opacity="0.24" fill="#FFFFFF" enable-background="new " d="M31.291,46.792c0,0-4.313,0.578-7.249,0.694 C20.917,47.613,15,47.613,15,47.613l-2.443-10.279l-0.119-2.283l-1.231-1.842L9.789,23.024l-0.082-0.119L9.3,20.715l-1.45-1.44 L5.329,8.793c0,0,5.296,0.882,7.234,1.07s8.375,0.25,8.375,0.25l3,9.875l-0.25,1.313l1.063,2.168l2.312,9.644l-0.375,1.875 l1.627,2.193L31.291,46.792z"/> </svg> <?xml version="1.0" encoding="utf-8"?> <svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" width="41px" height="48px" 
viewBox="-0.875 -0.887 41 48" enable-background="new -0.875 -0.887 41 48" xml:space="preserve"> <defs> </defs> <linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="642.8008" y1="-979.1445" x2="682.0508" y2="-979.1445" gradientTransform="matrix(1 0 0 -1 -642.8008 -939.4756)"> <stop offset="0" style="stop-color:#3C89C9"/> <stop offset="0.1482" style="stop-color:#60A6DD"/> <stop offset="0.3113" style="stop-color:#81C1F0"/> <stop offset="0.4476" style="stop-color:#95D1FB"/> <stop offset="0.5394" style="stop-color:#9CD7FF"/> <stop offset="0.636" style="stop-color:#98D4FD"/> <stop offset="0.7293" style="stop-color:#8DCAF6"/> <stop offset="0.8214" style="stop-color:#79BBEB"/> <stop offset="0.912" style="stop-color:#5EA5DC"/> <stop offset="1" style="stop-color:#3C89C9"/> </linearGradient> <path fill="url(#SVGID_1_)" d="M19.625,36.763C8.787,36.763,0,34.888,0,32.575v10c0,2.313,8.787,4.188,19.625,4.188 c10.839,0,19.625-1.875,19.625-4.188v-10C39.25,34.888,30.464,36.763,19.625,36.763z"/> <linearGradient id="SVGID_2_" gradientUnits="userSpaceOnUse" x1="642.8008" y1="-973.1445" x2="682.0508" y2="-973.1445" gradientTransform="matrix(1 0 0 -1 -642.8008 -939.4756)"> <stop offset="0" style="stop-color:#9CD7FF"/> <stop offset="0.0039" style="stop-color:#9DD7FF"/> <stop offset="0.2273" style="stop-color:#BDE5FF"/> <stop offset="0.4138" style="stop-color:#D1EEFF"/> <stop offset="0.5394" style="stop-color:#D9F1FF"/> <stop offset="0.6155" style="stop-color:#D5EFFE"/> <stop offset="0.6891" style="stop-color:#C9E7FA"/> <stop offset="0.7617" style="stop-color:#B6DAF3"/> <stop offset="0.8337" style="stop-color:#9AC8EA"/> <stop offset="0.9052" style="stop-color:#77B0DD"/> <stop offset="0.9754" style="stop-color:#4D94CF"/> <stop offset="1" style="stop-color:#3C89C9"/> </linearGradient> <path fill="url(#SVGID_2_)" d="M19.625,36.763c10.839,0,19.625-1.875,19.625-4.188l-1.229-2c0,2.168-8.235,3.927-18.396,3.927 
c-9.481,0-17.396-1.959-18.396-3.927l-1.229,2C0,34.888,8.787,36.763,19.625,36.763z"/> <path fill="#3C89C9" d="M19.625,26.468c10.16,0,19.625,2.775,19.625,2.775c-0.375,2.721-5.367,5.438-19.554,5.438 c-12.125,0-18.467-2.484-19.541-4.918C-0.127,29.125,9.465,26.468,19.625,26.468z"/> <linearGradient id="SVGID_3_" gradientUnits="userSpaceOnUse" x1="642.8008" y1="-965.6948" x2="682.0508" y2="-965.6948" gradientTransform="matrix(1 0 0 -1 -642.8008 -939.4756)"> <stop offset="0" style="stop-color:#3C89C9"/> <stop offset="0.1482" style="stop-color:#60A6DD"/> <stop offset="0.3113" style="stop-color:#81C1F0"/> <stop offset="0.4476" style="stop-color:#95D1FB"/> <stop offset="0.5394" style="stop-color:#9CD7FF"/> <stop offset="0.636" style="stop-color:#98D4FD"/> <stop offset="0.7293" style="stop-color:#8DCAF6"/> <stop offset="0.8214" style="stop-color:#79BBEB"/> <stop offset="0.912" style="stop-color:#5EA5DC"/> <stop offset="1" style="stop-color:#3C89C9"/> </linearGradient> <path fill="url(#SVGID_3_)" d="M19.625,23.313C8.787,23.313,0,21.438,0,19.125v10c0,2.313,8.787,4.188,19.625,4.188 c10.839,0,19.625-1.875,19.625-4.188v-10C39.25,21.438,30.464,23.313,19.625,23.313z"/> <linearGradient id="SVGID_4_" gradientUnits="userSpaceOnUse" x1="642.8008" y1="-959.6948" x2="682.0508" y2="-959.6948" gradientTransform="matrix(1 0 0 -1 -642.8008 -939.4756)"> <stop offset="0" style="stop-color:#9CD7FF"/> <stop offset="0.0039" style="stop-color:#9DD7FF"/> <stop offset="0.2273" style="stop-color:#BDE5FF"/> <stop offset="0.4138" style="stop-color:#D1EEFF"/> <stop offset="0.5394" style="stop-color:#D9F1FF"/> <stop offset="0.6155" style="stop-color:#D5EFFE"/> <stop offset="0.6891" style="stop-color:#C9E7FA"/> <stop offset="0.7617" style="stop-color:#B6DAF3"/> <stop offset="0.8337" style="stop-color:#9AC8EA"/> <stop offset="0.9052" style="stop-color:#77B0DD"/> <stop offset="0.9754" style="stop-color:#4D94CF"/> <stop offset="1" style="stop-color:#3C89C9"/> </linearGradient> <path fill="url(#SVGID_4_)" 
d="M19.625,23.313c10.839,0,19.625-1.875,19.625-4.188l-1.229-2c0,2.168-8.235,3.926-18.396,3.926 c-9.481,0-17.396-1.959-18.396-3.926l-1.229,2C0,21.438,8.787,23.313,19.625,23.313z"/> <path fill="#3C89C9" d="M19.476,13.019c10.161,0,19.625,2.775,19.625,2.775c-0.375,2.721-5.367,5.438-19.555,5.438 c-12.125,0-18.467-2.485-19.541-4.918C-0.277,15.674,9.316,13.019,19.476,13.019z"/> <linearGradient id="SVGID_5_" gradientUnits="userSpaceOnUse" x1="642.8008" y1="-952.4946" x2="682.0508" y2="-952.4946" gradientTransform="matrix(1 0 0 -1 -642.8008 -939.4756)"> <stop offset="0" style="stop-color:#3C89C9"/> <stop offset="0.1482" style="stop-color:#60A6DD"/> <stop offset="0.3113" style="stop-color:#81C1F0"/> <stop offset="0.4476" style="stop-color:#95D1FB"/> <stop offset="0.5394" style="stop-color:#9CD7FF"/> <stop offset="0.636" style="stop-color:#98D4FD"/> <stop offset="0.7293" style="stop-color:#8DCAF6"/> <stop offset="0.8214" style="stop-color:#79BBEB"/> <stop offset="0.912" style="stop-color:#5EA5DC"/> <stop offset="1" style="stop-color:#3C89C9"/> </linearGradient> <path fill="url(#SVGID_5_)" d="M19.625,10.113C8.787,10.113,0,8.238,0,5.925v10c0,2.313,8.787,4.188,19.625,4.188 c10.839,0,19.625-1.875,19.625-4.188v-10C39.25,8.238,30.464,10.113,19.625,10.113z"/> <linearGradient id="SVGID_6_" gradientUnits="userSpaceOnUse" x1="642.8008" y1="-946.4946" x2="682.0508" y2="-946.4946" gradientTransform="matrix(1 0 0 -1 -642.8008 -939.4756)"> <stop offset="0" style="stop-color:#9CD7FF"/> <stop offset="0.0039" style="stop-color:#9DD7FF"/> <stop offset="0.2273" style="stop-color:#BDE5FF"/> <stop offset="0.4138" style="stop-color:#D1EEFF"/> <stop offset="0.5394" style="stop-color:#D9F1FF"/> <stop offset="0.6155" style="stop-color:#D5EFFE"/> <stop offset="0.6891" style="stop-color:#C9E7FA"/> <stop offset="0.7617" style="stop-color:#B6DAF3"/> <stop offset="0.8337" style="stop-color:#9AC8EA"/> <stop offset="0.9052" style="stop-color:#77B0DD"/> <stop offset="0.9754" style="stop-color:#4D94CF"/> 
<stop offset="1" style="stop-color:#3C89C9"/> </linearGradient> <path fill="url(#SVGID_6_)" d="M19.625,10.113c10.839,0,19.625-1.875,19.625-4.188l-1.229-2c0,2.168-8.235,3.926-18.396,3.926 c-9.481,0-17.396-1.959-18.396-3.926L0,5.925C0,8.238,8.787,10.113,19.625,10.113z"/> <linearGradient id="SVGID_7_" gradientUnits="userSpaceOnUse" x1="644.0293" y1="-943.4014" x2="680.8223" y2="-943.4014" gradientTransform="matrix(1 0 0 -1 -642.8008 -939.4756)"> <stop offset="0" style="stop-color:#9CD7FF"/> <stop offset="1" style="stop-color:#3C89C9"/> </linearGradient> <ellipse fill="url(#SVGID_7_)" cx="19.625" cy="3.926" rx="18.396" ry="3.926"/> <path opacity="0.24" fill="#FFFFFF" enable-background="new " d="M31.04,45.982c0,0-4.354,0.664-7.29,0.781 c-3.125,0.125-8.952,0-8.952,0l-2.384-10.292l0.044-2.108l-1.251-1.154L9.789,23.024l-0.082-0.119L9.5,20.529l-1.65-1.254 L5.329,8.793c0,0,4.213,0.903,7.234,1.07s8.375,0.25,8.375,0.25l3,9.875l-0.25,1.313l1.063,2.168l2.312,9.645l-0.521,1.416 l1.46,1.834L31.04,45.982z"/> </svg> glance-16.0.0/doc/source/images_src/glance_db.graphml0000666000175100017510000003010613245511421022506 0ustar zuulzuul00000000000000 Images id: varchar(36), primary name: varchar(255), nullable size: bigint(20), nullable status: varchar(30) is_public: tinyint(1) created_at: datetime updated_at: datetime, nullable deleted_at: datetime, nullable deleted: tinyint(1) disk_format: varchar(20), nullable container_format: varchar(20), nullable checksum: varchar(32), nullable owner: varchar(255), nullable min_disk: int(11) min_ram: int(11) protected: tinyint(1) virtual_size: bigint(20), nullable image_locations id: int(11), primary image_id: varchar(36) value: text created_at: datetime updated_at: datetime, nullable deleted_at: datetime, nullable deleted: tinyint(1) meta_data: text, nullable status: varchar(30) image_members id: int(11), primary image_id: varchar(36) member: varchar(255) can_share: tiny_int(1) created_at: datetime updated_at: datetime, nullable deleted_at: 
datetime, nullable deleted: tinyint(1) status: varchar(20) image_properties id: int(11), primary image_id: varchar(36) name: varchar(255) value: text, nullable created_at: datetime updated_at: datetime, nullable deleted_at: datetime, nullable deleted: tinyint(1) image_tags id: int(11), primary image_id: varchar(36) value: varchar(255) created_at: datetime updated_at: datetime, nullable deleted_at: datetime, nullable deleted: tinyint(1) glance-16.0.0/doc/source/cli/0000775000175100017510000000000013245511661015673 5ustar zuulzuul00000000000000glance-16.0.0/doc/source/cli/glancecachecleaner.rst0000666000175100017510000000174213245511421022174 0ustar zuulzuul00000000000000==================== glance-cache-cleaner ==================== ---------------------------------------------------------------- Glance Image Cache Invalid Cache Entry and Stalled Image cleaner ---------------------------------------------------------------- .. include:: header.txt SYNOPSIS ======== glance-cache-cleaner [options] DESCRIPTION =========== This is meant to be run as a periodic task from cron. If something goes wrong while we're caching an image (for example the fetch times out, or an exception is raised), we create an 'invalid' entry. These entries are left around for debugging purposes. However, after some period of time, we want to clean these up. Also, if an incomplete image hangs around past the image_cache_stall_time period, we automatically sweep it up. OPTIONS ======= **General options** .. include:: general_options.txt FILES ===== **/etc/glance/glance-cache.conf** Default configuration file for the Glance Cache .. 
include:: footer.txt glance-16.0.0/doc/source/cli/header.txt0000666000175100017510000000030313245511421017656 0ustar zuulzuul00000000000000 :Author: OpenStack Glance Project Team :Contact: glance@lists.launchpad.net :Date: 2018-02-28 :Copyright: OpenStack Foundation :Version: 16.0.0 :Manual section: 1 :Manual group: cloud computing glance-16.0.0/doc/source/cli/glancecontrol.rst0000666000175100017510000000214013245511421021250 0ustar zuulzuul00000000000000============== glance-control ============== -------------------------------------- Glance daemon start/stop/reload helper -------------------------------------- .. include:: header.txt SYNOPSIS ======== glance-control [options] <SERVER> <COMMAND> [CONFPATH] Where <SERVER> is one of: all, api, glance-api, registry, glance-registry, scrubber, glance-scrubber And command is one of: start, status, stop, shutdown, restart, reload, force-reload And CONFPATH is the optional configuration file to use. OPTIONS ======= **General Options** .. include:: general_options.txt **--pid-file=PATH** File to use as pid file. Default: /var/run/glance/$server.pid **--await-child DELAY** Period to wait for service death in order to report exit code (default is to not wait at all) **--capture-output** Capture stdout/err in syslog instead of discarding **--nocapture-output** The inverse of --capture-output **--norespawn** The inverse of --respawn **--respawn** Restart service on unexpected death .. include:: footer.txt glance-16.0.0/doc/source/cli/glanceapi.rst0000666000175100017510000000076613245511421020355 0ustar zuulzuul00000000000000========== glance-api ========== --------------------------------------- Server for the Glance Image Service API --------------------------------------- .. include:: header.txt SYNOPSIS ======== glance-api [options] DESCRIPTION =========== glance-api is a server daemon that serves the Glance API OPTIONS ======= **General options** .. 
include:: general_options.txt

FILES
=====

**/etc/glance/glance-api.conf**
  Default configuration file for Glance API

.. include:: footer.txt

glance-16.0.0/doc/source/cli/glancecachepruner.rst

===================
glance-cache-pruner
===================

-------------------
Glance cache pruner
-------------------

.. include:: header.txt

SYNOPSIS
========

glance-cache-pruner [options]

DESCRIPTION
===========

Prunes images from the Glance cache when the space exceeds the value set in
the image_cache_max_size configuration option. This is meant to be run as a
periodic task, perhaps every half-hour.

OPTIONS
=======

**General options**

.. include:: general_options.txt

FILES
=====

**/etc/glance/glance-cache.conf**
  Default configuration file for the Glance Cache

.. include:: footer.txt

glance-16.0.0/doc/source/cli/glancereplicator.rst

=================
glance-replicator
=================

---------------------------------------------
Replicate images across multiple data centers
---------------------------------------------

.. include:: header.txt

SYNOPSIS
========

glance-replicator [options] [args]

DESCRIPTION
===========

glance-replicator is a utility that can be used to populate a new glance
server using the images stored in an existing glance server. The images in
the replicated glance server preserve the uuids, metadata, and image data
from the original.

COMMANDS
========

**help <command>**
  Output help for one of the commands below

**compare**
  What is missing from the slave glance?

**dump**
  Dump the contents of a glance instance to local disk.

**livecopy**
  Load the contents of one glance instance into another.

**load**
  Load the contents of a local directory into glance.

**size**
  Determine the size of a glance instance if dumped to disk.
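Conceptually, the ``compare`` command is a set difference over image metadata keyed by uuid: anything present on the master but absent from the slave is reported as missing. The sketch below is illustrative only; the function and variable names are invented and do not come from the replicator's actual code.

```python
# Illustrative sketch of what "compare" reports: images present on the
# master glance but absent from the slave. Names here are hypothetical.

def compare(master_images, slave_images):
    """Return the images the slave is missing.

    Each argument maps image uuid -> metadata dict.
    """
    missing = {}
    for uuid, meta in master_images.items():
        if uuid not in slave_images:
            missing[uuid] = meta
    return missing


master = {
    "6b3f0a1c": {"name": "cirros", "disk_format": "qcow2"},
    "9d2c4e7f": {"name": "ubuntu", "disk_format": "qcow2"},
}
slave = {
    "6b3f0a1c": {"name": "cirros", "disk_format": "qcow2"},
}

print(sorted(compare(master, slave)))  # the slave lacks "9d2c4e7f"
```

The real command additionally compares metadata of images present on both sides, but the core idea is this per-uuid difference.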
OPTIONS ======= **-h, --help** Show this help message and exit **-c CHUNKSIZE, --chunksize=CHUNKSIZE** Amount of data to transfer per HTTP write **-d, --debug** Print debugging information **-D DONTREPLICATE, --dontreplicate=DONTREPLICATE** List of fields to not replicate **-m, --metaonly** Only replicate metadata, not images **-l LOGFILE, --logfile=LOGFILE** Path of file to log to **-s, --syslog** Log to syslog instead of a file **-t TOKEN, --token=TOKEN** Pass in your authentication token if you have one. If you use this option the same token is used for both the master and the slave. **-M MASTERTOKEN, --mastertoken=MASTERTOKEN** Pass in your authentication token if you have one. This is the token used for the master. **-S SLAVETOKEN, --slavetoken=SLAVETOKEN** Pass in your authentication token if you have one. This is the token used for the slave. **-v, --verbose** Print more verbose output .. include:: footer.txt glance-16.0.0/doc/source/cli/glancecachemanage.rst0000666000175100017510000000346713245511421022021 0ustar zuulzuul00000000000000=================== glance-cache-manage =================== ------------------------ Cache management utility ------------------------ .. 
include:: header.txt SYNOPSIS ======== glance-cache-manage [options] [args] COMMANDS ======== **help ** Output help for one of the commands below **list-cached** List all images currently cached **list-queued** List all images currently queued for caching **queue-image** Queue an image for caching **delete-cached-image** Purges an image from the cache **delete-all-cached-images** Removes all images from the cache **delete-queued-image** Deletes an image from the cache queue **delete-all-queued-images** Deletes all images from the cache queue OPTIONS ======= **--version** show program's version number and exit **-h, --help** show this help message and exit **-v, --verbose** Print more verbose output **-d, --debug** Print more verbose output **-H ADDRESS, --host=ADDRESS** Address of Glance API host. Default: 0.0.0.0 **-p PORT, --port=PORT** Port the Glance API host listens on. Default: 9292 **-k, --insecure** Explicitly allow glance to perform "insecure" SSL (https) requests. The server's certificate will not be verified against any certificate authorities. This option should be used with caution. **-A TOKEN, --auth_token=TOKEN** Authentication token to use to identify the client to the glance server **-f, --force** Prevent select actions from requesting user confirmation **-S STRATEGY, --os-auth-strategy=STRATEGY** Authentication strategy (keystone or noauth) .. include:: openstack_options.txt .. 
include:: footer.txt

glance-16.0.0/doc/source/cli/general_options.txt

**-h, --help**
  Show the help message and exit

**--version**
  Print the version number and exit

**-v, --verbose**
  Print more verbose output

**--noverbose**
  Disable verbose output

**-d, --debug**
  Print debugging output (set logging level to DEBUG instead of default
  WARNING level)

**--nodebug**
  Disable debugging output

**--use-syslog**
  Use syslog for logging

**--nouse-syslog**
  Disable the use of syslog for logging

**--syslog-log-facility SYSLOG_LOG_FACILITY**
  syslog facility to receive log lines

**--config-dir DIR**
  Path to a config directory to pull \*.conf files from. This file set is
  sorted, to provide a predictable parse order if individual options are
  over-ridden. The set is parsed after the file(s) specified via previous
  --config-file arguments, hence over-ridden options in the directory take
  precedence. This means that configuration from files in a specified
  config-dir will always take precedence over configuration from files
  specified by --config-file, regardless of argument order.

**--config-file PATH**
  Path to a config file to use. Multiple config files can be specified by
  using this flag multiple times, for example,
  --config-file <file1> --config-file <file2>. Values in latter files take
  precedence.

**--log-config-append PATH**
**--log-config PATH**
  The name of the logging configuration file. It does not disable existing
  loggers, but just appends the specified logging configuration to any other
  existing logging options. Please see the Python logging module
  documentation for details on logging configuration files. The log-config
  name for this option is deprecated.

**--log-format FORMAT**
  A logging.Formatter log message format string which may use any of the
  available logging.LogRecord attributes. Default: None

**--log-date-format DATE_FORMAT**
  Format string for %(asctime)s in log records.
Default: None **--log-file PATH, --logfile PATH** (Optional) Name of log file to output to. If not set, logging will go to stdout. **--log-dir LOG_DIR, --logdir LOG_DIR** (Optional) The directory to keep log files in (will be prepended to --log-file) glance-16.0.0/doc/source/cli/footer.txt0000666000175100017510000000032413245511421017725 0ustar zuulzuul00000000000000 SEE ALSO ======== * `OpenStack Glance `__ BUGS ==== * Glance bugs are tracked in Launchpad so you can view current bugs at `OpenStack Glance `__ glance-16.0.0/doc/source/cli/index.rst0000666000175100017510000000017013245511421017526 0ustar zuulzuul00000000000000======================== Command Line Interface ======================== .. toctree:: :glob: :maxdepth: 1 * glance-16.0.0/doc/source/cli/glanceregistry.rst0000666000175100017510000000113013245511421021436 0ustar zuulzuul00000000000000=============== glance-registry =============== -------------------------------------- Server for the Glance Registry Service -------------------------------------- .. include:: header.txt .. include:: ../deprecate-registry.inc SYNOPSIS ======== glance-registry [options] DESCRIPTION =========== glance-registry is a server daemon that serves image metadata through a REST-like API. OPTIONS ======= **General options** .. include:: general_options.txt FILES ===== **/etc/glance/glance-registry.conf** Default configuration file for Glance Registry .. include:: footer.txt glance-16.0.0/doc/source/cli/glancescrubber.rst0000666000175100017510000001122513245511421021403 0ustar zuulzuul00000000000000=============== glance-scrubber =============== -------------------- Glance scrub service -------------------- .. include:: header.txt SYNOPSIS ======== glance-scrubber [options] DESCRIPTION =========== glance-scrubber is a utility that allows an operator to configure Glance for the asynchronous deletion of images. 
Whether this makes sense for your deployment depends upon the storage backend you are using and the size of typical images handled by your Glance installation. An image in glance is really a combination of an image record (stored in the database) and a file of image data (stored in a storage backend). Under normal operation, the image-delete call is synchronous, that is, Glance receives the DELETE request, deletes the image data from the storage backend, then deletes the image record from the database, and finally returns a 204 as the result of the call. If the backend is fast and deletion time is not a function of data size, these operations occur very quickly. For backends where deletion time is a function of data size, however, the image-delete operation can take a significant amount of time to complete, to the point where a client may timeout waiting for the response. This in turn leads to user dissatisfaction. To avoid this problem, Glance has a ``delayed_delete`` configuration option (False by default) that may be set in the **glance-api.conf** file. With this option enabled, when Glance receives a DELETE request, it does *only* the database part of the request, marking the image's status as ``pending_delete``, and returns immediately. (The ``pending_delete`` status is not visible to users; an image-show request for such an image will return 404.) However, it is important to note that when ``delayed_delete`` is enabled, *Glance does not delete image data from the storage backend*. That's where the glance-scrubber comes in. The glance-scrubber cleans up images that have been deleted. If you run Glance with ``delayed_delete`` enabled, you *must* run the glance-scrubber occasionally or your storage backend will eventually fill up with "deleted" image data. Configuration of glance-scrubber is done in the **glance-scrubber.conf** file. Options are explained in detail in comments in the sample configuration file, so we only point out a few of them here. 
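The delayed-delete lifecycle just described — an image DELETE flips the record to ``pending_delete`` and returns immediately, while a later scrubber run removes the backend data for images that have been pending at least ``scrub_time`` seconds — can be sketched as a toy model. This is purely illustrative; the class and method names are invented and do not mirror Glance's actual code.

```python
import time

# Toy model of delayed_delete + scrubbing. Illustrative only; these names
# are hypothetical and not taken from Glance internals.

class ToyImageService:
    def __init__(self, scrub_time=0):
        self.db = {}        # image_id -> {"status": ..., "deleted_at": ...}
        self.backend = {}   # image_id -> image data
        self.scrub_time = scrub_time

    def create(self, image_id, data):
        self.db[image_id] = {"status": "active", "deleted_at": None}
        self.backend[image_id] = data

    def delete(self, image_id):
        # With delayed_delete, only the DB record is touched; the call
        # returns immediately and the data stays in the storage backend.
        rec = self.db[image_id]
        rec["status"] = "pending_delete"
        rec["deleted_at"] = time.time()

    def show(self, image_id):
        rec = self.db.get(image_id)
        if rec is None or rec["status"] != "active":
            # pending_delete images are hidden from users (404)
            raise KeyError(image_id)
        return self.backend[image_id]

    def scrub(self, now=None):
        # One glance-scrubber pass: remove backend data for images that
        # have been pending_delete for at least scrub_time seconds, then
        # mark them deleted in the DB.
        now = time.time() if now is None else now
        for image_id, rec in self.db.items():
            if (rec["status"] == "pending_delete"
                    and now - rec["deleted_at"] >= self.scrub_time):
                self.backend.pop(image_id, None)
                rec["status"] = "deleted"
```

The key property the model captures: between ``delete()`` and ``scrub()``, the image is invisible via the API but its data still occupies the backend, which is why the scrubber must be run periodically.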
``scrub_time`` minimum time in seconds that an image will stay in ``pending_delete`` status (default is 0) ``scrub_pool_size`` configures a thread pool so that scrubbing can be performed in parallel (default is 1, that is, serial scrubbing) ``daemon`` a boolean indicating whether the scrubber should run as a daemon (default is False) ``wakeup_time`` time in seconds between runs when the scrubber is run in daemon mode (ignored if the scrubber is not being run in daemon mode) ``metadata_encryption_key`` If your **glance-api.conf** sets a value for this option (the default is to leave it unset), you must include the same setting in your **glance-scrubber.conf** or the scrubber won't be able to determine the locations of your image data. ``[database]`` As of the Queens release of Glance (16.0.0), the glance-scrubber does not use the deprecated Glance registry, but instead contacts the Glance database directly. Thus your **glance-scrubber.conf** file must contain a [database] section specifying the relevant information. ``[glance_store]`` This section of the file contains the configuration information for the storage backends used by your Glance installation. The usual situation is that whatever your **glance-api.conf** has for the ``[database]`` and ``[glance_store]`` configuration groups should go into your **glance-scrubber.conf**, too. Of course, if you have heavily customized your setup, you know better than we do what you are doing. The key thing is that the scrubber needs to be able to access the Glance database to determine what images need to be scrubbed (and to mark them as deleted once their associated data has been removed from the storage backend), and it needs the glance_store information so it can delete the image data. OPTIONS ======= **General options** .. include:: general_options.txt **-D, --daemon** Run as a long-running process. When not specified (the default) run the scrub operation once and then exits. 
When specified, do not exit; run the scrub operation on the wakeup_time
interval as specified in the config.

**--nodaemon**
  The inverse of --daemon. Runs the scrub operation once and then exits.
  This is the default.

FILES
=====

**/etc/glance/glance-scrubber.conf**
  Default configuration file for the Glance Scrubber

.. include:: footer.txt

glance-16.0.0/doc/source/cli/glancecacheprefetcher.rst

=======================
glance-cache-prefetcher
=======================

------------------------------
Glance Image Cache Pre-fetcher
------------------------------

.. include:: header.txt

SYNOPSIS
========

glance-cache-prefetcher [options]

DESCRIPTION
===========

This is meant to be run from the command line after queueing images to be
prefetched.

OPTIONS
=======

**General options**

.. include:: general_options.txt

FILES
=====

**/etc/glance/glance-cache.conf**
  Default configuration file for the Glance Cache

.. include:: footer.txt

glance-16.0.0/doc/source/cli/openstack_options.txt

**--os-auth-token=OS_AUTH_TOKEN**
  Defaults to env[OS_AUTH_TOKEN]

**--os-username=OS_USERNAME**
  Defaults to env[OS_USERNAME]

**--os-password=OS_PASSWORD**
  Defaults to env[OS_PASSWORD]

**--os-region-name=OS_REGION_NAME**
  Defaults to env[OS_REGION_NAME]

**--os-tenant-id=OS_TENANT_ID**
  Defaults to env[OS_TENANT_ID]

**--os-tenant-name=OS_TENANT_NAME**
  Defaults to env[OS_TENANT_NAME]

**--os-auth-url=OS_AUTH_URL**
  Defaults to env[OS_AUTH_URL]

glance-16.0.0/doc/source/cli/glancemanage.rst

=============
glance-manage
=============

-------------------------
Glance Management Utility
-------------------------

.. include:: header.txt

SYNOPSIS
========

glance-manage [options]

DESCRIPTION
===========

glance-manage is a utility for managing and configuring a Glance
installation.
One important use of glance-manage is to set up the database. To do this
run::

    glance-manage db_sync

Note: glance-manage commands can be run either like this::

    glance-manage db sync

or with the db commands concatenated, like this::

    glance-manage db_sync

COMMANDS
========

**db**
  This is the prefix for the commands below when used with a space rather
  than a _. For example "db version".

**db_version**
  This will print the current migration level of a glance database.

**db_upgrade [VERSION]**
  This will take an existing database and upgrade it to the specified
  VERSION.

**db_version_control**
  Place the database under migration control.

**db_sync [VERSION]**
  Place an existing database under migration control and upgrade it to the
  specified VERSION.

**db_expand**
  Run this command to expand the database as the first step of a rolling
  upgrade process.

**db_migrate**
  Run this command to migrate the database as the second step of a rolling
  upgrade process.

**db_contract**
  Run this command to contract the database as the last step of a rolling
  upgrade process.

**db_export_metadefs [PATH | PREFIX]**
  Export the metadata definitions into JSON format. By default the
  definitions are exported to the /etc/glance/metadefs directory.
  **Note: this command will overwrite existing files in the supplied or
  default path.**

**db_load_metadefs [PATH]**
  Load the metadata definitions into the glance database. By default the
  definitions are imported from the /etc/glance/metadefs directory.

**db_unload_metadefs**
  Unload the metadata definitions. Clears the contents of all the glance db
  tables including metadef_namespace_resource_types, metadef_tags,
  metadef_objects, metadef_resource_types, metadef_namespaces and
  metadef_properties.

OPTIONS
=======

**General Options**

.. include:: general_options.txt

..
include:: footer.txt CONFIGURATION ============= The following paths are searched for a ``glance-manage.conf`` file in the following order: * ``~/.glance`` * ``~/`` * ``/etc/glance`` * ``/etc`` All options set in ``glance-manage.conf`` override those set in ``glance-registry.conf`` and ``glance-api.conf``. glance-16.0.0/doc/source/admin/0000775000175100017510000000000013245511661016214 5ustar zuulzuul00000000000000glance-16.0.0/doc/source/admin/apache-httpd.rst0000666000175100017510000001536613245511421021317 0ustar zuulzuul00000000000000======================= Running Glance in HTTPD ======================= Since the Pike release Glance has packaged a wsgi script entrypoint that enables you to run it with a real web server like Apache HTTPD or nginx. To deploy this there are several patterns. This doc shows two common ways of deploying Glance with Apache HTTPD. .. warning:: As pointed out in the Pike and Queens release notes (see the "Known Issues" section of each), the Glance project team recommends that Glance be run in its normal standalone configuration, particularly in production environments. The full functionality of Glance is not available when Glance is deployed in the manner described in this document. In particular, the interoperable image import functionality does not work under such configuration. See the release notes for details. uWSGI Server HTTP Mode ---------------------- This is the current recommended way to deploy Glance with Apache HTTP and it is how we deploy Glance for testing every proposed commit to OpenStack. In this deployment method we use the uWSGI server as a web server bound to a random local port. Then we configure apache using mod_proxy to forward all incoming requests on the specified endpoint to that local webserver. This has the advantage of letting apache manage all inbound http connections, but letting uWSGI manage running the python code. 
It also means that when we make changes to Glance code or configuration we
don't need to restart all of apache (which may be running other services
too); we just need to restart the local uWSGI daemon.

The httpd/ directory contains sample files for configuring HTTPD to run
Glance under the uWSGI server in this configuration. To use the sample
configs, simply copy `httpd/uwsgi-glance-api.conf` to the appropriate
location for your Apache server.

On Debian/Ubuntu systems it is::

    /etc/apache2/sites-available/uwsgi-glance-api.conf

On Red Hat based systems it is::

    /etc/httpd/conf.d/uwsgi-glance-api.conf

Enable mod_proxy by running ``sudo a2enmod proxy``.

Then on Ubuntu/Debian systems enable the site by creating a symlink from the
file in ``sites-available`` to ``sites-enabled``. (This is not required on
Red Hat based systems)::

    ln -s /etc/apache2/sites-available/uwsgi-glance-api.conf /etc/apache2/sites-enabled

Start or restart HTTPD to pick up the new configuration.

.. NOTE::
   Be careful when setting up other proxies/endpoints in the same
   VirtualHost on Apache HTTPD. If any are using ``SetEnv proxy-sendcl 1``
   then Apache HTTPD will buffer the incoming request to local disk before
   sending it to glance. This will likely cause problems when running in
   this configuration and is not necessary. (However, it is necessary if
   using mod_proxy_uwsgi.) For more details, see the section on
   :ref:`mod_proxy_uwsgi` below.

Now we need to configure and start the uWSGI service. Copy the
`httpd/glance-api-uwsgi.ini` file to `/etc/glance`. Update the file to match
your system configuration (for example, you'll want to set the number of
processes and threads).

Install the uWSGI server and start the glance-api server using uWSGI::

    sudo pip install uwsgi
    uwsgi --ini /etc/glance/glance-api-uwsgi.ini

.. NOTE::
   In the sample configs port 60999 is used, but this doesn't matter and is
   just a randomly selected number. This is not a contract on the port used
   for the local uwsgi daemon.

..
_mod_proxy_uwsgi:

mod_proxy_uwsgi
'''''''''''''''

.. WARNING::
   Running Glance under HTTPD in this configuration will only work on
   Python 2 if you use ``Transfer-Encoding: chunked``. Also, if running
   with Python 2, Apache will buffer the chunked encoding before passing
   the request on to uWSGI. See bug:
   https://github.com/unbit/uwsgi/issues/1540

Instead of running uWSGI as a webserver listening on a local port and then
having Apache HTTP proxy all the incoming requests with mod_proxy, the
normally recommended way of deploying the uWSGI server with Apache HTTPD is
to use mod_proxy_uwsgi and set up a local socket file for uWSGI to listen
on. Apache will send the requests using the uwsgi protocol over this local
socket file. However, there are issues with doing this and using chunked
encoding, so this is not recommended for use with Glance.

You can work around these issues by configuring your Apache proxy to buffer
the chunked data and send the full content length to the uWSGI server. You
do this by adding::

    SetEnv proxy-sendcl 1

to the apache config file using mod_proxy_uwsgi. For more details on using
mod_proxy_uwsgi see the official docs:
http://uwsgi-docs.readthedocs.io/en/latest/Apache.html?highlight=mod_uwsgi_proxy#mod-proxy-uwsgi

There are some additional considerations when doing this, though. Having
Apache locally buffer the chunked data to disk before passing it to uWSGI
means you'll need to have sufficient disk space in /tmp (or whatever you set
TMPDIR to) to store all the disk files. The other aspect to consider is that
this buffering can take some time to write the images to disk. To prevent
random failures you'll likely have to increase timeout values in the uWSGI
configuration file to ensure uWSGI will wait long enough for this to happen.
(Depending on the uploaded image file sizes it may be necessary to set the
timeouts to multiple minutes.)

mod_wsgi
--------

This deployment method is not recommended for use with Glance.
The mod_wsgi protocol does not support ``Transfer-Encoding: chunked`` and therefore makes it unsuitable for use with Glance. However, you could theoretically deploy Glance using mod_wsgi but it will fail on any requests that use a chunked transfer encoding. .. _uwsgi_glossary: Glossary -------- .. glossary:: uwsgi The native protocol used by the uWSGI server. (The acronym is written in all lowercase on purpose.) https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html uWSGI A project that aims at developing a full stack for building hosting services. It produces software, the uWSGI server, that is exposed in Python code as a module named ``uwsgi``. https://uwsgi-docs.readthedocs.io/en/latest/index.html https://pypi.python.org/pypi/uWSGI https://github.com/unbit/uwsgi mod_wsgi An Apache 2 HTTP server module that supports the Python WSGI specification. (It is not recommended for use with Glance.) https://modwsgi.readthedocs.io/en/develop/ mod_proxy_uwsgi An Apache 2 HTTP Server module that provides a uwsgi gateway for mod_proxy. It communicates to the uWSGI server using the uwsgi protocol. http://httpd.apache.org/docs/trunk/mod/mod_proxy_uwsgi.html WSGI Web Server Gateway Interface, a Python standard published as `PEP 3333`_. https://wsgi.readthedocs.io/en/latest/index.html .. _PEP 3333: https://www.python.org/dev/peps/pep-3333 glance-16.0.0/doc/source/admin/controllingservers.rst0000666000175100017510000002732413245511421022716 0ustar zuulzuul00000000000000.. Copyright 2011 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. .. _controlling-servers: Controlling Glance Servers ========================== This section describes the ways to start, stop, and reload Glance's server programs. .. include:: ../deprecate-registry.inc Starting a server ----------------- There are two ways to start a Glance server (either the API server or the registry server): * Manually calling the server program * Using the ``glance-control`` server daemon wrapper program We recommend using the second method. Manually starting the server ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The first is by directly calling the server program, passing in command-line options and a single argument for a ``paste.deploy`` configuration file to use when configuring the server application. .. note:: Glance ships with an ``etc/`` directory that contains sample ``paste.deploy`` configuration files that you can copy to a standard configuration directory and adapt for your own uses. Specifically, bind_host must be set properly. If you do `not` specify a configuration file on the command line, Glance will do its best to locate a configuration file in one of the following directories, stopping at the first config file it finds: * ``$CWD`` * ``~/.glance`` * ``~/`` * ``/etc/glance`` * ``/etc`` The filename that is searched for depends on the server application name. So, if you are starting up the API server, ``glance-api.conf`` is searched for, otherwise ``glance-registry.conf``. If no configuration file is found, you will see an error, like:: $> glance-api ERROR: Unable to locate any configuration file. 
Cannot load application glance-api Here is an example showing how you can manually start the ``glance-api`` server and ``glance-registry`` in a shell.:: $ sudo glance-api --config-file glance-api.conf --debug & jsuh@mc-ats1:~$ 2011-04-13 14:50:12 DEBUG [glance-api] ******************************************************************************** 2011-04-13 14:50:12 DEBUG [glance-api] Configuration options gathered from config file: 2011-04-13 14:50:12 DEBUG [glance-api] /home/jsuh/glance-api.conf 2011-04-13 14:50:12 DEBUG [glance-api] ================================================ 2011-04-13 14:50:12 DEBUG [glance-api] bind_host 65.114.169.29 2011-04-13 14:50:12 DEBUG [glance-api] bind_port 9292 2011-04-13 14:50:12 DEBUG [glance-api] debug True 2011-04-13 14:50:12 DEBUG [glance-api] default_store file 2011-04-13 14:50:12 DEBUG [glance-api] filesystem_store_datadir /home/jsuh/images/ 2011-04-13 14:50:12 DEBUG [glance-api] registry_host 65.114.169.29 2011-04-13 14:50:12 DEBUG [glance-api] registry_port 9191 2011-04-13 14:50:12 DEBUG [glance-api] ******************************************************************************** 2011-04-13 14:50:12 DEBUG [routes.middleware] Initialized with method overriding = True, and path info altering = True 2011-04-13 14:50:12 DEBUG [eventlet.wsgi.server] (21354) wsgi starting up on http://65.114.169.29:9292/ $ sudo glance-registry --config-file glance-registry.conf & jsuh@mc-ats1:~$ 2011-04-13 14:51:16 INFO [sqlalchemy.engine.base.Engine.0x...feac] PRAGMA table_info("images") 2011-04-13 14:51:16 INFO [sqlalchemy.engine.base.Engine.0x...feac] () 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk') 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (0, u'created_at', u'DATETIME', 1, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (1, u'updated_at', u'DATETIME', 0, None, 0) 2011-04-13 14:51:16 DEBUG 
[sqlalchemy.engine.base.Engine.0x...feac] Row (2, u'deleted_at', u'DATETIME', 0, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (3, u'deleted', u'BOOLEAN', 1, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (4, u'id', u'INTEGER', 1, None, 1) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (5, u'name', u'VARCHAR(255)', 0, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (6, u'disk_format', u'VARCHAR(20)', 0, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (7, u'container_format', u'VARCHAR(20)', 0, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (8, u'size', u'INTEGER', 0, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (9, u'status', u'VARCHAR(30)', 1, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (10, u'is_public', u'BOOLEAN', 1, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (11, u'location', u'TEXT', 0, None, 0) 2011-04-13 14:51:16 INFO [sqlalchemy.engine.base.Engine.0x...feac] PRAGMA table_info("image_properties") 2011-04-13 14:51:16 INFO [sqlalchemy.engine.base.Engine.0x...feac] () 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk') 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (0, u'created_at', u'DATETIME', 1, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (1, u'updated_at', u'DATETIME', 0, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (2, u'deleted_at', u'DATETIME', 0, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (3, u'deleted', u'BOOLEAN', 1, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (4, u'id', u'INTEGER', 1, None, 1) 2011-04-13 
14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (5, u'image_id', u'INTEGER', 1, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (6, u'key', u'VARCHAR(255)', 1, None, 0) 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (7, u'value', u'TEXT', 0, None, 0) $ ps aux | grep glance root 20009 0.7 0.1 12744 9148 pts/1 S 12:47 0:00 /usr/bin/python /usr/bin/glance-api glance-api.conf --debug root 20012 2.0 0.1 25188 13356 pts/1 S 12:47 0:00 /usr/bin/python /usr/bin/glance-registry glance-registry.conf jsuh 20017 0.0 0.0 3368 744 pts/1 S+ 12:47 0:00 grep glance Simply supply the configuration file as the parameter to the ``--config-file`` option (the ``etc/glance-api.conf`` and ``etc/glance-registry.conf`` sample configuration files were used in the above example) and then any other options you want to use. (``--debug`` was used above to show some of the debugging output that the server shows when starting up. Call the server program with ``--help`` to see all available options you can specify on the command line.) For more information on configuring the server via the ``paste.deploy`` configuration files, see the section entitled :ref:`Configuring Glance servers ` Note that the server `daemonizes` itself by using the standard shell backgrounding indicator, ``&``, in the previous example. For most use cases, we recommend using the ``glance-control`` server daemon wrapper for daemonizing. See below for more details on daemonization with ``glance-control``. Using the ``glance-control`` program to start the server ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The second way to start up a Glance server is to use the ``glance-control`` program. ``glance-control`` is a wrapper script that allows the user to start, stop, restart, and reload the other Glance server programs in a fashion that is more conducive to automation and scripting. 
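The respawn semantics described later in this section — deliberately stopped services are not respawned, and neither are rapidly "bouncing" services that die within one second of their last launch — can be modeled with a small decision function. This is an illustrative sketch only; the function name and parameters are invented and are not taken from ``glance-control``'s actual code.

```python
# Illustrative model of glance-control's respawn decision. All names here
# are hypothetical; they do not mirror the real implementation.

def should_respawn(respawn_enabled, deliberately_stopped,
                   launched_at, died_at, bounce_window=1.0):
    """Decide whether a dead service process should be relaunched."""
    if not respawn_enabled or deliberately_stopped:
        # Respawning must be opted into, and a service the operator
        # stopped on purpose stays down.
        return False
    # A process that died within `bounce_window` seconds of its last
    # launch is "rapidly bouncing" and is left down to avoid a tight
    # crash/relaunch loop.
    if died_at - launched_at < bounce_window:
        return False
    return True
```

The one-second debounce is the interesting design choice: without it, a service that crashes immediately on startup (for example, due to a bad config file) would be relaunched in a tight loop.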
Servers started via the ``glance-control`` program are always `daemonized`, meaning that the server program process runs in the background. To start a Glance server with ``glance-control``, simply call ``glance-control`` with a server and the word "start", followed by any command-line options you wish to provide. Start the server with ``glance-control`` in the following way:: $> sudo glance-control [OPTIONS] start [CONFPATH] .. note:: You must use the ``sudo`` program to run ``glance-control`` currently, as the pid files for the server programs are written to /var/run/glance/ Here is an example that shows how to start the ``glance-registry`` server with the ``glance-control`` wrapper script. :: $ sudo glance-control api start glance-api.conf Starting glance-api with /home/jsuh/glance.conf $ sudo glance-control registry start glance-registry.conf Starting glance-registry with /home/jsuh/glance.conf $ ps aux | grep glance root 20038 4.0 0.1 12728 9116 ? Ss 12:51 0:00 /usr/bin/python /usr/bin/glance-api /home/jsuh/glance-api.conf root 20039 6.0 0.1 25188 13356 ? Ss 12:51 0:00 /usr/bin/python /usr/bin/glance-registry /home/jsuh/glance-registry.conf jsuh 20042 0.0 0.0 3368 744 pts/1 S+ 12:51 0:00 grep glance The same configuration files are used by ``glance-control`` to start the Glance server programs, and you can specify (as the example above shows) a configuration file when starting the server. In order for your launched glance service to be monitored for unexpected death and respawned if necessary, use the following option:: $ sudo glance-control [service] start --respawn ... Note that this will cause ``glance-control`` itself to remain running. Also note that deliberately stopped services are not respawned, neither are rapidly bouncing services (where process death occurred within one second of the last launch). By default, output from glance services is discarded when launched with ``glance-control``. 
In order to capture such output via syslog, use the following option:: $ sudo glance-control --capture-output ... Stopping a server ----------------- If you started a Glance server manually and did not use the ``&`` backgrounding function, simply send a terminate signal to the server process by typing ``Ctrl-C`` If you started the Glance server using the ``glance-control`` program, you can use the ``glance-control`` program to stop it. Simply do the following:: $> sudo glance-control stop as this example shows:: $> sudo glance-control registry stop Stopping glance-registry pid: 17602 signal: 15 Restarting a server ------------------- You can restart a server with the ``glance-control`` program, as demonstrated here:: $> sudo glance-control registry restart etc/glance-registry.conf Stopping glance-registry pid: 17611 signal: 15 Starting glance-registry with /home/jpipes/repos/glance/trunk/etc/glance-registry.conf Reloading a server ------------------- You can reload a server with the ``glance-control`` program, as demonstrated here:: $> sudo glance-control api reload Reloading glance-api (pid 18506) with signal(1) A reload sends a SIGHUP signal to the master process and causes new configuration settings to be picked up without any interruption to the running service (provided neither bind_host or bind_port has changed). glance-16.0.0/doc/source/admin/flows.rst0000666000175100017510000000147713245511421020105 0ustar zuulzuul00000000000000.. Copyright 2015 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and
limitations under the License.

Glance Flow Plugins
===================

Flows
-----

.. list-plugins:: glance.flows
   :detailed:

Import Flows
------------

.. list-plugins:: glance.flows.import
   :detailed:

.. Copyright 2012 OpenStack Foundation
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

.. _legacy-database-management:

Legacy Database Management
==========================

.. note:: This page applies only to Glance releases prior to Ocata. From
   Ocata onward, please see :ref:`database-management`.

The default metadata driver for Glance uses sqlalchemy, which implies there
exists a backend database which must be managed. The ``glance-manage`` binary
provides a set of commands for making this easier.

The commands should be executed as a subcommand of 'db'::

    glance-manage db

Sync the Database
-----------------

::

    glance-manage db sync

Place a database under migration control and upgrade, creating it first if
necessary.

Determining the Database Version
--------------------------------

::

    glance-manage db version

This will print the current migration level of a Glance database.

Upgrading an Existing Database
------------------------------

::

    glance-manage db upgrade

This will take an existing database and upgrade it to the specified VERSION.
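Under the hood, sqlalchemy-migrate records the current migration level of a database in a ``migrate_version`` table, and upgrading simply applies each pending migration script while bumping that recorded level. The idea can be illustrated with SQLite (a simplified sketch, not Glance's actual schema-management code):

```python
import sqlite3

# Simplified illustration of how a migration level is tracked in the
# database itself; not Glance's actual schema-management code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE migrate_version "
             "(repository_id TEXT, repository_path TEXT, version INTEGER)")
conn.execute("INSERT INTO migrate_version VALUES ('Glance Migrations', '.', 0)")


def db_version(conn):
    """What 'glance-manage db version' conceptually reports."""
    return conn.execute("SELECT version FROM migrate_version").fetchone()[0]


def db_upgrade(conn, target):
    """Apply each pending migration, bumping the recorded level as we go."""
    for level in range(db_version(conn) + 1, target + 1):
        # ... apply migration script number `level` here ...
        conn.execute("UPDATE migrate_version SET version = ?", (level,))


db_upgrade(conn, 3)
print(db_version(conn))  # -> 3
```

This is why ``glance-manage db sync`` can create the table first if necessary: a database with no ``migrate_version`` row is simply one that has never been placed under migration control.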
Downgrading an Existing Database
--------------------------------

Upgrades involve complex operations and can fail. Before attempting any
upgrade, you should make a full database backup of your production data.

As of Kilo, database downgrades are not supported, and the only method
available to get back to a prior database version is to restore from
backup[1].

[1]: https://wiki.openstack.org/wiki/OpsGuide/Operational_Upgrades#perform-a-backup

.. Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

.. _rolling-upgrades:

Rolling Upgrades
================

.. note:: The Rolling Upgrades feature is EXPERIMENTAL and its use in
   production systems is currently **not supported**. This statement remains
   true for the Queens release of Glance.

   What is the holdup, you ask? Before asserting that the feature is fully
   supported, the Glance team needs to have automated tests that perform
   rolling upgrades in the OpenStack Continuous Integration gates. The
   Glance project team has not had sufficient testing and development
   resources in recent cycles to prioritize this work.

   The Glance project team is committed to the stability of Glance. As part
   of OpenStack, we are committed to `The Four Opens`_. If the ability to
   perform rolling upgrades in production systems is important to you, feel
   free to participate in the Glance community to help coordinate and drive
   such an effort.
(We gently remind you that "participation" includes providing testing and development resources.) .. _`The Four Opens`: https://governance.openstack.org/tc/reference/opens.html Scope of this document ---------------------- This page describes one way to perform a rolling upgrade from Newton to Ocata for a particular configuration of Glance services. There may be other ways to perform a rolling upgrade from Newton to Ocata for other configurations of Glance services, but those are beyond the scope of this document. For the experimental rollout of rolling upgrades, we describe only the following simple case. Prerequisites ------------- * MySQL/MariaDB 5.5 or later * Glance running Images API v2 only * Glance not using the Glance Registry * Multiple Glance nodes * A load balancer or some other type of redirection device is being used in front of the Glance nodes in such a way that a node can be dropped out of rotation, that is, that Glance node continues running the Glance service but is no longer having requests routed to it Procedure --------- Following is the process to upgrade Glance with zero downtime: 1. Backup the Glance database. 2. Choose an arbitrary Glance node or provision a new node to install the new release. If an existing Glance node is chosen, gracefully stop the Glance services. In what follows, this node will be referred to as the NEW NODE. .. _Stop the Glance processes gracefully: .. note:: **Gracefully stopping services** Before stopping the Glance processes on a node, one may choose to wait until all the existing connections drain out. This could be achieved by taking the node out of rotation, that is, by ensuring that requests are no longer routed to that node. This way all the requests that are currently being processed will get a chance to finish processing. However, some Glance requests like uploading and downloading images may last a long time. 
This increases the wait time to drain out all connections and consequently
the time to upgrade Glance completely. On the other hand, stopping the
Glance services before the connections drain out will present users with
errors. While arguably this is not downtime, given that Images API requests
are continually being serviced by other nodes, it is nonetheless an
unpleasant experience for the user whose in-flight request terminates in an
error. Hence, an operator must be judicious when stopping the services.

3. Upgrade the NEW NODE to the new release and update the configuration
   accordingly. **DO NOT** start the Glance services on the NEW NODE at
   this time.

4. Using the NEW NODE, expand the database using the command::

       glance-manage db expand

   .. warning::
      For MySQL, using the ``glance-manage db expand`` command requires
      that you either grant your glance user ``SUPER`` privileges, or run
      ``set global log_bin_trust_function_creators=1;`` in mysql
      beforehand.

5. Then, also on the NEW NODE, perform the data migrations using the
   command::

       glance-manage db migrate

   *The data migrations must be completed before you proceed to the next
   step.*

6. Start the Glance processes on the NEW NODE. It is now ready to receive
   traffic from the load balancer.

7. Taking one node at a time from the remaining nodes, for each node:

   a. `Stop the Glance processes gracefully`_ as described in Step 2,
      above. *Do not proceed until the "old" Glance services on the node
      have been completely shut down.*

   b. Upgrade the node to the new release (and corresponding
      configuration).

   c. Start the updated Glance processes on the upgraded node.

8.
After **ALL** of the nodes have been upgraded to run the new Glance services, and there are **NO** nodes running any old Glance services, contract the database by running the command from any one of the upgraded nodes:: glance-manage db contract glance-16.0.0/doc/source/admin/zero-downtime-db-upgrade.rst0000666000175100017510000001766013245511421023567 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _zero-downtime: Zero-Downtime Database Upgrades =============================== .. warning:: This feature is EXPERIMENTAL in the Ocata, Pike and Queens releases. We encourage operators to try it out, but its use in production environments is currently NOT SUPPORTED. A *zero-downtime database upgrade* enables true rolling upgrades of the Glance nodes in your cloud's control plane. At the appropriate point in the upgrade, you can have a mixed deployment of release *n* (for example, Ocata) and release *n-1* (for example, Newton) Glance nodes, take the *n-1* release nodes out of rotation, allow them to drain, and then take them out of service permanently, leaving all Glance nodes in your cloud at release *n*. That's a rough sketch of how a rolling upgrade would work. For full details, see :ref:`rolling-upgrades`. .. note:: When we speak of a "database upgrade", we are simply talking about changing the database schema and its data from the version used in OpenStack release *n* (say, Pike) to the version used in OpenStack release *n+1* (say, Queens). 
We are **not** talking about upgrading the database management software. .. note:: Downgrading a database is not supported. See :ref:`downgrades` for more information. The Expand-Migrate-Contract Cycle --------------------------------- It's possible to characterize three phases of a database upgrade: 1. **Expand**: in this phase, new columns, tables, indexes, are added to the database. 2. **Migrate**: in this phase, data is migrated to the new columns or tables. 3. **Contract**: in this phase, the "old" tables or columns (which are no longer in use) are removed from the database. The "legacy" Glance database migrations performed these phases as part of a single monolithic upgrade script. Currently, the Glance project creates a separate script for each the three parts of the cycle. We call such an upgrade an **E-M-C** database migration. Zero-Downtime Database Upgrade ------------------------------ The E-M-C strategy can be performed offline when Glance is not using the database. With some adjustments, however, the E-M-C strategy can be applied online when the database is in use, making true rolling upgrades possible. .. note:: Don't forget that zero-downtime database upgrades are currently considered experimental and their use in production environments is NOT SUPPORTED. A zero-downtime database upgrade takes place as part of a :ref:`rolling upgrade strategy ` for upgrading your entire Glance installation. In such a situation, you want to upgrade to release *n* of Glance (say, Queens) while your release *n-1* API nodes are still running Pike. To make this possible, in the **Expand** phase, database triggers can be added to the database to keep the data in "old" and "new" columns synchronized. Likewise, after all data has been migrated and all Glance nodes have been updated to release *n* code, these triggers are deleted in the **Contract** phase. .. note:: Unlike the E-M-C scripts, database triggers are particular to each database technology. 
That's why the Glance project currently provides experimental support only for MySQL. New Database Version Identifiers -------------------------------- In order to perform zero-downtime upgrades, the version identifier of a database becomes more complicated since it must reflect knowledge of what point in the E-M-C cycle the upgrade has reached. To make this evident, the identifier explicitly contains 'expand' or 'contract' as part of its name. Thus the Ocata cycle migration has two identifiers associated with it: ``ocata_expand01`` and ``ocata_contract01``. During the upgrade process, the database is initially marked with ``ocata_expand01``. Eventually, after completing the full upgrade process, the database will be marked with ``ocata_contract01``. So, instead of one database version, an operator will see a composite database version that will have both expand and contract versions. A database will be considered at Ocata version only when both expand and contract revisions are at the latest revisions. For a successful Ocata zero-downtime upgrade, for example, the database will be marked with both ``ocata_expand01``, ``ocata_contract01``. 
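The trigger mechanism used during the Expand phase can be illustrated with a toy example. The sketch below uses SQLite for brevity (Glance's actual triggers are MySQL-specific and more involved): old code keeps writing ``old_col``, and a trigger mirrors that value into ``new_col`` for the benefit of new code.

```python
import sqlite3

# Toy illustration of an "expand" trigger keeping an old and a new column
# in sync while old and new code run concurrently. This is NOT Glance's
# real trigger definition; column and table names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id TEXT, old_col TEXT, new_col TEXT)")

# Expand phase: mirror writes made by release n-1 code (old_col) into the
# column used by release n code (new_col).
conn.execute("""
    CREATE TRIGGER sync_new AFTER INSERT ON images
    WHEN NEW.new_col IS NULL
    BEGIN
        UPDATE images SET new_col = NEW.old_col WHERE id = NEW.id;
    END
""")

# An "old" node inserts a row knowing nothing about new_col...
conn.execute("INSERT INTO images (id, old_col) VALUES ('img-1', 'value')")

# ...yet a "new" node reading new_col still sees consistent data.
row = conn.execute("SELECT old_col, new_col FROM images").fetchone()
print(row)  # -> ('value', 'value')
```

Deleting such triggers in the Contract phase is safe precisely because, by then, no node writes the old column any more.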
In the case in which there are multiple changes in a cycle, the database
version record would go through the following progression:

+-------+--------------------------------------+-------------------------+
| stage | database identifier                  | comment                 |
+=======+======================================+=========================+
| E     | ``bexar_expand01``                   | upgrade begins          |
+-------+--------------------------------------+-------------------------+
| E     | ``bexar_expand02``                   |                         |
+-------+--------------------------------------+-------------------------+
| E     | ``bexar_expand03``                   |                         |
+-------+--------------------------------------+-------------------------+
| M     | ``bexar_expand03``                   | bexar_migrate01 occurs  |
+-------+--------------------------------------+-------------------------+
| M     | ``bexar_expand03``                   | bexar_migrate02 occurs  |
+-------+--------------------------------------+-------------------------+
| M     | ``bexar_expand03``                   | bexar_migrate03 occurs  |
+-------+--------------------------------------+-------------------------+
| C     | ``bexar_expand03, bexar_contract01`` |                         |
+-------+--------------------------------------+-------------------------+
| C     | ``bexar_expand03, bexar_contract02`` |                         |
+-------+--------------------------------------+-------------------------+
| C     | ``bexar_expand03, bexar_contract03`` | upgrade completed       |
+-------+--------------------------------------+-------------------------+

Database Upgrade
----------------

For offline database upgrades, the ``glance-manage`` tool still has the
``glance-manage db sync`` command. This command will execute the expand,
migrate, and contract scripts for you, just as if they were contained in a
single script.

In order to enable zero-downtime database upgrades, the ``glance-manage``
tool has been augmented to include the following operations so that you can
explicitly manage the upgrade.

..
warning::
   For MySQL, using the ``glance-manage db expand`` or
   ``glance-manage db contract`` command requires that you either grant
   your glance user ``SUPER`` privileges, or run
   ``set global log_bin_trust_function_creators=1;`` in mysql beforehand.

Expanding the Database
~~~~~~~~~~~~~~~~~~~~~~

::

    glance-manage db expand

This will run the expansion phase of a rolling upgrade process. Database
expansion should be run as the first step in the rolling upgrade process
before any new services are started.

Migrating the Data
~~~~~~~~~~~~~~~~~~

::

    glance-manage db migrate

This will run the data migrate phase of a rolling upgrade process. Database
migration should be run after database expansion but before any new
services are started.

Contracting the Database
~~~~~~~~~~~~~~~~~~~~~~~~

::

    glance-manage db contract

This will run the contraction phase of a rolling upgrade process. Database
contraction should be run as the last step of the rolling upgrade process
after all old services are upgraded to new ones.

====================
Images and instances
====================

Virtual machine images contain a virtual disk that holds a bootable
operating system on it. Disk images provide templates for virtual machine
file systems. The Image service controls image storage and management.

Instances are the individual virtual machines that run on physical compute
nodes inside the cloud. Users can launch any number of instances from the
same image. Each launched instance runs from a copy of the base image. Any
changes made to the instance do not affect the base image.

Snapshots capture the state of an instance's running disk. Users can create
a snapshot, and build a new image based on these snapshots. The Compute
service controls instance, image, and snapshot storage and management.
When you launch an instance, you must choose a ``flavor``, which represents a set of virtual resources. Flavors define virtual CPU number, RAM amount available, and ephemeral disks size. Users must select from the set of available flavors defined on their cloud. OpenStack provides a number of predefined flavors that you can edit or add to. .. note:: - For more information about creating and troubleshooting images, see the `OpenStack Virtual Machine Image Guide `__. - For more information about image configuration options, see the `Image services <../configuration/index.html>`__ section of the OpenStack Configuration Reference. You can add and remove additional resources from running instances, such as persistent volume storage, or public IP addresses. The example used in this chapter is of a typical virtual system within an OpenStack cloud. It uses the ``cinder-volume`` service, which provides persistent block storage, instead of the ephemeral storage provided by the selected instance flavor. This diagram shows the system state prior to launching an instance. The image store has a number of predefined images, supported by the Image service. Inside the cloud, a compute node contains the available vCPU, memory, and local disk resources. Additionally, the ``cinder-volume`` service stores predefined volumes. | .. _Figure Base Image: **The base image state with no running instances** .. figure:: ../images/instance-life-1.png | Instance Launch ~~~~~~~~~~~~~~~ To launch an instance, select an image, flavor, and any optional attributes. The selected flavor provides a root volume, labeled ``vda`` in this diagram, and additional ephemeral storage, labeled ``vdb``. In this example, the ``cinder-volume`` store is mapped to the third virtual disk on this instance, ``vdc``. | .. _Figure Instance creation: **Instance creation from an image** .. figure:: ../images/instance-life-2.png | The Image service copies the base image from the image store to the local disk. 
The local disk is the first disk that the instance accesses, which is the root volume labeled ``vda``. Smaller instances start faster. Less data needs to be copied across the network. The new empty ephemeral disk is also created, labeled ``vdb``. This disk is deleted when you delete the instance. The compute node connects to the attached ``cinder-volume`` using iSCSI. The ``cinder-volume`` is mapped to the third disk, labeled ``vdc`` in this diagram. After the compute node provisions the vCPU and memory resources, the instance boots up from root volume ``vda``. The instance runs and changes data on the disks (highlighted in red on the diagram). If the volume store is located on a separate network, the ``my_block_storage_ip`` option specified in the storage node configuration file directs image traffic to the compute node. .. note:: Some details in this example scenario might be different in your environment. For example, you might use a different type of back-end storage, or different network protocols. One common variant is that the ephemeral storage used for volumes ``vda`` and ``vdb`` could be backed by network storage rather than a local disk. When you delete an instance, the state is reclaimed with the exception of the persistent volume. The ephemeral storage, whether encrypted or not, is purged. Memory and vCPU resources are released. The image remains unchanged throughout this process. | .. _End of state: **The end state of an image and volume after the instance exits** .. figure:: ../images/instance-life-3.png | Image properties and property protection ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ An image property is a key and value pair that the administrator or the image owner attaches to an OpenStack Image service image, as follows: - The administrator defines core properties, such as the image name. - The administrator and the image owner can define additional properties, such as licensing and billing information. 
The administrator can configure any property as protected, which limits
which policies or user roles can perform CRUD operations on that property.
Protected properties are generally additional properties to which only
administrators have access.

For unprotected image properties, the administrator can manage core
properties and the image owner can manage additional properties.

**To configure property protection**

To configure property protection, edit the ``policy.json`` file. This file
can also be used to set policies for Image service actions.

#. Define roles or policies in the ``policy.json`` file:

   .. code-block:: json

      {
          "context_is_admin": "role:admin",
          "default": "",
          "add_image": "",
          "delete_image": "",
          "get_image": "",
          "get_images": "",
          "modify_image": "",
          "publicize_image": "role:admin",
          "copy_from": "",
          "download_image": "",
          "upload_image": "",
          "delete_image_location": "",
          "get_image_location": "",
          "set_image_location": "",
          "add_member": "",
          "delete_member": "",
          "get_member": "",
          "get_members": "",
          "modify_member": "",
          "manage_image_cache": "role:admin",
          "get_task": "",
          "get_tasks": "",
          "add_task": "",
          "modify_task": "",
          "deactivate": "",
          "reactivate": "",
          "get_metadef_namespace": "",
          "get_metadef_namespaces": "",
          "modify_metadef_namespace": "",
          "add_metadef_namespace": "",
          "delete_metadef_namespace": "",
          "get_metadef_object": "",
          "get_metadef_objects": "",
          "modify_metadef_object": "",
          "add_metadef_object": "",
          "list_metadef_resource_types": "",
          "get_metadef_resource_type": "",
          "add_metadef_resource_type_association": "",
          "get_metadef_property": "",
          "get_metadef_properties": "",
          "modify_metadef_property": "",
          "add_metadef_property": "",
          "get_metadef_tag": "",
          "get_metadef_tags": "",
          "modify_metadef_tag": "",
          "add_metadef_tag": "",
          "add_metadef_tags": ""
      }

   For each parameter, use ``"rule:restricted"`` to restrict access to all
   users or ``"role:admin"`` to limit access to administrator roles. For
   example:

   .. code-block:: json

      {
          "download_image": "rule:restricted",
          "upload_image": "role:admin"
      }

#.
Define which roles or policies can manage which properties in a property protections configuration file. For example: .. code-block:: ini [x_none_read] create = context_is_admin read = ! update = ! delete = ! [x_none_update] create = context_is_admin read = context_is_admin update = ! delete = context_is_admin [x_none_delete] create = context_is_admin read = context_is_admin update = context_is_admin delete = ! - A value of ``@`` allows the corresponding operation for a property. - A value of ``!`` disallows the corresponding operation for a property. #. In the ``glance-api.conf`` file, define the location of a property protections configuration file. .. code-block:: ini property_protection_file = {file_name} This file contains the rules for property protections and the roles and policies associated with it. By default, property protections are not enforced. If you specify a file name value and the file is not found, the ``glance-api`` service does not start. To view a sample configuration file, see `glance-api.conf <../configuration/glance_api.html>`__. #. Optionally, in the ``glance-api.conf`` file, specify whether roles or policies are used in the property protections configuration file .. code-block:: ini property_protection_rule_format = roles The default is ``roles``. To view a sample configuration file, see `glance-api.conf <../configuration/glance_api.html>`__. Image download: how it works ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Prior to starting a virtual machine, transfer the virtual machine image to the compute node from the Image service. How this works can change depending on the settings chosen for the compute node and the Image service. Typically, the Compute service will use the image identifier passed to it by the scheduler service and request the image from the Image API. 
Though images are not stored in glance (rather in a back end, which could be
Object Storage, a filesystem or any other supported method), the connection
is made from the compute node to the Image service and the image is
transferred over this connection. The Image service streams the image from
the back end to the compute node.

It is possible to set up the Object Storage node on a separate network, and
still allow image traffic to flow between the compute and object storage
nodes. Configure the ``my_block_storage_ip`` option in the storage node
configuration file to allow block storage traffic to reach the compute node.

Certain back ends support a more direct method, where on request the Image
service will return a URL that links directly to the back-end store. You can
download the image using this approach. Currently, the only store to support
the direct download approach is the filesystem store. Configure this
approach using the ``filesystems`` option in the ``image_file_url`` section
of the ``nova.conf`` file on compute nodes.

Compute nodes also implement caching of images, meaning that if an image has
been used before it won't necessarily be downloaded every time. Information
on the configuration options for caching on compute nodes can be found in
the `Configuration Reference <../configuration/>`__.

Instance building blocks
~~~~~~~~~~~~~~~~~~~~~~~~

In OpenStack, the base operating system is usually copied from an image
stored in the OpenStack Image service. This results in an ephemeral instance
that starts from a known template state and loses all accumulated states on
shutdown.

You can also put an operating system on a persistent volume in Compute or
the Block Storage volume system. This gives a more traditional, persistent
system that accumulates states that are preserved across restarts.

To get a list of available images on your system, run:

..
code-block:: console $ openstack image list +--------------------------------------+-----------------------------+--------+ | ID | Name | Status | +--------------------------------------+-----------------------------+--------+ | aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 14.04 cloudimg amd64 | active | +--------------------------------------+-----------------------------+--------+ | 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 14.10 cloudimg amd64 | active | +--------------------------------------+-----------------------------+--------+ | df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins | active | +--------------------------------------+-----------------------------+--------+ The displayed image attributes are: ``ID`` Automatically generated UUID of the image. ``Name`` Free form, human-readable name for the image. ``Status`` The status of the image. Images marked ``ACTIVE`` are available for use. ``Server`` For images that are created as snapshots of running instances, this is the UUID of the instance the snapshot derives from. For uploaded images, this field is blank. Virtual hardware templates are called ``flavors``, and are defined by administrators. Prior to the Newton release, a default installation also includes five predefined flavors. For a list of flavors that are available on your system, run: .. code-block:: console $ openstack flavor list +-----+-----------+-------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is_Public | +-----+-----------+-------+------+-----------+-------+-----------+ | 1 | m1.tiny | 512 | 1 | 0 | 1 | True | | 2 | m1.small | 2048 | 20 | 0 | 1 | True | | 3 | m1.medium | 4096 | 40 | 0 | 2 | True | | 4 | m1.large | 8192 | 80 | 0 | 4 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True | +-----+-----------+-------+------+-----------+-------+-----------+ By default, administrative users can configure the flavors. 
You can change this behavior by redefining the access controls for ``compute_extension:flavormanage`` in ``/etc/nova/policy.json`` on the ``compute-api`` server. Instance management tools ~~~~~~~~~~~~~~~~~~~~~~~~~ OpenStack provides command-line, web interface, and API-based instance management tools. Third-party management tools are also available, using either the native API or the provided EC2-compatible API. The OpenStack python-openstackclient package provides a basic command-line utility, which uses the :command:`openstack` command. This is available as a native package for most Linux distributions, or you can install the latest version using the pip python package installer: .. code-block:: console # pip install python-openstackclient For more information about python-openstackclient and other command-line tools, see the `OpenStack End User Guide <../cli/index.html>`__. Control where instances run ~~~~~~~~~~~~~~~~~~~~~~~~~~~ The `Scheduling section `__ of OpenStack Configuration Reference provides detailed information on controlling where your instances run, including ensuring a set of instances run on different compute nodes for service resiliency or on the same node for high performance inter-instance communications. Administrative users can specify which compute node their instances run on. To do this, specify the ``--availability-zone AVAILABILITY_ZONE:COMPUTE_HOST`` parameter. Launch instances with UEFI ~~~~~~~~~~~~~~~~~~~~~~~~~~ Unified Extensible Firmware Interface (UEFI) is a standard firmware designed to replace legacy BIOS. There is a slow but steady trend for operating systems to move to the UEFI format and, in some cases, make it their only format. **To configure UEFI environment** To successfully launch an instance from an UEFI image in QEMU/KVM environment, the administrator has to install the following packages on compute node: - OVMF, a port of Intel's tianocore firmware to QEMU virtual machine. 
- libvirt, which has been supporting UEFI boot since version 1.2.9. Because default UEFI loader path is ``/usr/share/OVMF/OVMF_CODE.fd``, the administrator must create one link to this location after UEFI package is installed. **To upload UEFI images** To launch instances from a UEFI image, the administrator first has to upload one UEFI image. To do so, ``hw_firmware_type`` property must be set to ``uefi`` when the image is created. For example: .. code-block:: console $ openstack image create --container-format bare --disk-format qcow2 \ --property hw_firmware_type=uefi --file /tmp/cloud-uefi.qcow --name uefi After that, you can launch instances from this UEFI image. glance-16.0.0/doc/source/admin/requirements.rst0000666000175100017510000000662713245511421021500 0ustar zuulzuul00000000000000.. Copyright 2016-present OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Requirements ============ External Requirements Affecting Glance ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Like other OpenStack projects, Glance uses some external libraries for a subset of its features. Some examples include the ``qemu-img`` utility used by the tasks feature, ``sendfile`` to utilize the "zero-copy" way of copying data faster, ``pydev`` to debug using popular IDEs, ``python-xattr`` for Image Cache using "xattr" driver. On the other hand, if ``dnspython`` is installed in the environment, Glance provides a workaround to make it work with IPV6. 
Additionally, some libraries like ``xattr`` are not compatible when using
Glance on Windows (see :ref:`the documentation on config options affecting
the Image Cache <image-cache>`).

Guideline to include your requirement in the requirements.txt file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As described above, we don't include all the possible requirements needed
by Glance features in the source tree requirements file. So, when an
operator decides to use an **advanced feature** in Glance, we ask them to
check the documentation/guidelines for those features to set up the feature
in a workable way.

To reduce operator pain, the development team likes to work with different
operators to figure out when a popular feature should have its dependencies
included in the requirements file. However, there's a tradeoff in including
more requirements in the source tree, as it becomes more painful for
packagers. So, it is a bit of a negotiation among different stakeholders,
and a judicious decision is taken by the project PTL or release liaison to
determine the outcome.

To simplify the identification of an **advanced feature** in Glance, we can
think of it as a feature that is not used and deployed by most of the known
upstream community members. To name a few features that have been
identified as advanced:

* glance tasks
* image signing
* image prefetcher
* glance db purge utility
* image locations

Steps to include your requirement in the requirements.txt file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. The first step is to propose a change against the
   ``openstack/requirements`` project to include the requirement(s) as a
   part of the ``global-requirements`` and ``upper-constraints`` files.

2. If your requirement is not a part of that project, you will have to
   propose a change adding that requirement to the requirements.txt file
   in Glance.
   Please include a ``Depends-On: `` flag in the commit message, where the
   ``ChangeID`` is the Gerrit ID of the corresponding change against the
   ``openstack/requirements`` project.

3. A sync bot then syncs the global requirements into project requirements
   on a regular basis, so any updates to the requirements are propagated
   in a timely manner.

.. Copyright 2011 OpenStack Foundation
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
   implied. See the License for the specific language governing permissions
   and limitations under the License.

.. _image-cache:

The Glance Image Cache
======================

The Glance API server may be configured to have an optional local image
cache. A local image cache stores a copy of image files, essentially
enabling multiple API servers to serve the same image file, resulting in an
increase in scalability due to an increased number of endpoints serving an
image file.

This local image cache is transparent to the end user -- in other words,
the end user doesn't know whether the Glance API is streaming an image file
from its local cache or from the actual backend storage system.

Managing the Glance Image Cache
-------------------------------

While image files are automatically placed in the image cache on successful
requests to ``GET /images/``, the image cache is not automatically managed.
Here, we describe the basics of how to manage the local image cache on
Glance API servers and how to automate this cache management.
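One common way to automate this cache management is to run the pruner and
cleaner utilities (described later in this section) from ``cron``. The
schedule, user, and paths below are illustrative assumptions, not
recommended values:

```console
# /etc/cron.d/glance-cache -- illustrative schedule; adjust to your site.

# Keep the cache at or below image_cache_max_size every 30 minutes:
*/30 * * * * glance /usr/bin/glance-cache-pruner

# Remove stalled and invalid cache entries once a day:
30 3 * * * glance /usr/bin/glance-cache-cleaner
```

Both utilities read ``glance-cache.conf``, so they can run on any schedule
without coordinating with the API server.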
Configuration options for the Image Cache
-----------------------------------------

The Glance cache uses two files: one for configuring the server and another
for the utilities. ``glance-api.conf`` is for the server, and
``glance-cache.conf`` is for the utilities.

The following options appear in both configuration files. They must have
the same values in both files; otherwise, the cache may run into problems.

- ``image_cache_dir``
  The base directory where Glance stores the cache data (required to be
  set, as there is no default).

- ``image_cache_sqlite_db``
  Path to the sqlite database file that will be used for cache management.
  This is a relative path from the ``image_cache_dir`` directory
  (Default: ``cache.db``).

- ``image_cache_driver``
  The driver used for cache management. (Default: ``sqlite``)

- ``image_cache_max_size``
  The maximum size of the cache. When this value is exceeded, the
  glance-cache-pruner removes the oldest images until the cache is under
  this value again. (Default: ``10 GB``)

- ``image_cache_stall_time``
  The amount of time an incomplete image may stay in the cache; after this,
  the incomplete image is deleted. (Default: ``1 day``)

The following options are specific to ``glance-cache.conf`` and are only
required for the prefetcher to run correctly.

- ``admin_user``
  The username of an admin account, used to fetch the image data into the
  cache.

- ``admin_password``
  The password of the admin account.

- ``admin_tenant_name``
  The tenant of the admin account.

- ``auth_url``
  The URL used to authenticate to keystone. This will be taken from the
  environment variables if it exists.

- ``filesystem_store_datadir``
  Used with the filesystem store; points to where the data is kept.

- ``filesystem_store_datadirs``
  Used to point to multiple filesystem stores.

- ``registry_host``
  The URL to the Glance registry.
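As an illustration of how these options fit together, a minimal pair of
matching configuration fragments might look like the following. All values
here (paths, credentials, endpoint) are assumptions for the example; the
shared options must agree across both files:

```ini
# In BOTH glance-api.conf and glance-cache.conf (values must match):
image_cache_dir = /var/lib/glance/image-cache/
image_cache_sqlite_db = cache.db
image_cache_driver = sqlite
image_cache_max_size = 10737418240
image_cache_stall_time = 86400

# In glance-cache.conf only (needed by the prefetcher):
admin_user = glance-cache-admin
admin_password = CHANGE_ME
admin_tenant_name = service
auth_url = http://keystone.example.com:5000/v2.0
```

Note that ``image_cache_max_size`` and ``image_cache_stall_time`` are shown
here in bytes and seconds respectively (10 GB and 1 day, matching the
defaults listed above).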
Controlling the Growth of the Image Cache ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The image cache has a configurable maximum size (the ``image_cache_max_size`` configuration file option). The ``image_cache_max_size`` is an upper limit beyond which pruner, if running, starts cleaning the images cache. However, when images are successfully returned from a call to ``GET /images/``, the image cache automatically writes the image file to its cache, regardless of whether the resulting write would make the image cache's size exceed the value of ``image_cache_max_size``. In order to keep the image cache at or below this maximum cache size, you need to run the ``glance-cache-pruner`` executable. The recommended practice is to use ``cron`` to fire ``glance-cache-pruner`` at a regular interval. Cleaning the Image Cache ~~~~~~~~~~~~~~~~~~~~~~~~ Over time, the image cache can accumulate image files that are either in a stalled or invalid state. Stalled image files are the result of an image cache write failing to complete. Invalid image files are the result of an image file not being written properly to disk. To remove these types of files, you run the ``glance-cache-cleaner`` executable. The recommended practice is to use ``cron`` to fire ``glance-cache-cleaner`` at a semi-regular interval. Prefetching Images into the Image Cache ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Some installations have base (sometimes called "golden") images that are very commonly used to boot virtual machines. When spinning up a new API server, administrators may wish to prefetch these image files into the local image cache to ensure that reads of those popular image files come from a local cache. 
To queue an image for prefetching, you can use one of the following methods: * If the ``cache_manage`` middleware is enabled in the application pipeline, you may call ``PUT /queued-images/`` to queue the image with identifier ```` Alternately, you can use the ``glance-cache-manage`` program to queue the image. This program may be run from a different host than the host containing the image cache. Example usage:: $> glance-cache-manage --host= queue-image This will queue the image with identifier ```` for prefetching Once you have queued the images you wish to prefetch, call the ``glance-cache-prefetcher`` executable, which will prefetch all queued images concurrently, logging the results of the fetch for each image. Finding Which Images are in the Image Cache ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can find out which images are in the image cache using one of the following methods: * If the ``cachemanage`` middleware is enabled in the application pipeline, you may call ``GET /cached-images`` to see a JSON-serialized list of mappings that show cached images, the number of cache hits on each image, the size of the image, and the times they were last accessed. Alternately, you can use the ``glance-cache-manage`` program. This program may be run from a different host than the host containing the image cache. Example usage:: $> glance-cache-manage --host= list-cached * You can issue the following call on \*nix systems (on the host that contains the image cache):: $> ls -lhR $IMAGE_CACHE_DIR where ``$IMAGE_CACHE_DIR`` is the value of the ``image_cache_dir`` configuration variable. Note that the image's cache hit is not shown using this method. Manually Removing Images from the Image Cache ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If the ``cachemanage`` middleware is enabled, you may call ``DELETE /cached-images/`` to remove the image file for image with identifier ```` from the cache. Alternately, you can use the ``glance-cache-manage`` program. 
Example usage::

  $> glance-cache-manage --host= delete-cached-image

.. Copyright 2012 OpenStack Foundation
   All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
   implied. See the License for the specific language governing permissions
   and limitations under the License.

Policies
========

Glance's public API calls may be restricted to certain sets of users using
a policy configuration file. This document explains exactly how policies
are configured and what they apply to.

A policy is composed of a set of rules that are used by the policy "Brain"
in determining if a particular action may be performed by the authorized
tenant.

Constructing a Policy Configuration File
----------------------------------------

A policy configuration file is simply a JSON object that contains sets of
rules. Each top-level key is the name of a rule. Each rule is a string that
describes an action that may be performed in the Glance API.
The actions that may have a rule enforced on them are: * ``get_images`` - List available image entities * ``GET /v1/images`` * ``GET /v1/images/detail`` * ``GET /v2/images`` * ``get_image`` - Retrieve a specific image entity * ``HEAD /v1/images/`` * ``GET /v1/images/`` * ``GET /v2/images/`` * ``download_image`` - Download binary image data * ``GET /v1/images/`` * ``GET /v2/images//file`` * ``upload_image`` - Upload binary image data * ``POST /v1/images`` * ``PUT /v1/images/`` * ``PUT /v2/images//file`` * ``copy_from`` - Copy binary image data from URL * ``POST /v1/images`` * ``PUT /v1/images/`` * ``add_image`` - Create an image entity * ``POST /v1/images`` * ``POST /v2/images`` * ``modify_image`` - Update an image entity * ``PUT /v1/images/`` * ``PUT /v2/images/`` * ``publicize_image`` - Create or update public images * ``POST /v1/images`` with attribute ``is_public`` = ``true`` * ``PUT /v1/images/`` with attribute ``is_public`` = ``true`` * ``POST /v2/images`` with attribute ``visibility`` = ``public`` * ``PUT /v2/images/`` with attribute ``visibility`` = ``public`` * ``communitize_image`` - Create or update community images * ``POST /v2/images`` with attribute ``visibility`` = ``community`` * ``PUT /v2/images/`` with attribute ``visibility`` = ``community`` * ``delete_image`` - Delete an image entity and associated binary data * ``DELETE /v1/images/`` * ``DELETE /v2/images/`` * ``add_member`` - Add a membership to the member repo of an image * ``POST /v2/images//members`` * ``get_members`` - List the members of an image * ``GET /v1/images//members`` * ``GET /v2/images//members`` * ``delete_member`` - Delete a membership of an image * ``DELETE /v1/images//members/`` * ``DELETE /v2/images//members/`` * ``modify_member`` - Create or update the membership of an image * ``PUT /v1/images//members/`` * ``PUT /v1/images//members`` * ``POST /v2/images//members`` * ``PUT /v2/images//members/`` * ``manage_image_cache`` - Allowed to use the image cache management API To 
limit an action to a particular role or roles, you list the roles like so::

  {
    "delete_image": ["role:admin", "role:superuser"]
  }

The above would add a rule that only allowed users that had roles of either
"admin" or "superuser" to delete an image.

Writing Rules
-------------

Role checks are going to continue to work exactly as they already do. If
the role defined in the check is one that the user holds, then that will
pass, e.g., ``role:admin``.

To write a generic rule, you need to know that there are three values
provided by Glance that can be used in a rule on the left side of the colon
(``:``). Those values are the current user's credentials in the form of:

- role
- tenant
- owner

The left side of the colon can also contain any value that Python can
understand, e.g.:

- ``True``
- ``False``
- ``"a string"``
- etc.

Using ``tenant`` and ``owner`` will only work with images. Consider the
following rule::

  tenant:%(owner)s

This will use the ``tenant`` value of the currently authenticated user. It
will also use ``owner`` from the image it is acting upon. If those two
values are equivalent, the check will pass.

All attributes on an image (as well as extra image properties) are
available for use on the right side of the colon. The most useful are the
following:

- ``owner``
- ``protected``
- ``is_public``

Therefore, you could construct a set of rules like the following::

  {
    "not_protected": "False:%(protected)s",
    "is_owner": "tenant:%(owner)s",
    "is_owner_or_admin": "rule:is_owner or role:admin",
    "not_protected_and_is_owner": "rule:not_protected and rule:is_owner",
    "get_image": "rule:is_owner_or_admin",
    "delete_image": "rule:not_protected_and_is_owner",
    "add_member": "rule:not_protected_and_is_owner"
  }

Examples
--------

Example 1. (The default policy configuration)::

  {
    "default": ""
  }

Note that an empty rule string means that all methods of the Glance API are
callable by anyone.

Example 2.
Disallow modification calls to non-admins::

  {
    "default": "",
    "add_image": "role:admin",
    "modify_image": "role:admin",
    "delete_image": "role:admin"
  }

.. Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
   implied. See the License for the specific language governing permissions
   and limitations under the License.

.. _database-management:

Database Management
===================

The default metadata driver for Glance uses `SQLAlchemy`_, which implies
there is a backend database that must be managed. The ``glance-manage``
binary provides a set of commands for making this easier. The commands
should be executed as a subcommand of 'db'::

  glance-manage db

.. note:: In the Ocata release (14.0.0), the database migration engine was
   changed from *SQLAlchemy Migrate* to *Alembic*. This necessitated some
   changes in the ``glance-manage`` tool. While the user interface has been
   kept as similar as possible, the ``glance-manage`` tool included with
   the Ocata and more recent releases is incompatible with the "legacy"
   tool. If you are consulting these documents for information about the
   ``glance-manage`` tool in the Newton or earlier releases, please see the
   :ref:`legacy-database-management` page.

.. _`SQLAlchemy`: http://www.sqlalchemy.org/

Migration Scripts
-----------------

The migration scripts are stored in the directory:
``glance/db/sqlalchemy/alembic_migrations/versions``

As mentioned above, these scripts utilize the Alembic migration engine,
which was first introduced in the Ocata release.
All database migrations up through the Liberty release are consolidated into one Alembic migration script named ``liberty_initial``. Mitaka migrations are retained, but have been rewritten for Alembic and named using the new naming convention. A fresh Glance installation will apply the following migrations: * ``liberty-initial`` * ``mitaka01`` * ``mitaka02`` * ``ocata01`` .. note:: The "old-style" migration scripts have been retained in their `current directory`_ in the Ocata release so that interested operators can correlate them with the new migrations. This directory will be removed in future releases. In particular, the "old-style" script for the Ocata migration, `045_add_visibility.py`_ is retained for operators who are conversant in SQLAlchemy Migrate and are interested in comparing it with a "new-style" Alembic migration script. The Alembic script, which is the one actually used to do the upgrade to Ocata, is `ocata01_add_visibility_remove_is_public.py`_. .. _`current directory`: http://git.openstack.org/cgit/openstack/glance/tree/glance/db/sqlalchemy/migrate_repo/versions?h=stable/ocata .. _`045_add_visibility.py`: http://git.openstack.org/cgit/openstack/glance/tree/glance/db/sqlalchemy/migrate_repo/versions/045_add_visibility.py?h=stable/ocata .. _`ocata01_add_visibility_remove_is_public.py`: http://git.openstack.org/cgit/openstack/glance/tree/glance/db/sqlalchemy/alembic_migrations/versions/ocata01_add_visibility_remove_is_public.py?h=stable/ocata Sync the Database ----------------- :: glance-manage db sync [VERSION] Place an existing database under migration control and upgrade it to the specified VERSION or to the latest migration level if VERSION is not specified. .. note:: Prior to Ocata release the database version was a numeric value. For example: for the Newton release, the latest migration level was ``44``. Starting with Ocata, database version is a revision name corresponding to the latest migration included in the release. 
For the Ocata release, there is only one database migration, and it is
identified by revision ``ocata01``. So, the database version for the Ocata
release is ``ocata01``.

This naming convention will change slightly with the introduction of
zero-downtime upgrades, which is EXPERIMENTAL in Ocata, but is projected to
be the official upgrade method beginning with the Pike release. See
:ref:`zero-downtime` for more information.

Determining the Database Version
--------------------------------

::

  glance-manage db version

This will print the current migration level of a Glance database.

Upgrading an Existing Database
------------------------------

::

  glance-manage db upgrade [VERSION]

This will take an existing database and upgrade it to the specified
VERSION.

.. _downgrades:

Downgrading an Existing Database
--------------------------------

Downgrading an existing database is **NOT SUPPORTED**.

Upgrades involve complex operations and can fail. Before attempting any
upgrade, you should make a full database backup of your production data.

As of the OpenStack Kilo release (April 2015), database downgrades are not
supported, and the only method available to get back to a prior database
version is to restore from backup.

======================
 Administration guide
======================

.. toctree::
   :maxdepth: 2

   authentication
   cache
   policies
   property-protections
   apache-httpd
   notifications
   tasks
   controllingservers
   flows
   interoperable-image-import
   db
   db-sqlalchemy-migrate
   zero-downtime-db-upgrade
   rollingupgrades
   troubleshooting
   manage-images
   requirements

=============
Manage images
=============

The cloud operator assigns roles to users. Roles determine who can upload
and manage images.
The operator might restrict image upload and management to only cloud administrators or operators. You can upload images through the :command:`openstack image create` command or the Image service API. You can use the ``openstack`` client for the image management. It provides mechanisms to list and delete images, set and delete image metadata, and create images of a running instance or snapshot and backup types. After you upload an image, you cannot change it. For details about image creation, see the `Virtual Machine Image Guide `__. List or get details for images (glance) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To get a list of images and to get further details about a single image, use :command:`openstack image list` and :command:`openstack image show` commands. .. code-block:: console $ openstack image list +--------------------------------------+---------------------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------------------+--------+ | dfc1dfb0-d7bf-4fff-8994-319dd6f703d7 | cirros-0.3.5-x86_64-uec | active | | a3867e29-c7a1-44b0-9e7f-10db587cad20 | cirros-0.3.5-x86_64-uec-kernel | active | | 4b916fba-6775-4092-92df-f41df7246a6b | cirros-0.3.5-x86_64-uec-ramdisk | active | | d07831df-edc3-4817-9881-89141f9134c3 | myCirrosImage | active | +--------------------------------------+---------------------------------+--------+ .. 
code-block:: console $ openstack image show myCirrosImage +------------------+------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------+ | checksum | ee1eca47dc88f4879d8a229cc70a07c6 | | container_format | ami | | created_at | 2016-08-11T15:07:26Z | | disk_format | ami | | file | /v2/images/d07831df-edc3-4817-9881-89141f9134c3/file | | id | d07831df-edc3-4817-9881-89141f9134c3 | | min_disk | 0 | | min_ram | 0 | | name | myCirrosImage | | owner | d88310717a8e4ebcae84ed075f82c51e | | protected | False | | schema | /v2/schemas/image | | size | 13287936 | | status | active | | tags | | | updated_at | 2016-08-11T15:20:02Z | | virtual_size | None | | visibility | private | +------------------+------------------------------------------------------+ When viewing a list of images, you can also use ``grep`` to filter the list, as follows: .. code-block:: console $ openstack image list | grep 'cirros' | dfc1dfb0-d7bf-4fff-8994-319dd6f703d7 | cirros-0.3.5-x86_64-uec | active | | a3867e29-c7a1-44b0-9e7f-10db587cad20 | cirros-0.3.5-x86_64-uec-kernel | active | | 4b916fba-6775-4092-92df-f41df7246a6b | cirros-0.3.5-x86_64-uec-ramdisk | active | .. note:: To store location metadata for images, which enables direct file access for a client, update the ``/etc/glance/glance-api.conf`` file with the following statements: * ``show_multiple_locations = True`` * ``filesystem_store_metadata_file = filePath`` where filePath points to a JSON file that defines the mount point for OpenStack images on your system and a unique ID. For example: .. code-block:: json [{ "id": "2d9bb53f-70ea-4066-a68b-67960eaae673", "mountpoint": "/var/lib/glance/images/" }] After you restart the Image service, you can use the following syntax to view the image's location information: .. 
code-block:: console

   $ openstack --os-image-api-version 2 image show imageID

For example, using the image ID shown above, you would issue the command as
follows:

.. code-block:: console

   $ openstack --os-image-api-version 2 image show 2d9bb53f-70ea-4066-a68b-67960eaae673

Create or update an image (glance)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To create an image, use :command:`openstack image create`:

.. code-block:: console

   $ openstack image create imageName

To update an image by name or ID, use :command:`openstack image set`:

.. code-block:: console

   $ openstack image set imageName

The following list explains the optional arguments that you can use with
the ``create`` and ``set`` commands to modify image properties. For more
information, refer to the OpenStack Image command reference.

The following example shows the command that you would use to upload a
CentOS 6.3 image in qcow2 format and configure it for public access:

.. code-block:: console

   $ openstack image create --disk-format qcow2 --container-format bare \
     --public --file ./centos63.qcow2 centos63-image

The following example shows how to update an existing image with properties
that describe the disk bus, the CD-ROM bus, and the VIF model:

.. note::

   When you use OpenStack with VMware vCenter Server, you need to specify
   the ``vmware_disktype`` and ``vmware_adaptertype`` properties with
   :command:`openstack image create`. Also, we recommend that you set the
   ``hypervisor_type="vmware"`` property. For more information, see Images
   with VMware vSphere in the OpenStack Configuration Reference.

.. code-block:: console

   $ openstack image set \
       --property hw_disk_bus=scsi \
       --property hw_cdrom_bus=ide \
       --property hw_vif_model=e1000 \
       f16-x86_64-openstack-sda

Currently the libvirt virtualization tool determines the disk, CD-ROM, and
VIF device models based on the configured hypervisor type (``libvirt_type``
in the ``/etc/nova/nova.conf`` file).
For the sake of optimal performance, libvirt defaults to using virtio for both disk and VIF (NIC) models. The disadvantage of this approach is that it is not possible to run operating systems that lack virtio drivers, for example, BSD, Solaris, and older versions of Linux and Windows. If you specify a disk or CD-ROM bus model that is not supported, see the Disk_and_CD-ROM_bus_model_values_table_. If you specify a VIF model that is not supported, the instance fails to launch. See the VIF_model_values_table_. The valid model values depend on the ``libvirt_type`` setting, as shown in the following tables. .. _Disk_and_CD-ROM_bus_model_values_table: **Disk and CD-ROM bus model values** +-------------------------+--------------------------+ | libvirt\_type setting | Supported model values | +=========================+==========================+ | qemu or kvm | * fdc | | | | | | * ide | | | | | | * scsi | | | | | | * sata | | | | | | * virtio | | | | | | * usb | +-------------------------+--------------------------+ | xen | * ide | | | | | | * xen | +-------------------------+--------------------------+ .. _VIF_model_values_table: **VIF model values** +-------------------------+--------------------------+ | libvirt\_type setting | Supported model values | +=========================+==========================+ | qemu or kvm | * e1000 | | | | | | * ne2k\_pci | | | | | | * pcnet | | | | | | * rtl8139 | | | | | | * virtio | +-------------------------+--------------------------+ | xen | * e1000 | | | | | | * netfront | | | | | | * ne2k\_pci | | | | | | * pcnet | | | | | | * rtl8139 | +-------------------------+--------------------------+ | vmware | * VirtualE1000 | | | | | | * VirtualPCNet32 | | | | | | * VirtualVmxnet | +-------------------------+--------------------------+ .. note:: By default, hardware properties are retrieved from the image properties. 
However, if this information is not available, the ``libosinfo`` database
provides an alternative source for these values. If the guest operating
system is not in the database, or if the use of ``libosinfo`` is disabled,
the default system values are used.

Users can set the operating system ID or a ``short-id`` in image
properties. For example:

.. code-block:: console

   $ openstack image set --property short-id=fedora23 \
     name-of-my-fedora-image

Alternatively, users can set ``id`` to a URL:

.. code-block:: console

   $ openstack image set \
     --property id=http://fedoraproject.org/fedora/23 \
     ID-of-my-fedora-image

Create an image from ISO image
------------------------------

You can upload ISO images to the Image service (glance). You can
subsequently boot an ISO image using Compute.

In the Image service, run the following command:

.. code-block:: console

   $ openstack image create ISO_IMAGE --file IMAGE.iso \
     --disk-format iso --container-format bare

Optionally, to confirm the upload in the Image service, run:

.. code-block:: console

   $ openstack image list

Troubleshoot image creation
~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you encounter problems in creating an image in the Image service or
Compute, the following information may help you troubleshoot the creation
process.

* Ensure that you are using qemu version 0.14 or later. Earlier versions of
  qemu result in an ``unknown option -s`` error message in the
  ``/var/log/nova/nova-compute.log`` file.

* Examine the ``/var/log/nova/nova-api.log`` and
  ``/var/log/nova/nova-compute.log`` log files for error messages.

.. Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _iir: Interoperable Image Import ========================== Version 2.6 of the Image Service API introduces new API calls that implement an interoperable image import process. These API calls, and the workflow for using them, are described in the `Interoperable Image Import`_ section of the `Image Service API reference`_. That documentation explains the end user's view of interoperable image import. In this section, we discuss what configuration options are available to operators. The interoperable image import process uses Glance tasks, but does *not* require that the Tasks API be exposed to end users. Further, it requires the **taskflow** task executor. The following configuration options must be set: * in the ``[task]`` option group: * ``task_executor`` must either be set to **taskflow** or be used in its default value * in the ``[taskflow_executor]`` options group: * The default values are fine. It's a good idea to read through the descriptions in the sample **glance-api.conf** file to see what options are available. .. note:: You can find an example glance-api.conf_ file in the **etc/** subdirectory of the Glance source code tree. Make sure that you are looking in the correct branch for the OpenStack release you are working with. * in the default options group: * ``enable_image_import`` must be either absent or set to **True** (the default). .. note :: In the Pike release, the default value for this option was **False**. This option is DEPRECATED and will be removed in the Rocky release. 
If ``enable_image_import`` is set **False**, requests to the v2 endpoint for URIs defined only in v2.6 will return 404 (Not Found) with a message in the response body stating "Image import is not supported at this site." Additionally, the image-create response will not contain the "OpenStack-image-import-methods" header. * ``node_staging_uri`` must specify a location writable by the glance user. If you have multiple Glance API nodes, this should be a reference to a shared filesystem available to all the nodes. * ``enabled_import_methods`` must specify the import methods you are exposing at your installation. The default value for this setting is ``['glance-direct','web-download']``. See the next section for a description of these import methods. Additionally, your policies must be such that an ordinary end user can manipulate tasks. In releases prior to Pike, we recommended that the task-related policies be admin-only so that end users could not access the Tasks API. In Pike, a new policy was introduced that controls access to the Tasks API. Thus it is now possible to keep the individual task policies unrestricted while not exposing the Tasks API to end users. Thus, the following is the recommended configuration for the task-related policies: .. code-block:: ini "get_task": "", "get_tasks": "", "add_task": "", "modify_task": "", "tasks_api_access": "role:admin", Image Import Methods -------------------- Glance provides two import methods that you can make available to your users: ``glance-direct`` and ``web-download``. By default, both methods are enabled. * The ``glance-direct`` import method allows your users to upload image data directly to Glance. * The ``web-download`` method allows an end user to import an image from a remote URL. The image data is retrieved from the URL and stored in the Glance backend. (In other words, this is a *copy-from* operation.) .. 
note:: The ``web-download`` import method replaces the copy-from functionality that was available in the Image API v1 but previously absent from v2. This note is a gentle reminder that the Image API v1 is DEPRECATED and will be removed from Glance during the Rocky development cycle. The Queens release of Glance (16.x.x) is the final version in which you can expect to find the Image API v1. You control which methods are available to API users by the ``enabled_import_methods`` configuration option in the default section of the **glance-api.conf** file. Configuring the glance-direct method ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For the ``glance-direct`` method, make sure that ``glance-direct`` is included in the list specified by your ``enabled_import_methods`` setting, and that all the options described above are set properly. Configuring the web-download method ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To enable the ``web-download`` import method, make sure that it is included in the list of methods in the ``enabled_import_methods`` option, and that all the options described above are set properly. .. note:: You must configure the ``node_staging_uri`` for the ``web-download`` import method because that is where Glance will store the downloaded content. This gives you the opportunity to have the image data be processed by the same plugin chain for each of the import methods. See :ref:`iir_plugins` for more information. Additionally, you have the following configuration available. Depending on the nature of your cloud and the sophistication of your users, you may wish to restrict what URIs they may use for the web-download import method. .. note:: You should be aware of OSSN-0078_, "copy_from in Image Service API v1 allows network port scan". The v1 copy_from feature does not have the configurability described here. You can do this by configuring options in the ``[import_filtering_opts]`` section of the **glance-image-import.conf** file. .. 
note:: The **glance-image-import.conf** is an optional file. (See below for a discussion of the default settings if you don't include this file.) You can find an example file named glance-image-import.conf.sample_ in the **etc/** subdirectory of the Glance source code tree. Make sure that you are looking in the correct branch for the OpenStack release you are working with. You can whitelist ("allow *only* these") or blacklist ("do *not* allow these") at three levels: * scheme (``allowed_schemes``, ``disallowed_schemes``) * host (``allowed_hosts``, ``disallowed_hosts``) * port (``allowed_ports``, ``disallowed_ports``) There are six configuration options, but the way it works is that if you specify both at any level, the whitelist is honored and the blacklist is ignored. (So why have both? Well, you may want to whitelist a scheme, but blacklist a host, and whitelist a particular port.) Validation of a URI happens as follows: 1. The scheme is checked. a. missing scheme: reject b. If there's a whitelist, and the scheme is not in it: reject. Otherwise, skip c and continue on to 2. c. If there's a blacklist, and the scheme is in it: reject. 2. The hostname is checked. a. missing hostname: reject b. If there's a whitelist, and the host is not in it: reject. Otherwise, skip c and continue on to 3. c. If there's a blacklist, and the host is in it: reject. 3. If there's a port in the URI, the port is checked. a. If there's a whitelist, and the port is not in it: reject. Otherwise, skip b and continue on to 4. b. If there's a blacklist, and the port is in it: reject. 4. The URI is accepted as valid. Note that if you allow a scheme, either by whitelisting it or by not blacklisting it, any URI that uses the default port for that scheme by not including a port in the URI is allowed. If it does include a port in the URI, the URI will be validated according to the above rules. Default settings ++++++++++++++++ The **glance-image-import.conf** is an optional file. 
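The validation sequence just described can be made concrete with a short Python sketch. This is a simplified stand-in for Glance's internal checks, not the actual implementation; the function name and signature are illustrative, though the parameter names mirror the ``[import_filtering_opts]`` settings:

```python
from urllib.parse import urlparse

def validate_uri(uri, allowed_schemes=(), disallowed_schemes=(),
                 allowed_hosts=(), disallowed_hosts=(),
                 allowed_ports=(), disallowed_ports=()):
    """Apply the scheme/host/port checks in order; a whitelist, when
    present at a level, is honored and the blacklist at that level ignored."""
    parsed = urlparse(uri)
    # 1. Scheme: missing -> reject; whitelist wins over blacklist.
    if not parsed.scheme:
        return False
    if allowed_schemes:
        if parsed.scheme not in allowed_schemes:
            return False
    elif parsed.scheme in disallowed_schemes:
        return False
    # 2. Host: same pattern as the scheme check.
    if not parsed.hostname:
        return False
    if allowed_hosts:
        if parsed.hostname not in allowed_hosts:
            return False
    elif parsed.hostname in disallowed_hosts:
        return False
    # 3. Port: only checked when the URI carries an explicit port.
    if parsed.port is not None:
        if allowed_ports:
            if parsed.port not in allowed_ports:
                return False
        elif parsed.port in disallowed_ports:
            return False
    # 4. The URI is accepted as valid.
    return True
```

With the default settings listed below, ``https://example.com/image.qcow2`` passes, while ``ftp://example.com/image`` and ``https://example.com:8080/image`` are rejected.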
Here are the default settings for these options: * ``allowed_schemes`` - ``['http', 'https']`` * ``disallowed_schemes`` - empty list * ``allowed_hosts`` - empty list * ``disallowed_hosts`` - empty list * ``allowed_ports`` - ``[80, 443]`` * ``disallowed_ports`` - empty list Thus if you use the defaults, end users will only be able to access URIs using the http or https scheme. The only ports users will be able to specify are 80 and 443. (Users do not have to specify a port, but if they do, it must be either 80 or 443.) .. note:: The **glance-image-import.conf** is an optional file. You can find an example file named glance-image-import.conf.sample_ in the **etc/** subdirectory of the Glance source code tree. Make sure that you are looking in the correct branch for the OpenStack release you are working with. .. _iir_plugins: Customizing the image import process ------------------------------------ When a user issues the image-import call, Glance retrieves the staged image data, processes it, and saves the result in the backing store. You can customize the nature of this processing by using *plugins*. Some plugins are provided by the Glance project team, you can use third-party plugins, or you can write your own. Technical information ~~~~~~~~~~~~~~~~~~~~~ The import step of interoperable image import is performed by a `Taskflow`_ "flow" object. This object, provided by Glance, will call any plugins you have specified in the ``glance-image-import.conf`` file. The plugins are loaded by `Stevedore`_ and must be listed in the entry point registry in the namespace ``glance.image_import.plugins``. (If you are using only plugins provided by the Glance project team, these are already registered for you.) A plugin must be written in Python as a `Taskflow "Task" object`_. The file containing this object must be present in the ``glance/async/flows/plugins`` directory. 
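Before looking at the plugin contract in detail, here is an illustrative sketch of the shape of such a module. Plain-Python stand-ins are used so the example is self-contained and runnable; a real plugin would subclass ``taskflow.task.Task`` and return a ``taskflow.patterns.linear_flow.Flow``, as the ``no_op`` plugin does:

```python
# Illustrative sketch of an import plugin module. All names here are
# examples; a real plugin would subclass taskflow.task.Task and have
# get_flow() return a taskflow.patterns.linear_flow.Flow.

class ExampleTask:
    """Stand-in for a taskflow Task; execute() receives the staged image."""

    def __init__(self, task_id, task_type, image_repo, image_id):
        self.task_id = task_id
        self.task_type = task_type
        self.image_repo = image_repo
        self.image_id = image_id

    def execute(self, file_path=None):
        # A real plugin would inspect or transform the staged image data
        # (or its metadata) here before the import completes.
        return file_path


def get_flow(**kwargs):
    """Entry point referenced from setup.cfg; Glance passes the task
    context (task_id, task_type, image_repo, image_id, ...) as kwargs."""
    return [ExampleTask(kwargs.get('task_id'), kwargs.get('task_type'),
                        kwargs.get('image_repo'), kwargs.get('image_id'))]
```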
The plugin file must contain a ``get_flow`` function that returns a Taskflow Task object wrapped in a linear flow. See the ``no_op`` plugin, located at ``glance/async/flows/plugins/no_op.py``, for an example of how to do this. Specifying the plugins to be used ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ First, the plugin code must exist in the directory ``glance/async/flows/plugins``. The name of a plugin is the filename (without extension) of the file containing the plugin code. For example, a file named ``fred_mertz.py`` would contain the plugin ``fred_mertz``. Second, the plugin must be listed in the entry point list for the ``glance.image_import.plugins`` namespace. (If you are using only plugins provided with Glance, this will have already been done for you, but it never hurts to check.) The entry point list is in ``setup.cfg``. Find the section with the heading ``[entry_points]`` and look for the line beginning with ``glance.image_import.plugins =``. It will be followed by a series of lines of the form:: <plugin-name> = <plugin-module-path>:get_flow For example:: no_op = glance.async.flows.plugins.no_op:get_flow Make sure any plugin you want to use is included here. Third, the plugin must be listed in the ``glance-image-import.conf`` file as one of the plugin names in the list providing the value for the ``image_import_plugins`` option. Plugins are executed in the order they are specified in this list. The Image Property Injection Plugin ----------------------------------- .. list-table:: * - release introduced - Queens (Glance 16.0.0) * - configuration file - ``glance-image-import.conf`` * - configuration file section - ``[inject_metadata_properties]`` This plugin implements the Glance spec `Inject metadata properties automatically to non-admin images`_. One use case for this plugin is a situation where an operator wants to put specific metadata on images imported by end users so that virtual machines booted from these images will be located on specific compute nodes.
Since it's unlikely that an end user (the image owner) will know the appropriate properties or values, an operator may use this plugin to inject the properties automatically upon image import. .. note:: This plugin may only be used as part of the interoperable image import workflow (``POST v2/images/{image_id}/import``). *It has no effect on the image data upload call* (``PUT v2/images/{image_id}/file``). You can guarantee that your end users must use interoperable image import by restricting the ``upload_image`` policy appropriately in the Glance ``policy.json`` file. By default, this policy is unrestricted (that is, any authorized user may make the image upload call). For example, to allow only admin or service users to make the image upload call, the policy could be restricted as follows: .. code-block:: text "upload_image": "role:admin or (service_user_id:<service user id>) or (service_roles:<service role>)" where ``<service role>`` is the role which is created for the service user and assigned to trusted services. To use the Image Property Injection Plugin, the following configuration is required. 1. You will need to configure the ``glance-image-import.conf`` file as shown below: .. code-block:: ini [image_import_opts] image_import_plugins = [inject_image_metadata] [inject_metadata_properties] ignore_user_roles = admin,... inject = "property1":"value1","property2":"value2",... The first section, ``image_import_opts``, is used to enable the plugin by specifying the plugin name as one of the elements of the list that is the value of the ``image_import_plugins`` parameter. The plugin name is simply the module name under ``glance/async/flows/plugins/``. The second section, ``inject_metadata_properties``, is where you set the parameters for the injection plugin. (Note that the values you specify here only have an effect if the plugin has been enabled in the ``image_import_plugins`` list as described above.) * ``ignore_user_roles`` is a comma-separated list of Keystone roles that the plugin will ignore.
In other words, if the user making the image import call has any of these roles, the plugin will not inject any properties into the image. * ``inject`` is a comma-separated list of properties and values that will be injected into the image record for the imported image. Each property and value should be quoted and separated by a colon (':') as shown in the example above. 2. If your use case is such that you don't want to allow end-users to create, modify, or delete metadata properties that you are injecting during the interoperable image import process, you will need to protect these properties using the Glance property protection feature (available since the Havana release). For example, suppose there is a property named 'property1' that you want injected during import, but you only want an administrator or service user to be able to create this property, and you want only an administrator to be able to modify or delete it. You could accomplish this by adding the following to the property protection configuration file: .. code-block:: ini [property1] create = admin,service_role read = admin,service_role,member,_member_ update = admin delete = admin See the :ref:`property-protections` section of this Guide for more information. .. _glance-api.conf: http://git.openstack.org/cgit/openstack/glance/tree/etc/glance-api.conf .. _glance-image-import.conf.sample: http://git.openstack.org/cgit/openstack/glance/tree/etc/glance-image-import.conf.sample .. _`Image Import Refactor`: https://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html .. _`Image Service API reference`: https://developer.openstack.org/api-ref/image/ .. _`Inject metadata properties automatically to non-admin images`: https://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/inject-automatic-metadata.html .. _`Interoperable Image Import`: https://developer.openstack.org/api-ref/image/v2/index.html#interoperable-image-import .. 
_OSSN-0078: https://wiki.openstack.org/wiki/OSSN/OSSN-0078 .. _`Stevedore`: https://docs.openstack.org/stevedore .. _`Taskflow`: https://docs.openstack.org/taskflow .. _`Taskflow "Task" object`: https://docs.openstack.org/taskflow/latest/user/atoms.html#task
glance-16.0.0/doc/source/admin/tasks.rst
.. Copyright 2015 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _tasks: Tasks ===== Conceptual Overview ------------------- Image files can be quite large, and processing images (converting an image from one format to another, for example) can be extremely resource intensive. Additionally, a one-size-fits-all approach to processing images is not desirable. A public cloud will have quite different security concerns than, for example, a small private cloud run by an academic department in which all users know and trust each other. Thus a public cloud deployer may wish to run various validation checks on an image that a user wants to bring in to the cloud, whereas the departmental cloud deployer may view such processing as a waste of resources. To address this situation, Glance contains *tasks*. Tasks are intended to offer end users a front end to long running asynchronous operations -- the type of operation you kick off and don't expect to finish until you've gone to the coffee shop, had a pleasant chat with your barista, had a coffee, had a pleasant walk home, etc.
The asynchronous nature of tasks is emphasized up front in order to set end user expectations with respect to how long the task may take (hint: longer than other Glance operations). Having a set of operations performed by tasks allows a deployer flexibility with respect to how many operations will be processed simultaneously, which in turn allows flexibility with respect to what kind of resources need to be set aside for task processing. Thus, although large cloud deployers are certainly interested in tasks for the alternative custom image processing workflow they enable, smaller deployers find them useful as a means of controlling resource utilization. An additional reason tasks have been introduced into Glance is to support Glance's role in the OpenStack ecosystem. Glance provides cataloging, storage, and delivery of virtual machine images. As such, it needs to be responsive to other OpenStack components. Nova, for instance, requests images from Glance in order to boot instances; it uploads images to Glance as part of its workflow for the Nova image-create action; and it uses Glance to provide the data for the image-related API calls that are defined in the Compute API that Nova instantiates. It is necessary to the proper functioning of an OpenStack cloud that these synchronous operations not be compromised by excess load caused by non-essential functionality such as image import. By separating the tasks resource from the images resource in the Images API, it's easier for deployers to allocate resources and route requests for tasks separately from the resources required to support Glance's service role. At the same time this separation avoids confusion for users of an OpenStack cloud. Responses to requests to ``/v2/images`` should return fairly quickly, while requests to ``/v2/tasks`` may take a while. 
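In practice, this asynchrony means a client creates a task against ``/v2/tasks`` and then polls it rather than blocking on the request. A rough client-side sketch follows; ``fetch_task`` is a hypothetical callable standing in for a ``GET /v2/tasks/{task_id}`` request, not part of any Glance client library:

```python
import time

FINAL_STATUSES = {"success", "failure"}

def wait_for_task(fetch_task, task_id, interval=2.0, timeout=600.0,
                  sleep=time.sleep):
    """Poll a task until it reaches a final status or the timeout elapses.

    fetch_task(task_id) must return the task as a dict (the JSON body of
    GET /v2/tasks/{task_id}); this helper is a sketch, not a Glance API.
    """
    deadline = time.monotonic() + timeout
    while True:
        task = fetch_task(task_id)
        if task["status"] in FINAL_STATUSES:
            return task
        if time.monotonic() >= deadline:
            raise TimeoutError("task %s still %s" % (task_id, task["status"]))
        sleep(interval)
```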
In short, tasks provide a common API across OpenStack installations for users of an OpenStack cloud to request image-related operations, yet at the same time tasks are customizable for individual cloud providers. Conceptual Details ------------------ A Glance task is a request to perform an asynchronous image-related operation. The request results in the creation of a *task resource* that can be polled for information about the status of the operation. A specific type of resource distinct from the traditional Glance image resource is appropriate here for several reasons: * A dedicated task resource can be developed independently of the traditional Glance image resource, both with respect to structure and workflow. * There may be multiple tasks (for example, image export or image conversion) operating on an image simultaneously. * A dedicated task resource allows for the delivery to the end user of clear, detailed error messages specific to the particular operation. * A dedicated task resource respects the principle of least surprise. For example, an import task does not create an image in Glance until it's clear that the bits submitted pass the deployer's tests for an allowable image. Upon reaching a final state (``success`` or ``error``) a task resource is assigned an expiration datetime that's displayed in the ``expires_at`` field. (The time between final state and expiration is configurable.) After that datetime, the task resource is subject to being deleted. The result of the task (for example, an imported image) will still exist. For details about the defined task statuses, please see :ref:`task-statuses`. Tasks expire eventually because there's no reason to keep them around, as the user will have the result of the task, which was the point of creating the task in the first place. The reason tasks aren't instantly deleted is that there may be information contained in the task resource that's not easily available elsewhere. 
(For example, a successful import task will eventually result in the creation of an image in Glance, and it would be useful to know the UUID of this image. Similarly, if the import task fails, we want to give the end user time to read the task resource to analyze the error message.) Task Entities ------------- A task entity is represented by a JSON-encoded data structure defined by the JSON schema available at ``/v2/schemas/task``. A task entity has an identifier (``id``) that is guaranteed to be unique within the endpoint to which it belongs. The id is used as a token in request URIs to interact with that specific task. In addition to the usual properties you'd expect (for example, ``created_at``, ``self``, ``type``, ``status``, ``updated_at``, etc.), tasks have these properties of interest: * ``input``: this is defined to be a JSON blob, the exact content of which will depend upon the requirements set by the specific cloud deployer. The intent is that each deployer will document these requirements for end users. * ``result``: this is also defined to be a JSON blob, the content of which will be documented by each cloud deployer. The ``result`` element will be null until the task has reached a final state, and if the final status is ``failure``, the result element remains null. * ``message``: this string field is expected to be null unless the task has entered ``failure`` status. At that point, it contains an informative human-readable message concerning the reason(s) for the task failure.
glance-16.0.0/doc/source/admin/authentication.rst
.. Copyright 2010 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _authentication: Authentication With Keystone ============================ Glance may optionally be integrated with Keystone. Setting this up is relatively straightforward, as the Keystone distribution includes the necessary middleware. Once you have installed Keystone and edited your configuration files, newly created images will have their `owner` attribute set to the tenant of the authenticated user, and the `is_public` attribute will cause access to those images for which it is `false` to be restricted to only the owner, users with admin context, or tenants/users with whom the image has been shared. Configuring the Glance servers to use Keystone ---------------------------------------------- Keystone is integrated with Glance through the use of middleware. The default configuration files for both the Glance API and the Glance Registry use a single piece of middleware called ``unauthenticated-context``, which generates a request context containing blank authentication information. In order to configure Glance to use Keystone, the ``authtoken`` and ``context`` middlewares must be deployed in place of the ``unauthenticated-context`` middleware. The ``authtoken`` middleware performs the authentication token validation and retrieves actual user authentication information. It can be found in the Keystone distribution. .. include:: ../deprecate-registry.inc Configuring Glance API to use Keystone -------------------------------------- Configuring Glance API to use Keystone is relatively straightforward.
The first step is to ensure that declarations for the two pieces of middleware exist in the ``glance-api-paste.ini``. Here is an example for ``authtoken``:: [filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory auth_url = http://localhost:35357 project_domain_id = default project_name = service_admins user_domain_id = default username = glance_admin password = password1234 The actual values for these variables will need to be set depending on your situation. For more information, please refer to the Keystone `documentation`_ on the ``auth_token`` middleware. .. _`documentation`: https://docs.openstack.org/keystonemiddleware/latest/middlewarearchitecture.html#configuration In short: * The ``auth_url`` variable points to the Keystone service. This information is used by the middleware to actually query Keystone about the validity of the authentication tokens. * The auth credentials (``project_name``, ``project_domain_id``, ``user_domain_id``, ``username``, and ``password``) will be used to retrieve a service token. That token will be used to authorize user tokens behind the scenes. Finally, to actually enable using Keystone authentication, the application pipeline must be modified. By default, it looks like:: [pipeline:glance-api] pipeline = versionnegotiation unauthenticated-context apiv1app Your particular pipeline may vary depending on other options, such as the image cache. This must be changed by replacing ``unauthenticated-context`` with ``authtoken`` and ``context``:: [pipeline:glance-api] pipeline = versionnegotiation authtoken context apiv1app Configuring Glance Registry to use Keystone ------------------------------------------- .. include:: ../deprecate-registry.inc Configuring Glance Registry to use Keystone is also relatively straightforward. The same middleware needs to be added to ``glance-registry-paste.ini`` as was needed by Glance API; see above for an example of the ``authtoken`` configuration.
Again, to enable using Keystone authentication, the appropriate application pipeline must be selected. By default, it looks like:: [pipeline:glance-registry-keystone] pipeline = authtoken context registryapp To enable the above application pipeline, in your main ``glance-registry.conf`` configuration file, select the appropriate deployment flavor by adding a ``flavor`` attribute in the ``paste_deploy`` group:: [paste_deploy] flavor = keystone .. note:: If your authentication service uses a role other than ``admin`` to identify which users should be granted admin-level privileges, you must define it in the ``admin_role`` config attribute in both ``glance-registry.conf`` and ``glance-api.conf``.
glance-16.0.0/doc/source/admin/property-protections.rst
.. Copyright 2013 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _property-protections: Property Protections ==================== There are two types of image properties in Glance: * Core Properties, as specified by the image schema. * Meta Properties, which are arbitrary key/value pairs that can be added to an image. Access to meta properties through Glance's public API calls may be restricted to certain sets of users, using a property protections configuration file. This document explains exactly how property protections are configured and what they apply to.
Constructing a Property Protections Configuration File ------------------------------------------------------ A property protections configuration file follows the format of the Glance API configuration file, which consists of sections, led by a ``[section]`` header and followed by ``name = value`` entries. Each section header is a regular expression matching a set of properties to be protected. .. note:: Section headers must compile to a valid regular expression, otherwise the glance-api service will not start. Regular expressions are handled by Python's ``re`` module, which uses Perl-like syntax. Each section describes four key-value pairs, where the key is one of ``create/read/update/delete``, and the value is a comma-separated list of user roles that are permitted to perform that operation in the Glance API. **If any of the keys are not specified, then the glance-api service will not start successfully.** In the list of user roles, ``@`` means all roles and ``!`` means no role. **If both @ and ! are specified for the same rule, then the glance-api service will not start.** .. note:: Only one policy rule is allowed per property operation. **If multiple are specified, then the glance-api service will not start.** The path to the file should be specified in the ``[DEFAULT]`` section of ``glance-api.conf`` as follows. :: property_protection_file=/path/to/file If this config value is not specified, property protections are not enforced. **If the path is invalid, the glance-api service will not start successfully.** The file may use either roles or policies to describe the property protections. The config value should be specified in the ``[DEFAULT]`` section of ``glance-api.conf`` as follows. :: property_protection_rule_format=<roles|policies> The default value for ``property_protection_rule_format`` is ``roles``. Property protections are applied in the order specified in the configuration file.
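Because rules are consulted in file order, resolution is effectively first-match-wins. A small Python model (illustrative only, not Glance's implementation) makes that behavior concrete:

```python
import re

def resolve_roles(rules, prop_name, operation):
    """Return the roles allowed to perform `operation` on `prop_name`.

    `rules` is an ordered list of (section_regex, {operation: [roles]})
    pairs, mirroring the order of sections in the configuration file.
    The first section whose regex matches wins; an omitted operation or
    an unmatched property disables the operation for all roles.
    """
    for pattern, ops in rules:
        if re.match(pattern, prop_name):
            return ops.get(operation, [])
    return []
```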
This means that if for example you specify a section with ``[.*]`` at the top of the file, all following sections will be ignored. If a property does not match any of the given rules, all operations will be disabled for all roles. If an operation is misspelled or omitted, that operation will be disabled for all roles. Disallowing ``read`` operations will also disallow ``update/delete`` operations. A successful HTTP request will return status ``200 OK``. If the user is not permitted to perform the requested action, ``403 Forbidden`` will be returned. V1 API X-glance-registry-Purge-props ------------------------------------ .. include:: ../deprecate-registry.inc Property protections will still be honoured if ``X-glance-registry-Purge-props`` is set to ``True``. That is, if you request to modify properties with this header set to ``True``, you will not be able to delete or update properties for which you do not have the relevant permissions. Properties which are not included in the request and for which you do have delete permissions will still be removed. Examples -------- **Example 1**. Limit all property interactions to admin only. :: [.*] create = admin read = admin update = admin delete = admin **Example 2**. Allow both admins and users with the billing role to read and modify properties prefixed with ``x_billing_code_``. Allow admins to read and modify any properties. :: [^x_billing_code_.*] create = admin,billing read = admin,billing update = admin,billing delete = admin,billing [.*] create = admin read = admin update = admin delete = admin **Example 3**. Limit all property interactions to admin only using policy rule context_is_admin defined in policy.json. :: [.*] create = context_is_admin read = context_is_admin update = context_is_admin delete = context_is_admin
glance-16.0.0/doc/source/admin/notifications.rst
.. Copyright 2011-2013 OpenStack Foundation All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .. _notifications: Notifications ============= Notifications can be generated for several events in the image lifecycle. These can be used for auditing, troubleshooting, etc. Notification Drivers -------------------- * log This driver uses the standard Python logging infrastructure, with the notifications ending up in the file specified by the ``log_file`` configuration directive. * messaging This strategy sends notifications to a message queue configured using oslo.messaging configuration options. * noop This strategy produces no notifications. It is the default strategy. Notification Types ------------------ * ``image.create`` Emitted when an image record is created in Glance. Image record creation is independent of image data upload. * ``image.prepare`` Emitted when Glance begins uploading image data to its store. * ``image.upload`` Emitted when Glance has completed the upload of image data to its store. * ``image.activate`` Emitted when an image goes to `active` status. This occurs when Glance knows where the image data is located. * ``image.send`` Emitted upon completion of an image being sent to a consumer. * ``image.update`` Emitted when an image record is updated in Glance. * ``image.delete`` Emitted when an image is deleted from Glance. * ``task.run`` Emitted when a task is picked up by the executor to be run. * ``task.processing`` Emitted when a task is sent over to the executor to begin processing.
* ``task.success`` Emitted when a task is successfully completed. * ``task.failure`` Emitted when a task fails. Content ------- Every message contains a handful of attributes. * message_id UUID identifying the message. * publisher_id The hostname of the glance instance that generated the message. * event_type Event that generated the message. * priority One of WARN, INFO or ERROR. * timestamp UTC timestamp of when event was generated. * payload Data specific to the event type. Payload ------- * image.send The payload for INFO, WARN, and ERROR events contains the following: image_id ID of the image (UUID) owner_id Tenant or User ID that owns this image (string) receiver_tenant_id Tenant ID of the account receiving the image (string) receiver_user_id User ID of the account receiving the image (string) destination_ip The receiver's IP address to which the image was sent (string) bytes_sent The number of bytes actually sent * image.create For INFO events, it is the image metadata. WARN and ERROR events contain a text message in the payload. * image.prepare For INFO events, it is the image metadata. WARN and ERROR events contain a text message in the payload. * image.upload For INFO events, it is the image metadata. WARN and ERROR events contain a text message in the payload. * image.activate For INFO events, it is the image metadata. WARN and ERROR events contain a text message in the payload. * image.update For INFO events, it is the image metadata. WARN and ERROR events contain a text message in the payload. * image.delete For INFO events, it is the image id. WARN and ERROR events contain a text message in the payload. * task.run The payload for INFO, WARN, and ERROR events contains the following: task_id ID of the task (UUID) owner Tenant or User ID that created this task (string) task_type Type of the task. For example, task_type is "import". (string) status Status of the task. Status can be "pending", "processing", "success" or "failure".
    (string)
  task_input
    Input provided by the user when attempting to create a task. (dict)
  result
    Resulting output from a successful task. (dict)
  message
    Message shown in the task if it fails. None if the task succeeds.
    (string)
  expires_at
    UTC time at which the task will no longer be visible to the user.
    (string)
  created_at
    UTC time at which the task was created. (string)
  updated_at
    UTC time at which the task was last updated. (string)

  The exceptions are: for INFO events, it is the task dict with result and
  message as None. WARN and ERROR events contain a text message in the
  payload.

* task.processing

  For INFO events, it is the task dict with result and message as None.
  WARN and ERROR events contain a text message in the payload.

* task.success

  For INFO events, it is the task dict with message as None and result as
  a dict. WARN and ERROR events contain a text message in the payload.

* task.failure

  For INFO events, it is the task dict with result as None and message as
  text. WARN and ERROR events contain a text message in the payload.

glance-16.0.0/doc/source/glossary.rst

========
Glossary
========

0-9
~~~

.. glossary::

   6to4
      A mechanism that allows IPv6 packets to be transmitted over an IPv4
      network, providing a strategy for migrating to IPv6.

A
~

.. glossary::

   absolute limit
      Impassable limits for guest VMs. Settings include total RAM size,
      maximum number of vCPUs, and maximum disk size.

   access control list (ACL)
      A list of permissions attached to an object. An ACL specifies which
      users or system processes have access to objects. It also defines
      which operations can be performed on specified objects. Each entry
      in a typical ACL specifies a subject and an operation. For instance,
      the ACL entry ``(Alice, delete)`` for a file gives Alice permission
      to delete the file.

   access key
      Alternative term for an Amazon EC2 access key. See EC2 access key.

   account
      The Object Storage context of an account.
      Do not confuse with a user account from an authentication service,
      such as Active Directory, /etc/passwd, OpenLDAP, OpenStack Identity,
      and so on.

   account auditor
      Checks for missing replicas and incorrect or corrupted objects in a
      specified Object Storage account by running queries against the
      back-end SQLite database.

   account database
      A SQLite database that contains Object Storage accounts and related
      metadata and that the accounts server accesses.

   account reaper
      An Object Storage worker that scans for and deletes account
      databases that the account server has marked for deletion.

   account server
      Lists containers in Object Storage and stores container information
      in the account database.

   account service
      An Object Storage component that provides account services such as
      list, create, modify, and audit. Do not confuse with OpenStack
      Identity service, OpenLDAP, or similar user-account services.

   accounting
      The Compute service provides accounting information through the
      event notification and system usage data facilities.

   Active Directory
      Authentication and identity service by Microsoft, based on LDAP.
      Supported in OpenStack.

   active/active configuration
      In a high-availability setup with an active/active configuration,
      several systems share the load together and if one fails, the load
      is distributed to the remaining systems.

   active/passive configuration
      In a high-availability setup with an active/passive configuration,
      systems are set up to bring additional resources online to replace
      those that have failed.

   address pool
      A group of fixed and/or floating IP addresses that are assigned to a
      project and can be used by or assigned to the VM instances in a
      project.

   Address Resolution Protocol (ARP)
      The protocol by which layer-3 IP addresses are resolved into layer-2
      link local addresses.

   admin API
      A subset of API calls that are accessible to authorized
      administrators and are generally not accessible to end users or the
      public Internet. They can exist as a separate service (keystone) or
      can be a subset of another API (nova).

   admin server
      In the context of the Identity service, the worker process that
      provides access to the admin API.

   administrator
      The person responsible for installing, configuring, and managing an
      OpenStack cloud.

   Advanced Message Queuing Protocol (AMQP)
      The open standard messaging protocol used by OpenStack components
      for intra-service communications, provided by RabbitMQ, Qpid, or
      ZeroMQ.

   Advanced RISC Machine (ARM)
      Lower power consumption CPU often found in mobile and embedded
      devices. Supported by OpenStack.

   alert
      The Compute service can send alerts through its notification system,
      which includes a facility to create custom notification drivers.
      Alerts can be sent to and displayed on the dashboard.

   allocate
      The process of taking a floating IP address from the address pool so
      it can be associated with a fixed IP on a guest VM instance.

   Amazon Kernel Image (AKI)
      Both a VM container format and disk format. Supported by Image
      service.

   Amazon Machine Image (AMI)
      Both a VM container format and disk format. Supported by Image
      service.

   Amazon Ramdisk Image (ARI)
      Both a VM container format and disk format. Supported by Image
      service.

   Anvil
      A project that ports the shell script-based project named DevStack
      to Python.

   aodh
      Part of the OpenStack :term:`Telemetry service `; provides alarming
      functionality.

   Apache
      The Apache Software Foundation supports the Apache community of
      open-source software projects. These projects provide software
      products for the public good.

   Apache License 2.0
      All OpenStack core projects are provided under the terms of the
      Apache License 2.0 license.

   Apache Web Server
      The most common web server software currently used on the Internet.

   API endpoint
      The daemon, worker, or service that a client communicates with to
      access an API. API endpoints can provide any number of services,
      such as authentication, sales data, performance meters, Compute VM
      commands, census data, and so on.

   API extension
      Custom modules that extend some OpenStack core APIs.

   API extension plug-in
      Alternative term for a Networking plug-in or Networking API
      extension.

   API key
      Alternative term for an API token.

   API server
      Any node running a daemon or worker that provides an API endpoint.

   API token
      Passed to API requests and used by OpenStack to verify that the
      client is authorized to run the requested operation.

   API version
      In OpenStack, the API version for a project is part of the URL. For
      example, ``example.com/nova/v1/foobar``.

   applet
      A Java program that can be embedded into a web page.

   Application Catalog service (murano)
      The project that provides an application catalog service so that
      users can compose and deploy composite environments on an
      application abstraction level while managing the application
      lifecycle.

   Application Programming Interface (API)
      A collection of specifications used to access a service,
      application, or program. Includes service calls, required parameters
      for each call, and the expected return values.

   application server
      A piece of software that makes available another piece of software
      over a network.

   Application Service Provider (ASP)
      Companies that rent specialized applications that help businesses
      and organizations provide additional services with lower cost.

   arptables
      Tool used for maintaining Address Resolution Protocol packet filter
      rules in the Linux kernel firewall modules. Used along with
      iptables, ebtables, and ip6tables in Compute to provide firewall
      services for VMs.

   associate
      The process associating a Compute floating IP address with a fixed
      IP address.

   Asynchronous JavaScript and XML (AJAX)
      A group of interrelated web development techniques used on the
      client-side to create asynchronous web applications. Used
      extensively in horizon.

   ATA over Ethernet (AoE)
      A disk storage protocol tunneled within Ethernet.

   attach
      The process of connecting a VIF or vNIC to a L2 network in
      Networking. In the context of Compute, this process connects a
      storage volume to an instance.

   attachment (network)
      Association of an interface ID to a logical port. Plugs an interface
      into a port.

   auditing
      Provided in Compute through the system usage data facility.

   auditor
      A worker process that verifies the integrity of Object Storage
      objects, containers, and accounts. Auditors is the collective term
      for the Object Storage account auditor, container auditor, and
      object auditor.

   Austin
      The code name for the initial release of OpenStack. The first design
      summit took place in Austin, Texas, US.

   auth node
      Alternative term for an Object Storage authorization node.

   authentication
      The process that confirms that the user, process, or client is
      really who they say they are through private key, secret token,
      password, fingerprint, or similar method.

   authentication token
      A string of text provided to the client after authentication. Must
      be provided by the user or process in subsequent requests to the API
      endpoint.

   AuthN
      The Identity service component that provides authentication
      services.

   authorization
      The act of verifying that a user, process, or client is authorized
      to perform an action.

   authorization node
      An Object Storage node that provides authorization services.

   AuthZ
      The Identity component that provides high-level authorization
      services.

   Auto ACK
      Configuration setting within RabbitMQ that enables or disables
      message acknowledgment. Enabled by default.

   auto declare
      A Compute RabbitMQ setting that determines whether a message
      exchange is automatically created when the program starts.

   availability zone
      An Amazon EC2 concept of an isolated area that is used for fault
      tolerance. Do not confuse with an OpenStack Compute zone or cell.
   AWS CloudFormation template
      AWS CloudFormation allows Amazon Web Services (AWS) users to create
      and manage a collection of related resources. The Orchestration
      service supports a CloudFormation-compatible format (CFN).

B
~

.. glossary::

   back end
      Interactions and processes that are obfuscated from the user, such
      as Compute volume mount, data transmission to an iSCSI target by a
      daemon, or Object Storage object integrity checks.

   back-end catalog
      The storage method used by the Identity service catalog service to
      store and retrieve information about API endpoints that are
      available to the client. Examples include an SQL database, LDAP
      database, or KVS back end.

   back-end store
      The persistent data store used to save and retrieve information for
      a service, such as lists of Object Storage objects, current state of
      guest VMs, lists of user names, and so on. Also, the method that the
      Image service uses to get and store VM images. Options include
      Object Storage, locally mounted file system, RADOS block devices,
      VMware datastore, and HTTP.

   Backup, Restore, and Disaster Recovery service (freezer)
      The project that provides integrated tooling for backing up,
      restoring, and recovering file systems, instances, or database
      backups.

   bandwidth
      The amount of available data used by communication resources, such
      as the Internet. Represents the amount of data that is used to
      download things or the amount of data available to download.

   barbican
      Code name of the :term:`Key Manager service `.

   bare
      An Image service container format that indicates that no container
      exists for the VM image.

   Bare Metal service (ironic)
      The OpenStack service that provides a service and associated
      libraries capable of managing and provisioning physical machines in
      a security-aware and fault-tolerant manner.

   base image
      An OpenStack-provided image.

   Bell-LaPadula model
      A security model that focuses on data confidentiality and controlled
      access to classified information. This model divides the entities
      into subjects and objects. The clearance of a subject is compared to
      the classification of the object to determine if the subject is
      authorized for the specific access mode. The clearance or
      classification scheme is expressed in terms of a lattice.

   Benchmark service (rally)
      OpenStack project that provides a framework for performance analysis
      and benchmarking of individual OpenStack components as well as full
      production OpenStack cloud deployments.

   Bexar
      A grouped release of projects related to OpenStack that came out in
      February of 2011. It included only Compute (nova) and Object Storage
      (swift). Bexar is the code name for the second release of OpenStack.
      The design summit took place in San Antonio, Texas, US, which is the
      county seat for Bexar county.

   binary
      Information that consists solely of ones and zeroes, which is the
      language of computers.

   bit
      A bit is a single digit number that is in base of 2 (either a zero
      or one). Bandwidth usage is measured in bits per second.

   bits per second (BPS)
      The universal measurement of how quickly data is transferred from
      place to place.

   block device
      A device that moves data in the form of blocks. These device nodes
      interface the devices, such as hard disks, CD-ROM drives, flash
      drives, and other addressable regions of memory.

   block migration
      A method of VM live migration used by KVM to evacuate instances from
      one host to another with very little downtime during a
      user-initiated switchover. Does not require shared storage.
      Supported by Compute.

   Block Storage API
      An API on a separate endpoint for attaching, detaching, and creating
      block storage for compute VMs.

   Block Storage service (cinder)
      The OpenStack service that implements services and libraries to
      provide on-demand, self-service access to Block Storage resources
      via abstraction and automation on top of other block storage
      devices.

   BMC (Baseboard Management Controller)
      The intelligence in the IPMI architecture, which is a specialized
      micro-controller that is embedded on the motherboard of a computer
      and acts as a server. Manages the interface between system
      management software and platform hardware.

   bootable disk image
      A type of VM image that exists as a single, bootable file.

   Bootstrap Protocol (BOOTP)
      A network protocol used by a network client to obtain an IP address
      from a configuration server. Provided in Compute through the dnsmasq
      daemon when using either the FlatDHCP manager or VLAN manager
      network manager.

   Border Gateway Protocol (BGP)
      The Border Gateway Protocol is a dynamic routing protocol that
      connects autonomous systems. Considered the backbone of the
      Internet, this protocol connects disparate networks to form a larger
      network.

   browser
      Any client software that enables a computer or device to access the
      Internet.

   builder file
      Contains configuration information that Object Storage uses to
      reconfigure a ring or to re-create it from scratch after a serious
      failure.

   bursting
      The practice of utilizing a secondary environment to elastically
      build instances on-demand when the primary environment is resource
      constrained.

   button class
      A group of related button types within horizon. Buttons to start,
      stop, and suspend VMs are in one class. Buttons to associate and
      disassociate floating IP addresses are in another class, and so on.

   byte
      Set of bits that make up a single character; there are usually 8
      bits to a byte.

C
~

.. glossary::

   cache pruner
      A program that keeps the Image service VM image cache at or below
      its configured maximum size.

   Cactus
      An OpenStack grouped release of projects that came out in the spring
      of 2011. It included Compute (nova), Object Storage (swift), and the
      Image service (glance). Cactus is a city in Texas, US and is the
      code name for the third release of OpenStack. When OpenStack
      releases went from three to six months long, the code name of the
      release changed to match a geography nearest the previous summit.

   CALL
      One of the RPC primitives used by the OpenStack message queue
      software. Sends a message and waits for a response.

   capability
      Defines resources for a cell, including CPU, storage, and
      networking. Can apply to the specific services within a cell or a
      whole cell.

   capacity cache
      A Compute back-end database table that contains the current
      workload, amount of free RAM, and number of VMs running on each
      host. Used to determine on which host a VM starts.

   capacity updater
      A notification driver that monitors VM instances and updates the
      capacity cache as needed.

   CAST
      One of the RPC primitives used by the OpenStack message queue
      software. Sends a message and does not wait for a response.

   catalog
      A list of API endpoints that are available to a user after
      authentication with the Identity service.

   catalog service
      An Identity service that lists API endpoints that are available to a
      user after authentication with the Identity service.

   ceilometer
      Part of the OpenStack :term:`Telemetry service `; gathers and stores
      metrics from other OpenStack services.

   cell
      Provides logical partitioning of Compute resources in a child and
      parent relationship. Requests are passed from parent cells to child
      cells if the parent cannot provide the requested resource.

   cell forwarding
      A Compute option that enables parent cells to pass resource requests
      to child cells if the parent cannot provide the requested resource.

   cell manager
      The Compute component that contains a list of the current
      capabilities of each host within the cell and routes requests as
      appropriate.

   CentOS
      A Linux distribution that is compatible with OpenStack.

   Ceph
      Massively scalable distributed storage system that consists of an
      object store, block store, and POSIX-compatible distributed file
      system. Compatible with OpenStack.

   CephFS
      The POSIX-compliant file system provided by Ceph.

   certificate authority (CA)
      In cryptography, an entity that issues digital certificates. The
      digital certificate certifies the ownership of a public key by the
      named subject of the certificate. This enables others (relying
      parties) to rely upon signatures or assertions made by the private
      key that corresponds to the certified public key. In this model of
      trust relationships, a CA is a trusted third party for both the
      subject (owner) of the certificate and the party relying upon the
      certificate. CAs are characteristic of many public key
      infrastructure (PKI) schemes. In OpenStack, a simple certificate
      authority is provided by Compute for cloudpipe VPNs and VM image
      decryption.

   Challenge-Handshake Authentication Protocol (CHAP)
      An iSCSI authentication method supported by Compute.

   chance scheduler
      A scheduling method used by Compute that randomly chooses an
      available host from the pool.

   changes since
      A Compute API parameter that downloads changes to the requested item
      since your last request, instead of downloading a new, fresh set of
      data and comparing it against the old data.

   Chef
      An operating system configuration management tool supporting
      OpenStack deployments.

   child cell
      If a requested resource such as CPU time, disk storage, or memory is
      not available in the parent cell, the request is forwarded to its
      associated child cells. If the child cell can fulfill the request,
      it does. Otherwise, it attempts to pass the request to any of its
      children.

   cinder
      Codename for :term:`Block Storage service `.

   CirrOS
      A minimal Linux distribution designed for use as a test image on
      clouds such as OpenStack.

   Cisco neutron plug-in
      A Networking plug-in for Cisco devices and technologies, including
      UCS and Nexus.

   cloud architect
      A person who plans, designs, and oversees the creation of clouds.

   Cloud Auditing Data Federation (CADF)
      Cloud Auditing Data Federation (CADF) is a specification for audit
      event data. CADF is supported by OpenStack Identity.
   cloud computing
      A model that enables access to a shared pool of configurable
      computing resources, such as networks, servers, storage,
      applications, and services, that can be rapidly provisioned and
      released with minimal management effort or service provider
      interaction.

   cloud controller
      Collection of Compute components that represent the global state of
      the cloud; talks to services, such as Identity authentication,
      Object Storage, and node/storage workers through a queue.

   cloud controller node
      A node that runs network, volume, API, scheduler, and image
      services. Each service may be broken out into separate nodes for
      scalability or availability.

   Cloud Data Management Interface (CDMI)
      SINA standard that defines a RESTful API for managing objects in the
      cloud, currently unsupported in OpenStack.

   Cloud Infrastructure Management Interface (CIMI)
      An in-progress specification for cloud management. Currently
      unsupported in OpenStack.

   cloud-init
      A package commonly installed in VM images that performs
      initialization of an instance after boot using information that it
      retrieves from the metadata service, such as the SSH public key and
      user data.

   cloudadmin
      One of the default roles in the Compute RBAC system. Grants complete
      system access.

   Cloudbase-Init
      A Windows project providing guest initialization features, similar
      to cloud-init.

   cloudpipe
      A compute service that creates VPNs on a per-project basis.

   cloudpipe image
      A pre-made VM image that serves as a cloudpipe server. Essentially,
      OpenVPN running on Linux.

   Clustering service (senlin)
      The project that implements clustering services and libraries for
      the management of groups of homogeneous objects exposed by other
      OpenStack services.

   command filter
      Lists allowed commands within the Compute rootwrap facility.

   Common Internet File System (CIFS)
      A file sharing protocol. It is a public or open variation of the
      original Server Message Block (SMB) protocol developed and used by
      Microsoft. Like the SMB protocol, CIFS runs at a higher level and
      uses the TCP/IP protocol.

   Common Libraries (oslo)
      The project that produces a set of python libraries containing code
      shared by OpenStack projects. The APIs provided by these libraries
      should be high quality, stable, consistent, documented and generally
      applicable.

   community project
      A project that is not officially endorsed by the OpenStack
      Foundation. If the project is successful enough, it might be
      elevated to an incubated project and then to a core project, or it
      might be merged with the main code trunk.

   compression
      Reducing the size of files by special encoding; the file can be
      decompressed again to its original content. OpenStack supports
      compression at the Linux file system level but does not support
      compression for things such as Object Storage objects or Image
      service VM images.

   Compute API (Nova API)
      The nova-api daemon provides access to nova services. Can
      communicate with other APIs, such as the Amazon EC2 API.

   compute controller
      The Compute component that chooses suitable hosts on which to start
      VM instances.

   compute host
      Physical host dedicated to running compute nodes.

   compute node
      A node that runs the nova-compute daemon that manages VM instances
      that provide a wide range of services, such as web applications and
      analytics.

   Compute service (nova)
      The OpenStack core project that implements services and associated
      libraries to provide massively-scalable, on-demand, self-service
      access to compute resources, including bare metal, virtual machines,
      and containers.

   compute worker
      The Compute component that runs on each compute node and manages the
      VM instance lifecycle, including run, reboot, terminate,
      attach/detach volumes, and so on. Provided by the nova-compute
      daemon.

   concatenated object
      A set of segment objects that Object Storage combines and sends to
      the client.

   conductor
      In Compute, conductor is the process that proxies database requests
      from the compute process. Using conductor improves security because
      compute nodes do not need direct access to the database.

   congress
      Code name for the :term:`Governance service `.

   consistency window
      The amount of time it takes for a new Object Storage object to
      become accessible to all clients.

   console log
      Contains the output from a Linux VM console in Compute.

   container
      Organizes and stores objects in Object Storage. Similar to the
      concept of a Linux directory but cannot be nested. Alternative term
      for an Image service container format.

   container auditor
      Checks for missing replicas or incorrect objects in specified Object
      Storage containers through queries to the SQLite back-end database.

   container database
      A SQLite database that stores Object Storage containers and
      container metadata. The container server accesses this database.

   container format
      A wrapper used by the Image service that contains a VM image and its
      associated metadata, such as machine state, OS disk size, and so on.

   Container Infrastructure Management service (magnum)
      The project which provides a set of services for provisioning,
      scaling, and managing container orchestration engines.

   container server
      An Object Storage server that manages containers.

   container service
      The Object Storage component that provides container services, such
      as create, delete, list, and so on.

   content delivery network (CDN)
      A content delivery network is a specialized network that is used to
      distribute content to clients, typically located close to the client
      for increased performance.

   controller node
      Alternative term for a cloud controller node.

   core API
      Depending on context, the core API is either the OpenStack API or
      the main API of a specific core project, such as Compute,
      Networking, Image service, and so on.

   core service
      An official OpenStack service defined as core by DefCore Committee.
      Currently, consists of Block Storage service (cinder), Compute
      service (nova), Identity service (keystone), Image service (glance),
      Networking service (neutron), and Object Storage service (swift).

   cost
      Under the Compute distributed scheduler, this is calculated by
      looking at the capabilities of each host relative to the flavor of
      the VM instance being requested.

   credentials
      Data that is only known to or accessible by a user and used to
      verify that the user is who he says he is. Credentials are presented
      to the server during authentication. Examples include a password,
      secret key, digital certificate, and fingerprint.

   CRL
      A Certificate Revocation List (CRL) in a PKI model is a list of
      certificates that have been revoked. End entities presenting these
      certificates should not be trusted.

   Cross-Origin Resource Sharing (CORS)
      A mechanism that allows many resources (for example, fonts,
      JavaScript) on a web page to be requested from another domain
      outside the domain from which the resource originated. In
      particular, JavaScript's AJAX calls can use the XMLHttpRequest
      mechanism.

   Crowbar
      An open source community project by SUSE that aims to provide all
      necessary services to quickly deploy and manage clouds.

   current workload
      An element of the Compute capacity cache that is calculated based on
      the number of build, snapshot, migrate, and resize operations
      currently in progress on a given host.

   customer
      Alternative term for project.

   customization module
      A user-created Python module that is loaded by horizon to change the
      look and feel of the dashboard.

D
~

.. glossary::

   daemon
      A process that runs in the background and waits for requests. May or
      may not listen on a TCP or UDP port. Do not confuse with a worker.

   Dashboard (horizon)
      OpenStack project which provides an extensible, unified, web-based
      user interface for all OpenStack services.

   data encryption
      Both Image service and Compute support encrypted virtual machine
      (VM) images (but not instances). In-transit data encryption is
      supported in OpenStack using technologies such as HTTPS, SSL, TLS,
      and SSH. Object Storage does not support object encryption at the
      application level but may support storage that uses disk encryption.

   Data loss prevention (DLP) software
      Software programs used to protect sensitive information and prevent
      it from leaking outside a network boundary through the detection and
      denying of the data transportation.

   Data Processing service (sahara)
      OpenStack project that provides a scalable data-processing stack and
      associated management interfaces.

   data store
      A database engine supported by the Database service.

   database ID
      A unique ID given to each replica of an Object Storage database.

   database replicator
      An Object Storage component that copies changes in the account,
      container, and object databases to other nodes.

   Database service (trove)
      An integrated project that provides scalable and reliable Cloud
      Database-as-a-Service functionality for both relational and
      non-relational database engines.

   deallocate
      The process of removing the association between a floating IP
      address and a fixed IP address. Once this association is removed,
      the floating IP returns to the address pool.

   Debian
      A Linux distribution that is compatible with OpenStack.

   deduplication
      The process of finding duplicate data at the disk block, file,
      and/or object level to minimize storage use; currently unsupported
      within OpenStack.

   default panel
      The default panel that is displayed when a user accesses the
      dashboard.

   default project
      New users are assigned to this project if no project is specified
      when a user is created.

   default token
      An Identity service token that is not associated with a specific
      project and is exchanged for a scoped token.

   delayed delete
      An option within Image service so that an image is deleted after a
      predefined number of seconds instead of immediately.
delivery mode Setting for the Compute RabbitMQ message delivery mode; can be set to either transient or persistent. denial of service (DoS) Denial of service (DoS) is a short form for denial-of-service attack. This is a malicious attempt to prevent legitimate users from using a service. deprecated auth An option within Compute that enables administrators to create and manage users through the ``nova-manage`` command as opposed to using the Identity service. designate Code name for the :term:`DNS service `. Desktop-as-a-Service A platform that provides a suite of desktop environments that users access to receive a desktop experience from any location. This may provide general use, development, or even homogeneous testing environments. developer One of the default roles in the Compute RBAC system and the default role assigned to a new user. device ID Maps Object Storage partitions to physical storage devices. device weight Distributes partitions proportionately across Object Storage devices based on the storage capacity of each device. DevStack Community project that uses shell scripts to quickly build complete OpenStack development environments. DHCP agent OpenStack Networking agent that provides DHCP services for virtual networks. Diablo A grouped release of projects related to OpenStack that came out in the fall of 2011, the fourth release of OpenStack. It included Compute (nova 2011.3), Object Storage (swift 1.4.3), and the Image service (glance). Diablo is the code name for the fourth release of OpenStack. The design summit took place in the Bay Area near Santa Clara, California, US and Diablo is a nearby city. direct consumer An element of the Compute RabbitMQ that comes to life when a RPC call is executed. It connects to a direct exchange through a unique exclusive queue, sends the message, and terminates. direct exchange A routing table that is created within the Compute RabbitMQ during RPC calls; one is created for each RPC call that is invoked. 
direct publisher Element of RabbitMQ that provides a response to an incoming MQ message. disassociate The process of removing the association between a floating IP address and fixed IP and thus returning the floating IP address to the address pool. Discretionary Access Control (DAC) Governs the ability of subjects to access objects, while enabling users to make policy decisions and assign security attributes. The traditional UNIX system of users, groups, and read-write-execute permissions is an example of DAC. disk encryption The ability to encrypt data at the file system, disk partition, or whole-disk level. Supported within Compute VMs. disk format The underlying format that a disk image for a VM is stored as within the Image service back-end store. For example, AMI, ISO, QCOW2, VMDK, and so on. dispersion In Object Storage, tools to test and ensure dispersion of objects and containers to ensure fault tolerance. distributed virtual router (DVR) Mechanism for highly available multi-host routing when using OpenStack Networking (neutron). Django A web framework used extensively in horizon. DNS record A record that specifies information about a particular domain and belongs to the domain. DNS service (designate) OpenStack project that provides scalable, on demand, self service access to authoritative DNS services, in a technology-agnostic manner. dnsmasq Daemon that provides DNS, DHCP, BOOTP, and TFTP services for virtual networks. domain An Identity API v3 entity. Represents a collection of projects, groups and users that defines administrative boundaries for managing OpenStack Identity entities. On the Internet, separates a website from other sites. Often, the domain name has two or more parts that are separated by dots. For example, yahoo.com, usa.gov, harvard.edu, or mail.yahoo.com. Also, a domain is an entity or container of all DNS-related information containing one or more records. 
   Domain Name System (DNS)
      A system by which Internet domain name-to-address and address-to-name resolutions are determined.

      DNS helps navigate the Internet by translating the IP address into an address that is easier to remember. For example, translating 111.111.111.1 into www.yahoo.com.

      All domains and their components, such as mail servers, utilize DNS to resolve to the appropriate locations. DNS servers are usually set up in a master-slave relationship such that failure of the master invokes the slave. DNS servers might also be clustered or replicated such that changes made to one DNS server are automatically propagated to other active servers.

      In Compute, the support that enables associating DNS entries with floating IP addresses, nodes, or cells so that hostnames are consistent across reboots.

   download
      The transfer of data, usually in the form of files, from one computer to another.

   durable exchange
      The Compute RabbitMQ message exchange that remains active when the server restarts.

   durable queue
      A Compute RabbitMQ message queue that remains active when the server restarts.

   Dynamic Host Configuration Protocol (DHCP)
      A network protocol that configures devices that are connected to a network so that they can communicate on that network by using the Internet Protocol (IP). The protocol is implemented in a client-server model where DHCP clients request configuration data, such as an IP address, a default route, and one or more DNS server addresses from a DHCP server.

      A method to automatically configure networking for a host at boot time. Provided by both Networking and Compute.

   Dynamic HyperText Markup Language (DHTML)
      Pages that use HTML, JavaScript, and Cascading Style Sheets to enable users to interact with a web page or show simple animation.

E
~

.. glossary::

   east-west traffic
      Network traffic between servers in the same cloud or data center. See also north-south traffic.
   EBS boot volume
      An Amazon EBS storage volume that contains a bootable VM image, currently unsupported in OpenStack.

   ebtables
      Filtering tool for a Linux bridging firewall, enabling filtering of network traffic passing through a Linux bridge. Used in Compute along with arptables, iptables, and ip6tables to ensure isolation of network communications.

   EC2
      The Amazon commercial compute product, similar to Compute.

   EC2 access key
      Used along with an EC2 secret key to access the Compute EC2 API.

   EC2 API
      OpenStack supports accessing the Amazon EC2 API through Compute.

   EC2 Compatibility API
      A Compute component that enables OpenStack to communicate with Amazon EC2.

   EC2 secret key
      Used along with an EC2 access key when communicating with the Compute EC2 API; used to digitally sign each request.

   Elastic Block Storage (EBS)
      The Amazon commercial block storage product.

   encapsulation
      The practice of placing one packet type within another for the purposes of abstracting or securing data. Examples include GRE, MPLS, or IPsec.

   encryption
      OpenStack supports encryption technologies such as HTTPS, SSH, SSL, TLS, digital certificates, and data encryption.

   endpoint
      See API endpoint.

   endpoint registry
      Alternative term for an Identity service catalog.

   endpoint template
      A list of URL and port number endpoints that indicate where a service, such as Object Storage, Compute, Identity, and so on, can be accessed.

   entity
      Any piece of hardware or software that wants to connect to the network services provided by Networking, the network connectivity service. An entity can make use of Networking by implementing a VIF.

   ephemeral image
      A VM image that does not save changes made to its volumes and reverts them to their original state after the instance is terminated.

   ephemeral volume
      Volume that does not save the changes made to it and reverts to its original state when the current user relinquishes control.
   Essex
      A grouped release of projects related to OpenStack that came out in April 2012, the fifth release of OpenStack. It included Compute (nova 2012.1), Object Storage (swift 1.4.8), Image (glance), Identity (keystone), and Dashboard (horizon).

      Essex is the code name for the fifth release of OpenStack. The design summit took place in Boston, Massachusetts, US and Essex is a nearby city.

   ESXi
      An OpenStack-supported hypervisor.

   ETag
      MD5 hash of an object within Object Storage, used to ensure data integrity.

   euca2ools
      A collection of command-line tools for administering VMs; most are compatible with OpenStack.

   Eucalyptus Kernel Image (EKI)
      Used along with an ERI to create an EMI.

   Eucalyptus Machine Image (EMI)
      VM image container format supported by Image service.

   Eucalyptus Ramdisk Image (ERI)
      Used along with an EKI to create an EMI.

   evacuate
      The process of migrating one or all virtual machine (VM) instances from one host to another, compatible with both shared storage live migration and block migration.

   exchange
      Alternative term for a RabbitMQ message exchange.

   exchange type
      A routing algorithm in the Compute RabbitMQ.

   exclusive queue
      Connected to by a direct consumer in RabbitMQ (Compute); the message can be consumed only by the current connection.

   extended attributes (xattr)
      File system option that enables storage of additional information beyond owner, group, permissions, modification time, and so on. The underlying Object Storage file system must support extended attributes.

   extension
      Alternative term for an API extension or plug-in. In the context of Identity service, this is a call that is specific to the implementation, such as adding support for OpenID.

   external network
      A network segment typically used for instance Internet access.

   extra specs
      Specifies additional requirements when Compute determines where to start a new instance. Examples include a minimum amount of network bandwidth or a GPU.

F
~

.. glossary::

   FakeLDAP
      An easy method to create a local LDAP directory for testing Identity and Compute. Requires Redis.

   fan-out exchange
      Within RabbitMQ and Compute, it is the messaging interface that is used by the scheduler service to receive capability messages from the compute, volume, and network nodes.

   federated identity
      A method to establish trusts between identity providers and the OpenStack cloud.

   Fedora
      A Linux distribution compatible with OpenStack.

   Fibre Channel
      Storage protocol similar in concept to TCP/IP; encapsulates SCSI commands and data.

   Fibre Channel over Ethernet (FCoE)
      The fibre channel protocol tunneled within Ethernet.

   fill-first scheduler
      The Compute scheduling method that attempts to fill a host with VMs rather than starting new VMs on a variety of hosts.

   filter
      The step in the Compute scheduling process when hosts that cannot run VMs are eliminated and not chosen.

   firewall
      Used to restrict communications between hosts and/or nodes, implemented in Compute using iptables, arptables, ip6tables, and ebtables.

   FireWall-as-a-Service (FWaaS)
      A Networking extension that provides perimeter firewall functionality.

   fixed IP address
      An IP address that is associated with the same instance each time that instance boots, is generally not accessible to end users or the public Internet, and is used for management of the instance.

   Flat Manager
      The Compute component that gives IP addresses to authorized nodes and assumes DHCP, DNS, and routing configuration and services are provided by something else.

   flat mode injection
      A Compute networking method where the OS network configuration information is injected into the VM image before the instance starts.

   flat network
      Virtual network type that uses neither VLANs nor tunnels to segregate project traffic. Each flat network typically requires a separate underlying physical interface defined by bridge mappings. However, a flat network can contain multiple subnets.
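The ETag entry above notes that Object Storage uses the MD5 hash of an object to ensure data integrity. A minimal client-side sketch of that check (the object body here is invented for illustration; a real client would compare against the ``ETag`` header returned by Swift):

```python
import hashlib

# Hypothetical object body; in practice this is the payload you uploaded
# or downloaded.
body = b"example object contents"

# The ETag for a plain object is the MD5 hex digest of its contents, so a
# client can recompute it locally and compare it with the returned header.
local_etag = hashlib.md5(body).hexdigest()
print(len(local_etag))  # 32: an MD5 hex digest is always 32 hex characters
```

Note that for large (segmented) objects the ETag is computed differently, so this comparison applies to plain objects only.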
   FlatDHCP Manager
      The Compute component that provides dnsmasq (DHCP, DNS, BOOTP, TFTP) and radvd (routing) services.

   flavor
      Alternative term for a VM instance type.

   flavor ID
      UUID for each Compute or Image service VM flavor or instance type.

   floating IP address
      An IP address that a project can associate with a VM so that the instance has the same public IP address each time that it boots. You create a pool of floating IP addresses and assign them to instances as they are launched to maintain a consistent IP address for maintaining DNS assignment.

   Folsom
      A grouped release of projects related to OpenStack that came out in the fall of 2012, the sixth release of OpenStack. It includes Compute (nova), Object Storage (swift), Identity (keystone), Networking (neutron), Image service (glance), and Volumes or Block Storage (cinder).

      Folsom is the code name for the sixth release of OpenStack. The design summit took place in San Francisco, California, US and Folsom is a nearby city.

   FormPost
      Object Storage middleware that uploads (posts) an image through a form on a web page.

   freezer
      Code name for the :term:`Backup, Restore, and Disaster Recovery service <Backup, Restore, and Disaster Recovery service (freezer)>`.

   front end
      The point where a user interacts with a service; can be an API endpoint, the dashboard, or a command-line tool.

G
~

.. glossary::

   gateway
      An IP address, typically assigned to a router, that passes network traffic between different networks.

   generic receive offload (GRO)
      Feature of certain network interface drivers that combines many smaller received packets into a large packet before delivery to the kernel IP stack.

   generic routing encapsulation (GRE)
      Protocol that encapsulates a wide variety of network layer protocols inside virtual point-to-point links.

   glance
      Codename for the :term:`Image service`.

   glance API server
      Alternative name for the :term:`Image API`.

   glance registry
      Alternative term for the Image service :term:`image registry`.
   global endpoint template
      The Identity service endpoint template that contains services available to all projects.

   GlusterFS
      A file system designed to aggregate NAS hosts, compatible with OpenStack.

   gnocchi
      Part of the OpenStack :term:`Telemetry service <Telemetry service (ceilometer)>`; provides an indexer and time-series database.

   golden image
      A method of operating system installation where a finalized disk image is created and then used by all nodes without modification.

   Governance service (congress)
      The project that provides Governance-as-a-Service across any collection of cloud services in order to monitor, enforce, and audit policy over dynamic infrastructure.

   Graphic Interchange Format (GIF)
      A type of image file that is commonly used for animated images on web pages.

   Graphics Processing Unit (GPU)
      Choosing a host based on the existence of a GPU is currently unsupported in OpenStack.

   Green Threads
      The cooperative threading model used by Python; reduces race conditions and only context switches when specific library calls are made. Each OpenStack service is its own thread.

   Grizzly
      The code name for the seventh release of OpenStack. The design summit took place in San Diego, California, US and Grizzly is an element of the state flag of California.

   Group
      An Identity v3 API entity. Represents a collection of users that is owned by a specific domain.

   guest OS
      An operating system instance running under the control of a hypervisor.

H
~

.. glossary::

   Hadoop
      Apache Hadoop is an open source software framework that supports data-intensive distributed applications.

   Hadoop Distributed File System (HDFS)
      A distributed, highly fault-tolerant file system designed to run on low-cost commodity hardware.

   handover
      An object state in Object Storage where a new replica of the object is automatically created due to a drive failure.

   HAProxy
      Provides a load balancer for TCP and HTTP-based applications that spreads requests across multiple servers.
   hard reboot
      A type of reboot where a physical or virtual power button is pressed as opposed to a graceful, proper shutdown of the operating system.

   Havana
      The code name for the eighth release of OpenStack. The design summit took place in Portland, Oregon, US and Havana is an unincorporated community in Oregon.

   health monitor
      Determines whether back-end members of a VIP pool can process a request. A pool can have several health monitors associated with it. When a pool has several monitors associated with it, all monitors check each member of the pool. All monitors must declare a member to be healthy for it to stay active.

   heat
      Codename for the :term:`Orchestration service <Orchestration service (heat)>`.

   Heat Orchestration Template (HOT)
      Heat input in the format native to OpenStack.

   high availability (HA)
      A high availability system design approach and associated service implementation ensures that a prearranged level of operational performance will be met during a contractual measurement period. High availability systems seek to minimize system downtime and data loss.

   horizon
      Codename for the :term:`Dashboard <Dashboard (horizon)>`.

   horizon plug-in
      A plug-in for the OpenStack Dashboard (horizon).

   host
      A physical computer, not a VM instance (node).

   host aggregate
      A method to further subdivide availability zones into hypervisor pools, a collection of common hosts.

   Host Bus Adapter (HBA)
      Device plugged into a PCI slot, such as a fibre channel or network card.

   hybrid cloud
      A hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect colocation, managed and/or dedicated services with cloud resources.

   Hyper-V
      One of the hypervisors supported by OpenStack.

   hyperlink
      Any kind of text that contains a link to some other site, commonly found in documents where clicking on a word or words opens up a different website.
   Hypertext Transfer Protocol (HTTP)
      An application protocol for distributed, collaborative, hypermedia information systems. It is the foundation of data communication for the World Wide Web. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext.

   Hypertext Transfer Protocol Secure (HTTPS)
      An encrypted communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol in and of itself; rather, it is the result of simply layering the Hypertext Transfer Protocol (HTTP) on top of the TLS or SSL protocol, thus adding the security capabilities of TLS or SSL to standard HTTP communications. Most OpenStack API endpoints and many inter-component communications support HTTPS communication.

   hypervisor
      Software that arbitrates and controls VM access to the actual underlying hardware.

   hypervisor pool
      A collection of hypervisors grouped together through host aggregates.

I
~

.. glossary::

   Icehouse
      The code name for the ninth release of OpenStack. The design summit took place in Hong Kong and Ice House is a street in that city.

   ID number
      Unique numeric ID associated with each user in Identity, conceptually similar to a Linux or LDAP UID.

   Identity API
      Alternative term for the Identity service API.

   Identity back end
      The source used by Identity service to retrieve user information; an OpenLDAP server, for example.

   identity provider
      A directory service, which allows users to log in with a user name and password. It is a typical source of authentication tokens.

   Identity service (keystone)
      The project that facilitates API client authentication, service discovery, distributed multi-project authorization, and auditing. It provides a central directory of users mapped to the OpenStack services they can access. It also registers endpoints for OpenStack services and acts as a common authentication system.
   Identity service API
      The API used to access the OpenStack Identity service provided through keystone.

   IETF
      Internet Engineering Task Force (IETF) is an open standards organization that develops Internet standards, particularly the standards pertaining to TCP/IP.

   image
      A collection of files for a specific operating system (OS) that you use to create or rebuild a server. OpenStack provides pre-built images. You can also create custom images, or snapshots, from servers that you have launched. Custom images can be used for data backups or as "gold" images for additional servers.

   Image API
      The Image service API endpoint for management of VM images. Processes client requests for VMs, updates Image service metadata on the registry server, and communicates with the store adapter to upload VM images from the back-end store.

   image cache
      Used by Image service to obtain images on the local host rather than re-downloading them from the image server each time one is requested.

   image ID
      Combination of a URI and UUID used to access Image service VM images through the image API.

   image membership
      A list of projects that can access a given VM image within Image service.

   image owner
      The project who owns an Image service virtual machine image.

   image registry
      A list of VM images that are available through Image service.

   Image service (glance)
      The OpenStack service that provides services and associated libraries to store, browse, share, distribute and manage bootable disk images, other data closely associated with initializing compute resources, and metadata definitions.

   image status
      The current status of a VM image in Image service, not to be confused with the status of a running instance.

   image store
      The back-end store used by Image service to store VM images; options include Object Storage, a locally mounted file system, RADOS block devices, VMware datastore, or HTTP.

   image UUID
      UUID used by Image service to uniquely identify each VM image.
   incubated project
      A community project may be elevated to this status and is then promoted to a core project.

   Infrastructure Optimization service (watcher)
      OpenStack project that aims to provide a flexible and scalable resource optimization service for multi-project OpenStack-based clouds.

   Infrastructure-as-a-Service (IaaS)
      IaaS is a provisioning model in which an organization outsources physical components of a data center, such as storage, hardware, servers, and networking components. A service provider owns the equipment and is responsible for housing, operating and maintaining it. The client typically pays on a per-use basis. IaaS is a model for providing cloud services.

   ingress filtering
      The process of filtering incoming network traffic. Supported by Compute.

   INI format
      The OpenStack configuration files use an INI format to describe options and their values. It consists of sections and key-value pairs.

   injection
      The process of putting a file into a virtual machine image before the instance is started.

   Input/Output Operations Per Second (IOPS)
      IOPS are a common performance measurement used to benchmark computer storage devices like hard disk drives, solid state drives, and storage area networks.

   instance
      A running VM, or a VM in a known state such as suspended, that can be used like a hardware server.

   instance ID
      Alternative term for instance UUID.

   instance state
      The current state of a guest VM image.

   instance tunnels network
      A network segment used for instance traffic tunnels between compute nodes and the network node.

   instance type
      Describes the parameters of the various virtual machine images that are available to users; includes parameters such as CPU, storage, and memory. Alternative term for flavor.

   instance type ID
      Alternative term for a flavor ID.

   instance UUID
      Unique ID assigned to each guest VM instance.
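The INI format entry above can be illustrated with a short sketch. The section and option names below are hypothetical, not taken from an actual OpenStack configuration file, and real OpenStack services parse their files through oslo.config rather than ``configparser``; the on-disk shape is the same, though.

```python
import configparser

# A minimal INI-style fragment in the style of an OpenStack config file.
# Sections appear in square brackets; options are key = value pairs.
sample = """
[DEFAULT]
debug = false

[database]
connection = sqlite:///glance.db
"""

config = configparser.ConfigParser()
config.read_string(sample)

print(config.get("DEFAULT", "debug"))        # option from the [DEFAULT] section
print(config.get("database", "connection"))  # option from a named section
```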
   Intelligent Platform Management Interface (IPMI)
      IPMI is a standardized computer system interface used by system administrators for out-of-band management of computer systems and monitoring of their operation. In layman's terms, it is a way to manage a computer using a direct network connection, whether it is turned on or not; connecting to the hardware rather than an operating system or login shell.

   interface
      A physical or virtual device that provides connectivity to another device or medium.

   interface ID
      Unique ID for a Networking VIF or vNIC in the form of a UUID.

   Internet Control Message Protocol (ICMP)
      A network protocol used by network devices for control messages. For example, :command:`ping` uses ICMP to test connectivity.

   Internet protocol (IP)
      Principal communications protocol in the internet protocol suite for relaying datagrams across network boundaries.

   Internet Service Provider (ISP)
      Any business that provides Internet access to individuals or businesses.

   Internet Small Computer System Interface (iSCSI)
      Storage protocol that encapsulates SCSI frames for transport over IP networks. Supported by Compute, Object Storage, and Image service.

   IP address
      Number that is unique to every computer system on the Internet. Two versions of the Internet Protocol (IP) are in use for addresses: IPv4 and IPv6.

   IP Address Management (IPAM)
      The process of automating IP address allocation, deallocation, and management. Currently provided by Compute, melange, and Networking.

   ip6tables
      Tool used to set up, maintain, and inspect the tables of IPv6 packet filter rules in the Linux kernel. In OpenStack Compute, ip6tables is used along with arptables, ebtables, and iptables to create firewalls for both nodes and VMs.

   ipset
      Extension to iptables that allows creation of firewall rules that match entire "sets" of IP addresses simultaneously. These sets reside in indexed data structures to increase efficiency, particularly on systems with a large quantity of rules.
   iptables
      Used along with arptables and ebtables, iptables create firewalls in Compute. iptables are the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores. Different kernel modules and programs are currently used for different protocols: iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames. Requires root privilege to manipulate.

   ironic
      Codename for the :term:`Bare Metal service <Bare Metal service (ironic)>`.

   iSCSI Qualified Name (IQN)
      IQN is the format most commonly used for iSCSI names, which uniquely identify nodes in an iSCSI network. All IQNs follow the pattern iqn.yyyy-mm.domain:identifier, where 'yyyy-mm' is the year and month in which the domain was registered, 'domain' is the reversed domain name of the issuing organization, and 'identifier' is an optional string which makes each IQN under the same domain unique. For example, 'iqn.2015-10.org.openstack.408ae959bce1'.

   ISO9660
      One of the VM image disk formats supported by Image service.

   itsec
      A default role in the Compute RBAC system that can quarantine an instance in any project.

J
~

.. glossary::

   Java
      A programming language that is used to create systems that involve more than one computer by way of a network.

   JavaScript
      A scripting language that is used to build web pages.

   JavaScript Object Notation (JSON)
      One of the supported response formats in OpenStack.

   jumbo frame
      Feature in modern Ethernet networks that supports frames up to approximately 9000 bytes.

   Juno
      The code name for the tenth release of OpenStack. The design summit took place in Atlanta, Georgia, US and Juno is an unincorporated community in Georgia.

K
~

.. glossary::

   Kerberos
      A network authentication protocol which works on the basis of tickets. Kerberos allows nodes to communicate over a non-secure network and to prove their identity to one another in a secure manner.

   kernel-based VM (KVM)
      An OpenStack-supported hypervisor. KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V), ARM, IBM Power, and IBM zSeries. It consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module.

   Key Manager service (barbican)
      The project that produces a secret storage and generation system capable of providing key management for services wishing to enable encryption features.

   keystone
      Codename of the :term:`Identity service <Identity service (keystone)>`.

   Kickstart
      A tool to automate system configuration and installation on Red Hat, Fedora, and CentOS-based Linux distributions.

   Kilo
      The code name for the eleventh release of OpenStack. The design summit took place in Paris, France. Due to delays in the name selection, the release was known only as K. Because ``k`` is the unit symbol for kilo and the kilogram reference artifact is stored near Paris in the Pavillon de Breteuil in Sèvres, the community chose Kilo as the release name.

L
~

.. glossary::

   large object
      An object within Object Storage that is larger than 5 GB.

   Launchpad
      The collaboration site for OpenStack.

   Layer-2 (L2) agent
      OpenStack Networking agent that provides layer-2 connectivity for virtual networks.

   Layer-2 network
      Term used in the OSI network architecture for the data link layer. The data link layer is responsible for media access control, flow control and detecting and possibly correcting errors that may occur in the physical layer.

   Layer-3 (L3) agent
      OpenStack Networking agent that provides layer-3 (routing) services for virtual networks.

   Layer-3 network
      Term used in the OSI network architecture for the network layer. The network layer is responsible for packet forwarding, including routing from one node to another.

   Liberty
      The code name for the twelfth release of OpenStack. The design summit took place in Vancouver, Canada and Liberty is the name of a village in the Canadian province of Saskatchewan.
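The iqn.yyyy-mm.domain:identifier pattern described in the IQN entry above can be checked with a short sketch. The regular expression below is illustrative only, not a complete RFC-grade validator:

```python
import re

# Illustrative pattern for the iqn.yyyy-mm.domain:identifier form described
# above; a real validator would be stricter about the domain component.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:.+)?$")

def looks_like_iqn(name: str) -> bool:
    """Return True if *name* superficially matches the IQN layout."""
    return IQN_RE.match(name) is not None

print(looks_like_iqn("iqn.2015-10.org.openstack.408ae959bce1"))  # True
print(looks_like_iqn("eui.02004567A425678D"))                    # False
```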
   libvirt
      Virtualization API library used by OpenStack to interact with many of its supported hypervisors.

   Lightweight Directory Access Protocol (LDAP)
      An application protocol for accessing and maintaining distributed directory information services over an IP network.

   Linux
      Unix-like computer operating system assembled under the model of free and open-source software development and distribution.

   Linux bridge
      Software that enables multiple VMs to share a single physical NIC within Compute.

   Linux Bridge neutron plug-in
      Enables a Linux bridge to understand a Networking port, interface attachment, and other abstractions.

   Linux containers (LXC)
      An OpenStack-supported hypervisor.

   live migration
      The ability within Compute to move running virtual machine instances from one host to another with only a small service interruption during switchover.

   load balancer
      A load balancer is a logical device that belongs to a cloud account. It is used to distribute workloads between multiple back-end systems or services, based on the criteria defined as part of its configuration.

   load balancing
      The process of spreading client requests between two or more nodes to improve performance and availability.

   Load-Balancer-as-a-Service (LBaaS)
      Enables Networking to distribute incoming requests evenly between designated instances.

   Load-balancing service (octavia)
      The project that aims to provide scalable, on-demand, self-service access to load-balancer services, in a technology-agnostic manner.

   Logical Volume Manager (LVM)
      Provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes.

M
~

.. glossary::

   magnum
      Code name for the :term:`Containers Infrastructure Management service`.

   management API
      Alternative term for an admin API.

   management network
      A network segment used for administration, not accessible to the public Internet.

   manager
      Logical groupings of related code, such as the Block Storage volume manager or network manager.
   manifest
      Used to track segments of a large object within Object Storage.

   manifest object
      A special Object Storage object that contains the manifest for a large object.

   manila
      Codename for OpenStack :term:`Shared File Systems service`.

   manila-share
      Responsible for managing Shared File System Service devices, specifically the back-end devices.

   maximum transmission unit (MTU)
      Maximum frame or packet size for a particular network medium. Typically 1500 bytes for Ethernet networks.

   mechanism driver
      A driver for the Modular Layer 2 (ML2) neutron plug-in that provides layer-2 connectivity for virtual instances. A single OpenStack installation can use multiple mechanism drivers.

   melange
      Project name for OpenStack Network Information Service. To be merged with Networking.

   membership
      The association between an Image service VM image and a project. Enables images to be shared with specified projects.

   membership list
      A list of projects that can access a given VM image within Image service.

   memcached
      A distributed memory object caching system that is used by Object Storage for caching.

   memory overcommit
      The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as RAM overcommit.

   message broker
      The software package used to provide AMQP messaging capabilities within Compute. Default package is RabbitMQ.

   message bus
      The main virtual communication line used by all AMQP messages for inter-cloud communications within Compute.

   message queue
      Passes requests from clients to the appropriate workers and returns the output to the client after the job completes.

   Message service (zaqar)
      The project that provides a messaging service that affords a variety of distributed application patterns in an efficient, scalable and highly available manner, and to create and maintain associated Python libraries and documentation.
   Meta-Data Server (MDS)
      Stores CephFS metadata.

   Metadata agent
      OpenStack Networking agent that provides metadata services for instances.

   migration
      The process of moving a VM instance from one host to another.

   mistral
      Code name for :term:`Workflow service <Workflow service (mistral)>`.

   Mitaka
      The code name for the thirteenth release of OpenStack. The design summit took place in Tokyo, Japan. Mitaka is a city in Tokyo.

   Modular Layer 2 (ML2) neutron plug-in
      Can concurrently use multiple layer-2 networking technologies, such as 802.1Q and VXLAN, in Networking.

   monasca
      Codename for OpenStack :term:`Monitoring <Monitoring (monasca)>`.

   Monitor (LBaaS)
      LBaaS feature that provides availability monitoring using the ``ping`` command, TCP, and HTTP/HTTPS GET.

   Monitor (Mon)
      A Ceph component that communicates with external clients, checks data state and consistency, and performs quorum functions.

   Monitoring (monasca)
      The OpenStack service that provides a multi-project, highly scalable, performant, fault-tolerant monitoring-as-a-service solution for metrics, complex event processing and logging. It builds an extensible platform for advanced monitoring services that can be used by both operators and projects to gain operational insight and visibility, ensuring availability and stability.

   multi-factor authentication
      Authentication method that uses two or more credentials, such as a password and a private key. Currently not supported in Identity.

   multi-host
      High-availability mode for legacy (nova) networking. Each compute node handles NAT and DHCP and acts as a gateway for all of the VMs on it. A networking failure on one compute node doesn't affect VMs on other compute nodes.

   multinic
      Facility in Compute that allows each virtual machine instance to have more than one VIF connected to it.

   murano
      Codename for the :term:`Application Catalog service <Application Catalog service (murano)>`.

N
~

.. glossary::

   Nebula
      Released as open source by NASA in 2010 and is the basis for Compute.

   netadmin
      One of the default roles in the Compute RBAC system. Enables the user to allocate publicly accessible IP addresses to instances and change firewall rules.

   NetApp volume driver
      Enables Compute to communicate with NetApp storage devices through the NetApp OnCommand Provisioning Manager.

   network
      A virtual network that provides connectivity between entities. For example, a collection of virtual ports that share network connectivity. In Networking terminology, a network is always a layer-2 network.

   Network Address Translation (NAT)
      Process of modifying IP address information while in transit. Supported by Compute and Networking.

   network controller
      A Compute daemon that orchestrates the network configuration of nodes, including IP addresses, VLANs, and bridging. Also manages routing for both public and private networks.

   Network File System (NFS)
      A method for making file systems available over the network. Supported by OpenStack.

   network ID
      Unique ID assigned to each network segment within Networking. Same as network UUID.

   network manager
      The Compute component that manages various network components, such as firewall rules, IP address allocation, and so on.

   network namespace
      Linux kernel feature that provides independent virtual networking instances on a single host with separate routing tables and interfaces. Similar to virtual routing and forwarding (VRF) services on physical network equipment.

   network node
      Any compute node that runs the network worker daemon.

   network segment
      Represents a virtual, isolated OSI layer-2 subnet in Networking.

   Network Service Header (NSH)
      Provides a mechanism for metadata exchange along the instantiated service path.

   Network Time Protocol (NTP)
      Method of keeping a clock for a host or node correct via communication with a trusted, accurate time source.

   network UUID
      Unique ID for a Networking network segment.

   network worker
      The ``nova-network`` worker daemon; provides services such as giving an IP address to a booting nova instance.
Networking API (Neutron API) API used to access OpenStack Networking. Provides an extensible architecture to enable custom plug-in creation. Networking service (neutron) The OpenStack project which implements services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction. neutron Codename for OpenStack :term:`Networking service `. neutron API An alternative name for :term:`Networking API `. neutron manager Enables Compute and Networking integration, which enables Networking to perform network management for guest VMs. neutron plug-in Interface within Networking that enables organizations to create custom plug-ins for advanced features, such as QoS, ACLs, or IDS. Newton The code name for the fourteenth release of OpenStack. The design summit took place in Austin, Texas, US. The release is named after "Newton House", which is located at 1013 E. Ninth St., Austin, TX, and is listed on the National Register of Historic Places. Nexenta volume driver Provides support for NexentaStor devices in Compute. NFV Orchestration Service (tacker) OpenStack service that aims to implement Network Function Virtualization (NFV) orchestration services and libraries for end-to-end life-cycle management of network services and Virtual Network Functions (VNFs). Nginx An HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. No ACK Disables server-side message acknowledgment in the Compute RabbitMQ. Increases performance but decreases reliability. node A VM instance that runs on a host. non-durable exchange Message exchange that is cleared when the service restarts. Its data is not written to persistent storage. non-durable queue Message queue that is cleared when the service restarts. Its data is not written to persistent storage. non-persistent volume Alternative term for an ephemeral volume.
north-south traffic Network traffic between a user or client (north) and a server (south), or traffic into the cloud (south) and out of the cloud (north). See also east-west traffic. nova Codename for OpenStack :term:`Compute service `. Nova API Alternative term for the :term:`Compute API `. nova-network A Compute component that manages IP address allocation, firewalls, and other network-related tasks. This is the legacy networking option and an alternative to Networking. O ~ .. glossary:: object A BLOB of data held by Object Storage; can be in any format. object auditor Opens all objects for an object server and verifies the MD5 hash, size, and metadata for each object. object expiration A configurable option within Object Storage to automatically delete objects after a specified amount of time has passed or a certain date is reached. object hash Unique ID for an Object Storage object. object path hash Used by Object Storage to determine the location of an object in the ring. Maps objects to partitions. object replicator An Object Storage component that copies an object to remote partitions for fault tolerance. object server An Object Storage component that is responsible for managing objects. Object Storage API API used to access OpenStack :term:`Object Storage`. Object Storage Device (OSD) The Ceph storage daemon. Object Storage service (swift) The OpenStack core project that provides eventually consistent and redundant storage and retrieval of fixed digital content. object versioning Allows a user to set a flag on an :term:`Object Storage` container so that all objects within the container are versioned. Ocata The code name for the fifteenth release of OpenStack. The design summit will take place in Barcelona, Spain. Ocata is a beach north of Barcelona. Octavia Code name for the :term:`Load-balancing service `. Oldie Term for an :term:`Object Storage` process that runs for a long time. Can indicate a hung process. 
Open Cloud Computing Interface (OCCI) A standardized interface for managing compute, data, and network resources, currently unsupported in OpenStack. Open Virtualization Format (OVF) Standard for packaging VM images. Supported in OpenStack. Open vSwitch Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (for example NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). Open vSwitch (OVS) agent Provides an interface to the underlying Open vSwitch service for the Networking plug-in. Open vSwitch neutron plug-in Provides support for Open vSwitch in Networking. OpenLDAP An open source LDAP server. Supported by both Compute and Identity. OpenStack OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a data center, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. OpenStack is an open source project licensed under the Apache License 2.0. OpenStack code name Each OpenStack release has a code name. Code names ascend in alphabetical order: Austin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, Kilo, Liberty, Mitaka, Newton, Ocata, Pike, Queens, and Rocky. Code names are cities or counties near where the corresponding OpenStack design summit took place. An exception, called the Waldon exception, is granted to elements of the state flag that sound especially cool. Code names are chosen by popular vote. openSUSE A Linux distribution that is compatible with OpenStack. operator The person responsible for planning and maintaining an OpenStack installation. optional service An official OpenStack service defined as optional by the DefCore Committee.
Currently, it consists of Dashboard (horizon), Telemetry service (Telemetry), Orchestration service (heat), Database service (trove), Bare Metal service (ironic), and so on. Orchestration service (heat) The OpenStack service which orchestrates composite cloud applications using a declarative template format through an OpenStack-native REST API. orphan In the context of Object Storage, this is a process that is not terminated after an upgrade, restart, or reload of the service. Oslo Codename for the :term:`Common Libraries project`. P ~ .. glossary:: panko Part of the OpenStack :term:`Telemetry service `; provides event storage. parent cell If a requested resource, such as CPU time, disk storage, or memory, is not available in the parent cell, the request is forwarded to associated child cells. partition A unit of storage within Object Storage used to store objects. It exists on top of devices and is replicated for fault tolerance. partition index Contains the locations of all Object Storage partitions within the ring. partition shift value Used by Object Storage to determine which partition data should reside on. path MTU discovery (PMTUD) Mechanism in IP networks to detect end-to-end MTU and adjust packet size accordingly. pause A VM state where no changes occur (no changes in memory, network communications stop, etc); the VM is frozen but not shut down. PCI passthrough Gives guest VMs exclusive access to a PCI device. Currently supported in OpenStack Havana and later releases. persistent message A message that is stored both in memory and on disk. The message is not lost after a failure or restart. persistent volume Changes to these types of disk volumes are saved. personality file A file used to customize a Compute instance. It can be used to inject SSH keys or a specific network configuration. Pike The code name for the sixteenth release of OpenStack. The design summit will take place in Boston, Massachusetts, US.
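The ``partition shift value`` entry above can be illustrated with a short sketch. This is a deliberately simplified model of how Object Storage maps an object path to a partition, not Swift's exact implementation (the real ring also mixes in a hash-path prefix and suffix); the account, container, and object names are hypothetical.

```python
import hashlib

def object_partition(account, container, obj, part_power=10):
    # Hash the object path and keep only the top `part_power` bits;
    # the low-order bits are discarded via the partition shift.
    path = "/{}/{}/{}".format(account, container, obj)
    top4 = hashlib.md5(path.encode("utf-8")).digest()[:4]
    part_shift = 32 - part_power
    return int.from_bytes(top4, "big") >> part_shift

part = object_partition("AUTH_demo", "photos", "cat.jpg")
assert 0 <= part < 2 ** 10  # partitions range over 2**part_power values
```

The same path always hashes to the same partition, which is what lets every node agree on where an object lives.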
The release is named after the Massachusetts Turnpike, abbreviated commonly as the Mass Pike, which is the easternmost stretch of Interstate 90. Platform-as-a-Service (PaaS) Provides to the consumer an operating system and, often, a language runtime and libraries (collectively, the "platform") upon which they can run their own application code, without providing any control over the underlying infrastructure. Examples of Platform-as-a-Service providers include Cloud Foundry and OpenShift. plug-in Software component providing the actual implementation for Networking APIs, or for Compute APIs, depending on the context. policy service Component of Identity that provides a rule-management interface and a rule-based authorization engine. policy-based routing (PBR) Provides a mechanism to implement packet forwarding and routing according to the policies defined by the network administrator. pool A logical set of devices, such as web servers, that you group together to receive and process traffic. The load balancing function chooses which member of the pool handles the new requests or connections received on the VIP address. Each VIP has one pool. pool member An application that runs on the back-end server in a load-balancing system. port A virtual network port within Networking; VIFs / vNICs are connected to a port. port UUID Unique ID for a Networking port. preseed A tool to automate system configuration and installation on Debian-based Linux distributions. private image An Image service VM image that is only available to specified projects. private IP address An IP address used for management and administration, not available to the public Internet. private network The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A private network interface can be a flat or VLAN network interface. 
A flat network interface is controlled by the ``flat_interface`` option with flat managers. A VLAN network interface is controlled by the ``vlan_interface`` option with VLAN managers. project Projects represent the base unit of "ownership" in OpenStack, in that all resources in OpenStack should be owned by a specific project. In OpenStack Identity, a project must be owned by a specific domain. project ID Unique ID assigned to each project by the Identity service. project VPN Alternative term for a cloudpipe. promiscuous mode Causes the network interface to pass all traffic it receives to the host rather than passing only the frames addressed to it. protected property Generally, extra properties on an Image service image to which only cloud administrators have access. Limits which user roles can perform CRUD operations on that property. The cloud administrator can configure any image property as protected. provider An administrator who has access to all hosts and instances. proxy node A node that provides the Object Storage proxy service. proxy server Users of Object Storage interact with the service through the proxy server, which in turn looks up the location of the requested data within the ring and returns the results to the user. public API An API endpoint used for both service-to-service communication and end-user interactions. public image An Image service VM image that is available to all projects. public IP address An IP address that is accessible to end-users. public key authentication Authentication method that uses keys rather than passwords. public network The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. The public network interface is controlled by the ``public_interface`` option. Puppet An operating system configuration-management tool supported by OpenStack. Python Programming language used extensively in OpenStack.
Q ~ .. glossary:: QEMU Copy On Write 2 (QCOW2) One of the VM image disk formats supported by Image service. Qpid Message queue software supported by OpenStack; an alternative to RabbitMQ. Quality of Service (QoS) The ability to guarantee certain network or storage requirements to satisfy a Service Level Agreement (SLA) between an application provider and end users. Typically includes performance requirements like networking bandwidth, latency, jitter correction, and reliability as well as storage performance in Input/Output Operations Per Second (IOPS), throttling agreements, and performance expectations at peak load. quarantine If Object Storage finds objects, containers, or accounts that are corrupt, they are placed in this state, are not replicated, cannot be read by clients, and a correct copy is re-replicated. Queens The code name for the seventeenth release of OpenStack. The design summit will take place in Sydney, Australia. The release is named after the Queens Pound river in the South Coast region of New South Wales. Quick EMUlator (QEMU) QEMU is a generic and open source machine emulator and virtualizer. One of the hypervisors supported by OpenStack, generally used for development purposes. quota In Compute and Block Storage, the ability to set resource limits on a per-project basis. R ~ .. glossary:: RabbitMQ The default message queue software used by OpenStack. Rackspace Cloud Files Released as open source by Rackspace in 2010; the basis for Object Storage. RADOS Block Device (RBD) Ceph component that enables a Linux block device to be striped over multiple distributed data stores. radvd The router advertisement daemon, used by the Compute VLAN manager and FlatDHCP manager to provide routing services for VM instances. rally Codename for the :term:`Benchmark service`. RAM filter The Compute setting that enables or disables RAM overcommitment. 
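The ``quota`` entry above boils down to checking each request against a per-project ceiling. The sketch below is purely illustrative (it is not Compute's or Block Storage's actual implementation, and the numbers are made up):

```python
def within_quota(used, requested, limit):
    # A request is allowed only if it would not push usage past the limit.
    return used + requested <= limit

assert within_quota(used=8, requested=2, limit=10)      # lands exactly at the limit: allowed
assert not within_quota(used=8, requested=3, limit=10)  # would exceed the limit: rejected
```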
RAM overcommit The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as memory overcommit. rate limit Configurable option within Object Storage to limit database writes on a per-account and/or per-container basis. raw One of the VM image disk formats supported by Image service; an unstructured disk image. rebalance The process of distributing Object Storage partitions across all drives in the ring; used during initial ring creation and after ring reconfiguration. reboot Either a soft or hard reboot of a server. With a soft reboot, the operating system is signaled to restart, which enables a graceful shutdown of all processes. A hard reboot is the equivalent of power cycling the server. The virtualization platform should ensure that the reboot action has completed successfully, even in cases in which the underlying domain/VM is paused or halted/stopped. rebuild Removes all data on the server and replaces it with the specified image. Server ID and IP addresses remain the same. Recon An Object Storage component that collects meters. record Belongs to a particular domain and is used to specify information about the domain. There are several types of DNS records. Each record type contains particular information used to describe the purpose of that record. Examples include mail exchange (MX) records, which specify the mail server for a particular domain; and name server (NS) records, which specify the authoritative name servers for a domain. record ID A number within a database that is incremented each time a change is made. Used by Object Storage when replicating. Red Hat Enterprise Linux (RHEL) A Linux distribution that is compatible with OpenStack. reference architecture A recommended architecture for an OpenStack cloud. 
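Rate limiting, as described in the ``rate limit`` entry above, is commonly implemented with a token bucket: requests spend tokens, and tokens refill at a fixed rate. This is an illustrative sketch only, not Object Storage's actual ratelimit middleware:

```python
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(8)]
# The initial burst of 5 succeeds; further requests must wait for refill.
assert results[:5] == [True] * 5 and results[5] is False
```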
region A discrete OpenStack environment with dedicated API endpoints that typically shares only the Identity (keystone) with other regions. registry Alternative term for the Image service registry. registry server An Image service that provides VM image metadata information to clients. Reliable, Autonomic Distributed Object Store (RADOS) A collection of components that provides object storage within Ceph. Similar to OpenStack Object Storage. Remote Procedure Call (RPC) The method used by the Compute RabbitMQ for intra-service communications. replica Provides data redundancy and fault tolerance by creating copies of Object Storage objects, accounts, and containers so that they are not lost when the underlying storage fails. replica count The number of replicas of the data in an Object Storage ring. replication The process of copying data to a separate physical device for fault tolerance and performance. replicator The Object Storage back-end process that creates and manages object replicas. request ID Unique ID assigned to each request sent to Compute. rescue image A special type of VM image that is booted when an instance is placed into rescue mode. Allows an administrator to mount the file systems for an instance to correct the problem. resize Converts an existing server to a different flavor, which scales the server up or down. The original server is saved to enable rollback if a problem occurs. All resizes must be tested and explicitly confirmed, at which time the original server is removed. RESTful A kind of web service API that uses REST, or Representational State Transfer. REST is the style of architecture for hypermedia systems that is used for the World Wide Web. ring An entity that maps Object Storage data to partitions. A separate ring exists for each service, such as account, object, and container. ring builder Builds and manages rings within Object Storage, assigns partitions to devices, and pushes the configuration to other storage nodes. 
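The ``replica`` and ``replica count`` entries above can be sketched as follows. This naive placement just walks an ordered device list; Swift's real ring builder additionally balances by device weight and spreads replicas across failure domains, and the device names here are invented:

```python
def replica_devices(partition, devices, replica_count=3):
    # Assign `replica_count` distinct consecutive devices to a partition.
    n = len(devices)
    return [devices[(partition + i) % n] for i in range(replica_count)]

devices = ["disk0", "disk1", "disk2", "disk3", "disk4"]
assert replica_devices(7, devices) == ["disk2", "disk3", "disk4"]
```

With three replicas, losing any single device still leaves two intact copies of every partition it held.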
Rocky The code name for the eighteenth release of OpenStack. The design summit will take place in Vancouver, Canada. The release is named after the Rocky Mountains. role A personality that a user assumes to perform a specific set of operations. A role includes a set of rights and privileges. A user assuming that role inherits those rights and privileges. Role Based Access Control (RBAC) Provides a predefined list of actions that the user can perform, such as start or stop VMs, reset passwords, and so on. Supported in both Identity and Compute and can be configured using the dashboard. role ID Alphanumeric ID assigned to each Identity service role. Root Cause Analysis (RCA) service (Vitrage) OpenStack project that aims to organize, analyze and visualize OpenStack alarms and events, yield insights regarding the root cause of problems and deduce their existence before they are directly detected. rootwrap A feature of Compute that allows the unprivileged "nova" user to run a specified list of commands as the Linux root user. round-robin scheduler Type of Compute scheduler that evenly distributes instances among available hosts. router A physical or virtual network device that passes network traffic between different networks. routing key The Compute direct exchanges, fanout exchanges, and topic exchanges use this key to determine how to process a message; processing varies depending on exchange type. RPC driver Modular system that allows the underlying message queue software of Compute to be changed. For example, from RabbitMQ to ZeroMQ or Qpid. rsync Used by Object Storage to push object replicas. RXTX cap Absolute limit on the amount of network traffic a Compute VM instance can send and receive. RXTX quota Soft limit on the amount of network traffic a Compute VM instance can send and receive. S ~ .. glossary:: sahara Codename for the :term:`Data Processing service`. SAML assertion Contains information about a user as provided by the identity provider.
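The ``round-robin scheduler`` entry above is simple to sketch: hosts are handed out in strict rotation, so load evens out over many placements. The host names below are hypothetical:

```python
import itertools

hosts = ["compute-1", "compute-2", "compute-3"]
scheduler = itertools.cycle(hosts)  # hand out hosts in strict rotation

placements = [next(scheduler) for _ in range(5)]
assert placements == ["compute-1", "compute-2", "compute-3",
                      "compute-1", "compute-2"]
```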
It is an indication that a user has been authenticated. scheduler manager A Compute component that determines where VM instances should start. Uses modular design to support a variety of scheduler types. scoped token An Identity service API access token that is associated with a specific project. scrubber Checks for and deletes unused VMs; the component of Image service that implements delayed delete. secret key String of text known only by the user; used along with an access key to make requests to the Compute API. secure boot Process whereby the system firmware validates the authenticity of the code involved in the boot process. secure shell (SSH) Open source tool used to access remote hosts through an encrypted communications channel; SSH key injection is supported by Compute. security group A set of network traffic filtering rules that are applied to a Compute instance. segmented object An Object Storage large object that has been broken up into pieces. The re-assembled object is called a concatenated object. self-service For IaaS, ability for a regular (non-privileged) account to manage a virtual infrastructure component such as networks without involving an administrator. SELinux Linux kernel security module that provides the mechanism for supporting access control policies. senlin Code name for the :term:`Clustering service `. server Computer that provides explicit services to the client software running on that system, often managing a variety of computer operations. A server is a VM instance in the Compute system. Flavor and image are requisite elements when creating a server. server image Alternative term for a VM image. server UUID Unique ID assigned to each guest VM instance. service An OpenStack service, such as Compute, Object Storage, or Image service. Provides one or more endpoints through which users can access resources and perform operations. service catalog Alternative term for the Identity service catalog.
Service Function Chain (SFC) For a given service, SFC is the abstracted view of the required service functions and the order in which they are to be applied. service ID Unique ID assigned to each service that is available in the Identity service catalog. Service Level Agreement (SLA) Contractual obligations that ensure the availability of a service. service project Special project that contains all services that are listed in the catalog. service provider A system that provides services to other system entities. In the case of federated identity, OpenStack Identity is the service provider. service registration An Identity service feature that enables services, such as Compute, to automatically register with the catalog. service token An administrator-defined token used by Compute to communicate securely with the Identity service. session back end The method of storage used by horizon to track client sessions, such as local memory, cookies, a database, or memcached. session persistence A feature of the load-balancing service. It attempts to force subsequent connections to a service to be redirected to the same node as long as it is online. session storage A horizon component that stores and tracks client session information. Implemented through the Django sessions framework. share A remote, mountable file system in the context of the :term:`Shared File Systems service`. You can mount a share to, and access a share from, several hosts by several users at a time. share network An entity in the context of the :term:`Shared File Systems service` that encapsulates interaction with the Networking service. If the driver you selected runs in a mode requiring such interaction, you need to specify the share network to create a share. Shared File Systems API A Shared File Systems service that provides a stable RESTful API. The service authenticates and routes requests throughout the Shared File Systems service. The python-manilaclient library can be used to interact with the API.
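One common way to realize the ``session persistence`` behavior described above is source-IP hashing: the same client address always maps to the same back-end node. A minimal sketch (the node names are made up, and a real load balancer would also handle node failure and rebalancing):

```python
import hashlib

def pick_backend(client_ip, backends):
    # Hash the client address so repeat connections land on the same node.
    digest = hashlib.md5(client_ip.encode("utf-8")).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["node-a", "node-b", "node-c"]
first = pick_backend("10.0.0.5", backends)
assert pick_backend("10.0.0.5", backends) == first  # sticky across connections
```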
Shared File Systems service (manila) The service that provides a set of services for management of shared file systems in a multi-project cloud environment, similar to how OpenStack provides block-based storage management through the OpenStack :term:`Block Storage service` project. With the Shared File Systems service, you can create a remote file system and mount the file system on your instances. You can also read and write data from your instances to and from your file system. shared IP address An IP address that can be assigned to a VM instance within the shared IP group. Public IP addresses can be shared across multiple servers for use in various high-availability scenarios. When an IP address is shared to another server, the cloud network restrictions are modified to enable each server to listen to and respond on that IP address. You can optionally specify that the target server network configuration be modified. Shared IP addresses can be used with many standard heartbeat facilities, such as keepalive, that monitor for failure and manage IP failover. shared IP group A collection of servers that can share IPs with other members of the group. Any server in a group can share one or more public IPs with any other server in the group. With the exception of the first server in a shared IP group, servers must be launched into shared IP groups. A server may be a member of only one shared IP group. shared storage Block storage that is simultaneously accessible by multiple clients, for example, NFS. Sheepdog Distributed block storage system for QEMU, supported by OpenStack. Simple Cloud Identity Management (SCIM) Specification for managing identity in the cloud, currently unsupported by OpenStack. Simple Protocol for Independent Computing Environments (SPICE) SPICE provides remote desktop access to guest virtual machines. It is an alternative to VNC. SPICE is supported by OpenStack. 
Single-root I/O Virtualization (SR-IOV) A specification that, when implemented by a physical PCIe device, enables it to appear as multiple separate PCIe devices. This enables multiple virtualized guests to share direct access to the physical device, offering improved performance over an equivalent virtual device. Currently supported in OpenStack Havana and later releases. SmokeStack Runs automated tests against the core OpenStack API; written in Rails. snapshot A point-in-time copy of an OpenStack storage volume or image. Use storage volume snapshots to back up volumes. Use image snapshots to back up data, or as "gold" images for additional servers. soft reboot A controlled reboot where a VM instance is properly restarted through operating system commands. Software Development Lifecycle Automation service (solum) OpenStack project that aims to make cloud services easier to consume and integrate with application development process by automating the source-to-image process, and simplifying app-centric deployment. Software-defined networking (SDN) Provides an approach for network administrators to manage computer network services through abstraction of lower-level functionality. SolidFire Volume Driver The Block Storage driver for the SolidFire iSCSI storage appliance. solum Code name for the :term:`Software Development Lifecycle Automation service `. spread-first scheduler The Compute VM scheduling algorithm that attempts to start a new VM on the host with the least amount of load. SQLAlchemy An open source SQL toolkit for Python, used in OpenStack. SQLite A lightweight SQL database, used as the default persistent storage method in many OpenStack services. stack A set of OpenStack resources created and managed by the Orchestration service according to a given template (either an AWS CloudFormation template or a Heat Orchestration Template (HOT)). StackTach Community project that captures Compute AMQP communications; useful for debugging. 
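The ``spread-first scheduler`` entry above reduces to picking the least-loaded host for each new VM. The load figures below are invented for illustration:

```python
# Hypothetical current load per compute host (fraction of capacity in use).
load = {"compute-1": 0.7, "compute-2": 0.2, "compute-3": 0.5}

# Spread-first: place the new VM on the host with the least load.
chosen = min(load, key=load.get)
assert chosen == "compute-2"
```

This is the opposite of a fill-first policy, which would pack VMs onto the busiest host that still has room.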
static IP address Alternative term for a fixed IP address. StaticWeb WSGI middleware component of Object Storage that serves container data as a static web page. storage back end The method that a service uses for persistent storage, such as iSCSI, NFS, or local disk. storage manager A XenAPI component that provides a pluggable interface to support a wide variety of persistent storage back ends. storage manager back end A persistent storage method supported by XenAPI, such as iSCSI or NFS. storage node An Object Storage node that provides container services, account services, and object services; controls the account databases, container databases, and object storage. storage services Collective name for the Object Storage object services, container services, and account services. strategy Specifies the authentication source used by Image service or Identity. In the Database service, it refers to the extensions implemented for a data store. subdomain A domain within a parent domain. Subdomains cannot be registered. Subdomains enable you to delegate domains. Subdomains can themselves have subdomains, so third-level, fourth-level, fifth-level, and deeper levels of nesting are possible. subnet Logical subdivision of an IP network. SUSE Linux Enterprise Server (SLES) A Linux distribution that is compatible with OpenStack. suspend The VM instance is paused and its state is saved to disk on the host. swap Disk-based virtual memory used by operating systems to provide more memory than is actually available on the system. swauth An authentication and authorization service for Object Storage, implemented through WSGI middleware; uses Object Storage itself as the persistent backing store. swift Codename for OpenStack :term:`Object Storage service`. swift All in One (SAIO) Creates a full Object Storage development environment within a single VM. swift middleware Collective term for Object Storage components that provide additional functionality.
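The ``subnet`` entry above can be demonstrated with the Python standard library, which models exactly this kind of logical subdivision of an IP network:

```python
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
# Subdivide the /24 into four /26 subnets.
subnets = list(network.subnets(new_prefix=26))
assert len(subnets) == 4
assert str(subnets[1]) == "192.168.1.64/26"
```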
swift proxy server Acts as the gatekeeper to Object Storage and is responsible for authenticating the user. swift storage node A node that runs Object Storage account, container, and object services. sync point Point in time since the last container and accounts database sync among nodes within Object Storage. sysadmin One of the default roles in the Compute RBAC system. Enables a user to add other users to a project, interact with VM images that are associated with the project, and start and stop VM instances. system usage A Compute component that, along with the notification system, collects meters and usage information. This information can be used for billing. T ~ .. glossary:: tacker Code name for the :term:`NFV Orchestration service ` Telemetry service (telemetry) The OpenStack project which collects measurements of the utilization of the physical and virtual resources comprising deployed clouds, persists this data for subsequent retrieval and analysis, and triggers actions when defined criteria are met. TempAuth An authentication facility within Object Storage that enables Object Storage itself to perform authentication and authorization. Frequently used in testing and development. Tempest Automated software test suite designed to run against the trunk of the OpenStack core project. TempURL An Object Storage middleware component that enables creation of URLs for temporary object access. tenant A group of users; used to isolate access to Compute resources. An alternative term for a project. Tenant API An API that is accessible to projects. tenant endpoint An Identity service API endpoint that is associated with one or more projects. tenant ID An alternative term for :term:`project ID`. token An alpha-numeric string of text used to access OpenStack APIs and resources. token services An Identity service component that manages and validates tokens after a user or project has been authenticated. 
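The ``TempURL`` entry above works by signing the request method, expiry time, and object path with a secret key, so the resulting URL grants access only until the expiry passes. The sketch below follows the HMAC-SHA1 scheme Swift documents for temporary URLs; the key, expiry, and path are placeholders:

```python
import hashlib
import hmac

def temp_url_signature(key, method, expires, path):
    # Sign "METHOD\nexpires\npath" with the account's temp-URL key.
    body = "{}\n{}\n{}".format(method, expires, path)
    return hmac.new(key.encode("utf-8"), body.encode("utf-8"),
                    hashlib.sha1).hexdigest()

sig = temp_url_signature("secretkey", "GET", 1700000000,
                         "/v1/AUTH_demo/container/object")
assert len(sig) == 40  # hex-encoded SHA-1 digest
```

The proxy recomputes the same HMAC from the request and rejects the URL if the signatures differ or the expiry time has passed.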
tombstone Used to mark Object Storage objects that have been deleted; ensures that the object is not updated on another node after it has been deleted. topic publisher A process that is created when an RPC call is executed; used to push the message to the topic exchange. Torpedo Community project used to run automated tests against the OpenStack API. transaction ID Unique ID assigned to each Object Storage request; used for debugging and tracing. transient Alternative term for non-durable. transient exchange Alternative term for a non-durable exchange. transient message A message that is stored in memory and is lost after the server is restarted. transient queue Alternative term for a non-durable queue. TripleO OpenStack-on-OpenStack program. The code name for the OpenStack Deployment program. trove Codename for OpenStack :term:`Database service `. trusted platform module (TPM) Specialized microprocessor for incorporating cryptographic keys into devices for authenticating and securing a hardware platform. U ~ .. glossary:: Ubuntu A Debian-based Linux distribution. unscoped token Alternative term for an Identity service default token. updater Collective term for a group of Object Storage components that processes queued and failed updates for containers and objects. user In OpenStack Identity, entities represent individual API consumers and are owned by a specific domain. In OpenStack Compute, a user can be associated with roles, projects, or both. user data A blob of data that the user can specify when they launch an instance. The instance can access this data through the metadata service or config drive. Commonly used to pass a shell script that the instance runs on boot. User Mode Linux (UML) An OpenStack-supported hypervisor. V ~ .. glossary:: VIF UUID Unique ID assigned to each Networking VIF. Virtual Central Processing Unit (vCPU) Subdivides physical CPUs. Instances can then use those divisions.
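The ``user data`` entry above typically carries a boot script, and APIs generally expect the blob base64-encoded for transport. A minimal sketch with a made-up script:

```python
import base64

script = "#!/bin/sh\necho 'configured at boot' > /tmp/marker\n"
# Encode the script for transport in an API request ...
payload = base64.b64encode(script.encode("utf-8")).decode("ascii")
# ... and the instance-side decode recovers it unchanged.
assert base64.b64decode(payload).decode("utf-8") == script
```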
Virtual Disk Image (VDI) One of the VM image disk formats supported by Image service. Virtual Extensible LAN (VXLAN) A network virtualization technology that attempts to reduce the scalability problems associated with large cloud computing deployments. It uses a VLAN-like encapsulation technique to encapsulate Ethernet frames within UDP packets. Virtual Hard Disk (VHD) One of the VM image disk formats supported by Image service. virtual IP address (VIP) An Internet Protocol (IP) address configured on the load balancer for use by clients connecting to a service that is load balanced. Incoming connections are distributed to back-end nodes based on the configuration of the load balancer. virtual machine (VM) An operating system instance that runs on top of a hypervisor. Multiple VMs can run at the same time on the same physical host. virtual network An L2 network segment within Networking. Virtual Network Computing (VNC) Open source GUI and CLI tools used for remote console access to VMs. Supported by Compute. Virtual Network InterFace (VIF) An interface that is plugged into a port in a Networking network. Typically a virtual network interface belonging to a VM. virtual networking A generic term for virtualization of network functions such as switching, routing, load balancing, and security using a combination of VMs and overlays on physical network infrastructure. virtual port Attachment point where a virtual interface connects to a virtual network. virtual private network (VPN) Provided by Compute in the form of cloudpipes, specialized instances that are used to create VPNs on a per-project basis. virtual server Alternative term for a VM or guest. virtual switch (vSwitch) Software that runs on a host or node and provides the features and functions of a hardware-based network switch. virtual VLAN Alternative term for a virtual network. VirtualBox An OpenStack-supported hypervisor. Vitrage Code name for the :term:`Root Cause Analysis service `. 
VLAN manager A Compute component that provides dnsmasq and radvd and sets up forwarding to and from cloudpipe instances. VLAN network The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A VLAN network is a private network interface, which is controlled by the ``vlan_interface`` option with VLAN managers. VM disk (VMDK) One of the VM image disk formats supported by Image service. VM image Alternative term for an image. VM Remote Control (VMRC) Method to access VM instance consoles using a web browser. Supported by Compute. VMware API Supports interaction with VMware products in Compute. VMware NSX Neutron plug-in Provides support for VMware NSX in Neutron. VNC proxy A Compute component that provides users access to the consoles of their VM instances through VNC or VMRC. volume Disk-based data storage generally represented as an iSCSI target with a file system that supports extended attributes; can be persistent or ephemeral. Volume API Alternative name for the Block Storage API. volume controller A Block Storage component that oversees and coordinates storage volume actions. volume driver Alternative term for a volume plug-in. volume ID Unique ID applied to each storage volume under the Block Storage control. volume manager A Block Storage component that creates, attaches, and detaches persistent storage volumes. volume node A Block Storage node that runs the cinder-volume daemon. volume plug-in Provides support for new and specialized types of back-end storage for the Block Storage volume manager. volume worker A cinder component that interacts with back-end storage to manage the creation and deletion of volumes and the creation of compute volumes, provided by the cinder-volume daemon. vSphere An OpenStack-supported hypervisor. W ~ .. glossary:: Watcher Code name for the :term:`Infrastructure Optimization service `. 
weight Used by Object Storage devices to determine which storage devices are suitable for the job. Devices are weighted by size. weighted cost The sum of each cost used when deciding where to start a new VM instance in Compute. weighting A Compute process that determines the suitability of the VM instances for a job for a particular host. For example, not enough RAM on the host, too many CPUs on the host, and so on. worker A daemon that listens to a queue and carries out tasks in response to messages. For example, the cinder-volume worker manages volume creation and deletion on storage arrays. Workflow service (mistral) The OpenStack service that provides a simple YAML-based language to write workflows (tasks and transition rules) and a service that allows them to be uploaded, modified, and run at scale in a highly available manner, and that manages and monitors workflow execution state and the state of individual tasks. X ~ .. glossary:: X.509 X.509 is the most widely used standard for defining digital certificates. It is a data structure that contains identifiable information about the subject (entity), such as its name, along with its public key. The certificate can contain a few other attributes as well depending upon the version. The most recent and standard version of X.509 is v3. Xen Xen is a hypervisor using a microkernel design, providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently. Xen API The Xen administrative API, which is supported by Compute. Xen Cloud Platform (XCP) An OpenStack-supported hypervisor. Xen Storage Manager Volume Driver A Block Storage volume plug-in that enables communication with the Xen Storage Manager API. XenServer An OpenStack-supported hypervisor. XFS High-performance 64-bit file system created by Silicon Graphics. Excels in parallel I/O operations and data consistency. Z ~ .. glossary:: zaqar Codename for the :term:`Message service `. ZeroMQ Message queue software supported by OpenStack. 
An alternative to RabbitMQ. Also spelled 0MQ. Zuul Tool used in OpenStack development to ensure correctly ordered testing of changes in parallel. glance-16.0.0/LICENSE0000666000175100017510000002363713245511421014073 0ustar zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). 
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 
glance-16.0.0/HACKING.rst0000666000175100017510000000172513245511421014656 0ustar zuulzuul00000000000000glance Style Commandments ========================= - Step 1: Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/ - Step 2: Read on glance Specific Commandments ---------------------------- - [G316] Replace assertTrue(isinstance(A, B)) with the more specific assertIsInstance(A, B) - [G317] Replace assertEqual(type(A), B) with the more specific assertIsInstance(A, B) - [G318] Replace assertEqual(A, None) or assertEqual(None, A) with the more specific assertIsNone(A) - [G319] Validate that debug level logs are not translated - [G320] For Python 3 compatibility, use six.text_type() instead of unicode() - [G327] Prevent use of deprecated contextlib.nested - [G328] Must use a dict comprehension instead of a dict constructor with a sequence of key-value pairs - [G329] Python 3: do not use xrange. - [G330] Python 3: do not use dict.iteritems. - [G331] Python 3: do not use dict.iterkeys. - [G332] Python 3: do not use dict.itervalues. glance-16.0.0/tox.ini0000666000175100017510000001077513245511426014375 0ustar zuulzuul00000000000000[tox] minversion = 2.3.1 envlist = py35,py27,pep8 skipsdist = True [testenv] # Set default python version basepython = python2.7 setenv = VIRTUAL_ENV={envdir} PYTHONWARNINGS=default::DeprecationWarning # NOTE(hemanthm): The environment variable "OS_TEST_DBAPI_ADMIN_CONNECTION" # must be set to force oslo.db tests to use a file-based sqlite database # instead of the default in-memory database, which doesn't work well with # alembic migrations. The file-based database pointed to by the environment # variable itself is not used for testing. Neither is it ever created. Oslo.db # creates another file-based database for testing purposes and deletes it as a # part of its test clean-up. Think of setting this environment variable as a # clue for oslo.db to use a file-based database. 
OS_TEST_DBAPI_ADMIN_CONNECTION=sqlite:////tmp/placeholder-never-created-nor-used.db usedevelop = True install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens} {opts} {packages} deps = -r{toxinidir}/test-requirements.txt commands = find . -type f -name "*.pyc" -delete whitelist_externals = bash find rm passenv = *_proxy *_PROXY [testenv:py27] commands = ostestr --slowest {posargs} [testenv:py35] basepython = python3.5 commands = ostestr --slowest {posargs} [testenv:functional] setenv = TEST_PATH = ./glance/tests/functional commands = ostestr {posargs} [testenv:functional-py35] basepython = python3.5 setenv = TEST_PATH = ./glance/tests/functional commands = ostestr '{posargs}' [testenv:pep8] commands = flake8 {posargs} # Run security linter bandit -c bandit.yaml -r glance -n5 -p gate # Check that .po and .pot files are valid: bash -c "find glance -type f -regex '.*\.pot?' -print0|xargs -0 -n 1 msgfmt --check-format -o /dev/null" doc8 {posargs} [testenv:genconfig] commands = oslo-config-generator --config-file etc/oslo-config-generator/glance-api.conf oslo-config-generator --config-file etc/oslo-config-generator/glance-registry.conf oslo-config-generator --config-file etc/oslo-config-generator/glance-scrubber.conf oslo-config-generator --config-file etc/oslo-config-generator/glance-cache.conf oslo-config-generator --config-file etc/oslo-config-generator/glance-manage.conf oslo-config-generator --config-file etc/oslo-config-generator/glance-image-import.conf [testenv:api-ref] # This environment is called from CI scripts to test and publish # the API Ref to developer.openstack.org. commands = rm -rf api-ref/build sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html [testenv:bindep] # Do not install any requirements. 
We want this to be fast and work even if # system dependencies are missing, since it's used to tell you what system # dependencies are missing! This also means that bindep must be installed # separately, outside of the requirements files, and develop mode disabled # explicitly to avoid unnecessarily installing the checked-out repo too (this # further relies on "tox.skipsdist = True" above). deps = bindep commands = bindep test usedevelop = False [doc8] ignore-path = .venv,.git,.tox,*glance/locale*,*lib/python*,glance.egg*,api-ref/build,doc/build,doc/source/contributor/api [flake8] # TODO(dmllr): Analyze or fix the warnings blacklisted below # E711 comparison to None should be 'if cond is not None:' # E712 comparison to True should be 'if cond is True:' or 'if cond:' # H404 multi line docstring should start with a summary # H405 multi line docstring summary not separated with an empty line ignore = E711,E712,H404,H405 exclude = .venv,.git,.tox,dist,doc,etc,*glance/locale*,*lib/python*,*egg,build [hacking] local-check-factory = glance.hacking.checks.factory import_exceptions = glance.i18n [testenv:docs] commands = rm -fr doc/build python setup.py build_sphinx [testenv:venv] commands = {posargs} [testenv:bandit] commands = bandit -c bandit.yaml -r glance -n5 -p gate [testenv:releasenotes] commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html [testenv:cover] setenv = PYTHON=coverage run --source glance --parallel-mode commands = stestr run {posargs} coverage combine coverage html -d cover coverage xml -o cover/coverage.xml [testenv:debug] commands = oslo_debug_helper {posargs} [testenv:debug-py27] commands = oslo_debug_helper {posargs} [testenv:debug-py35] basepython = python3.5 commands = oslo_debug_helper {posargs} glance-16.0.0/.mailmap0000666000175100017510000000226713245511421014503 0ustar zuulzuul00000000000000# Format is: # # Zhongyue Luo Zhenguo Niu David Koo 
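The ``.mailmap`` fragment above keeps only the format comment; the example entries lost their angle-bracketed email addresses during extraction. For illustration only (the names and addresses below are made up, not taken from this repository), a complete entry maps each identity an author has committed under to one canonical identity:

```
# Format is:
# Canonical Name <canonical@example.org> Name As Committed <alias@example.org>
Jane Doe <jane@example.org> jdoe <jane@old-laptop.example.net>
```

With such entries in place, ``git shortlog`` and ``git log --use-mailmap`` report all of an author's commits under the single canonical identity.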
glance-16.0.0/tools/0000775000175100017510000000000013245511661014217 5ustar zuulzuul00000000000000glance-16.0.0/tools/test-setup.sh0000777000175100017510000000370613245511421016675 0ustar zuulzuul00000000000000#!/bin/bash -xe # This script will be run by OpenStack CI before unit tests are run; # it sets up the test system as needed. # Developers should set up their test systems in a similar way. # This setup needs to be run as a user that can run sudo. # The root password for the MySQL database; pass it in via # MYSQL_ROOT_PW. DB_ROOT_PW=${MYSQL_ROOT_PW:-insecure_slave} # This user and its password are used by the tests; if you change them, # your tests might fail. DB_USER=openstack_citest DB_PW=openstack_citest sudo -H mysqladmin -u root password $DB_ROOT_PW # It's best practice to remove anonymous users from the database. If # an anonymous user exists, it matches first for connections, and # other connections from that host will not work. sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e " DELETE FROM mysql.user WHERE User=''; FLUSH PRIVILEGES; GRANT ALL PRIVILEGES ON *.* TO '$DB_USER'@'%' identified by '$DB_PW' WITH GRANT OPTION;" # Now create our database. mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e " SET default_storage_engine=MYISAM; DROP DATABASE IF EXISTS openstack_citest; CREATE DATABASE openstack_citest CHARACTER SET utf8;" # Same for PostgreSQL # The root password for the PostgreSQL database; pass it in via # POSTGRES_ROOT_PW. 
DB_ROOT_PW=${POSTGRES_ROOT_PW:-insecure_slave} # Setup user root_roles=$(sudo -H -u postgres psql -t -c " SELECT 'HERE' from pg_roles where rolname='$DB_USER'") if [[ ${root_roles} == *HERE ]];then sudo -H -u postgres psql -c "ALTER ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" else sudo -H -u postgres psql -c "CREATE ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" fi # Store password for tests cat << EOF > $HOME/.pgpass *:*:*:$DB_USER:$DB_PW EOF chmod 0600 $HOME/.pgpass # Now create our database psql -h 127.0.0.1 -U $DB_USER -d template1 -c "DROP DATABASE IF EXISTS openstack_citest" createdb -h 127.0.0.1 -U $DB_USER -l C -T template0 -E utf8 openstack_citest glance-16.0.0/AUTHORS0000664000175100017510000005213613245511657014143 0ustar zuulzuul00000000000000Aaron Rosen Abhijeet Malawade Abhishek Chanda Abhishek Kekane Abhishek Kekane Adam Gandelman Adam Gandelman Ajaya Agrawal Ala Rezmerita Alberto Planas Alessandro Pilotti Alessio Ababilov Alessio Ababilov Alex Gaynor Alex Meade Alexander Bashmakov Alexander Gordeev Alexander Maretskiy Alexander Tivelkov Alexandra Settle Alexei Kornienko Alexey Galkin Alexey Yelistratov Alfredo Moralejo Amala Basha AmalaBasha AmalaBasha Anastasia Vlaskina Andreas Jaeger Andreas Jaeger Andrew Hutchings Andrew Melton Andrew Tranquada Andrey Brindeyev Andy McCrae Anita Kuno Archit Sharma Arnaud Legendre Artur Svechnikov Ashish Jain Ashwini Shukla Aswad Rangnekar Atsushi SAKAI Attila Fazekas Auktavian Garrett Avinash Prasad AvnishPal Balazs Gibizer Bartosz Fic Ben Nemec Ben Roble Bernhard M. Wiedemann Bertrand Lallau Bertrand Lallau Bhagyashri Shewale Bhuvan Arumugam Bin Zhou Bo Wang Boris Pavlovic Brandon Palm Brant Knudson Brian Cline Brian D. Elliott Brian Elliott Brian Elliott Brian Lamar Brian Rosmaita Brian Rosmaita Brian Waldon Brianna Poulos Béla Vancsics Cao ShuFeng Cao Xuan Hoang Castulo J. 
Martinez Cerberus Chang Bo Guo ChangBo Guo(gcb) Chen Fan Chmouel Boudjnah Chris Allnutt Chris Behrens Chris Buccella Chris Buccella Chris Fattarsi Chris St. Pierre Christian Berendt Christian Berendt Christopher MacGown Chuck Short Cindy Pallares Clark Boylan Clint Byrum Cory Benfield Cory Wright Cyril Roelandt DamonLi Dan Prince Dane Fichter Daniel Krook Daniel Pawlik Danny Al-Gaaf Darja Shakhray Darren White Davanum Srinivas Davanum Srinivas Dave Chen Dave McNally Dave Walker (Daviey) David Koo David Peraza David Rabel David Ripton David Sariel Dean Troyer DeepaJon Deepti Ramakrishna DennyZhang Derek Higgins Desmond Sponsor Dharini Chandrasekar Dina Belova Dinesh Bhor Dirk Mueller Dmitry Kulishenko Dolph Mathews Donal Lafferty Doron Chen Doug Hellmann Doug Hellmann Drew Varner Drew Varner Duncan McGreggor Eddie Sheffield Edgar Magana Edward Hope-Morley Eldar Nugaev Elena Ezhova Eoghan Glynn Eric Brown Eric Windisch Erno Kuvaja Erno Kuvaja Eugeniya Kudryashova Ewan Mellor Fabio M. Di Nitto Fei Long Wang Fei Long Wang Fengqian Gao Flaper Fesp Flavio Percoco Florent Flament Gabriel Hurley Gary Kotton Gauvain Pocentek Geetika Batra GeetikaBatra GeetikaBatra George Peristerakis Georgy Okrokvertskhov Gerardo Porras Gorka Eguileor Graham Hayes Grant Murphy Gregory Haynes Guoqiang Ding Gábor Antal Ha Van Tu Haikel Guemar Haiwei Xu Harsh Shah Harshada Mangesh Kakad He Yongli Hemanth Makkapati Hemanth Makkapati Hengqing Hu Henrique Truta Hirofumi Ichihara Hui Xiang Ian Cordasco Ian Cordasco Iccha Sethi Igor A. Lukyanenkov Ihar Hrachyshka Ildiko Vancsa Ilya Pekelny Inessa Vasilevskaya IonuÈ› ArțăriÈ™i Isaku Yamahata Itisha Dewan J. Daniel Schmidt Jakub Ruzicka James Carey James E. Blair James Li James Morgan James Polley Jamie Lennox Jamie Lennox Jared Culp Jasakov Artem Jason Koelker Jason Kölker Javeme Javier Pena Jay Pipes Jeremy Stanley Jesse Andrews Jesse J. 
Cook Jesse Pretorius Jia Dong Jin Li Jin Long Wang Jinwoo 'Joseph' Suh Joe Gordon Joe Gordon Johannes Erdfelt John Bresnahan John L. Villalovos John Lenihan John Warren Jon Bernard Jorge Niedbalski Joseph Suh Josh Durgin Josh Durgin Josh Kearney Joshua Harlow Joshua Harlow JuPing Juan Antonio Osorio Robles Juan Manuel Olle Juerg Haefliger Julia Varlamova Julien Danjou Jun Hong Li Justin Santa Barbara Justin Shepherd KATO Tomoyuki KIYOHIRO ADACHI Kamil Rykowski Karol Stepniewski Kasey Alusi Ken Pepple Ken Thomas Kent Wang Kentaro Takeda Keshava Bharadwaj Kevin L. Mitchell Kevin_Zheng Kirill Zaitsev Kui Shi Kun Huang Lakshmi N Sampath Lars Gellrich Leam Leandro I. Costantino Li Wei Lianhao Lu Lin Yang Liu Yuan Long Quan Sha Lorin Hochstein Louis Taylor Louis Taylor Luis A. Garcia Luong Anh Tuan Lyubov Kolesnikova M V P Nitesh Major Hayden Marc Abramowitz Mark J. Washenberger Mark J. Washenberger Mark McLoughlin Mark Washenberger Martin Kletzander Martin Mágr Martin Tsvetanov Maru Newby Masashi Ozawa Matt Dietz Matt Fischer Matt Riedemann Matthew Booth Matthew Edmonds Matthew Treinish Matthew Treinish Matthias Schmitz Maurice Leeflang Mauro S. M. 
Rodrigues Maxim Nestratov Mehdi Abaakouk Michael J Fork Michael Krotscheck Michael Still Michal Dulko Mike Abrams Mike Fedosin Mike Fedosin Mike Lundy Mike Turvey Mingda Sun Mitsuhiro SHIGEMATSU Mitsuhiro Tanino Monty Taylor Munoz, Obed N NAO NISHIJIMA Nassim Babaci Ngo Quoc Cuong Nguyen Hung Phuong Nguyen Van Trung Niall Bunting Niall Bunting NiallBunting Nicholas Kuechler Nicolas Simonds Nikhil Komawar Nikhil Komawar Nikhil Komawar Nikolaj Starodubtsev Noboru Arai Noboru arai Nolwenn Cauchois Oleksii Chuprykov Olena Logvinova OndÅ™ej Nový OpenStack Release Bot Pamela-Rose Virtucio Pankaj Mishra Patrick Mezard Paul Bourke Paul Bourke Paul McMillan Pavan Kumar Sunkara Pawel Koniszewski Pawel Skowron Peng Yong Pengju Jiao Pete Zaitcev Pranali Deore Pranali Deore PranaliDeore Preetika Pádraig Brady Pádraig Brady Qiaowei Ren Radu Rainya Mosher Rajesh Tailor Ravi Shekhar Jethani Ray Chen Reynolds Chin Rick Clark Rick Harris Robert Collins Rohan Kanade Roman Bogorodskiy Roman Bogorodskiy Roman Vasilets Ronald Bradford Rongze Zhu RongzeZhu Rui Yuan Dou Rui Zang Russell Bryant Russell Sim Ryan Selden Sabari Kumar Murugesan Sachi King Sachin Patil Sam Morrison Sam Stavinoha Samuel Merritt Sascha Peilicke Sascha Peilicke Sathish Nagappan Scott McClymont Sean Dague Sean Dague Sean McGinnis Sean McGinnis Sergey Nikitin Sergey Skripnick Sergey Vilgelm Sergey Vilgelm Sergio Cazzolato Shane Wang Shinya Kawabata Shuquan Huang Soren Hansen Stan Lagun Stephen Finucane Stephen Gordon Steve Kowalik Steve Lewis Stuart McLaren Sulochan Acharya Supalerk Jivorasetkul Svetlana Shturm Takeaki Matsumoto Taku Fukushima Tatyana Leontovich Therese McHale Thierry Carrez Thomas Bechtold Thomas Bechtold Thomas Leaman Tim Burke Tim Daly, Jr Timothy Symanczyk Toan Nguyen Tom Cocozzello Tom Hancock Tom Leaman Tomas Hancock Tomislav Sukser Tomoki Sekiyama Travis Tripp Travis Tripp Unmesh Gurjar Unmesh Gurjar Vaibhav Bhatkar Venkatesh Sampath Venkatesh Sampath Victor Morales Victor Sergeyev Victor 
Stinner Vincent Untz Vishvananda Ishaya Vitaliy Kolosov Vladislav Kuzmin Vyacheslav Vakhlyuev Waldemar Znoinski Wayne A. Walls Wayne Okuma Wen Cheng Ma WenjunWang1992 <10191230@zte.com.cn> Wu Wenxiang Xi Yang XiaBing Yao YAMAMOTO Takashi Yaguang Tang Yaguo Zhou Yanis Guenane Yufang Zhang Yuiko Takada Yuriy Taraday Yusuke Ide ZHANG Hua Zhenguo Niu Zhenguo Niu Zhi Yan Liu ZhiQiang Fan ZhiQiang Fan Zhiteng Huang Zhongyue Luo Zuul abhishek-kekane abhishekkekane amalaba ankitagrawal ankur annegentle april bbwang5827 bhagyashris bpankaj bria4010 chenaidong1 chenxing daisy-ycguo dangming deepak_mourya dineshbhor eddie-sheffield eos2102 ericxiett gengjh haobing1 henriquetruta houming-wang huangtianhua hussainchachuliya hzrandd <82433422@qq.com> iccha iccha-sethi iccha.sethi isethi itisha jakedahn jare6412 jaypipes@gmail.com <> jinxingfang jola-mirecka junboli kairat_kushaev ke-kuroki kylin7-sg lawrancejing leo.young leseb liangjingtao lijunbo ling-yun lingyongxu liuqing liuxiaoyang liyingjun liyingjun lizheming llg8212 ls1175 makocchi makocchi-git marianitadn mathrock nanhai liao neha.pandey oorgeron pawnesh.kumar pran1990 raiesmh08 ravikumar-venkatesan ricolin rsritesh rtmdk sai krishna sripada sarvesh-ranjan shaofeng_cheng shashi.kant shilpa.devharakar shreeduth-awasthi shrutiranade38 shu,xinxin space sridevik sridevik sudhir_agarwal tanlin ting.wang tmcpeak tobe venkatamahesh venkatamahesh wanghong wangxiyuan weiweigu xurong00037997 yangxurong yongiman yuyafei zhengwei6082 zhengyao1 zhiguo.li zhu.rong zhufl zwei “Akhila glance-16.0.0/README.rst0000666000175100017510000000472013245511421014545 0ustar zuulzuul00000000000000======================== Team and repository tags ======================== .. 
image:: http://governance.openstack.org/badges/glance.svg :target: http://governance.openstack.org/reference/tags/index.html :alt: The following tags have been asserted for the Glance project: "project:official", "tc:approved-release", "stable:follows-policy", "tc:starter-kit:compute", "vulnerability:managed", "team:diverse-affiliation", "assert:supports-upgrade", "assert:follows-standard-deprecation". Follow the link for an explanation of these tags. .. NOTE(rosmaita): the alt text above will have to be updated when additional tags are asserted for Glance. (The SVG in the governance repo is updated automatically.) .. Change things from this point on ====== Glance ====== Glance is a project that provides services and associated libraries to store, browse, share, distribute and manage bootable disk images, other data closely associated with initializing compute resources, and metadata definitions. Use the following resources to learn more: API --- To learn how to use Glance's API, consult the documentation available online at: * `Image Service APIs `_ Developers ---------- For information on how to contribute to Glance, please see the contents of the CONTRIBUTING.rst in this repository. Any new code must follow the development guidelines detailed in the HACKING.rst file, and pass all unit tests. Further developer focused documentation is available at: * `Official Glance documentation `_ * `Official Client documentation `_ Operators --------- To learn how to deploy and configure OpenStack Glance, consult the documentation available online at: * `Openstack Glance `_ In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. You can raise bugs here: * `Bug Tracker `_ Other Information ----------------- During each design summit, we agree on what the whole community wants to focus on for the upcoming release. 
You can see image service plans: * `Image Service Plans `_ For more information about the Glance project please see: * `Glance Project `_ glance-16.0.0/.coveragerc0000666000175100017510000000013113245511421015167 0ustar zuulzuul00000000000000[run] branch = True source = glance omit = glance/tests/* [report] ignore_errors = True glance-16.0.0/api-ref/0000775000175100017510000000000013245511661014402 5ustar zuulzuul00000000000000glance-16.0.0/api-ref/source/0000775000175100017510000000000013245511661015702 5ustar zuulzuul00000000000000glance-16.0.0/api-ref/source/versions/0000775000175100017510000000000013245511661017552 5ustar zuulzuul00000000000000glance-16.0.0/api-ref/source/versions/versions.inc0000666000175100017510000000143613245511421022115 0ustar zuulzuul00000000000000.. -*- rst -*- .. _versions-call: API versions ************ List API versions ~~~~~~~~~~~~~~~~~ .. rest_method:: GET /versions Lists information about all Image service API versions supported by this deployment, including the URIs. Normal response codes: 200 Request ------- There are no request parameters. Response Example ---------------- .. literalinclude:: samples/image-versions-response.json :language: json List API versions ~~~~~~~~~~~~~~~~~ .. rest_method:: GET / Lists information about all Image service API versions supported by this deployment, including the URIs. Normal response codes: 300 Request ------- There are no request parameters. Response Example ---------------- .. literalinclude:: samples/image-versions-response.json :language: json glance-16.0.0/api-ref/source/versions/index.rst0000666000175100017510000000265413245511421021416 0ustar zuulzuul00000000000000.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. :tocdepth: 2 ====================== Image Service Versions ====================== .. rest_expand_all:: .. include:: versions.inc Version History *************** **Queens changes** - version 2.6 is CURRENT - version 2.5 is SUPPORTED **Pike changes** - version 2.6 is EXPERIMENTAL **Ocata changes** - version 2.5 is CURRENT - version 2.4 is SUPPORTED **Newton changes** - version 2.4 is CURRENT - version 2.3 is SUPPORTED - version 1.1 is DEPRECATED - version 1.0 is DEPRECATED **Kilo changes** - version 2.3 is CURRENT - version 1.1 is SUPPORTED **Havana changes** - version 2.2 is CURRENT - version 2.1 is SUPPORTED **Grizzly changes** - version 2.1 is CURRENT - version 2.0 is SUPPORTED **Folsom changes** - version 2.0 is CURRENT **Diablo changes** - version 1.1 is CURRENT - version 1.0 is SUPPORTED **Bexar changes** - version 1.0 is CURRENT glance-16.0.0/api-ref/source/versions/samples/0000775000175100017510000000000013245511661021216 5ustar zuulzuul00000000000000glance-16.0.0/api-ref/source/versions/samples/image-versions-response.json0000666000175100017510000000447713245511421026705 0ustar zuulzuul00000000000000{ "versions": [ { "id": "v2.6", "links": [ { "href": "http://glance.openstack.example.org/v2/", "rel": "self" } ], "status": "CURRENT" }, { "id": "v2.5", "links": [ { "href": "http://glance.openstack.example.org/v2/", "rel": "self" } ], "status": "SUPPORTED" }, { "id": "v2.4", "links": [ { "href": "http://glance.openstack.example.org/v2/", "rel": "self" } ], "status": "SUPPORTED" }, { "id": "v2.3", "links": [ { "href": "http://glance.openstack.example.org/v2/", "rel": "self" } ], 
"status": "SUPPORTED" }, { "id": "v2.2", "links": [ { "href": "http://glance.openstack.example.org/v2/", "rel": "self" } ], "status": "SUPPORTED" }, { "id": "v2.1", "links": [ { "href": "http://glance.openstack.example.org/v2/", "rel": "self" } ], "status": "SUPPORTED" }, { "id": "v2.0", "links": [ { "href": "http://glance.openstack.example.org/v2/", "rel": "self" } ], "status": "SUPPORTED" }, { "id": "v1.1", "links": [ { "href": "http://glance.openstack.example.org/v1/", "rel": "self" } ], "status": "DEPRECATED" }, { "id": "v1.0", "links": [ { "href": "http://glance.openstack.example.org/v1/", "rel": "self" } ], "status": "DEPRECATED" } ] } glance-16.0.0/api-ref/source/v2/0000775000175100017510000000000013245511661016231 5ustar zuulzuul00000000000000glance-16.0.0/api-ref/source/v2/images-data.inc0000666000175100017510000000673713245511421021111 0ustar zuulzuul00000000000000.. -*- rst -*- .. _image-data: Image data ********** Uploads and downloads raw image data. *These operations may be restricted to administrators. Consult your cloud operator's documentation for details.* Upload binary image data ~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: PUT /v2/images/{image_id}/file Uploads binary image data. *(Since Image API v2.0)* Set the ``Content-Type`` request header to ``application/octet-stream``. Example call: :: curl -i -X PUT -H "X-Auth-Token: $token" \ -H "Content-Type: application/octet-stream" \ -d @/home/glance/ubuntu-12.10.qcow2 \ $image_url/v2/images/{image_id}/file **Preconditions** Before you can store binary image data, you must meet the following preconditions: - The image must exist. - You must set the disk and container formats in the image. - The image status must be ``queued``. - Your image storage quota must be sufficient. - The size of the data that you want to store must not exceed the size that the OpenStack Image service allows. **Synchronous Postconditions** - With correct permissions, you can see the image status as ``active`` through API calls. 
- With correct access, you can see the stored data in the storage system that the OpenStack Image Service manages. **Troubleshooting** - If you cannot store the data, either your request lacks required information or you exceeded your allotted quota. Ensure that you meet the preconditions and run the request again. If the request fails again, review your API request. - The storage back ends for storing the data must have enough free storage space to accommodate the size of the data. Normal response codes: 204 Error response codes: 400, 401, 403, 404, 409, 410, 413, 415, 503 Request ------- .. rest_parameters:: images-parameters.yaml - Content-type: Content-Type-data - image_id: image_id-in-path Download binary image data ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/images/{image_id}/file Downloads binary image data. *(Since Image API v2.0)* Example call: ``curl -i -X GET -H "X-Auth-Token: $token" $image_url/v2/images/{image_id}/file`` The response body contains the raw binary data that represents the actual virtual disk. The ``Content-Type`` header contains the ``application/octet-stream`` value. The ``Content-MD5`` header contains an MD5 checksum of the image data. Use this checksum to verify the integrity of the image data. **Preconditions** - The image must exist. **Synchronous Postconditions** - You can download the binary image data to your machine if the image has image data. - If image data exists, the call returns the HTTP ``200`` response code for a full image download request. - If image data exists, the call returns the HTTP ``206`` response code for a partial download request. - If no image data exists, the call returns the HTTP ``204`` (No Content) response code. - If no image record exists, the call returns the HTTP ``404`` response code for an attempted full image download request. - For an unsatisfiable partial download request, the call returns the HTTP ``416`` response code. 
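As noted above, the ``Content-MD5`` header can be used to verify the integrity of downloaded image data. The check can be sketched in a few lines; this is a minimal illustration using stand-in values for the downloaded bytes and the header (no actual API call is made, and the helper name is not part of the API):

```python
import hashlib

# Illustrative stand-ins for the raw bytes of a downloaded image and the
# Content-MD5 header value that accompanies them in the response.
image_data = b"\x00" * 16
content_md5_header = hashlib.md5(image_data).hexdigest()

def download_is_intact(data, md5_header):
    """Compare an MD5 digest of the downloaded bytes to the header value."""
    return hashlib.md5(data).hexdigest() == md5_header

print(download_is_intact(image_data, content_md5_header))            # True
print(download_is_intact(image_data + b"\x01", content_md5_header))  # False
```

For large images you would typically stream the response body and feed each chunk to the hash object incrementally rather than holding all of the bytes in memory.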
Normal response codes: 200, 204, 206 Error response codes: 400, 403, 404, 416 Request ------- .. rest_parameters:: images-parameters.yaml - image_id: image_id-in-path - Range: Range Response -------- .. rest_parameters:: images-parameters.yaml - Content-Type: Content-Type-data-response - Content-Md5: Content-Md5 - Content-Length: Content-Length - Content-Range: Content-Range glance-16.0.0/api-ref/source/v2/metadefs-namespaces.inc0000666000175100017510000002110413245511421022623 0ustar zuulzuul00000000000000.. -*- rst -*- Metadata definition namespaces ****************************** Creates, lists, shows details for, updates, and deletes metadata definition namespaces. Defines namespaces that can contain property definitions, object definitions, and resource type associations. *Since API v2.2* Create namespace ~~~~~~~~~~~~~~~~ .. rest_method:: POST /v2/metadefs/namespaces Creates a namespace. A namespace must be unique across all users. Attempting to create an already existing namespace will result in a 409 (Conflict) response. The ``Location`` response header contains the newly-created URI for the namespace. Normal response codes: 201 Error response codes: 400, 401, 403, 409 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace: namespace - display_name: display_name - description: description - visibility: visibility-in-request - protected: protected-in-request The request body may also contain properties, objects, and resource type associations, or these can be added later by the :ref:`v2-update-namespace` call. Request Example --------------- .. literalinclude:: samples/metadef-namespace-create-request-simple.json :language: json Response Parameters ------------------- .. 
rest_parameters:: metadefs-parameters.yaml - Location: Location - created_at: created_at - description: description - display_name: display_name - namespace: namespace - owner: owner - protected: protected - schema: schema-namespace - self: self - updated_at: updated_at - visibility: visibility If the request body contained properties, objects, or resource type associations, these will be included in the response. Response Example ---------------- .. code-block:: console HTTP/1.1 201 Created Content-Length: 427 Content-Type: application/json; charset=UTF-8 Location: http://glance.openstack.example.org/v2/metadefs/namespaces/FredCo::SomeCategory::Example X-Openstack-Request-Id: req-6d4a8ad2-c018-4bfc-8fe5-1a36c23c43eb Date: Thu, 19 May 2016 16:05:48 GMT .. literalinclude:: samples/metadef-namespace-create-response-simple.json :language: json List namespaces ~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/metadefs/namespaces Lists available namespaces. Returns a list of namespaces to which the authenticated user has access. If the list is too large to fit in a single response, either because of operator configuration or because you've included a ``limit`` query parameter in the request to restrict the response size, the response will contain a link that you can use to get the next page of namespaces. Check for the presence of a ``next`` link and use it as the URI in a subsequent HTTP GET request. Follow this pattern until a ``next`` link is no longer provided. The ``next`` link preserves any query parameters that you send in your initial request. You can use the ``first`` link to return to the first page in the collection. If you prefer to paginate through namespaces manually, use the ``limit`` and ``marker`` parameters. The list operation accepts the ``resource_types`` and ``visibility`` query parameters, which you can use to filter the response. To sort the results of this operation, use the ``sort_key`` and ``sort_dir`` parameters. 
The API uses the natural sorting order in the namespace attribute that you provide as the ``sort_key`` parameter. Normal response codes: 200 Error response codes: 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - limit: limit - marker: marker - visibility: visibility-in-query - resource_types: resource_types-in-query - sort_key: sort_key - sort_dir: sort_dir Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - first: first - namespaces: namespaces - next: next - schema: schema-namespaces Response Example ---------------- .. literalinclude:: samples/metadef-namespaces-list-response.json :language: json Get namespace details ~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/metadefs/namespaces/{namespace_name} Gets details for a namespace. The response body shows a single namespace entity with all details including properties, objects, and resource type associations. If the namespace contains a resource type association that specifies a prefix, you may optionally include the name of the resource type as a query parameter. In that case, the prefix will be applied to all property names in the response. (See below for an example.) Normal response codes: 200 .. returns 400 if a request body is sent Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - resource_type: resource_type-in-query-namespace-detail The request does not take a body. Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - created_at: created_at - description: description - display_name: display_name - namespace: namespace - objects: objects - owner: owner - properties: properties-dict - protected: protected - resource_type_associations: resource_type_associations - schema: schema-namespace - self: self - visibility: visibility Response Example ---------------- .. 
literalinclude:: samples/metadef-namespace-details-response.json :language: json Response Example (with resource_type query parameter) ----------------------------------------------------- This is the result of the following request: ``GET /v2/metadefs/namespaces/OS::Compute::Libvirt?resource_type=OS::Glance::Image`` Note that the name of each property has had the appropriate prefix applied to it. .. literalinclude:: samples/metadef-namespace-details-with-rt-response.json :language: json .. _v2-update-namespace: Update namespace ~~~~~~~~~~~~~~~~ .. rest_method:: PUT /v2/metadefs/namespaces/{namespace_name} Updates a namespace. .. note:: Be careful using this call, especially when all you want to do is change the ``protected`` value so that you can delete some objects, properties, or resource type associations in the namespace. While only the ``namespace`` is required in the request body, if this call is made with *only* the ``namespace`` in request body, the other attributes listed below will be set to their default values -- which in the case of ``description`` and ``display_name``, is null. So if you want to change *only* the ``protected`` value with this call, be sure to also include the current values of the following parameters in the request body: - ``description`` - ``display_name`` - ``namespace`` - ``visibility`` The objects, properties, and resource type associations in a namespace are unaffected by this call. Normal response codes: 200 Error response codes: 400, 401, 403, 404, 409 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - description: description - display_name: display_name - namespace: namespace - protected: protected-in-request - visibility: visibility-in-request Request Example --------------- .. literalinclude:: samples/metadef-namespace-update-request.json :language: json Response Parameters ------------------- .. 
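Because tasks run asynchronously, a client that creates a task typically polls it until it reaches a terminal status. A minimal sketch of that decision logic, based on the status values documented in this section (the helper name is illustrative, not part of the API):

```python
# Terminal vs. in-flight task statuses, per the status table in this section.
TERMINAL = {"success", "failure"}
IN_FLIGHT = {"pending", "processing"}

def is_done(status):
    """Return True when a task has reached a terminal status."""
    if status not in TERMINAL | IN_FLIGHT:
        raise ValueError("unknown task status: %s" % status)
    return status in TERMINAL

print(is_done("processing"))  # False
print(is_done("success"))     # True
```

A poller would sleep and re-fetch the task with ``GET /v2/tasks/{task_id}`` while ``is_done`` returns ``False``.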
rest_parameters:: metadefs-parameters.yaml - created_at: created_at - description: description - display_name: display_name - namespace: namespace - owner: owner - protected: protected - schema: schema-namespace - self: self - updated_at: updated_at - visibility: visibility Response Example ---------------- .. literalinclude:: samples/metadef-namespace-update-response.json :language: json Delete namespace ~~~~~~~~~~~~~~~~ .. rest_method:: DELETE /v2/metadefs/namespaces/{namespace_name} Deletes a namespace and its properties, objects, and any resource type associations. .. note:: If the namespace is protected, that is, if the ``protected`` attribute of the namespace is ``true``, then you must first set the ``protected`` attribute to ``false`` on the namespace before you will be permitted to delete it. * If you try to delete a protected namespace, the call returns the ``403`` response code. * To change the ``protected`` attribute of a namespace, use the :ref:`Update namespace ` call. A successful operation returns the HTTP ``204`` (No Content) response code. Normal response codes: 204 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name The request does not take a body. The request does not return a body. glance-16.0.0/api-ref/source/v2/tasks.inc0000666000175100017510000000672213245511421020054 0ustar zuulzuul00000000000000.. -*- rst -*- Tasks ***** Creates, lists, and shows details for tasks. *(Since API v2.2)* General Information ~~~~~~~~~~~~~~~~~~~ **API Status** This API was made admin-only by default in the OpenStack Mitaka release. Thus the following calls may not be available to end users in your cloud. Please consult your cloud provider's documentation for more information. **Conceptual Overview** Please see the `Tasks `_ section of the Glance Developers Documentation for a conceptual overview of tasks. 
**Task Status** The possible status values for tasks are presented in the following table. .. list-table:: :header-rows: 1 * - Status - Description * - pending - The task is waiting for execution. * - processing - Execution of the task is underway. * - success - The task completed successfully. The ``result`` element should be populated. * - failure - The task failed to complete. The ``message`` element should be a non-empty string. Create task ~~~~~~~~~~~ .. rest_method:: POST /v2/tasks Creates a task. Normal response codes: 201 Error response codes: 401, 413, 415 Request ------- .. rest_parameters:: tasks-parameters.yaml - type: type - input: input Request Example --------------- .. literalinclude:: samples/task-create-request.json :language: json Response Parameters ------------------- .. rest_parameters:: tasks-parameters.yaml - created_at: created_at - id: id - input: input - message: message - owner: owner - result: result - schema: schema-task - self: self - status: status - type: type - updated_at: updated_at Response Example ---------------- .. literalinclude:: samples/task-create-response.json :language: json List tasks ~~~~~~~~~~ .. rest_method:: GET /v2/tasks Lists tasks. Normal response codes: 200 Error response codes: 403, 404, 413 Request ------- .. rest_parameters:: tasks-parameters.yaml - limit: limit - marker: marker - sort_dir: sort_dir - sort_key: sort_key - status: status-in-query - type: type-in-query Response Parameters ------------------- .. rest_parameters:: tasks-parameters.yaml - first: first - next: next - schema: schema-tasks - tasks: tasks Response Example ---------------- .. literalinclude:: samples/tasks-list-response.json :language: json Show task details ~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/tasks/{task_id} Shows details for a task. Normal response codes: 200 Error response codes: 404 Request ------- .. rest_parameters:: tasks-parameters.yaml - task_id: task_id Response Parameters ------------------- .. 
rest_parameters:: tasks-parameters.yaml - created_at: created_at - expires_at: expires_at - id: id - input: input - message: message - owner: owner - result: result - schema: schema-task - self: self - status: status - type: type - updated_at: updated_at Response Example (task status: processing) ------------------------------------------ .. literalinclude:: samples/task-show-processing-response.json :language: json Response Example (task status: success) ------------------------------------------ .. literalinclude:: samples/task-show-success-response.json :language: json Response Example (task status: failure) --------------------------------------- .. literalinclude:: samples/task-show-failure-response.json :language: json glance-16.0.0/api-ref/source/v2/images-parameters.yaml0000666000175100017510000004065213245511421022526 0ustar zuulzuul00000000000000# variables in header Content-Length: description: | The length of the body in octets (8-bit bytes) in: header required: true type: string Content-Md5: description: | The MD5 checksum of the body. in: header required: true type: string Content-Range: description: | The content range of image data. For details, see `Hypertext Transfer Protocol (HTTP/1.1): Range Requests `_. in: header required: false type: string Content-Type-data: description: | The media type descriptor for the request body. Use ``application/octet-stream`` in: header required: true type: string Content-Type-data-response: description: | The media type descriptor of the response body, namely ``application/octet-stream`` in: header required: true type: string Content-Type-json: description: | The media type descriptor for the request body. Use ``application/json``. in: header required: true type: string Content-Type-patch: description: | The media type descriptor for the request body. Use ``application/openstack-images-v2.1-json-patch``. (You can also use ``application/openstack-images-v2.0-json-patch``, but keep in mind that it's deprecated.) 
in: header required: true type: string import-header: description: | A comma separated list of import method identifiers. Included only if image import is enabled in your cloud. *Since Image API v2.6* in: header required: false type: string Location: description: | The URL to access the image file from the external store. in: header required: true type: string Range: description: | The range of image data requested. Note that multi range requests are not supported. For details, see `Hypertext Transfer Protocol (HTTP/1.1): Range Requests `_. in: header required: false type: string # variables in path image_id-in-path: description: | The UUID of the image. in: path required: true type: string member_id-in-path: description: | The ID of the image member. An image member is usually the project (also called the "tenant") with whom the image is shared. in: path required: true type: string tag-in-path: description: | The image tag. A tag is limited to 255 chars in length. You may wish to use characters that can easily be written in a URL. in: path required: true type: string # variables in query created_at-in-query: description: | Specify a *comparison filter* based on the date and time when the resource was created. (See :ref:`Time Comparison Filters `). The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. in: query required: false type: string limit: description: | Requests a page size of items. Returns a number of items up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer marker: description: | The ID of the last-seen item. 
Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: string member_status-in-query: description: | Filters the response by a member status. A valid value is ``accepted``, ``pending``, ``rejected``, or ``all``. Default is ``accepted``. in: query required: false type: string name-in-query: description: | Filters the response by a name, as a string. A valid value is the name of an image. in: query required: false type: string owner-in-query: description: | Filters the response by a project (also called a "tenant") ID. Shows only images that are shared with you by the specified owner. in: query required: false type: string protected-in-query: description: | Filters the response by the 'protected' image property. A valid value is one of 'true', 'false' (must be all lowercase). Any other value will result in a 400 response. in: query required: false type: boolean size_max: description: | Filters the response by a maximum image size, in bytes. in: query required: false type: string size_min: description: | Filters the response by a minimum image size, in bytes. in: query required: false type: string sort: description: | Sorts the response by one or more attribute and sort direction combinations. You can also set multiple sort keys and directions. Default direction is ``desc``. Use the comma (``,``) character to separate multiple values. For example: .. code-block:: none GET /v2/images?sort=name:asc,status:desc in: query required: false type: string sort_dir: description: | Sorts the response by a set of one or more sort direction and attribute (``sort_key``) combinations. A valid value for the sort direction is ``asc`` (ascending) or ``desc`` (descending). If you omit the sort direction in a set, the default is ``desc``. 
in: query required: false type: string sort_key: description: | Sorts the response by an attribute, such as ``name``, ``id``, or ``updated_at``. Default is ``created_at``. The API uses the natural sorting direction of the ``sort_key`` image attribute. in: query required: false type: string status-in-query: description: | Filters the response by an image status. in: query required: false type: integer tag-in-query: description: | Filters the response by the specified tag value. May be repeated, but keep in mind that you're making a conjunctive query, so only images containing *all* the tags specified will appear in the response. in: query required: false type: string updated_at-in-query: description: | Specify a *comparison filter* based on the date and time when the resource was most recently modified. (See :ref:`Time Comparison Filters `). The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. For example, ``2015-08-27T09:49:58-05:00``. If you omit the time zone, the UTC time zone is assumed. in: query required: false type: string visibility-in-query: description: | Filters the response by an image visibility value. A valid value is ``public``, ``private``, ``community``, or ``shared``. (Note that if you filter on ``shared``, the images included in the response will only be those where your member status is ``accepted`` unless you explicitly include a ``member_status`` filter in the request.) If you omit this parameter, the response shows ``public``, ``private``, and those ``shared`` images with a member status of ``accepted``. in: query required: false type: string # variables in body checksum: description: | Hash that is used over the image data. The Image service uses this value for verification. The value might be ``null`` (JSON null data type). 
in: body required: true type: string container_format: description: | |container_format_description| in: body required: true type: enum container_format-in-request: description: | |container_format_description| in: body required: false type: enum created_at: description: | The date and time when the resource was created. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. in: body required: true type: string direct_url: description: | The URL to access the image file kept in external store. *It is present only if the* ``show_image_direct_url`` *option is* ``true`` *in the Image service's configuration file.* **Because it presents a security risk, this option is disabled by default.** in: body required: false type: string disk_format: description: | |disk_format_description| in: body required: true type: enum disk_format-in-request: description: | |disk_format_description| in: body required: false type: enum file: description: | The URL for the virtual machine image file. in: body required: true type: string first: description: | The URI for the first page of response. in: body required: true type: string id: description: | A unique, user-defined image UUID, in the format: :: nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn Where **n** is a hexadecimal digit from 0 to f, or F. For example: :: b2173dd3-7ad6-4362-baa6-a68bce3565cb If you omit this value, the API generates a UUID for the image. in: body required: true type: string id-in-request: description: | A unique, user-defined image UUID, in the format: :: nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn Where **n** is a hexadecimal digit from 0 to f, or F. For example: :: b2173dd3-7ad6-4362-baa6-a68bce3565cb If you omit this value, the API generates a UUID for the image. If you specify a value that has already been assigned, the request fails with a ``409`` response code. 
in: body required: false type: string image_id-in-body: description: | The UUID of the image. in: body required: true type: string images: description: | A list of *image* objects, as described by the :ref:`Images Schema `. in: body required: true type: array import-methods: description: | A JSON object containing a ``value`` element, which is an array of string identifiers indicating what import methods are available in the cloud in which the call is made. This list may be empty. in: body required: true type: object locations: description: | A list of objects, each of which describes an image location. Each object contains a ``url`` key, whose value is a URL specifying a location, and a ``metadata`` key, whose value is a dict of key:value pairs containing information appropriate to the use of whatever external store is indicated by the URL. *This list appears only if the* ``show_multiple_locations`` *option is set to* ``true`` *in the Image service's configuration file.* **Because it presents a security risk, this option is disabled by default.** in: body required: false type: array member_id: description: | The ID of the image member. An image member is usually a project (also called the "tenant") with whom the image is shared. in: body required: true type: string member_status: description: | The status of this image member. Value is one of ``pending``, ``accepted``, ``rejected``. in: body required: true type: string members: description: | A list of *member* objects, as described by the :ref:`Image Members Schema `. Each *member* object describes a member with whom this image is being shared. in: body required: true type: array method-in-request: description: | A JSON object indicating what import method you wish to use to import your image. The content of this JSON object is another JSON object with a ``name`` field whose value is the identifier for the import method. 
in: body required: true type: object min_disk: description: | Amount of disk space in GB that is required to boot the image. The value might be ``null`` (JSON null data type). in: body required: true type: integer min_disk-in-request: description: | Amount of disk space in GB that is required to boot the image. in: body required: false type: integer min_ram: description: | Amount of RAM in MB that is required to boot the image. The value might be ``null`` (JSON null data type). in: body required: true type: integer min_ram-in-request: description: | Amount of RAM in MB that is required to boot the image. in: body required: false type: integer name: description: | The name of the image. Value might be ``null`` (JSON null data type). in: body required: true type: string name-in-request: description: | The name of the image. in: body required: false type: string next: description: | The URI for the next page of response. Will not be present on the last page of the response. in: body required: true type: string owner: description: | An identifier for the owner of the image, usually the project (also called the "tenant") ID. The value might be ``null`` (JSON null data type). in: body required: true type: string protected: description: | A boolean value that must be ``false`` or the image cannot be deleted. in: body required: true type: boolean protected-in-request: description: | Image protection for deletion. Valid value is ``true`` or ``false``. Default is ``false``. in: body required: false type: boolean schema-image: description: | The URL for the schema describing a virtual machine image. in: body required: true type: string schema-images: description: | The URL for the schema describing a list of images. in: body required: true type: string schema-member: description: | The URL for the schema describing an image member. in: body required: true type: string schema-members: description: | The URL for the schema describing an image member list. 
in: body required: true type: string self: description: | The URL for the virtual machine image. in: body required: true type: string size: description: | The size of the image data, in bytes. The value might be ``null`` (JSON null data type). in: body required: true type: integer status: description: | The image status. in: body required: true type: string tags: description: | List of tags for this image, possibly an empty list. in: body required: true type: array tags-in-request: description: | List of tags for this image. Each tag is a string of at most 255 chars. The maximum number of tags allowed on an image is set by the operator. in: body required: false type: array updated_at: description: | The date and time when the resource was updated. The date and time stamp format is `ISO 8601 `_: :: CCYY-MM-DDThh:mm:ss±hh:mm For example, ``2015-08-27T09:49:58-05:00``. The ``±hh:mm`` value, if included, is the time zone as an offset from UTC. In the previous example, the offset value is ``-05:00``. If the ``updated_at`` date and time stamp is not set, its value is ``null``. in: body required: true type: string url: description: | The URL to access the image file kept in external store. in: body required: true type: string value: description: | Value of image property used in add or replace operations expressed in JSON notation. For example, you must enclose strings in quotation marks, and you do not enclose numeric values in quotation marks. in: body required: true type: string virtual_size: description: | The virtual size of the image. The value might be ``null`` (JSON null data type). in: body required: true type: integer visibility: description: | Image visibility, that is, the access permission for the image. in: body required: true type: string visibility-in-request: description: | Visibility for this image. Valid value is one of: ``public``, ``private``, ``shared``, or ``community``. At most sites, only an administrator can make an image ``public``. 
Some sites may restrict which users may make an image ``community``. Some sites may restrict which users may perform member operations on a ``shared`` image. *Since the Image API v2.5, the default value is ``shared``.* in: body required: false type: string glance-16.0.0/api-ref/source/v2/tasks-parameters.yaml # variables in header Content-Type-json: description: | The media type descriptor for the request body. Use ``application/json``. in: header required: true type: string # variables in path task_id: description: | The identifier for the task, a UUID. in: path required: true type: string # variables in query limit: description: | Requests a page size of items. Returns a number of items up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer marker: description: | The ID of the last-seen item. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: string sort_dir: description: | Sorts the response by a set of one or more sort direction and attribute (``sort_key``) combinations. A valid value for the sort direction is ``asc`` (ascending) or ``desc`` (descending). If you omit the sort direction in a set, the default is ``desc``. in: query required: false type: string sort_key: description: | Sorts the response by one of the following attributes: ``created_at``, ``expires_at``, ``status``, ``type``, ``updated_at``. Default is ``created_at``. in: query required: false type: string status-in-query: description: | Filters the response by a task status. A valid value is ``pending``, ``processing``, ``success``, or ``failure``. 
in: query required: false type: string type-in-query: description: | Filters the response by a task type. A valid value is ``import``. in: query required: false type: string # variables in body created_at: description: | The date and time when the task was created. The date and time stamp format is `ISO 8601 `_. in: body required: true type: string expires_at: description: | The date and time when the task is subject to removal. While the *task object*, that is, the record describing the task, is subject to deletion, the result of the task (for example, an imported image) still exists. The date and time stamp format is `ISO 8601 `_. This value is only set when the task reaches status ``success`` or ``failure``. Otherwise its value is ``null``. It may not appear in the response when its value is ``null``. in: body required: true type: string first: description: | The URI for the first page of response. in: body required: true type: string id: description: | The UUID of the task. in: body required: true type: string input: description: | A JSON object specifying the input parameters to the task. Consult your cloud provider's documentation for details. in: body required: true type: object message: description: | Human-readable text, possibly an empty string, usually displayed in an error situation to provide more information about what has occurred. in: body required: true type: string next: description: | The URI for the next page of response. Will not be present on the last page of the response. in: body required: true type: string owner: description: | An identifier for the owner of the task, usually the tenant ID. in: body required: true type: string result: description: | A JSON object specifying information about the ultimate outcome of the task. Consult your cloud provider's documentation for details. in: body required: true type: object schema-task: description: | The URI for the schema describing an image task. 
in: body required: true type: string schema-tasks: description: | The URI for the schema describing an image task list. in: body required: true type: string self: description: | A URI for this task. in: body required: true type: string status: description: | The current status of this task. The value can be ``pending``, ``processing``, ``success``, or ``failure``. in: body required: true type: string tasks: description: | A list of sparse *task* objects. Each object contains the following fields: - ``created_at`` - ``id`` - ``owner`` - ``schema`` - ``self`` - ``status`` - ``type`` - ``updated_at`` in: body required: true type: array type: description: | The type of task represented by this content. in: body required: true type: string updated_at: description: | The date and time when the task was updated. The date and time stamp format is `ISO 8601 `_. If the ``updated_at`` date and time stamp is not set, its value is ``null``. in: body required: true type: string glance-16.0.0/api-ref/source/v2/metadefs-parameters.yaml # variables in header Content-Type-json: description: | The media type descriptor for the request body. Use ``application/json``. in: header required: true type: string Location: description: | The newly-created URI for the namespace. in: header required: true type: string # variables in path name: description: | Name of the resource type. A Name is limited to 80 chars in length. in: path required: true type: string namespace_name: description: | The name of the namespace whose details you want to see. (The name is the value of a namespace's ``namespace`` field.) in: path required: true type: string object_name: description: | The name of the object. in: path required: true type: string property_name: description: | The name of the property. in: path required: true type: string resource_type_name: description: | The name of the resource type. 
in: path required: true type: string tag_name: description: | The name of the tag. A Name is limited to 80 chars in length. in: path required: true type: string # variables in query limit: description: | Requests a page size of items. Returns a number of items up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the ID of the last-seen item from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer limit-tags: description: | Requests a page size of tags. Returns a number of tags up to a limit value. Use the ``limit`` parameter to make an initial limited request and use the name of the last-seen tag from the response as the ``marker`` parameter value in a subsequent limited request. in: query required: false type: integer marker: description: | Allows specification of a *namespace identifier*. When present, only namespaces occurring after that namespace will be listed, that is, those namespaces having a ``sort_key`` later than that of the marker in the ``sort_dir`` direction. in: query required: false type: string marker-tags: description: | Allows specification of a tag name. When present, only tags occurring *after* the named tag will be listed, that is, those tags having a ``sort_key`` later than that of the marker in the ``sort_dir`` direction. in: query required: false type: string resource_type-in-query: description: | Filters the response by property names that start with a prefix from an associated resource type. The API removes the prefix of the resource type from the property name in the response. in: query required: false type: string resource_type-in-query-namespace-detail: description: | Apply the prefix for the specified resource type to the names of the properties listed in the response. 
If the resource type specified does not have an association with this namespace, or if the resource type is associated but does not have a prefix defined in this namespace, this parameter is ignored. in: query required: false type: string resource_types-in-query: description: | Filters the response to include only those namespaces that contain the specified resource type or types as resource type associations. Use the comma (``,``) character to separate multiple values. For example, ``OS::Glance::Image,OS::Nova::Flavor`` shows only namespaces associated with these resource types. in: query required: false type: string sort_dir: description: | Sorts the response. Use ``asc`` for ascending or ``desc`` for descending order. The default is ``desc``. in: query required: false type: string sort_key: description: | Sorts the response by an attribute. Accepted values are ``namespace``, ``created_at``, and ``updated_at``. Default is ``created_at``. in: query required: false type: string sort_key-tags: description: | Sorts the response by an attribute. Accepted values are ``name``, ``created_at``, and ``updated_at``. Default is ``created_at``. in: query required: false type: string visibility-in-query: description: | Filters the response by a namespace visibility value. A valid value is ``public`` or ``private``. If you omit this parameter, the response shows both ``public`` and ``private`` namespaces. in: query required: false type: string # variables in body additionalItems: description: | Describes extra items, if you use tuple typing. If the value of ``items`` is an array (tuple typing) and the instance is longer than the list of schemas in ``items``, the additional items are described by the schema in this property. If this value is ``false``, the instance cannot be longer than the list of schemas in ``items``. If this value is ``true``, that is equivalent to the empty schema (anything goes). 
in: body required: false type: string created_at: description: | The date and time when the resource was created. The date and time stamp format is `ISO 8601 `_. in: body required: true type: string default: description: | Default property description. in: body required: false type: string description: description: | The description of the namespace. in: body required: false type: string display_name: description: | User-friendly name to use in a UI to display the namespace name. in: body required: false type: string enum: description: | Enumerated list of property values. in: body required: true type: array enum-in-request: description: | Enumerated list of property values. in: body required: false type: array first: description: | The URI for the first page of response. in: body required: true type: string hypervisor_type: description: | Hypervisor type of property values. in: body required: true type: object items: description: | Schema for the items in an array. in: body required: false type: string maximum: description: | Maximum allowed numerical value. in: body required: false type: string maxItems: description: | Maximum length of an array. in: body required: false type: string maxLength: description: | Maximum allowed string length. in: body required: false type: string minimum: description: | Minimum allowed numerical value. in: body required: false type: string minItems: description: | Minimum length of an array. in: body required: false type: string minLength: description: | Minimum allowed string length. in: body required: false type: string name-property: description: | The name of the property. A Name is limited to 80 chars in length. in: body required: true type: string name-resource-type: description: | Name of the resource type. in: body required: true type: string name-tag: description: | The name of the tag. A Name is limited to 80 chars in length. 
in: body required: true type: string namespace: description: | An identifier (a name) for the namespace. The value must be unique across all users. in: body required: true type: string namespaces: description: | A list of ``namespace`` objects. in: body required: true type: array next: description: | The URI for the next page of response. Will not be present on the last page of the response. in: body required: true type: string object-description: description: | Detailed description of the object. in: body required: true type: string object-description-in-request: description: | Detailed description of the object. in: body required: false type: string object-name: description: | The name of the object, suitable for use as an identifier. A Name is limited to 80 chars in length. in: body required: true type: string object-properties: description: | A set of key:value pairs, where each value is a *property* entity. in: body required: true type: object object-properties-in-request: description: | A set of key:value pairs, where each value is a *property* entity. in: body required: false type: object object-required: description: | A list of the names of properties that are required on this object. in: body required: true type: array object-required-in-request: description: | A list of the names of properties that are required on this object. in: body required: false type: array object-schema: description: | The URI of the JSON schema describing an *object*. in: body required: true type: string objects: description: | One or more object definitions of the namespace. in: body required: true type: string objects-namespace: description: | Namespace object definitions, if any. in: body required: false type: object operators: description: | Operators property description. in: body required: false type: string owner: description: | An identifier for the owner of this resource, usually the tenant ID. 
in: body required: true type: string pattern: description: | A regular expression ( `ECMA 262 `_ ) that a string value must match. in: body required: false type: string prefix: description: | Prefix for any properties in the namespace that you want to apply to the resource type. If you specify a prefix, you must append a prefix separator, such as the colon (``:``) character. in: body required: false type: string properties-dict: description: | A dictionary of key:value pairs, where each value is a *property* object as defined by the :ref:`Metadefs Property Schema `. in: body required: true type: object properties-nonempty: description: | One or more property definitions for the namespace. in: body required: true type: object properties-nullable: description: | Namespace property definitions, if any. in: body required: false type: object properties_target: description: | Some resource types allow more than one key and value pair for each instance. For example, the Block Storage service allows both user and image metadata on volumes. The ``properties_target`` parameter enables a namespace target to remove the ambiguity. in: body required: false type: string property-description: description: | Detailed description of the property. in: body required: true type: string property-description-in-request: description: | Detailed description of the property. in: body required: false type: string protected: description: | Namespace protection for deletion, either ``true`` or ``false``. in: body required: true type: boolean protected-in-request: description: | Namespace protection for deletion. A valid value is ``true`` or ``false``. Default is ``false``. in: body required: false type: boolean readonly: description: | Indicates whether this is a read-only property. in: body required: false type: boolean resource_type_associations: description: | A list, each element of which is described by the :ref:`Metadefs Resource Type Association Schema `. 
in: body required: true type: array resource_types-list: description: | A list of abbreviated *resource type* JSON objects, where each object contains the ``name`` of the resource type and its ``created_at`` and ``updated_at`` timestamps in `ISO 8601 Format `_. in: body required: true type: array schema-namespace: description: | The URI of the JSON schema describing a *namespace*. in: body required: true type: string schema-namespaces: description: | The URI of the JSON schema describing a *namespaces* entity, that is, an entity consisting of a list of abbreviated namespace objects. in: body required: true type: string self: description: | The URI for this resource. in: body required: true type: string tag-name: description: | The name of the tag. in: body required: true type: string tags: description: | A list of *tag* objects, where each object is defined by the :ref:`Metadefs Tag Schema `. in: body required: true type: array title: description: | The title of the property. in: body required: true type: string type: description: | The property type. in: body required: true type: string uniqueItems: description: | Indicates whether all values in the array must be distinct. in: body required: false type: string updated_at: description: | The date and time when the resource was last updated. The date and time stamp format is `ISO 8601 `_. in: body required: true type: string visibility: description: | The namespace visibility, either ``public`` or ``private``. in: body required: true type: enum visibility-in-request: description: | The namespace visibility. A valid value is ``public`` or ``private``. Default is ``private``. in: body required: false type: enum glance-16.0.0/api-ref/source/v2/images-images-v2.inc .. -*- rst -*- Images ****** Creates, lists, shows, updates, deletes, and performs other operations on images. 
General information ~~~~~~~~~~~~~~~~~~~ **Images** An *image* is represented by a JSON Object, that is, as a set of key:value pairs. Some of these keys are *base properties* that are managed by the Image service. The remainder are properties put on the image by the operator or the image owner. .. note:: Another common term for "image properties" is "image metadata" because what we're talking about here are properties that *describe* the image data that can be consumed by various OpenStack services (for example, by the Compute service to boot a server, or by the Volume service to create a bootable volume). Here's some important information about image properties: * The base properties are always included in the image representation. A base property that doesn't have a value is displayed with its value set to ``null`` (that is, the JSON null data type). * Additional properties, whose value is always a string data type, are only included in the response if they have a value. * Since version 2.2, the Images API allows an operator to configure *property protections*, by which the create, read, update, and delete operations on specific image properties may be restricted to particular user roles. Consult the documentation of your cloud operator for details. * Arguably the most important properties of an image are its *id*, which uniquely identifies the image, its *status*, which indicates the current situation of the image (which, in turn, indicates what you can do with the image), and its *visibility*, which indicates who has access to the image. .. note:: In addition to image properties, there's usually a data payload that is accessible via the image. 
In order to give image consumers some guarantees about the data payload (for example, that the data associated with image ``06b73bc7-9d62-4d37-ad95-d4708f37734f`` is the same today as it was when you used it to boot a server yesterday) the Image service controls particular image properties (for example, ``checksum``) that cannot be modified. A shorthand way to refer to the way the image data payload is related to its representation as an *image* in the Images API is to say that "images are immutable". (This obviously applies to the image data payload, not its representation in the Image service.) See the :ref:`Image Data ` section of this document for more information. **Image status** The possible status values for images are presented in the following table. .. list-table:: :header-rows: 1 * - Status - Description * - queued - The Image service reserved an image ID for the image in the catalog but did not yet upload any image data. * - saving - The Image service is in the process of saving the raw data for the image into the backing store. * - active - The image is active and ready for consumption in the Image service. * - killed - An image data upload error occurred. * - deleted - The Image service retains information about the image but the image is no longer available for use. * - pending_delete - Similar to the ``deleted`` status. An image in this state is not recoverable. * - deactivated - The image data is not available for use. **Image visibility** The possible values for image visibility are presented in the following table. .. list-table:: :header-rows: 1 * - Visibility - Description * - ``public`` - Any user may read the image and its data payload. Additionally, the image appears in the default image list of all users. * - ``community`` - Any user may read the image and its data payload, but the image does *not* appear in the default image list of any user other than the owner. 
*(This visibility value was added in the Image API v2.5)* * - ``shared`` - An image must have this visibility in order for *image members* to be added to it. Only the owner and the specific image members who have been added to the image may read the image or its data payload. The image appears in the default image list of the owner. It also appears in the default image list of members who have *accepted* the image. See the :ref:`Image Sharing ` section of this document for more information. If you do not specify a visibility value when you create an image, it is assigned this visibility by default. Non-owners, however, will not have access to the image until they are added as image members. *(This visibility value was added in the Image API v2.5)* * - ``private`` - Only the image owner may read the image or its data payload. Additionally, the image appears in the owner's default image list. *Since Image API v2.5, an image with private visibility cannot have members added to it.* Note that the descriptions above discuss *read* access to images. Only the image owner (or an administrator) has write access to image properties and the image data payload. Further, in order to promise image immutability, the Image service will allow even the owner (or an administrator) only write-once permissions to specific image properties and the image data payload. .. _image-create: Create an image ~~~~~~~~~~~~~~~ .. rest_method:: POST /v2/images Creates a catalog record for an operating system disk image. *(Since Image API v2.0)* The ``Location`` response header contains the URI for the image. The response body contains the new image entity. Synchronous Postconditions - With correct permissions, you can see the image status as ``queued`` through API calls. Normal response codes: 201 Error response codes: 400, 401, 403, 409, 413, 415 Request ------- .. 
rest_parameters:: images-parameters.yaml - container_format: container_format-in-request - disk_format: disk_format-in-request - id: id-in-request - min_disk: min_disk-in-request - min_ram: min_ram-in-request - name: name-in-request - protected: protected-in-request - tags: tags-in-request - visibility: visibility-in-request Additionally, you may include additional properties specified as key:value pairs, where the value must be a string data type. Keys and values are limited to 255 chars in length. Available key names may be limited by the cloud's property protection configuration. Request Example --------------- .. literalinclude:: samples/image-create-request.json :language: json Response Parameters ------------------- .. rest_parameters:: images-parameters.yaml - Location: Location - OpenStack-image-import-methods: import-header - checksum: checksum - container_format: container_format - created_at: created_at - disk_format: disk_format - file: file - id: id - min_disk: min_disk - min_ram: min_ram - name: name - owner: owner - protected: protected - schema: schema-image - self: self - size: size - status: status - tags: tags - updated_at: updated_at - virtual_size: virtual_size - visibility: visibility - direct_url: direct_url - locations: locations The response may also include additional properties specified as key:value pairs if additional properties were specified in the request. Response Example ---------------- .. literalinclude:: samples/image-create-response.json :language: json Show image details ~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/images/{image_id} Shows details for an image. *(Since Image API v2.0)* The response body contains a single image entity. Preconditions - The image must exist. Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: images-parameters.yaml - image_id: image_id-in-path Response Parameters ------------------- .. 
rest_parameters:: images-parameters.yaml - checksum: checksum - container_format: container_format - created_at: created_at - disk_format: disk_format - file: file - id: id - min_disk: min_disk - min_ram: min_ram - name: name - owner: owner - protected: protected - schema: schema-image - self: self - size: size - status: status - tags: tags - updated_at: updated_at - virtual_size: virtual_size - visibility: visibility - direct_url: direct_url - locations: locations The response may also include additional properties specified as key:value pairs if such properties have been added to the image by the owner or an administrator. Response Example ---------------- .. literalinclude:: samples/image-show-response.json :language: json Show images ~~~~~~~~~~~ .. rest_method:: GET /v2/images Lists public virtual machine (VM) images. *(Since Image API v2.0)* **Pagination** Returns a subset of the larger collection of images and a link that you can use to get the next set of images. You should always check for the presence of a ``next`` link and use it as the URI in a subsequent HTTP GET request. You should follow this pattern until a ``next`` link is no longer provided. The ``next`` link preserves any query parameters that you send in your initial request. You can use the ``first`` link to jump back to the first page of the collection. If you prefer to paginate through images manually, use the ``limit`` and ``marker`` parameters. **Query Filters** The list operation accepts query parameters to filter the response. A client can provide direct comparison filters by using most image attributes, such as ``name=Ubuntu``, ``visibility=public``, and so on. To filter using image tags, use the filter ``tag`` (note the singular). To filter on multiple tags, include each tag separately in the query. For example, to find images with the tag **ready**, include ``tag=ready`` in your query string. 
To find images tagged with **ready** and **approved**, include ``tag=ready&tag=approved`` in your query string. (Note that only images containing *both* tags will be included in the response.) A client cannot use any ``link`` in the json-schema, such as self, file, or schema, to filter the response. You can list VM images that have a status of ``active``, ``queued``, or ``saving``. **The** ``in`` **Operator** As a convenience, you may specify several values for any of the following fields by using the ``in`` operator: * container_format * disk_format * id * name * status For most of these, usage is straightforward. For example, to list images in queued or saving status, use: ``GET /v2/images?status=in:saving,queued`` To find images in a particular list of image IDs, use: ``GET /v2/images?id=in:3afb79c1-131a-4c38-a87c-bc4b801d14e6,2e011209-660f-44b5-baf2-2eb4babae53d`` Using the ``in`` operator with the ``name`` property of images can be a bit trickier, depending upon how creatively you have named your images. The general rule is that if an image name contains a comma (``,``), you must enclose the entire name in quotation marks (``"``). As usual, you must URL encode any characters that require it. For example, to find images named ``glass, darkly`` or ``share me``, you would use the following filter specification: ``GET v2/images?name=in:"glass,%20darkly",share%20me`` As with regular filtering by name, you must specify the complete name you are looking for. Thus, for example, the query string ``name=in:glass,share`` will only match images with the exact name ``glass`` or the exact name ``share``. It will not find an image named ``glass, darkly`` or an image named ``share me``. **Size Comparison Filters** You can use the ``size_min`` and ``size_max`` query parameters to filter the response by image size, where the size, in bytes, is the size of the image on disk. 
For example, to filter the container to include only images that are from 1 to 4 MB, set the ``size_min`` query parameter to ``1048576`` and the ``size_max`` query parameter to ``4194304``. .. _v2-comparison-ops: **Time Comparison Filters** You can use a *comparison operator* along with the ``created_at`` or ``updated_at`` fields to filter your results. Specify the operator first, a colon (``:``) as a separator, and then the time in `ISO 8601 Format `_. Available comparison operators are: .. list-table:: :header-rows: 1 * - Operator - Description * - ``gt`` - Return results more recent than the specified time. * - ``gte`` - Return any results matching the specified time and also any more recent results. * - ``eq`` - Return any results matching the specified time exactly. * - ``neq`` - Return any results that do not match the specified time. * - ``lt`` - Return results older than the specified time. * - ``lte`` - Return any results matching the specified time and also any older results. For example: .. code-block:: console GET v2/images?created_at=gt:2016-04-18T21:38:54Z **Sorting** You can use query parameters to sort the results of this operation. - ``sort_key``. Sorts by an image attribute. Sorts in the natural sorting direction of the image attribute. - ``sort_dir``. Sorts in a sort direction. - ``sort``. Sorts by one or more sets of attribute and sort direction combinations. If you omit the sort direction in a set, the default is ``desc``. To sort the response, use the ``sort_key`` and ``sort_dir`` query parameters: .. code-block:: console GET /v2/images?sort_key=name&sort_dir=asc&sort_key=status&sort_dir=desc Alternatively, specify the ``sort`` query parameter: .. code-block:: console GET /v2/images?sort=name:asc,status:desc .. note:: Although this call has been available since version 2.0 of this API, it has been enhanced from release to release. The filtering and sorting functionality and syntax described above apply to the most recent release (Newton). 
Not everything described above will be available in prior releases. Normal response codes: 200 Error response codes: 400, 401, 403 Request ------- .. rest_parameters:: images-parameters.yaml - limit: limit - marker: marker - name: name-in-query - owner: owner-in-query - protected: protected-in-query - status: status-in-query - tag: tag-in-query - visibility: visibility-in-query - member_status: member_status-in-query - size_max: size_max - size_min: size_min - created_at: created_at-in-query - updated_at: updated_at-in-query - sort_dir: sort_dir - sort_key: sort_key - sort: sort Response Parameters ------------------- .. rest_parameters:: images-parameters.yaml - images: images - first: first - next: next - schema: schema-images Response Example ---------------- .. literalinclude:: samples/images-list-response.json :language: json .. _v2-image-update: Update an image ~~~~~~~~~~~~~~~ .. rest_method:: PATCH /v2/images/{image_id} Updates an image. *(Since Image API v2.0)* Conceptually, you update an image record by patching the JSON representation of the image, passing a request body conforming to one of the following media types: - ``application/openstack-images-v2.0-json-patch`` *(deprecated)* - ``application/openstack-images-v2.1-json-patch`` *(since Image API v2.1)* Attempting to make a PATCH call using some other media type will provoke a response code of 415 (Unsupported media type). The ``application/openstack-images-v2.1-json-patch`` media type provides a useful and compatible subset of the functionality defined in JavaScript Object Notation (JSON) Patch `RFC6902 `_, which defines the ``application/json-patch+json`` media type. .. note:: The ``application/openstack-images-v2.0-json-patch`` media type is based on draft 4 of the standard. Its use is deprecated. For information about the PATCH method and the available media types, see `Image API v2 HTTP PATCH media types `_. 
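As a sketch of what a request in the v2.1 media type looks like, the following (hypothetical attribute names; ``os_distro`` and ``unused_property`` are only illustrative) composes a patch body from the RFC 6902 operation subset:

```python
import json

# Minimal sketch (not an authoritative request): build the body and headers
# for an image update using the v2.1 JSON-patch media type. "replace" changes
# an existing attribute, "add" creates a custom property, and "remove"
# deletes one. The property names here are hypothetical.
media_type = "application/openstack-images-v2.1-json-patch"

operations = [
    {"op": "replace", "path": "/name", "value": "fedora-27-server"},
    {"op": "add", "path": "/os_distro", "value": "fedora"},
    {"op": "remove", "path": "/unused_property"},
]

headers = {"Content-Type": media_type}
body = json.dumps(operations)
```

The resulting ``body`` string would be sent as the payload of the PATCH call, with the ``Content-Type`` header set to the media type shown.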
Attempting to modify some image properties will cause the entire request to fail with a 403 (Forbidden) response code: - An attempt to modify any of the "base" image properties that are managed by the Image Service. These are the properties specified as read only in the :ref:`Image Schema `. - An attempt to create or modify image properties for which you do not have permission to do so *(since Image API v2.2)*. This depends upon how property protections are configured in the OpenStack cloud in which you are making the call. Consult your cloud's documentation for details. - An attempt to delete the only image location, or to replace the image locations with an empty list *(since Image API v2.4)*. Attempting to add a location path to an image that is not in ``queued`` or ``active`` state will result in a 409 (Conflict) response code *(since Image API v2.4)*. Normal response codes: 200 Error response codes: 400, 401, 403, 404, 409, 413, 415 Request ------- .. rest_parameters:: images-parameters.yaml - Content-Type: Content-Type-patch - image_id: image_id-in-path The request body must conform to the ``application/openstack-images-v2.1-json-patch`` media type definition (see above). Request Example --------------- .. literalinclude:: samples/image-update-request.json :language: json Response Parameters ------------------- .. rest_parameters:: images-parameters.yaml - checksum: checksum - container_format: container_format - created_at: created_at - disk_format: disk_format - file: file - id: id - min_disk: min_disk - min_ram: min_ram - owner: owner - name: name - protected: protected - schema: schema-image - self: self - size: size - status: status - tags: tags - updated_at: updated_at - visibility: visibility - direct_url: direct_url - locations: locations Response Example ---------------- .. literalinclude:: samples/image-update-response.json :language: json Delete an image ~~~~~~~~~~~~~~~ .. 
rest_method:: DELETE /v2/images/{image_id} (Since Image API v2.0) Deletes an image. You cannot delete images with the ``protected`` attribute set to ``true`` (boolean). Preconditions - You can delete an image in any status except ``deleted``. - The ``protected`` attribute of the image cannot be ``true``. - You have permission to perform image deletion under the configured image deletion policy. Synchronous Postconditions - The response is empty and returns the HTTP ``204`` response code. - The API deletes the image from the images index. - If the image has associated binary image data in the storage backend, the OpenStack Image service deletes the data. Normal response codes: 204 Error response codes: 400, 401, 403, 404, 409 Request ------- .. rest_parameters:: images-parameters.yaml - image_id: image_id-in-path Deactivate image ~~~~~~~~~~~~~~~~ .. rest_method:: POST /v2/images/{image_id}/actions/deactivate Deactivates an image. *(Since Image API v2.3)* By default, this operation is restricted to administrators only. If you try to download a deactivated image, you will receive a 403 (Forbidden) response code. Additionally, only administrative users can view image locations for deactivated images. The deactivate operation returns an error if the image status is not ``active`` or ``deactivated``. Preconditions - The image must exist. Normal response codes: 204 Error response codes: 400, 403, 404 Request ------- .. rest_parameters:: images-parameters.yaml - image_id: image_id-in-path Reactivate image ~~~~~~~~~~~~~~~~ .. rest_method:: POST /v2/images/{image_id}/actions/reactivate Reactivates an image. *(Since Image API v2.3)* By default, this operation is restricted to administrators only. The reactivate operation returns an error if the image status is not ``active`` or ``deactivated``. Preconditions - The image must exist. Normal response codes: 204 Error response codes: 400, 403, 404 Request ------- .. 
rest_parameters:: images-parameters.yaml

   - image_id: image_id-in-path


.. -*- rst -*-

Metadata definition objects
***************************

Creates, lists, shows details for, updates, and deletes metadata definition
objects.

*Since API v2.2*

Create object
~~~~~~~~~~~~~

.. rest_method::  POST /v2/metadefs/namespaces/{namespace_name}/objects

Creates an object definition in a namespace.

Normal response codes: 201

Error response codes: 400, 401, 403, 404, 409

Request
-------

.. rest_parameters:: metadefs-parameters.yaml

   - namespace_name: namespace_name
   - name: object-name
   - description: object-description-in-request
   - properties: object-properties-in-request
   - required: object-required-in-request

Request Example
---------------

.. literalinclude:: samples/metadef-object-create-request.json
   :language: json

Response Parameters
-------------------

.. rest_parameters:: metadefs-parameters.yaml

   - created_at: created_at
   - description: object-description
   - name: object-name
   - properties: object-properties
   - required: object-required
   - schema: object-schema
   - self: self
   - updated_at: updated_at

Response Example
----------------

.. literalinclude:: samples/metadef-object-create-response.json
   :language: json

List objects
~~~~~~~~~~~~

.. rest_method::  GET /v2/metadefs/namespaces/{namespace_name}/objects

Lists object definitions in a namespace.

Returns a subset of the larger collection of objects and a link that you can
use to get the next set of objects. You should always check for the presence
of a ``next`` link and use it as the URI in a subsequent HTTP GET request.
You should follow this pattern until a ``next`` link is no longer provided.
The next link preserves any query parameters that you send in your initial
request. You can use the ``first`` link to jump back to the first page of the
collection.
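The ``next``-link pattern described above can be sketched as a simple loop. In this hypothetical example, ``fetch`` stands in for whatever HTTP client you use (here it serves two canned pages so the loop is runnable as-is), and the paths and object names are illustrative only:

```python
# Sketch of the recommended pagination pattern: follow the "next" link until
# the response no longer provides one. PAGES simulates two pages of results.
PAGES = {
    "/v2/metadefs/namespaces/OS::Compute::Demo/objects?limit=2": {
        "objects": [{"name": "obj-1"}, {"name": "obj-2"}],
        "next": "/v2/metadefs/namespaces/OS::Compute::Demo/objects"
                "?limit=2&marker=obj-2",
    },
    "/v2/metadefs/namespaces/OS::Compute::Demo/objects?limit=2&marker=obj-2": {
        "objects": [{"name": "obj-3"}],
    },
}

def fetch(path):
    # Stand-in for an authenticated HTTP GET against the Image service.
    return PAGES[path]

def list_all_objects(first_path):
    names = []
    path = first_path
    while path is not None:
        page = fetch(path)
        names.extend(obj["name"] for obj in page["objects"])
        path = page.get("next")  # absent on the last page, ending the loop
    return names
```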
If you prefer to paginate through namespaces manually, use the ``limit`` and ``marker`` parameters. Use the ``resource_types`` and ``visibility`` query parameters to filter the response. For example, set the ``resource_types`` query parameter to ``OS::Glance::Image,OS::Nova::Flavor`` to filter the response to include only namespaces that are associated with the given resource types. You can sort the results of this operation by using the ``sort_key`` and ``sort_dir`` parameters. The API uses the natural sorting of whatever namespace attribute is provided as the ``sort_key``. Normal response codes: 200 Error response codes: 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - visibility: visibility-in-query - resource_types: resource_types-in-query - sort_key: sort_key - sort_dir: sort_dir Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - display_name: display_name - description: description - namespace: namespace - visibility: visibility - protected: protected - namespaces: namespaces - resource_type_associations: resource_type_associations Response Example ---------------- .. literalinclude:: samples/metadef-objects-list-response.json :language: json Show object ~~~~~~~~~~~ .. rest_method:: GET /v2/metadefs/namespaces/{namespace_name}/objects/{object_name} Shows the definition for an object. The response body shows a single object entity. Normal response codes: 200 .. yep, 400 if the request includes a body Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - object_name: object_name There is no request body. Response Parameters ------------------- .. 
rest_parameters:: metadefs-parameters.yaml - created_at: created_at - description: object-description - name: object-name - properties: object-properties - required: object-required - schema: object-schema - self: self - updated_at: updated_at Response Example ---------------- .. literalinclude:: samples/metadef-object-details-response.json :language: json Update object ~~~~~~~~~~~~~ .. rest_method:: PUT /v2/metadefs/namespaces/{namespace_name}/objects/{object_name} Updates an object definition in a namespace. The object resource is completely replaced by what you specify in the request body. Thus, if you leave out any of the optional parameters, and they exist in the current object, they will be eliminated by this call. It is possible to change the name of the object with this call; if you do, note that the URL for the object (specified by the ``self`` field) will change. Normal response codes: 200 Error response codes: 400, 401, 403, 404, 409 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - object_name: object_name - name: object-name - description: object-description-in-request - properties: object-properties-in-request - required: object-required-in-request Request Example --------------- .. literalinclude:: samples/metadef-object-update-request.json :language: json Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - created_at: created_at - description: object-description - name: object-name - properties: object-properties - required: object-required - schema: object-schema - self: self - updated_at: updated_at Response Example ---------------- .. literalinclude:: samples/metadef-object-update-response.json :language: json Delete object ~~~~~~~~~~~~~ .. rest_method:: DELETE /v2/metadefs/namespaces/{namespace_name}/objects/{object_name} Deletes an object definition from a namespace. .. 
note:: If the namespace containing the object is protected, that is, if the
   ``protected`` attribute of the namespace is ``true``, then you must first
   set the ``protected`` attribute to ``false`` on the namespace before you
   will be permitted to delete the object.

   * If you try to delete an object from a protected namespace, the call
     returns the ``403`` response code.
   * To change the ``protected`` attribute of a namespace, use the
     :ref:`Update namespace ` call.

When you successfully delete an object from a namespace, the response is
empty and the response code is ``204``.

Normal response codes: 204

Error response codes: 400, 401, 403, 404

Request
-------

.. rest_parameters:: metadefs-parameters.yaml

   - namespace_name: namespace_name
   - object_name: object_name

There is no request body.

There is no response body.


.. -*- rst -*-

Metadata definition tags
************************

Creates, lists, shows details for, updates, and deletes metadata definition
tags.

*Since API v2.2*

Create tag definition
~~~~~~~~~~~~~~~~~~~~~

.. rest_method::  POST /v2/metadefs/namespaces/{namespace_name}/tags/{tag_name}

Adds a tag to the list of namespace tag definitions.

Normal response codes: 201

Error response codes: 400, 401, 403, 404, 409

Request
-------

.. rest_parameters:: metadefs-parameters.yaml

   - namespace_name: namespace_name
   - tag_name: tag_name

There is no request body.

Response Parameters
-------------------

.. rest_parameters:: metadefs-parameters.yaml

   - created_at: created_at
   - name: name-tag
   - updated_at: updated_at

Response Example
----------------

.. literalinclude:: samples/metadef-tag-create-response.json
   :language: json

Get tag definition
~~~~~~~~~~~~~~~~~~

.. rest_method::  GET /v2/metadefs/namespaces/{namespace_name}/tags/{tag_name}

Gets a definition for a tag.

The response body shows a single tag entity.
Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - tag_name: tag_name - namespace_name: namespace_name There is no request body. Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - created_at: created_at - name: name-tag - updated_at: updated_at Response Example ---------------- .. literalinclude:: samples/metadef-tag-details-response.json :language: json Update tag definition ~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: PUT /v2/metadefs/namespaces/{namespace_name}/tags/{tag_name} Renames a tag definition. Normal response codes: 200 Error response codes: 400, 401, 403, 404, 409 Request ------- .. rest_parameters:: metadefs-parameters.yaml - tag_name: tag_name - namespace_name: namespace_name - name: name-tag Request Example --------------- .. literalinclude:: samples/metadef-tag-update-request.json :language: json Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - created_at: created_at - name: name-tag - updated_at: updated_at Response Example ---------------- .. literalinclude:: samples/metadef-tag-update-response.json :language: json Delete tag definition ~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: DELETE /v2/metadefs/namespaces/{namespace_name}/tags/{tag_name} Deletes a tag definition within a namespace. .. note:: If the namespace containing the tag is protected, that is, if the ``protected`` attribute of the namespace is ``true``, then you must first set the ``protected`` attribute to ``false`` on the namespace before you will be permitted to delete the tag. * If you try to delete a tag from a protected namespace, the call returns the ``403`` response code. * To change the ``protected`` attribute of a namespace, use the :ref:`Update namespace ` call. When you successfully delete a tag from a namespace, the response is empty and the response code is ``204``. 
Normal response codes: 204 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - tag_name: tag_name Create tags ~~~~~~~~~~~ .. rest_method:: POST /v2/metadefs/namespaces/{namespace_name}/tags Creates one or more tag definitions in a namespace. Normal response codes: 201 Error response codes: 400, 401, 403, 404, 409 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - tags: tags Request Example --------------- .. literalinclude:: samples/metadef-tags-create-request.json :language: json Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - name: name - tags: tags Response Example ---------------- .. literalinclude:: samples/metadef-tag-create-response.json :language: json List tags ~~~~~~~~~ .. rest_method:: GET /v2/metadefs/namespaces/{namespace_name}/tags Lists the tag definitions within a namespace. To manually paginate through the list of tags, use the ``limit`` and ``marker`` parameters. To sort the results of this operation use the ``sort_key`` and ``sort_dir`` parameters. The API uses the natural sort order of the tag attribute of the ``sort_key`` parameter. Normal response codes: 200 Error response codes: 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - limit: limit-tags - marker: marker-tags - sort_key: sort_key-tags - sort_dir: sort_dir There is no request body. Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - tags: tags Response Example ---------------- .. literalinclude:: samples/metadef-tags-list-response.json :language: json Delete all tag definitions ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: DELETE /v2/metadefs/namespaces/{namespace_name}/tags Deletes all tag definitions within a namespace. .. 
note:: If the namespace containing the tags is protected, that is, if the
   ``protected`` attribute of the namespace is ``true``, then you must first
   set the ``protected`` attribute to ``false`` on the namespace before you
   will be permitted to delete the tags.

   If you try to delete the tags from a protected namespace, the call
   returns the ``403`` response code.

When you successfully delete the tags from a namespace, the response is
empty and the response code is ``204``.

Normal response codes: 204

Error response codes: 403, 404

Request
-------

.. rest_parameters:: metadefs-parameters.yaml

   - namespace_name: namespace_name

There is no request body.

There is no response body.


.. -*- rst -*-

Image tags
**********

Adds and deletes image tags. Image tags may also be modified by the
:ref:`v2-image-update` call.

Add image tag
~~~~~~~~~~~~~

.. rest_method::  PUT /v2/images/{image_id}/tags/{tag}

Adds a tag to an image.
*(Since Image API v2.0)*

Normal response codes: 204

Error response codes: 400, 401, 403, 404, 413

Request
-------

.. rest_parameters:: images-parameters.yaml

   - image_id: image_id-in-path
   - tag: tag-in-path

Delete image tag
~~~~~~~~~~~~~~~~

.. rest_method::  DELETE /v2/images/{image_id}/tags/{tag}

Deletes a tag from an image.
*(Since Image API v2.0)*

Normal response codes: 204

Error response codes: 400, 401, 403, 404

Request
-------

.. rest_parameters:: images-parameters.yaml

   - image_id: image_id-in-path
   - tag: tag-in-path


.. -*- rst -*-

.. _image-import-process:

Interoperable image import
**************************

An interoperable image import process is introduced in the Image API v2.6.
Use the :ref:`API versions call ` to determine what API versions are
available in your cloud.
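Once you know the cloud exposes a suitable API version, a client can pick an import method programmatically. The following sketch works from sample data shaped like the Import Method Discovery response's ``import-methods`` field (the response body here is hypothetical example data, not a live call):

```python
import json

# Hypothetical sketch: choose an import workflow from a discovery-style
# response. The body below is canned sample data in the shape of the
# "import-methods" field of the GET /v2/info/import call.
discovery_body = json.loads("""
{
    "import-methods": {
        "description": "Import methods available.",
        "type": "array",
        "value": ["glance-direct", "web-download"]
    }
}
""")

available = discovery_body["import-methods"]["value"]

# Prefer web-download when the image data is already hosted at a URL the
# Image service can reach; otherwise stage the bits with glance-direct.
method = "web-download" if "web-download" in available else "glance-direct"
```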
General information ~~~~~~~~~~~~~~~~~~~ The exact workflow you use for interoperable image import depends upon the import methods available in the cloud in which you want to import an image. Each of these methods is well defined (which is what makes this process interoperable among different OpenStack clouds). Two import methods are defined, ``glance-direct`` and ``web-download``. .. note:: Use the :ref:`Import Method Discovery ` call to determine what import methods are available in the cloud to which you wish to import an image. The first step in each interoperable image import method is the same: you must create an image record. This will give you an image id to work with. This image id is how the OpenStack Image service will understand that the other calls you make are referring to this particular image. Thus, the first step is: 1. Create an image record using the :ref:`Image Create ` API call. You must do this first so that you have an image id to work with for the other calls. In a cloud in which interoperable image import is enabled, the :ref:`Image Create ` response will include a ``OpenStack-image-import-methods`` header listing the types of import methods available in that cloud. Alternatively, these methods may be determined independently of creating an image by making the :ref:`Import Method Discovery ` call. The glance-direct import method ------------------------------- The ``glance-direct`` workflow has **three** parts: 1. Create an image record as described above. 2. Upload the image data to a staging area using the :ref:`Image Stage ` API call. Note that this image data is not accessible until after the third step has successfully completed. 3. Issue the :ref:`Image Import ` call to complete the import process. You will specify that you are using the ``glance-direct`` import method in the body of the import call. The web-download import method ------------------------------ The ``web-download`` workflow has **two** parts: 1. 
Create an image record as described above. 2. Issue the :ref:`Image Import ` call to complete the import process. You will specify that you are using the ``web-download`` import method in the body of the import call. .. _image-stage-call: Stage binary image data ~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: PUT /v2/images/{image_id}/stage Places the binary image data in a staging area. It is not stored in the storage backend and is not accessible for download until after the :ref:`Image Import ` call is made. *(Since Image API v2.6)* Set the ``Content-Type`` request header to ``application/octet-stream``. Example call: :: curl -i -X PUT -H "X-Auth-Token: $token" \ -H "Content-Type: application/octet-stream" \ -d @/home/glance/my.to-import.qcow2 \ $image_url/v2/images/{image_id}/stage **Preconditions** Before you can stage binary image data, you must meet the following preconditions: - The image record must exist. - The image status must be ``queued``. - Your image storage quota must be sufficient. - The size of the data that you want to store must not exceed the size that the OpenStack Image service allows. **Synchronous Postconditions** - With correct permissions, you can see the image status as ``uploading`` through API calls. **Troubleshooting** - If you cannot store the data, either your request lacks required information or you exceeded your allotted quota. Ensure that you meet the preconditions and run the request again. If the request fails again, review your API request. - The storage back ends for storing the data must have enough free storage space to accommodate the size of the data. Normal response codes: 204 Error response codes: 400, 401, 403, 404, 405, 409, 410, 413, 415, 503 If the image import process is not enabled in your cloud, this request will result in a 404 response code with an appropriate message. Request ------- .. rest_parameters:: images-parameters.yaml - Content-type: Content-Type-data - image_id: image_id-in-path .. 
_image-import-call:

Import an image
~~~~~~~~~~~~~~~

.. rest_method::  POST /v2/images/{image_id}/import

Signals the Image Service to complete the image import workflow by processing
data that has been made available to the OpenStack image service.
*(Since Image API v2.6)*

In the ``glance-direct`` workflow, the data has been made available to the
Image service via the :ref:`Stage binary image data ` API call.

In the ``web-download`` workflow, the data is made available to the Image
service by being posted to an accessible location with a URL that you know.

Example call: ``curl -i -X POST -H "X-Auth-Token: $token"
$image_url/v2/images/{image_id}/import``

The JSON request body specifies what import method you wish to use for this
image request.

**Preconditions**

Before you can complete the interoperable image import workflow, you must
meet the following preconditions:

- The image record must exist.

- You must set the disk and container formats in the image record. (This can
  be done at the time of image creation, or you can make the
  :ref:`Image Update ` API call.)

- Your image storage quota must be sufficient.

- The size of the data that you want to store must not exceed the size that
  the OpenStack Image service allows.

**Additional Preconditions**

If you are using the ``glance-direct`` import method:

- The image status must be ``uploading``. (This indicates that the image
  data has been uploaded to the stage.)

- The body of your request must indicate that you are using the
  ``glance-direct`` import method.

If you are using the ``web-download`` import method:

- The image status must be ``queued``. (This indicates that no image data
  has yet been associated with the image.)

- The body of your request must indicate that you are using the
  ``web-download`` import method, and it must contain the URL at which the
  data is to be found.

.. note:: The acceptable set of URLs for the ``web-download`` import method may be restricted in a particular cloud.
Consult the cloud's local documentation for details.

**Synchronous Postconditions**

- With correct permissions, you can see the image status as ``importing``
  through API calls. (Be aware, however, that if the import process completes
  before you make the API call, the image may already show as ``active``.)

Normal response codes: 202

Error response codes: 400, 401, 403, 404, 405, 409, 410, 413, 415, 503

If the image import process is not enabled in your cloud, this request will
result in a 404 response code with an appropriate message.

Request
-------

.. rest_parameters:: images-parameters.yaml

   - Content-type: Content-Type-json
   - image_id: image_id-in-path
   - method: method-in-request

Request Example - glance-direct import method
---------------------------------------------

.. literalinclude:: samples/image-import-g-d-request.json
   :language: json

Request Example - web-download import method
--------------------------------------------

.. literalinclude:: samples/image-import-w-d-request.json
   :language: json

.. _import-discovery-call:

Import methods and values discovery
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. rest_method::  GET /v2/info/import

Returns information concerning the constraints around image import in the
cloud in which the call is made, for example, supported container formats,
supported disk formats, maximum image size, etc.

The response contains an ``import-methods`` field consisting of an array of
string identifiers indicating what import methods are supported in the cloud
in which the call is made.
*(Since Image API v2.6)*

.. note:: In the Image API v2.6, this discovery call contains **only** the
   ``import-methods`` field.

Normal response codes: 200

Error response codes: 400, 401, 403

Request
-------

There are no request parameters. This call does not allow a request body.

Response Parameters
-------------------

.. rest_parameters:: images-parameters.yaml

   - import-methods: import-methods

Response Example
----------------

..
literalinclude:: samples/image-info-import-response.json
   :language: json


.. -*- rst -*-

Metadata definition schemas
***************************

Gets a JSON-schema document that represents a metadata definition entity.
*(Since API v2.2)*

Show metadata definition namespace schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. rest_method::  GET /v2/schemas/metadefs/namespace

Shows a JSON schema document that represents a metadata definition
*namespace* entity.

The following schema document is an example. The authoritative response is
the actual response to the API call.

Normal response codes: 200

Error response codes: 400, 401

Request
-------

There are no request parameters. The call does not take a request body.

Response Example
----------------

.. literalinclude:: samples/schemas-metadef-namespace-show-response.json
   :language: json

Show metadata definition namespaces schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. rest_method::  GET /v2/schemas/metadefs/namespaces

Shows a JSON schema document that represents a metadata definition
*namespaces* entity. A namespaces entity is a container for *namespace*
entities.

The following schema document is an example. The authoritative response is
the actual response to the API call.

Normal response codes: 200

Error response codes: 400, 401

Request
-------

There are no request parameters. The call does not take a request body.

Response Example
----------------

.. literalinclude:: samples/schemas-metadef-namespaces-list-response.json
   :language: json

.. _md-schema-rt-assoc:

Show metadata definition namespace resource type association schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. rest_method::  GET /v2/schemas/metadefs/resource_type

Shows a JSON schema document that represents a metadata definition namespace
*resource type association* entity.
The following schema document is an example. The authoritative response is the actual response to the API call. Normal response codes: 200 Error response codes: 400, 401 Request ------- There are no request parameters. The call does not take a request body. Response Example ---------------- .. literalinclude:: samples/schemas-metadef-resource-type-association-show-response.json :language: json Show metadata definition namespace resource type associations schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/metadefs/resource_types Shows a JSON schema document that represents a metadata definition namespace *resource type associations* entity. A resource type associations entity is a container for *resource type association* entities. The following schema document is an example. The authoritative response is the actual response to the API call. Normal response codes: 200 Error response codes: 400, 401 Request ------- There are no request parameters. The call does not take a request body. Response Example ---------------- .. literalinclude:: samples/schemas-metadef-resource-type-associations-list-response.json :language: json Show metadata definition object schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/metadefs/object Shows a JSON schema document that represents a metadata definition *object* entity. The following schema document is an example. The authoritative response is the actual response to the API call. Normal response codes: 200 Error response codes: 400, 401 Request ------- There are no request parameters. The call does not take a request body. Response Example ---------------- .. literalinclude:: samples/schemas-metadef-object-show-response.json :language: json Show metadata definition objects schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
rest_method:: GET /v2/schemas/metadefs/objects Shows a JSON schema document that represents a metadata definition *objects* entity. An objects entity is a container for *object* entities. The following schema document is an example. The authoritative response is the actual response to the API call. Normal response codes: 200 Error response codes: 400, 401 Request ------- There are no request parameters. The call does not take a request body. Response Example ---------------- .. literalinclude:: samples/schemas-metadef-objects-list-response.json :language: json .. _md-schema-property: Show metadata definition property schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/metadefs/property Shows a JSON schema document that represents a metadata definition *property* entity. The following schema document is an example. The authoritative response is the actual response to the API call. Normal response codes: 200 Error response codes: 400, 401 Request ------- There are no request parameters. The call does not take a request body. Response Example ---------------- .. literalinclude:: samples/schemas-metadef-property-show-response.json :language: json Show metadata definition properties schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/metadefs/properties Shows a JSON schema document that represents a metadata definition *properties* entity. A properties entity is a container for *property* entities. The following schema document is an example. The authoritative response is the actual response to the API call. Normal response codes: 200 Error response codes: 400, 401 Request ------- There are no request parameters. The call does not take a request body. Response Example ---------------- .. literalinclude:: samples/schemas-metadef-properties-list-response.json :language: json .. _md-schema-tag: Show metadata definition tag schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
rest_method:: GET /v2/schemas/metadefs/tag Shows a JSON schema document that represents a metadata definition *tag* entity. The following schema document is an example. The authoritative response is the actual response to the API call. Normal response codes: 200 Error response codes: 400, 401 Request ------- There are no request parameters. The call does not take a request body. Response Example ---------------- .. literalinclude:: samples/schemas-metadef-tag-show-response.json :language: json Show metadata definition tags schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/metadefs/tags Shows a JSON schema document that represents a metadata definition *tags* entity. A tags entity is a container for *tag* entities. The following schema document is an example. The authoritative response is the actual response to the API call. Normal response codes: 200 Error response codes: 400, 401 Request ------- There are no request parameters. The call does not take a request body. Response Example ---------------- .. literalinclude:: samples/schemas-metadef-tags-list-response.json :language: json glance-16.0.0/api-ref/source/v2/images-schemas.inc0000666000175100017510000000520013245511421021603 0ustar zuulzuul00000000000000.. -*- rst -*- .. note: You can get a 400 on a GET if you pass a request body (see router.py) Image Schemas ************* Gets a JSON-schema document that represents the various entities talked about by the Images v2 API. .. _images-schema: Show images schema ~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/images *(Since Images v2.0)* Shows a JSON schema document that represents an *images* entity. An images entity is a container of image entities. The following schema is solely an example. Consider only the response to the API call as authoritative. Normal response codes: 200 Error response codes: 400, 401 Request ------- This operation has no request parameters and does not accept a request body. 
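Because the schema calls in this section take no parameters and no request body, retrieving a schema document is a single authenticated GET. The sketch below builds such a request with Python's standard library; the endpoint URL and token are placeholders invented for the example, not values defined by this API, and ``X-Auth-Token`` is the usual OpenStack Identity token header.

```python
import urllib.request

GLANCE = "http://glance.example.com"  # placeholder endpoint, not a real cloud
TOKEN = "placeholder-keystone-token"  # placeholder authentication token


def schema_request(entity):
    """Build an authenticated GET for a /v2/schemas/* document."""
    return urllib.request.Request(
        f"{GLANCE}/v2/schemas/{entity}",
        headers={"X-Auth-Token": TOKEN},
        method="GET",
    )


# The image schema entities documented in this section:
for entity in ("images", "image", "members", "member"):
    req = schema_request(entity)
    print(req.get_method(), req.full_url)
```

Sending the request (for example with ``urllib.request.urlopen``) and parsing the body with ``json.load`` would then yield the schema document shown in the response examples.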
Response Example ---------------- .. literalinclude:: samples/schemas-images-list-response.json :language: json .. _image-schema: Show image schema ~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/image *(Since Images v2.0)* Shows a JSON schema document that represents an *image* entity. The following schema is solely an example. Consider only the response to the API call as authoritative. Normal response codes: 200 Error response codes: 400, 401 Request ------- This operation has no request parameters and does not accept a request body. Response Example ---------------- .. literalinclude:: samples/schemas-image-show-response.json :language: json .. _image-members-schema: Show image members schema ~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/members *(Since Images v2.1)* Shows a JSON schema document that represents an *image members* entity. An image members entity is a container of image member entities. The following schema is solely an example. Consider only the response to the API call as authoritative. Normal response codes: 200 Error response codes: 400, 401 Request ------- This operation has no request parameters and does not accept a request body. Response Example ---------------- .. literalinclude:: samples/schemas-image-members-list-response.json :language: json .. _image-member-schema: Show image member schema ~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/member *(Since Images v2.1)* Shows a JSON schema document that represents an *image member* entity. The following schema is solely an example. Consider only the response to the API call as authoritative. Normal response codes: 200 Error response codes: 400, 401 Request ------- This operation has no request parameters and does not accept a request body. Response Example ---------------- .. 
literalinclude:: samples/schemas-image-member-show-response.json :language: json glance-16.0.0/api-ref/source/v2/tasks-schemas.inc0000666000175100017510000000250113245511421021464 0ustar zuulzuul00000000000000.. -*- rst -*- Task Schemas ************ Gets a JSON-schema document that represents an individual task and a list of tasks. .. _tasks-schema: Show tasks schema ~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/tasks *(Since Images v2.2)* Shows a JSON schema document that represents a list of *tasks*. A tasks list entity is a container of entities containing abbreviated information about individual tasks. The following schema is solely an example. Consider only the response to the API call as authoritative. Normal response codes: 200 Error response codes: 401 Request ------- This operation has no request parameters and does not accept a request body. Response Example ---------------- .. literalinclude:: samples/schemas-tasks-list-response.json :language: json .. _task-schema: Show task schema ~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/schemas/task *(Since Images v2.2)* Shows a JSON schema document that represents a *task* entity. The following schema is solely an example. Consider only the response to the API call as authoritative. Normal response codes: 200 Error response codes: 401 Request ------- This operation has no request parameters and does not accept a request body. Response Example ---------------- .. literalinclude:: samples/schemas-task-show-response.json :language: json glance-16.0.0/api-ref/source/v2/metadefs-namespaces-properties.inc0000666000175100017510000001434613245511421025027 0ustar zuulzuul00000000000000.. -*- rst -*- Metadata definition properties ****************************** Creates, lists, shows details for, updates, and deletes metadata definition properties. *Since API v2.2* Create property ~~~~~~~~~~~~~~~ .. rest_method:: POST /v2/metadefs/namespaces/{namespace_name}/properties Creates a property definition in a namespace. 
The schema is a subset of the JSON property definition schema. Normal response codes: 201 Error response codes: 400, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - name: name - title: title - type: type - additionalItems: additionalItems - description: property-description-in-request - default: default - items: items - operators: operators - enum: enum - maximum: maximum - minItems: minItems - readonly: readonly - minimum: minimum - maxItems: maxItems - maxLength: maxLength - uniqueItems: uniqueItems - pattern: pattern - minLength: minLength Request Example --------------- .. literalinclude:: samples/metadef-property-create-request.json :language: json Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - additionalItems: additionalItems - description: property-description - title: title - default: default - items: items - operators: operators - enum: enum - maximum: maximum - minItems: minItems - readonly: readonly - minimum: minimum - maxItems: maxItems - maxLength: maxLength - uniqueItems: uniqueItems - pattern: pattern - type: type - minLength: minLength - name: name Response Example ---------------- .. literalinclude:: samples/metadef-property-create-response.json :language: json List properties ~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/metadefs/namespaces/{namespace_name}/properties Lists property definitions in a namespace. Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name There is no request body. Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - properties: properties-dict Response Example ---------------- .. literalinclude:: samples/metadef-properties-list-response.json :language: json Show property definition ~~~~~~~~~~~~~~~~~~~~~~~~ .. 
rest_method:: GET /v2/metadefs/namespaces/{namespace_name}/properties/{property_name} Shows the definition for a property. If you use the ``resource_type`` query parameter, the API removes the prefix of the resource type from the property name before it submits the query. This enables you to look for a property name that starts with a prefix from an associated resource type. The response body shows a single property entity. Normal response codes: 200 Error response codes: 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - property_name: property_name - namespace_name: namespace_name - resource_type: resource_type-in-query Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - additionalItems: additionalItems - description: property-description - title: title - default: default - items: items - operators: operators - enum: enum - maximum: maximum - minItems: minItems - readonly: readonly - minimum: minimum - maxItems: maxItems - maxLength: maxLength - uniqueItems: uniqueItems - pattern: pattern - type: type - minLength: minLength - name: name Response Example ---------------- .. literalinclude:: samples/metadef-property-details-response.json :language: json Update property definition ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: PUT /v2/metadefs/namespaces/{namespace_name}/properties/{property_name} Updates a property definition. Normal response codes: 200 Error response codes: 400, 401, 403, 404, 409 Request ------- .. 
rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - property_name: property_name - name: name-property - title: title - type: type - additionalItems: additionalItems - description: description - default: default - items: items - operators: operators - enum: enum - maximum: maximum - minItems: minItems - readonly: readonly - minimum: minimum - maxItems: maxItems - maxLength: maxLength - uniqueItems: uniqueItems - pattern: pattern - minLength: minLength Request Example --------------- .. literalinclude:: samples/metadef-property-create-request.json :language: json Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - additionalItems: additionalItems - description: description - title: title - default: default - items: items - operators: operators - enum: enum - maximum: maximum - minItems: minItems - readonly: readonly - minimum: minimum - maxItems: maxItems - maxLength: maxLength - uniqueItems: uniqueItems - pattern: pattern - type: type - minLength: minLength - name: name-property Response Example ---------------- .. literalinclude:: samples/metadef-property-update-response.json :language: json Remove property definition ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: DELETE /v2/metadefs/namespaces/{namespace_name}/properties/{property_name} Removes a property definition from a namespace. .. note:: If the namespace containing the property is protected, that is, if the ``protected`` attribute of the namespace is ``true``, then you must first set the ``protected`` attribute to ``false`` on the namespace before you will be permitted to delete the property. * If you try to delete a property from a protected namespace, the call returns the ``403`` response code. * To change the ``protected`` attribute of a namespace, use the :ref:`Update namespace ` call. When you successfully delete a property from a namespace, the response is empty and the response code is ``204``. 
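The remove call described above is a bare DELETE against a templated path. A minimal sketch of composing that URL, with percent-encoding for the path segments; the endpoint and the namespace and property names are illustrative placeholders (the names echo the libvirt namespace sample elsewhere in this guide):

```python
from urllib.parse import quote


def property_url(endpoint, namespace_name, property_name):
    """Compose the URL for
    /v2/metadefs/namespaces/{namespace_name}/properties/{property_name}."""
    return (
        f"{endpoint}/v2/metadefs/namespaces/"
        f"{quote(namespace_name, safe='')}/properties/"
        f"{quote(property_name, safe='')}"
    )


# Hypothetical names for illustration only:
url = property_url("http://glance.example.com",
                   "OS::Compute::Libvirt", "serial_port_count")
print(url)
```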
Normal response codes: 204 Error response codes: 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - property_name: property_name - namespace_name: namespace_name glance-16.0.0/api-ref/source/v2/index.rst0000666000175100017510000000206213245511421020066 0ustar zuulzuul00000000000000.. Copyright 2010 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. :tocdepth: 3 ============================== Image Service API v2 (CURRENT) ============================== .. rest_expand_all:: .. include:: images-parameters-descriptions.inc .. include:: images-images-v2.inc .. include:: images-sharing-v2.inc .. include:: images-tags.inc .. include:: images-schemas.inc .. include:: images-data.inc .. include:: images-import.inc .. include:: tasks.inc .. include:: tasks-schemas.inc glance-16.0.0/api-ref/source/v2/images-sharing-v2.inc0000666000175100017510000002337013245511421022150 0ustar zuulzuul00000000000000.. -*- rst -*- .. _image-sharing: Sharing ******* Images may be shared among projects by creating *members* on the image. Image members have read-only privileges on the image. The following calls allow you to create, list, update, and delete image members. .. note:: An image member is an identifier for a consumer with whom the image is shared. In most OpenStack clouds, where the value of the ``owner`` property of an image is a project ID, the appropriate identifier to use for the ``member_id`` is the consumer's project ID (also known as the "tenant ID"). 
In these clouds, image sharing is project-to-project, and all the individual users in the consuming project have access to the image. * Some deployments may choose instead to have the identifier of the user who created the image as the value of the ``owner`` property. In such clouds, the appropriate identifier to use for the ``member_id`` is the user ID of the person with whom you want to share the image. In these clouds, image sharing is user-to-user. * Note that you, as an image owner, do not have a choice about what value to use for the ``member_id``. If, like most OpenStack clouds, your cloud uses the tenant ID for the image ``owner``, sharing will not work if you use a user ID as the ``member_id`` of an image (and vice-versa). * Please consult your cloud's local documentation for details. When an image is shared, the member is given immediate access to the image. In order to prevent spamming other users' image lists, a shared image does not appear in a member's image list until the member "accepts" the image. Only the image owner may create members. Only an image member may modify his or her member status. .. TODO(rosmaita): update the following reference when the "narrative" API docs have a final resting place For a conceptual overview of image sharing, including a suggested workflow, please consult `Image API v2 Sharing`_. .. _Image API v2 Sharing: http://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html .. note:: If you don't want to maintain a sharing relationship with particular image consumers, but instead want to make an image available to *all* users, you may update your image's ``visibility`` property to ``community``. * In some clouds, the ability to "communitize" an image may be prohibited or restricted to trusted users. Please consult your cloud's local documentation for details. Create image member ~~~~~~~~~~~~~~~~~~~ .. rest_method:: POST /v2/images/{image_id}/members Adds a tenant ID as an image member. 
*(Since Image API v2.1)* Preconditions - The image must exist. - The image must have a ``visibility`` value of ``shared``. - You must be the owner of the image. Synchronous Postconditions - With correct permissions, you can see the member status of the image member as ``pending`` through API calls. Troubleshooting - Even if you have correct permissions, if the ``visibility`` attribute is not set to ``shared``, the request returns the HTTP ``403`` response code. Ensure that you meet the preconditions and run the request again. If the request fails again, review your API request. - If the member is already a member of the image, the service returns the ``Conflict (409)`` response code. If you meant to specify a different member, run the request again. Normal response codes: 200 Error response codes: 400, 401, 403, 404, 409, 413 Request ------- .. rest_parameters:: images-parameters.yaml - image_id: image_id-in-path - member: member_id Request Example --------------- .. literalinclude:: samples/image-member-create-request.json :language: json Response Parameters ------------------- .. rest_parameters:: images-parameters.yaml - created_at: created_at - image_id: image_id-in-body - member_id: member_id - schema: schema-member - status: member_status - updated_at: updated_at Response Example ---------------- .. literalinclude:: samples/image-member-create-response.json :language: json Show image member details ~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/images/{image_id}/members/{member_id} Shows image member details. *(Since Image API v2.1)* Response body is a single image member entity. Preconditions - The image must exist. - The image must have a ``visibility`` value of ``shared``. - You must be the owner of the image or the member who is referenced in the call. Normal response codes: 200 Error response codes: 400, 401, 404 Request ------- .. 
rest_parameters:: images-parameters.yaml - image_id: image_id-in-path - member_id: member_id-in-path Response Parameters ------------------- .. rest_parameters:: images-parameters.yaml - created_at: created_at - image_id: image_id-in-body - member_id: member_id - schema: schema-member - status: member_status - updated_at: updated_at Response Example ---------------- .. literalinclude:: samples/image-member-details-response.json :language: json List image members ~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/images/{image_id}/members Lists the tenants that share this image. *(Since Image API v2.1)* If the image owner makes this call, the complete member list is returned. If a user who is an image member makes this call, the member list contains only information for that user. If a user who is not an image member makes this call, the call returns the HTTP ``404`` response code. Preconditions - The image must exist. - The image must have a ``visibility`` value of ``shared``. - You must be the owner or a member of the image. Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: images-parameters.yaml - image_id: image_id-in-path Response Parameters ------------------- .. rest_parameters:: images-parameters.yaml - members: members - schema: schema-members Response Example ---------------- .. literalinclude:: samples/image-members-list-response.json :language: json Update image member ~~~~~~~~~~~~~~~~~~~ .. rest_method:: PUT /v2/images/{image_id}/members/{member_id} Sets the status for an image member. *(Since Image API v2.1)* This call allows an image member to change his or her *member status*. When an image is shared with you, you have immediate access to the image. Updating your member status on the image affects whether the image appears in your image list response. - When an image is shared with you, your member_status is ``pending``. 
You won't see the image unless you go looking for it, either by making a show image detail request using the image's ID, or by making an image list call specifically looking for a shared image in member status ``pending``. This way, other users cannot "spam" your image list with images you may not want to see. - If you want to see a particular shared image in your image list, then you must use this call to change your member status on the image to ``accepted``. - The image owner can see what your member status is on an image, but the owner *cannot* change the status. Only you (or an administrator) can do that. - There are three member status values: ``pending``, ``accepted``, and ``rejected``. The ``pending`` and ``rejected`` statuses are functionally identical. The difference is that ``pending`` indicates to the owner that you haven't updated the image, so perhaps you aren't aware that it's been shared with you. The ``rejected`` status indicates that you are aware that the image exists and you specifically decided that you don't want to see it in your image list response. For a more detailed discussion of image sharing, please consult `Image API v2 Sharing`_. Preconditions - The image must exist. - The image must have a ``visibility`` value of ``shared``. - You must be the member of the image referenced in the call. Synchronous Postconditions - If you update the member status to ``accepted`` and have the correct permissions, you see the image in list images responses. - With correct permissions, you can make API calls to see the updated member status of the image. Normal response codes: 200 Error response codes: 400, 401, 404, 403 Request ------- .. rest_parameters:: images-parameters.yaml - image_id: image_id-in-path - member_id: member_id-in-path - status: member_status Request Example --------------- .. literalinclude:: samples/image-member-update-request.json :language: json Response Parameters ------------------- .. 
rest_parameters:: images-parameters.yaml - created_at: created_at - image_id: image_id-in-body - member_id: member_id - schema: schema-member - status: member_status - updated_at: updated_at Response Example ---------------- .. literalinclude:: samples/image-member-update-response.json :language: json Delete image member ~~~~~~~~~~~~~~~~~~~ .. rest_method:: DELETE /v2/images/{image_id}/members/{member_id} Deletes a tenant ID from the member list of an image. *(Since Image API v2.1)* Preconditions - The image must exist. - The image must have a ``visibility`` value of ``shared``. - You must be the owner of the image. Synchronous Postconditions - The API removes the member from the image members. Troubleshooting - Even if you have correct permissions, if you are not the owner of the image or you specify an incorrect image ID or member ID, the call returns the HTTP ``403`` or ``404`` response code. Ensure that you meet the preconditions and run the request again. If the request fails again, review your API request URI. Normal response codes: 204 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: images-parameters.yaml - image_id: image_id-in-path - member_id: member_id-in-path glance-16.0.0/api-ref/source/v2/metadefs-resourcetypes.inc0000666000175100017510000000721413245511421023426 0ustar zuulzuul00000000000000.. -*- rst -*- Metadata definition resource types ********************************** Lists resource types. Also, creates, lists, and removes resource type associations in a namespace. *Since API v2.2* List resource types ~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/metadefs/resource_types Lists all available resource types. Using the other API calls in this section, you can create and maintain *resource type associations* between metadata definition namespaces and the resource types that are returned by this call. Normal response codes: 200 Error response codes: 400, 401, 404 Request ------- There are no request parameters. 
Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - resource_types: resource_types-list Response Example ---------------- .. literalinclude:: samples/metadef-resource-types-list-response.json :language: json Create resource type association ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: POST /v2/metadefs/namespaces/{namespace_name}/resource_types Creates a resource type association between a namespace and the resource type specified in the body of the request. .. note:: If the resource type name specified does not name an existing resource type, a new resource type will be created as a side effect of this operation. Normal response codes: 201 Error response codes: 400, 401, 403, 404, 409 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - name: name - prefix: prefix - properties_target: properties_target Request Example --------------- .. literalinclude:: samples/metadef-resource-type-create-request.json :language: json Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - created_at: created_at - prefix: prefix - properties_target: properties_target - name: name - updated_at: updated_at List resource type associations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v2/metadefs/namespaces/{namespace_name}/resource_types Lists resource type associations in a namespace. Normal response codes: 200 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name There is no request body. Response Parameters ------------------- .. rest_parameters:: metadefs-parameters.yaml - resource_type_associations: resource_type_associations Response Example ---------------- .. literalinclude:: samples/metadef-resource-types-list-response.json :language: json Remove resource type association ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
rest_method:: DELETE /v2/metadefs/namespaces/{namespace_name}/resource_types/{name} Removes a resource type association in a namespace. .. note:: If the namespace containing the association is protected, that is, if the ``protected`` attribute of the namespace is ``true``, then you must first set the ``protected`` attribute to ``false`` on the namespace before you will be permitted to remove the resource type association. * If you try to delete a resource type association from a protected namespace, the call returns the ``403`` response code. * To change the ``protected`` attribute of a namespace, use the :ref:`Update namespace ` call. When you successfully delete a resource type association from a namespace, the response is empty and the response code is ``204``. Normal response codes: 204 Error response codes: 400, 401, 403, 404 Request ------- .. rest_parameters:: metadefs-parameters.yaml - namespace_name: namespace_name - name: resource_type_name glance-16.0.0/api-ref/source/v2/images-parameters-descriptions.inc0000666000175100017510000000251113245511421025031 0ustar zuulzuul00000000000000.. |p-start| raw:: html

   <p>

.. |p-end| raw:: html

   </p>

.. |html-br| raw:: html

   <br />

.. |disk_format_description| replace:: |p-start|\ The format of the disk.\ |p-end| |p-start|\ Values may vary based on the configuration available in a particular OpenStack cloud. See the :ref:`Image Schema ` response from the cloud itself for the valid values available.\ |p-end| |p-start|\ Example formats are: ``ami``, ``ari``, ``aki``, ``vhd``, ``vhdx``, ``vmdk``, ``raw``, ``qcow2``, ``vdi``, ``ploop`` or ``iso``.\ |p-end| |p-start|\ The value might be ``null`` (JSON null data type).\ |p-end| |p-start|\ **Newton changes**: The ``vhdx`` disk format is a supported value.\ |html-br| **Ocata changes**: The ``ploop`` disk format is a supported value.\ |p-end| .. |container_format_description| replace:: |p-start|\ Format of the image container.\ |p-end| |p-start|\ Values may vary based on the configuration available in a particular OpenStack cloud. See the :ref:`Image Schema ` response from the cloud itself for the valid values available.\ |p-end| |p-start|\ Example formats are: ``ami``, ``ari``, ``aki``, ``bare``, ``ovf``, ``ova``, or ``docker``.\ |p-end| |p-start|\ The value might be ``null`` (JSON null data type).\ |p-end| glance-16.0.0/api-ref/source/v2/samples/0000775000175100017510000000000013245511661017675 5ustar zuulzuul00000000000000glance-16.0.0/api-ref/source/v2/samples/metadef-namespace-update-request.json0000666000175100017510000000051513245511421027072 0ustar zuulzuul00000000000000{ "description": "Choose capabilities that should be provided by the Compute Host. 
This provides the ability to fine tune the hardware specification required when a new vm is requested.", "display_name": "Hypervisor Selection", "namespace": "OS::Compute::Hypervisor", "protected": false, "visibility": "public" } glance-16.0.0/api-ref/source/v2/samples/image-member-update-request.json0000666000175100017510000000003513245511421026057 0ustar zuulzuul00000000000000{ "status": "accepted" } glance-16.0.0/api-ref/source/v2/samples/metadef-namespace-create-response-simple.json0000666000175100017510000000072613245511421030514 0ustar zuulzuul00000000000000{ "created_at": "2016-05-19T16:05:48Z", "description": "A metadata definitions namespace for example use.", "display_name": "An Example Namespace", "namespace": "FredCo::SomeCategory::Example", "owner": "c60b1d57c5034e0d86902aedf8c49be0", "protected": true, "schema": "/v2/schemas/metadefs/namespace", "self": "/v2/metadefs/namespaces/FredCo::SomeCategory::Example", "updated_at": "2016-05-19T16:05:48Z", "visibility": "public" } glance-16.0.0/api-ref/source/v2/samples/metadef-namespace-create-request.json0000666000175100017510000000211613245511421027052 0ustar zuulzuul00000000000000{ "description": "Choose capabilities that should be provided by the Compute Host. 
This provides the ability to fine tune the hardware specification required when a new vm is requested.", "display_name": "Hypervisor Selection", "namespace": "OS::Compute::Hypervisor", "properties": { "hypervisor_type": { "description": "The hypervisor type.", "enum": [ "xen", "qemu", "kvm", "lxc", "uml", "vmware", "hyperv" ], "title": "Hypervisor Type", "type": "string" }, "vm_mode": { "description": "The virtual machine mode.", "enum": [ "hvm", "xen", "uml", "exe" ], "title": "VM Mode", "type": "string" } }, "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" } ], "visibility": "public" } glance-16.0.0/api-ref/source/v2/samples/task-show-success-response.json0000666000175100017510000000131213245511421026003 0ustar zuulzuul00000000000000{ "created_at": "2016-06-29T16:13:07Z", "expires_at": "2016-07-01T16:13:07Z", "id": "805f47d2-8814-4cd7-bef3-37037389a998", "input": { "image_properties": { "container_format": "ovf", "disk_format": "vhd" }, "import_from": "https://apps.openstack.org/excellent-image", "import_from_format": "qcow2" }, "message": "", "owner": "02a7fb2dd4ef434c8a628c511dcbbeb6", "result": { "image_id": "2b61ed2b-f800-4da0-99ff-396b742b8646" }, "schema": "/v2/schemas/task", "self": "/v2/tasks/805f47d2-8814-4cd7-bef3-37037389a998", "status": "success", "type": "import", "updated_at": "2016-06-29T16:13:07Z" } glance-16.0.0/api-ref/source/v2/samples/metadef-namespace-details-with-rt-response.json0000666000175100017510000000223313245511421030776 0ustar zuulzuul00000000000000{ "created_at": "2016-06-28T14:57:10Z", "description": "The libvirt compute driver options.", "display_name": "libvirt Driver Options", "namespace": "OS::Compute::Libvirt", "owner": "admin", "properties": { "hw_boot_menu": { "description": "If true, enables the BIOS bootmenu.", "enum": [ "true", "false" ], "title": "Boot Menu", "type": "string" }, "hw_serial_port_count": { "description": "Specifies the count of serial ports.", "minimum": 0, "title": "Serial 
Port Count", "type": "integer" } }, "protected": true, "resource_type_associations": [ { "created_at": "2016-06-28T14:57:10Z", "name": "OS::Glance::Image", "prefix": "hw_" }, { "created_at": "2016-06-28T14:57:10Z", "name": "OS::Nova::Flavor", "prefix": "hw:" } ], "schema": "/v2/schemas/metadefs/namespace", "self": "/v2/metadefs/namespaces/OS::Compute::Libvirt", "visibility": "public" } glance-16.0.0/api-ref/source/v2/samples/task-create-request.json0000666000175100017510000000042613245511421024457 0ustar zuulzuul00000000000000{ "type": "import", "input": { "import_from": "http://app-catalog.openstack.example.org/groovy-image", "import_from_format": "qcow2", "image_properties": { "disk_format": "vhd", "container_format": "ovf" } } } glance-16.0.0/api-ref/source/v2/samples/image-import-w-d-request.json0000666000175100017510000000022113245511421025324 0ustar zuulzuul00000000000000{ "method": { "name": "web-download", "uri": "https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-ppc64le-disk.img" } } glance-16.0.0/api-ref/source/v2/samples/image-member-create-request.json0000666000175100017510000000006513245511421026043 0ustar zuulzuul00000000000000{ "member": "8989447062e04a818baf9e073fd04fa7" } glance-16.0.0/api-ref/source/v2/samples/metadef-namespace-details-response.json0000666000175100017510000000222513245511421027403 0ustar zuulzuul00000000000000{ "created_at": "2016-06-28T14:57:10Z", "description": "The libvirt compute driver options.", "display_name": "libvirt Driver Options", "namespace": "OS::Compute::Libvirt", "owner": "admin", "properties": { "boot_menu": { "description": "If true, enables the BIOS bootmenu.", "enum": [ "true", "false" ], "title": "Boot Menu", "type": "string" }, "serial_port_count": { "description": "Specifies the count of serial ports.", "minimum": 0, "title": "Serial Port Count", "type": "integer" } }, "protected": true, "resource_type_associations": [ { "created_at": "2016-06-28T14:57:10Z", "name": "OS::Glance::Image", "prefix": "hw_" }, 
{ "created_at": "2016-06-28T14:57:10Z", "name": "OS::Nova::Flavor", "prefix": "hw:" } ], "schema": "/v2/schemas/metadefs/namespace", "self": "/v2/metadefs/namespaces/OS::Compute::Libvirt", "visibility": "public" }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-tag-update-response.json
{ "created_at": "2016-05-21T18:49:38Z", "name": "new-tag-name", "updated_at": "2016-05-21T19:04:22Z" }

File: glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-property-show-response.json
{ "additionalProperties": false, "definitions": { "positiveInteger": { "minimum": 0, "type": "integer" }, "positiveIntegerDefault0": { "allOf": [ { "$ref": "#/definitions/positiveInteger" }, { "default": 0 } ] }, "stringArray": { "items": { "type": "string" }, "minItems": 1, "type": "array", "uniqueItems": true } }, "name": "property", "properties": { "additionalItems": { "type": "boolean" }, "default": {}, "description": { "type": "string" }, "enum": { "type": "array" }, "items": { "properties": { "enum": { "type": "array" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" } }, "type": "object" }, "maxItems": { "$ref": "#/definitions/positiveInteger" }, "maxLength": { "$ref": "#/definitions/positiveInteger" }, "maximum": { "type": "number" }, "minItems": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minLength": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minimum": { "type": "number" }, "name": { "maxLength": 255, "type": "string" }, "operators": { "items": { "type": "string" }, "type": "array" }, "pattern": { "format": "regex", "type": "string" }, "readonly": { "type": "boolean" }, "required": { "$ref": "#/definitions/stringArray" }, "title": { "type": "string" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" },
"uniqueItems": { "default": false, "type": "boolean" } }, "required": [ "type", "title", "name" ] }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-tags-create-request.json
{ "tags": [ { "name": "sample-tag1" }, { "name": "sample-tag2" }, { "name": "sample-tag3" } ] }

File: glance-16.0.0/api-ref/source/v2/samples/schemas-task-show-response.json
{ "name": "task", "properties": { "created_at": { "description": "Datetime when this resource was created", "type": "string" }, "expires_at": { "description": "Datetime when this resource would be subject to removal", "type": [ "null", "string" ] }, "id": { "description": "An identifier for the task", "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string" }, "input": { "description": "The parameters required by task, JSON blob", "type": [ "null", "object" ] }, "message": { "description": "Human-readable informative message only included when appropriate (usually on failure)", "type": "string" }, "owner": { "description": "An identifier for the owner of this task", "type": "string" }, "result": { "description": "The result of current task, JSON blob", "type": [ "null", "object" ] }, "schema": { "readOnly": true, "type": "string" }, "self": { "readOnly": true, "type": "string" }, "status": { "description": "The current status of this task", "enum": [ "pending", "processing", "success", "failure" ], "type": "string" }, "type": { "description": "The type of task represented by this content", "enum": [ "import" ], "type": "string" }, "updated_at": { "description": "Datetime when this resource was updated", "type": "string" } } }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-namespace-create-request-simple.json
{ "namespace":
"FredCo::SomeCategory::Example", "display_name": "An Example Namespace", "description": "A metadata definitions namespace for example use.", "visibility": "public", "protected": true }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-object-update-response.json
{ "created_at": "2014-09-19T19:20:56Z", "description": "You can configure the CPU limits with control parameters.", "name": "CPU Limits", "properties": { "quota:cpu_shares": { "description": "Specifies the proportional weighted share for the domain. If this element is omitted, the service defaults to the OS provided defaults. There is no unit for the value; it is a relative measure based on the setting of other VMs. For example, a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024.", "title": "Quota: CPU Shares", "type": "integer" } }, "required": [], "schema": "/v2/schemas/metadefs/object", "self": "/v2/metadefs/namespaces/OS::Compute::Quota/objects/CPU Limits", "updated_at": "2014-09-19T19:20:56Z" }

File: glance-16.0.0/api-ref/source/v2/samples/image-update-response.json
{ "checksum": "710544e7f0c828b42f51207342622d33", "container_format": "ovf", "created_at": "2016-06-29T16:13:07Z", "disk_format": "vhd", "file": "/v2/images/2b61ed2b-f800-4da0-99ff-396b742b8646/file", "id": "2b61ed2b-f800-4da0-99ff-396b742b8646", "min_disk": 20, "min_ram": 512, "name": "Fedora 17", "owner": "02a7fb2dd4ef434c8a628c511dcbbeb6", "protected": false, "schema": "/v2/schemas/image", "self": "/v2/images/2b61ed2b-f800-4da0-99ff-396b742b8646", "size": 21909, "status": "active", "tags": [ "beefy", "fedora" ], "updated_at": "2016-07-25T14:48:18Z", "virtual_size": null, "visibility": "private" }

File: glance-16.0.0/api-ref/source/v2/samples/image-show-response.json
{ "status":
"active", "name": "cirros-0.3.2-x86_64-disk", "tags": [], "container_format": "bare", "created_at": "2014-05-05T17:15:10Z", "disk_format": "qcow2", "updated_at": "2014-05-05T17:15:11Z", "visibility": "public", "self": "/v2/images/1bea47ed-f6a9-463b-b423-14b9cca9ad27", "min_disk": 0, "protected": false, "id": "1bea47ed-f6a9-463b-b423-14b9cca9ad27", "file": "/v2/images/1bea47ed-f6a9-463b-b423-14b9cca9ad27/file", "checksum": "64d7c1cd2b6f60c92c14662941cb7913", "owner": "5ef70662f8b34079a6eddb8da9d75fe8", "size": 13167616, "min_ram": 0, "schema": "/v2/schemas/image", "virtual_size": null }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-object-create-response.json
{ "created_at": "2014-09-19T18:20:56Z", "description": "You can configure the CPU limits with control parameters.", "name": "CPU Limits", "properties": { "quota:cpu_period": { "description": "Specifies the enforcement interval (unit: microseconds) for QEMU and LXC hypervisors. Within a period, each VCPU of the domain is not allowed to consume more than the quota worth of runtime. The value should be in range [1000, 1000000]. A period with value 0 means no value.", "maximum": 1000000, "minimum": 1000, "title": "Quota: CPU Period", "type": "integer" }, "quota:cpu_quota": { "description": "Specifies the maximum allowed bandwidth (unit: microseconds). A domain with a negative-value quota indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be in range [1000, 18446744073709551] or less than 0. A quota with value 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed.", "title": "Quota: CPU Quota", "type": "integer" }, "quota:cpu_shares": { "description": "Specifies the proportional weighted share for the domain. If this element is omitted, the service defaults to the OS provided defaults.
There is no unit for the value; it is a relative measure based on the setting of other VMs. For example, a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024.", "title": "Quota: CPU Shares", "type": "integer" } }, "required": [], "schema": "/v2/schemas/metadefs/object", "self": "/v2/metadefs/namespaces/OS::Compute::Quota/objects/CPU Limits", "updated_at": "2014-09-19T18:20:56Z" }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-tag-details-response.json
{ "created_at": "2015-05-06T23:16:12Z", "name": "sample-tag2", "updated_at": "2015-05-06T23:16:12Z" }

File: glance-16.0.0/api-ref/source/v2/samples/task-show-failure-response.json
{ "created_at": "2016-06-24T14:57:20Z", "expires_at": "2016-06-26T14:57:20Z", "id": "bb480de2-7077-4ea9-bbe9-be1891290d3e", "input": { "image_properties": { "container_format": "ovf", "disk_format": "vhd" }, "import_from": "http://app-catalog.openstack.example.org/groovy-image", "import_from_format": "qcow2" }, "message": "Task failed due to Internal Error", "owner": "fa6c8c1600f4444281658a23ee6da8e8", "result": null, "schema": "/v2/schemas/task", "self": "/v2/tasks/bb480de2-7077-4ea9-bbe9-be1891290d3e", "status": "failure", "type": "import", "updated_at": "2016-06-24T14:57:20Z" }

File: glance-16.0.0/api-ref/source/v2/samples/schemas-images-list-response.json
{ "links": [ { "href": "{first}", "rel": "first" }, { "href": "{next}", "rel": "next" }, { "href": "{schema}", "rel": "describedby" } ], "name": "images", "properties": { "first": { "type": "string" }, "images": { "items": { "additionalProperties": { "type": "string" }, "links": [ { "href": "{self}", "rel": "self" }, { "href": "{file}", "rel": "enclosure" }, { "href": "{schema}", "rel": "describedby" } ], "name": "image",
"properties": { "architecture": { "description": "Operating system architecture as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html", "is_base": false, "type": "string" }, "checksum": { "description": "md5 hash of image contents.", "maxLength": 32, "readOnly": true, "type": [ "null", "string" ] }, "container_format": { "description": "Format of the container", "enum": [ null, "ami", "ari", "aki", "bare", "ovf", "ova", "docker" ], "type": [ "null", "string" ] }, "created_at": { "description": "Date and time of image registration", "readOnly": true, "type": "string" }, "direct_url": { "description": "URL to access the image file kept in external store", "readOnly": true, "type": "string" }, "disk_format": { "description": "Format of the disk", "enum": [ null, "ami", "ari", "aki", "vhd", "vhdx", "vmdk", "raw", "qcow2", "vdi", "iso", "ploop" ], "type": [ "null", "string" ] }, "file": { "description": "An image file url", "readOnly": true, "type": "string" }, "id": { "description": "An identifier for the image", "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string" }, "instance_uuid": { "description": "Metadata which can be used to record which instance this image is associated with. 
(Informational only, does not create an instance snapshot.)", "is_base": false, "type": "string" }, "kernel_id": { "description": "ID of image stored in Glance that should be used as the kernel when booting an AMI-style image.", "is_base": false, "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": [ "null", "string" ] }, "locations": { "description": "A set of URLs to access the image file kept in external store", "items": { "properties": { "metadata": { "type": "object" }, "url": { "maxLength": 255, "type": "string" } }, "required": [ "url", "metadata" ], "type": "object" }, "type": "array" }, "min_disk": { "description": "Amount of disk space (in GB) required to boot image.", "type": "integer" }, "min_ram": { "description": "Amount of ram (in MB) required to boot image.", "type": "integer" }, "name": { "description": "Descriptive name for the image", "maxLength": 255, "type": [ "null", "string" ] }, "os_distro": { "description": "Common name of operating system distribution as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html", "is_base": false, "type": "string" }, "os_version": { "description": "Operating system version as specified by the distributor", "is_base": false, "type": "string" }, "owner": { "description": "Owner of the image", "maxLength": 255, "type": [ "null", "string" ] }, "protected": { "description": "If true, image will not be deletable.", "type": "boolean" }, "ramdisk_id": { "description": "ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image.", "is_base": false, "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": [ "null", "string" ] }, "schema": { "description": "An image schema url", "readOnly": true, "type": "string" }, "self": { "description": "An image self url", "readOnly": true, "type": "string" }, "size": { "description": "Size of image 
file in bytes", "readOnly": true, "type": [ "null", "integer" ] }, "status": { "description": "Status of the image", "enum": [ "queued", "saving", "active", "killed", "deleted", "pending_delete", "deactivated" ], "readOnly": true, "type": "string" }, "tags": { "description": "List of strings related to the image", "items": { "maxLength": 255, "type": "string" }, "type": "array" }, "updated_at": { "description": "Date and time of the last image modification", "readOnly": true, "type": "string" }, "virtual_size": { "description": "Virtual size of image in bytes", "readOnly": true, "type": [ "null", "integer" ] }, "visibility": { "description": "Scope of image accessibility", "enum": [ "public", "private" ], "type": "string" } } }, "type": "array" }, "next": { "type": "string" }, "schema": { "type": "string" } } }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-property-create-request.json
{ "description": "The hypervisor type. It may be used by the host properties filter for scheduling. The ImagePropertiesFilter filters compute nodes that satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties.
Image properties are contained in the image dictionary in the request_spec.", "enum": [ "xen", "qemu", "kvm", "lxc", "uml", "vmware", "hyperv" ], "name": "hypervisor_type", "title": "Hypervisor Type", "type": "string" }

File: glance-16.0.0/api-ref/source/v2/samples/tasks-list-response.json
{ "first": "/v2/tasks", "schema": "/v2/schemas/tasks", "tasks": [ { "created_at": "2016-06-24T14:44:19Z", "id": "08b7e1c8-3821-4f54-b3b8-d6655d178cdf", "owner": "fa6c8c1600f4444281658a23ee6da8e8", "schema": "/v2/schemas/task", "self": "/v2/tasks/08b7e1c8-3821-4f54-b3b8-d6655d178cdf", "status": "processing", "type": "import", "updated_at": "2016-06-24T14:44:19Z" }, { "created_at": "2016-06-24T14:40:19Z", "id": "231c311d-3557-4e23-afc4-6d98af1419e7", "owner": "fa6c8c1600f4444281658a23ee6da8e8", "schema": "/v2/schemas/task", "self": "/v2/tasks/231c311d-3557-4e23-afc4-6d98af1419e7", "status": "processing", "type": "import", "updated_at": "2016-06-24T14:40:20Z" } ] }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-property-details-response.json
{ "description": "The hypervisor type. It may be used by the host properties filter for scheduling. The ImagePropertiesFilter filters compute nodes that satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties.
Image properties are contained in the image dictionary in the request_spec.", "enum": [ "xen", "qemu", "kvm", "lxc", "uml", "vmware", "hyperv" ], "name": "hypervisor_type", "title": "Hypervisor Type", "type": "string" }

File: glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-object-show-response.json
{ "additionalProperties": false, "definitions": { "positiveInteger": { "minimum": 0, "type": "integer" }, "positiveIntegerDefault0": { "allOf": [ { "$ref": "#/definitions/positiveInteger" }, { "default": 0 } ] }, "property": { "additionalProperties": { "properties": { "additionalItems": { "type": "boolean" }, "default": {}, "description": { "type": "string" }, "enum": { "type": "array" }, "items": { "properties": { "enum": { "type": "array" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" } }, "type": "object" }, "maxItems": { "$ref": "#/definitions/positiveInteger" }, "maxLength": { "$ref": "#/definitions/positiveInteger" }, "maximum": { "type": "number" }, "minItems": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minLength": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minimum": { "type": "number" }, "name": { "maxLength": 255, "type": "string" }, "operators": { "items": { "type": "string" }, "type": "array" }, "pattern": { "format": "regex", "type": "string" }, "readonly": { "type": "boolean" }, "required": { "$ref": "#/definitions/stringArray" }, "title": { "type": "string" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" }, "uniqueItems": { "default": false, "type": "boolean" } }, "required": [ "title", "type" ], "type": "object" }, "type": "object" }, "stringArray": { "items": { "type": "string" }, "type": "array", "uniqueItems": true } }, "name": "object", "properties": { "created_at": { "description": "Date and time of object creation", "format":
"date-time", "readOnly": true, "type": "string" }, "description": { "type": "string" }, "name": { "maxLength": 255, "type": "string" }, "properties": { "$ref": "#/definitions/property" }, "required": { "$ref": "#/definitions/stringArray" }, "schema": { "readOnly": true, "type": "string" }, "self": { "readOnly": true, "type": "string" }, "updated_at": { "description": "Date and time of the last object modification", "format": "date-time", "readOnly": true, "type": "string" } }, "required": [ "name" ] }

File: glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-tags-list-response.json
{ "links": [ { "href": "{first}", "rel": "first" }, { "href": "{next}", "rel": "next" }, { "href": "{schema}", "rel": "describedby" } ], "name": "tags", "properties": { "first": { "type": "string" }, "next": { "type": "string" }, "schema": { "type": "string" }, "tags": { "items": { "additionalProperties": false, "name": "tag", "properties": { "created_at": { "description": "Date and time of tag creation", "format": "date-time", "readOnly": true, "type": "string" }, "name": { "maxLength": 255, "type": "string" }, "updated_at": { "description": "Date and time of the last tag modification", "format": "date-time", "readOnly": true, "type": "string" } }, "required": [ "name" ] }, "type": "array" } } }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-resource-type-assoc-create-response.json
{ "created_at": "2014-09-19T16:09:13Z", "name": "OS::Cinder::Volume", "prefix": "hw_", "properties_target": "image", "updated_at": "2014-09-19T16:09:13Z" }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-objects-list-response.json
{ "objects": [ { "created_at": "2014-09-18T18:16:35Z", "description": "You can configure the CPU limits with control parameters.", "name": "CPU Limits",
"properties": { "quota:cpu_period": { "description": "Specifies the enforcement interval (unit: microseconds) for QEMU and LXC hypervisors. Within a period, each VCPU of the domain is not allowed to consume more than the quota worth of runtime. The value should be in range [1000, 1000000]. A period with value 0 means no value.", "maximum": 1000000, "minimum": 1000, "title": "Quota: CPU Period", "type": "integer" }, "quota:cpu_quota": { "description": "Specifies the maximum allowed bandwidth (unit: microseconds). A domain with a negative-value quota indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be in range [1000, 18446744073709551] or less than 0. A quota with value 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed.", "title": "Quota: CPU Quota", "type": "integer" }, "quota:cpu_shares": { "description": "Specifies the proportional weighted share for the domain. If this element is omitted, the service defaults to the OS provided defaults. There is no unit for the value; it is a relative measure based on the setting of other VMs. 
For example, a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024.", "title": "Quota: CPU Shares", "type": "integer" } }, "required": [], "schema": "/v2/schemas/metadefs/object", "self": "/v2/metadefs/namespaces/OS::Compute::Quota/objects/CPU Limits" }, { "created_at": "2014-09-18T18:16:35Z", "description": "Using disk I/O quotas, you can set maximum disk write to 10 MB per second for a VM user.", "name": "Disk QoS", "properties": { "quota:disk_read_bytes_sec": { "description": "Sets disk I/O quota for disk read bytes / sec.", "title": "Quota: Disk read bytes / sec", "type": "integer" }, "quota:disk_read_iops_sec": { "description": "Sets disk I/O quota for disk read IOPS / sec.", "title": "Quota: Disk read IOPS / sec", "type": "integer" }, "quota:disk_total_bytes_sec": { "description": "Sets disk I/O quota for total disk bytes / sec.", "title": "Quota: Disk Total Bytes / sec", "type": "integer" }, "quota:disk_total_iops_sec": { "description": "Sets disk I/O quota for disk total IOPS / sec.", "title": "Quota: Disk Total IOPS / sec", "type": "integer" }, "quota:disk_write_bytes_sec": { "description": "Sets disk I/O quota for disk write bytes / sec.", "title": "Quota: Disk Write Bytes / sec", "type": "integer" }, "quota:disk_write_iops_sec": { "description": "Sets disk I/O quota for disk write IOPS / sec.", "title": "Quota: Disk Write IOPS / sec", "type": "integer" } }, "required": [], "schema": "/v2/schemas/metadefs/object", "self": "/v2/metadefs/namespaces/OS::Compute::Quota/objects/Disk QoS" }, { "created_at": "2014-09-18T18:16:35Z", "description": "Bandwidth QoS tuning for instance virtual interfaces (VIFs) may be specified with these properties. Incoming and outgoing traffic can be shaped independently. If not specified, no quality of service (QoS) is applied on that traffic direction. So, if you want to shape only the network's incoming traffic, use inbound only (and vice versa). 
The OpenStack Networking service abstracts the physical implementation of the network, allowing plugins to configure and manage physical resources. Virtual Interfaces (VIF) in the logical model are analogous to physical network interface cards (NICs). VIFs are typically owned and managed by an external service; for instance, when OpenStack Networking is used for building OpenStack networks, VIFs would be created, owned, and managed in Nova. VIFs are connected to OpenStack Networking networks via ports. A port is analogous to a port on a network switch, and it has an administrative state. When a VIF is attached to a port, the OpenStack Networking API creates an attachment object, which specifies the fact that a VIF with a given identifier is plugged into the port.", "name": "Virtual Interface QoS", "properties": { "quota:vif_inbound_average": { "description": "Network Virtual Interface (VIF) inbound average in kilobytes per second. Specifies average bit rate on the interface being shaped.", "title": "Quota: VIF Inbound Average", "type": "integer" }, "quota:vif_inbound_burst": { "description": "Network Virtual Interface (VIF) inbound burst in total kilobytes. Specifies the amount of bytes that can be burst at peak speed.", "title": "Quota: VIF Inbound Burst", "type": "integer" }, "quota:vif_inbound_peak": { "description": "Network Virtual Interface (VIF) inbound peak in kilobytes per second. Specifies maximum rate at which an interface can receive data.", "title": "Quota: VIF Inbound Peak", "type": "integer" }, "quota:vif_outbound_average": { "description": "Network Virtual Interface (VIF) outbound average in kilobytes per second. Specifies average bit rate on the interface being shaped.", "title": "Quota: VIF Outbound Average", "type": "integer" }, "quota:vif_outbound_burst": { "description": "Network Virtual Interface (VIF) outbound burst in total kilobytes.
Specifies the amount of bytes that can be burst at peak speed.", "title": "Quota: VIF Outbound Burst", "type": "integer" }, "quota:vif_outbound_peak": { "description": "Network Virtual Interface (VIF) outbound peak in kilobytes per second. Specifies maximum rate at which an interface can send data.", "title": "Quota: VIF Outbound Peak", "type": "integer" } }, "required": [], "schema": "/v2/schemas/metadefs/object", "self": "/v2/metadefs/namespaces/OS::Compute::Quota/objects/Virtual Interface QoS" } ], "schema": "v2/schemas/metadefs/objects" }

File: glance-16.0.0/api-ref/source/v2/samples/image-info-import-response.json
{ "import-methods": { "description": "Import methods available.", "type": "array", "value": [ "glance-direct", "web-download" ] } }

File: glance-16.0.0/api-ref/source/v2/samples/image-update-request.json
[ { "op": "replace", "path": "/name", "value": "Fedora 17" }, { "op": "replace", "path": "/tags", "value": [ "fedora", "beefy" ] } ]

File: glance-16.0.0/api-ref/source/v2/samples/metadef-namespaces-list-response.json
{ "first": "/v2/metadefs/namespaces?sort_key=created_at&sort_dir=asc", "namespaces": [ { "created_at": "2014-08-28T17:13:06Z", "description": "The libvirt compute driver options. These are properties specific to compute drivers.
For a list of all hypervisors, see here: https://wiki.openstack.org/wiki/HypervisorSupportMatrix.", "display_name": "libvirt Driver Options", "namespace": "OS::Compute::Libvirt", "owner": "admin", "protected": true, "resource_type_associations": [ { "created_at": "2014-08-28T17:13:06Z", "name": "OS::Glance::Image", "updated_at": "2014-08-28T17:13:06Z" } ], "schema": "/v2/schemas/metadefs/namespace", "self": "/v2/metadefs/namespaces/OS::Compute::Libvirt", "updated_at": "2014-08-28T17:13:06Z", "visibility": "public" }, { "created_at": "2014-08-28T17:13:06Z", "description": "Compute drivers may enable quotas on CPUs available to a VM, disk tuning, bandwidth I/O, and instance VIF traffic control. See: http://docs.openstack.org/admin-guide-cloud/compute-flavors.html", "display_name": "Flavor Quota", "namespace": "OS::Compute::Quota", "owner": "admin", "protected": true, "resource_type_associations": [ { "created_at": "2014-08-28T17:13:06Z", "name": "OS::Nova::Flavor", "updated_at": "2014-08-28T17:13:06Z" } ], "schema": "/v2/schemas/metadefs/namespace", "self": "/v2/metadefs/namespaces/OS::Compute::Quota", "updated_at": "2014-08-28T17:13:06Z", "visibility": "public" }, { "created_at": "2014-08-28T17:13:06Z", "description": "Trusted compute pools with Intel\u00ae Trusted Execution Technology (Intel\u00ae TXT) support IT compliance by protecting virtualized data centers - private, public, and hybrid clouds against attacks toward hypervisor and BIOS, firmware, and other pre-launch software components.", "display_name": "Trusted Compute Pools (Intel\u00ae TXT)", "namespace": "OS::Compute::Trust", "owner": "admin", "protected": true, "resource_type_associations": [ { "created_at": "2014-08-28T17:13:06Z", "name": "OS::Nova::Flavor", "updated_at": "2014-08-28T17:13:06Z" } ], "schema": "/v2/schemas/metadefs/namespace", "self": "/v2/metadefs/namespaces/OS::Compute::Trust", "updated_at": "2014-08-28T17:13:06Z", "visibility": "public" }, { "created_at": "2014-08-28T17:13:06Z", 
"description": "This provides the preferred socket/core/thread counts for the virtual CPU instance exposed to guests. This enables the ability to avoid hitting limitations on vCPU topologies that OS vendors place on their products. See also: http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/virt-driver-vcpu-topology.rst", "display_name": "Virtual CPU Topology", "namespace": "OS::Compute::VirtCPUTopology", "owner": "admin", "protected": true, "resource_type_associations": [ { "created_at": "2014-08-28T17:13:06Z", "name": "OS::Glance::Image", "prefix": "hw_", "updated_at": "2014-08-28T17:13:06Z" }, { "created_at": "2014-08-28T17:13:06Z", "name": "OS::Cinder::Volume", "prefix": "hw_", "properties_target": "image", "updated_at": "2014-08-28T17:13:06Z" }, { "created_at": "2014-08-28T17:13:06Z", "name": "OS::Nova::Flavor", "prefix": "hw:", "updated_at": "2014-08-28T17:13:06Z" } ], "schema": "/v2/schemas/metadefs/namespace", "self": "/v2/metadefs/namespaces/OS::Compute::VirtCPUTopology", "updated_at": "2014-08-28T17:13:06Z", "visibility": "public" } ], "schema": "/v2/schemas/metadefs/namespaces" }

File: glance-16.0.0/api-ref/source/v2/samples/schemas-tasks-list-response.json
{ "links": [ { "href": "{schema}", "rel": "describedby" } ], "name": "tasks", "properties": { "schema": { "type": "string" }, "tasks": { "items": { "name": "task", "properties": { "created_at": { "description": "Datetime when this resource was created", "type": "string" }, "expires_at": { "description": "Datetime when this resource would be subject to removal", "type": [ "null", "string" ] }, "id": { "description": "An identifier for the task", "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string" }, "owner": { "description": "An identifier for the owner of this task", "type": "string" }, "schema": { "readOnly": true, "type": "string" }, "self": {
"readOnly": true, "type": "string" }, "status": { "description": "The current status of this task", "enum": [ "pending", "processing", "success", "failure" ], "type": "string" }, "type": { "description": "The type of task represented by this content", "enum": [ "import" ], "type": "string" }, "updated_at": { "description": "Datetime when this resource was updated", "type": "string" } } }, "type": "array" } } }

File: glance-16.0.0/api-ref/source/v2/samples/metadef-tag-create-response.json
{ "created_at": "2015-05-09T01:12:31Z", "name": "added-sample-tag", "updated_at": "2015-05-09T01:12:31Z" }

File: glance-16.0.0/api-ref/source/v2/samples/schemas-image-show-response.json
{ "additionalProperties": { "type": "string" }, "links": [ { "href": "{self}", "rel": "self" }, { "href": "{file}", "rel": "enclosure" }, { "href": "{schema}", "rel": "describedby" } ], "name": "image", "properties": { "architecture": { "description": "Operating system architecture as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html", "is_base": false, "type": "string" }, "checksum": { "description": "md5 hash of image contents.", "maxLength": 32, "readOnly": true, "type": [ "null", "string" ] }, "container_format": { "description": "Format of the container", "enum": [ null, "ami", "ari", "aki", "bare", "ovf", "ova", "docker" ], "type": [ "null", "string" ] }, "created_at": { "description": "Date and time of image registration", "readOnly": true, "type": "string" }, "direct_url": { "description": "URL to access the image file kept in external store", "readOnly": true, "type": "string" }, "disk_format": { "description": "Format of the disk", "enum": [ null, "ami", "ari", "aki", "vhd", "vhdx", "vmdk", "raw", "qcow2", "vdi", "iso", "ploop" ], "type": [ "null", "string" ] }, "file": { "description": "An image file url",
"readOnly": true, "type": "string" }, "id": { "description": "An identifier for the image", "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string" }, "instance_uuid": { "description": "Metadata which can be used to record which instance this image is associated with. (Informational only, does not create an instance snapshot.)", "is_base": false, "type": "string" }, "kernel_id": { "description": "ID of image stored in Glance that should be used as the kernel when booting an AMI-style image.", "is_base": false, "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": [ "null", "string" ] }, "locations": { "description": "A set of URLs to access the image file kept in external store", "items": { "properties": { "metadata": { "type": "object" }, "url": { "maxLength": 255, "type": "string" } }, "required": [ "url", "metadata" ], "type": "object" }, "type": "array" }, "min_disk": { "description": "Amount of disk space (in GB) required to boot image.", "type": "integer" }, "min_ram": { "description": "Amount of ram (in MB) required to boot image.", "type": "integer" }, "name": { "description": "Descriptive name for the image", "maxLength": 255, "type": [ "null", "string" ] }, "os_distro": { "description": "Common name of operating system distribution as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html", "is_base": false, "type": "string" }, "os_version": { "description": "Operating system version as specified by the distributor", "is_base": false, "type": "string" }, "owner": { "description": "Owner of the image", "maxLength": 255, "type": [ "null", "string" ] }, "protected": { "description": "If true, image will not be deletable.", "type": "boolean" }, "ramdisk_id": { "description": "ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image.", "is_base": false, "pattern": 
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": [ "null", "string" ] }, "schema": { "description": "An image schema url", "readOnly": true, "type": "string" }, "self": { "description": "An image self url", "readOnly": true, "type": "string" }, "size": { "description": "Size of image file in bytes", "readOnly": true, "type": [ "null", "integer" ] }, "status": { "description": "Status of the image", "enum": [ "queued", "saving", "active", "killed", "deleted", "pending_delete", "deactivated" ], "readOnly": true, "type": "string" }, "tags": { "description": "List of strings related to the image", "items": { "maxLength": 255, "type": "string" }, "type": "array" }, "updated_at": { "description": "Date and time of the last image modification", "readOnly": true, "type": "string" }, "virtual_size": { "description": "Virtual size of image in bytes", "readOnly": true, "type": [ "null", "integer" ] }, "visibility": { "description": "Scope of image accessibility", "enum": [ "public", "private" ], "type": "string" } } } glance-16.0.0/api-ref/source/v2/samples/image-import-g-d-request.json0000666000175100017510000000007213245511421025310 0ustar zuulzuul00000000000000{ "method": { "name": "glance-direct" } } glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-properties-list-response.json0000666000175100017510000001076113245511421030616 0ustar zuulzuul00000000000000{ "definitions": { "positiveInteger": { "minimum": 0, "type": "integer" }, "positiveIntegerDefault0": { "allOf": [ { "$ref": "#/definitions/positiveInteger" }, { "default": 0 } ] }, "stringArray": { "items": { "type": "string" }, "minItems": 1, "type": "array", "uniqueItems": true } }, "links": [ { "href": "{first}", "rel": "first" }, { "href": "{next}", "rel": "next" }, { "href": "{schema}", "rel": "describedby" } ], "name": "properties", "properties": { "first": { "type": "string" }, "next": { "type": "string" }, "properties": { "additionalProperties": { 
"additionalProperties": false, "name": "property", "properties": { "additionalItems": { "type": "boolean" }, "default": {}, "description": { "type": "string" }, "enum": { "type": "array" }, "items": { "properties": { "enum": { "type": "array" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" } }, "type": "object" }, "maxItems": { "$ref": "#/definitions/positiveInteger" }, "maxLength": { "$ref": "#/definitions/positiveInteger" }, "maximum": { "type": "number" }, "minItems": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minLength": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minimum": { "type": "number" }, "name": { "maxLength": 255, "type": "string" }, "operators": { "items": { "type": "string" }, "type": "array" }, "pattern": { "format": "regex", "type": "string" }, "readonly": { "type": "boolean" }, "required": { "$ref": "#/definitions/stringArray" }, "title": { "type": "string" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" }, "uniqueItems": { "default": false, "type": "boolean" } }, "required": [ "type", "title" ] }, "type": "object" }, "schema": { "type": "string" } } } ././@LongLink0000000000000000000000000000014600000000000011216 Lustar 00000000000000glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-resource-type-associations-list-response.jsonglance-16.0.0/api-ref/source/v2/samples/schemas-metadef-resource-type-associations-list-response.jso0000666000175100017510000000510313245511421033541 0ustar zuulzuul00000000000000{ "links": [ { "href": "{first}", "rel": "first" }, { "href": "{next}", "rel": "next" }, { "href": "{schema}", "rel": "describedby" } ], "name": "resource_type_associations", "properties": { "first": { "type": "string" }, "next": { "type": "string" }, "resource_type_associations": { "items": { "additionalProperties": false, "name": "resource_type_association", "properties": { "created_at": { 
"description": "Date and time of resource type association", "format": "date-time", "readOnly": true, "type": "string" }, "name": { "description": "Resource type names should be aligned with Heat resource types whenever possible: http://docs.openstack.org/developer/heat/template_guide/openstack.html", "maxLength": 80, "type": "string" }, "prefix": { "description": "Specifies the prefix to use for the given resource type. Any properties in the namespace should be prefixed with this prefix when being applied to the specified resource type. Must include prefix separator (e.g. a colon :).", "maxLength": 80, "type": "string" }, "properties_target": { "description": "Some resource types allow more than one key / value pair per instance. For example, Cinder allows user and image metadata on volumes. Only the image properties metadata is evaluated by Nova (scheduling or drivers). This property allows a namespace target to remove the ambiguity.", "maxLength": 80, "type": "string" }, "updated_at": { "description": "Date and time of the last resource type association modification", "format": "date-time", "readOnly": true, "type": "string" } }, "required": [ "name" ] }, "type": "array" }, "schema": { "type": "string" } } } glance-16.0.0/api-ref/source/v2/samples/metadef-object-details-response.json0000666000175100017510000000346013245511421026717 0ustar zuulzuul00000000000000{ "created_at": "2014-09-19T18:20:56Z", "description": "You can configure the CPU limits with control parameters.", "name": "CPU Limits", "properties": { "quota:cpu_period": { "description": "Specifies the enforcement interval (unit: microseconds) for QEMU and LXC hypervisors. Within a period, each VCPU of the domain is not allowed to consume more than the quota worth of runtime. The value should be in range [1000, 1000000]. 
A period with value 0 means no value.", "maximum": 1000000, "minimum": 1000, "title": "Quota: CPU Period", "type": "integer" }, "quota:cpu_quota": { "description": "Specifies the maximum allowed bandwidth (unit: microseconds). A domain with a negative-value quota indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be in range [1000, 18446744073709551] or less than 0. A quota with value 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed.", "title": "Quota: CPU Quota", "type": "integer" }, "quota:cpu_shares": { "description": "Specifies the proportional weighted share for the domain. If this element is omitted, the service defaults to the OS provided defaults. There is no unit for the value; it is a relative measure based on the setting of other VMs. For example, a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024.", "title": "Quota: CPU Shares", "type": "integer" } }, "required": [], "schema": "/v2/schemas/metadefs/object", "self": "/v2/metadefs/namespaces/OS::Compute::Quota/objects/CPU Limits", "updated_at": "2014-09-19T18:20:56Z" } glance-16.0.0/api-ref/source/v2/samples/metadef-properties-list-response.json0000666000175100017510000001052613245511421027174 0ustar zuulzuul00000000000000{ "properties": { "hw_disk_bus": { "description": "Specifies the type of disk controller to attach disk devices to.", "enum": [ "scsi", "virtio", "uml", "xen", "ide", "usb", "fdc", "sata" ], "title": "Disk Bus", "type": "string" }, "hw_machine_type": { "description": "Enables booting an ARM system using the specified machine type. By default, if an ARM image is used and its type is not specified, Compute uses vexpress-a15 (for ARMv7) or virt (for AArch64) machine types. 
Valid types can be viewed by using the virsh capabilities command (machine types are displayed in the machine tag).", "title": "Machine Type", "type": "string" }, "hw_qemu_guest_agent": { "description": "It is a daemon program running inside the domain which is supposed to help management applications with executing functions which need assistance of the guest OS. For example, freezing and thawing filesystems, entering suspend. However, guest agent (GA) is not bullet proof, and hostile guest OS can send spurious replies.", "enum": [ "yes", "no" ], "title": "QEMU Guest Agent", "type": "string" }, "hw_rng_model": { "default": "virtio", "description": "Adds a random-number generator device to the image's instances. The cloud administrator can enable and control device behavior by configuring the instance's flavor. By default: The generator device is disabled. /dev/random is used as the default entropy source. To specify a physical HW RNG device, use the following option in the nova.conf file: rng_dev_path=/dev/hwrng", "title": "Random Number Generator Device", "type": "string" }, "hw_scsi_model": { "default": "virtio-scsi", "description": "Enables the use of VirtIO SCSI (virtio-scsi) to provide block device access for compute instances; by default, instances use VirtIO Block (virtio-blk). VirtIO SCSI is a para-virtualized SCSI controller device that provides improved scalability and performance, and supports advanced SCSI hardware.", "title": "SCSI Model", "type": "string" }, "hw_video_model": { "description": "The video image driver used.", "enum": [ "vga", "cirrus", "vmvga", "xen", "qxl" ], "title": "Video Model", "type": "string" }, "hw_video_ram": { "description": "Maximum RAM for the video image. 
Used only if a hw_video:ram_max_mb value has been set in the flavor's extra_specs and that value is higher than the value set in hw_video_ram.", "title": "Max Video Ram", "type": "integer" }, "hw_vif_model": { "description": "Specifies the model of virtual network interface device to use. The valid options depend on the configured hypervisor. KVM and QEMU: e1000, ne2k_pci, pcnet, rtl8139, and virtio. VMware: e1000, e1000e, VirtualE1000, VirtualE1000e, VirtualPCNet32, VirtualSriovEthernetCard, and VirtualVmxnet. Xen: e1000, netfront, ne2k_pci, pcnet, and rtl8139.", "enum": [ "e1000", "ne2k_pci", "pcnet", "rtl8139", "virtio", "e1000", "e1000e", "VirtualE1000", "VirtualE1000e", "VirtualPCNet32", "VirtualSriovEthernetCard", "VirtualVmxnet", "netfront", "ne2k_pci" ], "title": "Virtual Network Interface", "type": "string" }, "os_command_line": { "description": "The kernel command line to be used by the libvirt driver, instead of the default. For linux containers (LXC), the value is used as arguments for initialization. This key is valid only for Amazon kernel, ramdisk, or machine images (aki, ari, or ami).", "title": "Kernel Command Line", "type": "string" } } } glance-16.0.0/api-ref/source/v2/samples/metadef-namespace-update-response.json0000666000175100017510000000110113245511421027230 0ustar zuulzuul00000000000000{ "created_at": "2014-09-19T13:31:37Z", "description": "Choose capabilities that should be provided by the Compute Host. 
This provides the ability to fine tune the hardware specification required when a new vm is requested.", "display_name": "Hypervisor Selection", "namespace": "OS::Compute::Hypervisor", "owner": "7ec22942411e427692e8a3436be1031a", "protected": false, "schema": "/v2/schemas/metadefs/namespace", "self": "/v2/metadefs/namespaces/OS::Compute::Hypervisor", "updated_at": "2014-09-19T13:31:37Z", "visibility": "public" } glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-objects-list-response.json0000666000175100017510000001372513245511421030056 0ustar zuulzuul00000000000000{ "definitions": { "positiveInteger": { "minimum": 0, "type": "integer" }, "positiveIntegerDefault0": { "allOf": [ { "$ref": "#/definitions/positiveInteger" }, { "default": 0 } ] }, "property": { "additionalProperties": { "properties": { "additionalItems": { "type": "boolean" }, "default": {}, "description": { "type": "string" }, "enum": { "type": "array" }, "items": { "properties": { "enum": { "type": "array" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" } }, "type": "object" }, "maxItems": { "$ref": "#/definitions/positiveInteger" }, "maxLength": { "$ref": "#/definitions/positiveInteger" }, "maximum": { "type": "number" }, "minItems": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minLength": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minimum": { "type": "number" }, "name": { "maxLength": 255, "type": "string" }, "operators": { "items": { "type": "string" }, "type": "array" }, "pattern": { "format": "regex", "type": "string" }, "readonly": { "type": "boolean" }, "required": { "$ref": "#/definitions/stringArray" }, "title": { "type": "string" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" }, "uniqueItems": { "default": false, "type": "boolean" } }, "required": [ "title", "type" ], "type": "object" }, "type": "object" }, "stringArray": { "items": { "type": 
"string" }, "type": "array", "uniqueItems": true } }, "links": [ { "href": "{first}", "rel": "first" }, { "href": "{next}", "rel": "next" }, { "href": "{schema}", "rel": "describedby" } ], "name": "objects", "properties": { "first": { "type": "string" }, "next": { "type": "string" }, "objects": { "items": { "additionalProperties": false, "name": "object", "properties": { "created_at": { "description": "Date and time of object creation", "format": "date-time", "readOnly": true, "type": "string" }, "description": { "type": "string" }, "name": { "maxLength": 255, "type": "string" }, "properties": { "$ref": "#/definitions/property" }, "required": { "$ref": "#/definitions/stringArray" }, "schema": { "readOnly": true, "type": "string" }, "self": { "readOnly": true, "type": "string" }, "updated_at": { "description": "Date and time of the last object modification", "format": "date-time", "readOnly": true, "type": "string" } }, "required": [ "name" ] }, "type": "array" }, "schema": { "type": "string" } } } glance-16.0.0/api-ref/source/v2/samples/task-create-response.json0000666000175100017510000000115513245511421024625 0ustar zuulzuul00000000000000{ "created_at": "2016-06-24T14:57:19Z", "id": "bb480de2-7077-4ea9-bbe9-be1891290d3e", "input": { "image_properties": { "container_format": "ovf", "disk_format": "vhd" }, "import_from": "http://app-catalog.openstack.example.org/groovy-image", "import_from_format": "qcow2" }, "message": "", "owner": "fa6c8c1600f4444281658a23ee6da8e8", "result": null, "schema": "/v2/schemas/task", "self": "/v2/tasks/bb480de2-7077-4ea9-bbe9-be1891290d3e", "status": "pending", "type": "import", "updated_at": "2016-06-24T14:57:19Z" } glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-namespaces-list-response.json0000666000175100017510000002164213245511421030541 0ustar zuulzuul00000000000000{ "definitions": { "positiveInteger": { "minimum": 0, "type": "integer" }, "positiveIntegerDefault0": { "allOf": [ { "$ref": "#/definitions/positiveInteger" }, { 
"default": 0 } ] }, "property": { "additionalProperties": { "properties": { "additionalItems": { "type": "boolean" }, "default": {}, "description": { "type": "string" }, "enum": { "type": "array" }, "items": { "properties": { "enum": { "type": "array" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" } }, "type": "object" }, "maxItems": { "$ref": "#/definitions/positiveInteger" }, "maxLength": { "$ref": "#/definitions/positiveInteger" }, "maximum": { "type": "number" }, "minItems": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minLength": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minimum": { "type": "number" }, "name": { "maxLength": 255, "type": "string" }, "operators": { "items": { "type": "string" }, "type": "array" }, "pattern": { "format": "regex", "type": "string" }, "readonly": { "type": "boolean" }, "required": { "$ref": "#/definitions/stringArray" }, "title": { "type": "string" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" }, "uniqueItems": { "default": false, "type": "boolean" } }, "required": [ "title", "type" ], "type": "object" }, "type": "object" }, "stringArray": { "items": { "type": "string" }, "type": "array", "uniqueItems": true } }, "links": [ { "href": "{first}", "rel": "first" }, { "href": "{next}", "rel": "next" }, { "href": "{schema}", "rel": "describedby" } ], "name": "namespaces", "properties": { "first": { "type": "string" }, "namespaces": { "items": { "additionalProperties": false, "name": "namespace", "properties": { "created_at": { "description": "Date and time of namespace creation", "format": "date-time", "readOnly": true, "type": "string" }, "description": { "description": "Provides a user friendly description of the namespace.", "maxLength": 500, "type": "string" }, "display_name": { "description": "The user friendly name for the namespace. 
Used by UI if available.", "maxLength": 80, "type": "string" }, "namespace": { "description": "The unique namespace text.", "maxLength": 80, "type": "string" }, "objects": { "items": { "properties": { "description": { "type": "string" }, "name": { "type": "string" }, "properties": { "$ref": "#/definitions/property" }, "required": { "$ref": "#/definitions/stringArray" } }, "type": "object" }, "type": "array" }, "owner": { "description": "Owner of the namespace.", "maxLength": 255, "type": "string" }, "properties": { "$ref": "#/definitions/property" }, "protected": { "description": "If true, namespace will not be deletable.", "type": "boolean" }, "resource_type_associations": { "items": { "properties": { "name": { "type": "string" }, "prefix": { "type": "string" }, "properties_target": { "type": "string" } }, "type": "object" }, "type": "array" }, "schema": { "readOnly": true, "type": "string" }, "self": { "readOnly": true, "type": "string" }, "tags": { "items": { "properties": { "name": { "type": "string" } }, "type": "object" }, "type": "array" }, "updated_at": { "description": "Date and time of the last namespace modification", "format": "date-time", "readOnly": true, "type": "string" }, "visibility": { "description": "Scope of namespace accessibility.", "enum": [ "public", "private" ], "type": "string" } }, "required": [ "namespace" ] }, "type": "array" }, "next": { "type": "string" }, "schema": { "type": "string" } } } glance-16.0.0/api-ref/source/v2/samples/metadef-resource-types-list-response.json0000666000175100017510000000151113245511421027763 0ustar zuulzuul00000000000000{ "resource_types": [ { "created_at": "2014-08-28T18:13:04Z", "name": "OS::Glance::Image", "updated_at": "2014-08-28T18:13:04Z" }, { "created_at": "2014-08-28T18:13:04Z", "name": "OS::Cinder::Volume", "updated_at": "2014-08-28T18:13:04Z" }, { "created_at": "2014-08-28T18:13:04Z", "name": "OS::Nova::Flavor", "updated_at": "2014-08-28T18:13:04Z" }, { "created_at": "2014-08-28T18:13:04Z", 
"name": "OS::Nova::Aggregate", "updated_at": "2014-08-28T18:13:04Z" }, { "created_at": "2014-08-28T18:13:04Z", "name": "OS::Nova::Instance", "updated_at": "2014-08-28T18:13:04Z" } ] } glance-16.0.0/api-ref/source/v2/samples/image-member-update-response.json0000666000175100017510000000040213245511421026223 0ustar zuulzuul00000000000000{ "created_at": "2013-09-20T19:22:19Z", "image_id": "a96be11e-8536-4910-92cb-de50aa19dfe6", "member_id": "8989447062e04a818baf9e073fd04fa7", "schema": "/v2/schemas/member", "status": "accepted", "updated_at": "2013-09-20T20:15:31Z" } glance-16.0.0/api-ref/source/v2/samples/metadef-object-create-request.json0000666000175100017510000000314213245511421026364 0ustar zuulzuul00000000000000{ "description": "You can configure the CPU limits with control parameters.", "name": "CPU Limits", "properties": { "quota:cpu_period": { "description": "Specifies the enforcement interval (unit: microseconds) for QEMU and LXC hypervisors. Within a period, each VCPU of the domain is not allowed to consume more than the quota worth of runtime. The value should be in range [1000, 1000000]. A period with value 0 means no value.", "maximum": 1000000, "minimum": 1000, "title": "Quota: CPU Period", "type": "integer" }, "quota:cpu_quota": { "description": "Specifies the maximum allowed bandwidth (unit: microseconds). A domain with a negative-value quota indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be in range [1000, 18446744073709551] or less than 0. A quota with value 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed.", "title": "Quota: CPU Quota", "type": "integer" }, "quota:cpu_shares": { "description": "Specifies the proportional weighted share for the domain. If this element is omitted, the service defaults to the OS provided defaults. There is no unit for the value; it is a relative measure based on the setting of other VMs. 
For example, a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024.", "title": "Quota: CPU Shares", "type": "integer" } }, "required": [] } glance-16.0.0/api-ref/source/v2/samples/metadef-tags-create-response.json0000666000175100017510000000027413245511421026225 0ustar zuulzuul00000000000000{ "tags": [ { "name": "sample-tag1" }, { "name": "sample-tag2" }, { "name": "sample-tag3" } ] } glance-16.0.0/api-ref/source/v2/samples/image-create-request.json0000666000175100017510000000020513245511421024572 0ustar zuulzuul00000000000000{ "container_format": "bare", "disk_format": "raw", "name": "Ubuntu", "id": "b2173dd3-7ad6-4362-baa6-a68bce3565cb" } glance-16.0.0/api-ref/source/v2/samples/metadef-property-update-response.json0000666000175100017510000000110113245511421027160 0ustar zuulzuul00000000000000{ "description": "The hypervisor type. It may be used by the host properties filter for scheduling. The ImagePropertiesFilter filters compute nodes that satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties. 
Image properties are contained in the image dictionary in the request_spec.", "enum": [ "xen", "qemu", "kvm", "lxc", "uml", "vmware", "hyperv" ], "name": "hypervisor_type", "title": "Hypervisor Type", "type": "string" } glance-16.0.0/api-ref/source/v2/samples/images-list-response.json0000666000175100017510000000337013245511421024641 0ustar zuulzuul00000000000000{ "images": [ { "status": "active", "name": "cirros-0.3.2-x86_64-disk", "tags": [], "container_format": "bare", "created_at": "2014-11-07T17:07:06Z", "disk_format": "qcow2", "updated_at": "2014-11-07T17:19:09Z", "visibility": "public", "self": "/v2/images/1bea47ed-f6a9-463b-b423-14b9cca9ad27", "min_disk": 0, "protected": false, "id": "1bea47ed-f6a9-463b-b423-14b9cca9ad27", "file": "/v2/images/1bea47ed-f6a9-463b-b423-14b9cca9ad27/file", "checksum": "64d7c1cd2b6f60c92c14662941cb7913", "owner": "5ef70662f8b34079a6eddb8da9d75fe8", "size": 13167616, "min_ram": 0, "schema": "/v2/schemas/image", "virtual_size": null }, { "status": "active", "name": "F17-x86_64-cfntools", "tags": [], "container_format": "bare", "created_at": "2014-10-30T08:23:39Z", "disk_format": "qcow2", "updated_at": "2014-11-03T16:40:10Z", "visibility": "public", "self": "/v2/images/781b3762-9469-4cec-b58d-3349e5de4e9c", "min_disk": 0, "protected": false, "id": "781b3762-9469-4cec-b58d-3349e5de4e9c", "file": "/v2/images/781b3762-9469-4cec-b58d-3349e5de4e9c/file", "checksum": "afab0f79bac770d61d24b4d0560b5f70", "owner": "5ef70662f8b34079a6eddb8da9d75fe8", "size": 476704768, "min_ram": 0, "schema": "/v2/schemas/image", "virtual_size": null } ], "schema": "/v2/schemas/images", "first": "/v2/images" } glance-16.0.0/api-ref/source/v2/samples/image-create-response.json0000666000175100017510000000121113245511421024736 0ustar zuulzuul00000000000000{ "status": "queued", "name": "Ubuntu", "tags": [], "container_format": "bare", "created_at": "2015-11-29T22:21:42Z", "size": null, "disk_format": "raw", "updated_at": "2015-11-29T22:21:42Z", "visibility": 
"private", "locations": [], "self": "/v2/images/b2173dd3-7ad6-4362-baa6-a68bce3565cb", "min_disk": 0, "protected": false, "id": "b2173dd3-7ad6-4362-baa6-a68bce3565cb", "file": "/v2/images/b2173dd3-7ad6-4362-baa6-a68bce3565cb/file", "checksum": null, "owner": "bab7d5c60cd041a0a36f7c4b6e1dd978", "virtual_size": null, "min_ram": 0, "schema": "/v2/schemas/image" } glance-16.0.0/api-ref/source/v2/samples/metadef-resource-type-create-request.json0000666000175100017510000000013413245511421027722 0ustar zuulzuul00000000000000{ "name": "OS::Cinder::Volume", "prefix": "hw_", "properties_target": "image" } glance-16.0.0/api-ref/source/v2/samples/schemas-image-member-show-response.json0000666000175100017510000000201113245511421027340 0ustar zuulzuul00000000000000{ "name": "member", "properties": { "created_at": { "description": "Date and time of image member creation", "type": "string" }, "image_id": { "description": "An identifier for the image", "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string" }, "member_id": { "description": "An identifier for the image member (tenantId)", "type": "string" }, "schema": { "readOnly": true, "type": "string" }, "status": { "description": "The status of this image member", "enum": [ "pending", "accepted", "rejected" ], "type": "string" }, "updated_at": { "description": "Date and time of last modification of image member", "type": "string" } } } glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-resource-type-association-show-response.json0000666000175100017510000000307313245511421033545 0ustar zuulzuul00000000000000{ "additionalProperties": false, "name": "resource_type_association", "properties": { "created_at": { "description": "Date and time of resource type association", "format": "date-time", "readOnly": true, "type": "string" }, "name": { "description": "Resource type names should be aligned with Heat resource types whenever possible: 
http://docs.openstack.org/developer/heat/template_guide/openstack.html", "maxLength": 80, "type": "string" }, "prefix": { "description": "Specifies the prefix to use for the given resource type. Any properties in the namespace should be prefixed with this prefix when being applied to the specified resource type. Must include prefix separator (e.g. a colon :).", "maxLength": 80, "type": "string" }, "properties_target": { "description": "Some resource types allow more than one key / value pair per instance. For example, Cinder allows user and image metadata on volumes. Only the image properties metadata is evaluated by Nova (scheduling or drivers). This property allows a namespace target to remove the ambiguity.", "maxLength": 80, "type": "string" }, "updated_at": { "description": "Date and time of the last resource type association modification", "format": "date-time", "readOnly": true, "type": "string" } }, "required": [ "name" ] } glance-16.0.0/api-ref/source/v2/samples/task-show-processing-response.json0000666000175100017510000000111513245511421026510 0ustar zuulzuul00000000000000{ "created_at": "2016-06-24T14:40:19Z", "id": "231c311d-3557-4e23-afc4-6d98af1419e7", "input": { "image_properties": { "container_format": "ovf", "disk_format": "vhd" }, "import_from": "http://example.com", "import_from_format": "qcow2" }, "message": "", "owner": "fa6c8c1600f4444281658a23ee6da8e8", "result": null, "schema": "/v2/schemas/task", "self": "/v2/tasks/231c311d-3557-4e23-afc4-6d98af1419e7", "status": "processing", "type": "import", "updated_at": "2016-06-24T14:40:20Z" } glance-16.0.0/api-ref/source/v2/samples/metadef-property-update-request.json0000666000175100017510000000110113245511421027012 0ustar zuulzuul00000000000000{ "description": "The hypervisor type. It may be used by the host properties filter for scheduling. 
The ImagePropertiesFilter filters compute nodes that satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties. Image properties are contained in the image dictionary in the request_spec.", "enum": [ "xen", "qemu", "kvm", "lxc", "uml", "vmware", "hyperv" ], "name": "hypervisor_type", "title": "Hypervisor Type", "type": "string" } glance-16.0.0/api-ref/source/v2/samples/metadef-property-create-response.json0000666000175100017510000000110113245511421027141 0ustar zuulzuul00000000000000{ "description": "The hypervisor type. It may be used by the host properties filter for scheduling. The ImagePropertiesFilter filters compute nodes that satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties. Image properties are contained in the image dictionary in the request_spec.", "enum": [ "xen", "qemu", "kvm", "lxc", "uml", "vmware", "hyperv" ], "name": "hypervisor_type", "title": "Hypervisor Type", "type": "string" } glance-16.0.0/api-ref/source/v2/samples/image-member-create-response.json0000666000175100017510000000040113245511421026203 0ustar zuulzuul00000000000000{ "created_at": "2013-09-20T19:22:19Z", "image_id": "a96be11e-8536-4910-92cb-de50aa19dfe6", "member_id": "8989447062e04a818baf9e073fd04fa7", "schema": "/v2/schemas/member", "status": "pending", "updated_at": "2013-09-20T19:25:31Z" } glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-namespace-show-response.json0000666000175100017510000001605213245511421030362 0ustar zuulzuul00000000000000{ "additionalProperties": false, "definitions": { "positiveInteger": { "minimum": 0, "type": "integer" }, "positiveIntegerDefault0": { "allOf": [ { "$ref": "#/definitions/positiveInteger" }, { "default": 0 } ] }, "property": { "additionalProperties": { "properties": { "additionalItems": { "type": "boolean" }, "default": {}, "description": { "type": "string" }, "enum": { "type": "array" }, "items": 
{ "properties": { "enum": { "type": "array" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" } }, "type": "object" }, "maxItems": { "$ref": "#/definitions/positiveInteger" }, "maxLength": { "$ref": "#/definitions/positiveInteger" }, "maximum": { "type": "number" }, "minItems": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minLength": { "$ref": "#/definitions/positiveIntegerDefault0" }, "minimum": { "type": "number" }, "name": { "maxLength": 255, "type": "string" }, "operators": { "items": { "type": "string" }, "type": "array" }, "pattern": { "format": "regex", "type": "string" }, "readonly": { "type": "boolean" }, "required": { "$ref": "#/definitions/stringArray" }, "title": { "type": "string" }, "type": { "enum": [ "array", "boolean", "integer", "number", "object", "string", null ], "type": "string" }, "uniqueItems": { "default": false, "type": "boolean" } }, "required": [ "title", "type" ], "type": "object" }, "type": "object" }, "stringArray": { "items": { "type": "string" }, "type": "array", "uniqueItems": true } }, "name": "namespace", "properties": { "created_at": { "description": "Date and time of namespace creation", "format": "date-time", "readOnly": true, "type": "string" }, "description": { "description": "Provides a user friendly description of the namespace.", "maxLength": 500, "type": "string" }, "display_name": { "description": "The user friendly name for the namespace. 
Used by UI if available.", "maxLength": 80, "type": "string" }, "namespace": { "description": "The unique namespace text.", "maxLength": 80, "type": "string" }, "objects": { "items": { "properties": { "description": { "type": "string" }, "name": { "type": "string" }, "properties": { "$ref": "#/definitions/property" }, "required": { "$ref": "#/definitions/stringArray" } }, "type": "object" }, "type": "array" }, "owner": { "description": "Owner of the namespace.", "maxLength": 255, "type": "string" }, "properties": { "$ref": "#/definitions/property" }, "protected": { "description": "If true, namespace will not be deletable.", "type": "boolean" }, "resource_type_associations": { "items": { "properties": { "name": { "type": "string" }, "prefix": { "type": "string" }, "properties_target": { "type": "string" } }, "type": "object" }, "type": "array" }, "schema": { "readOnly": true, "type": "string" }, "self": { "readOnly": true, "type": "string" }, "tags": { "items": { "properties": { "name": { "type": "string" } }, "type": "object" }, "type": "array" }, "updated_at": { "description": "Date and time of the last namespace modification", "format": "date-time", "readOnly": true, "type": "string" }, "visibility": { "description": "Scope of namespace accessibility.", "enum": [ "public", "private" ], "type": "string" } }, "required": [ "namespace" ] } glance-16.0.0/api-ref/source/v2/samples/metadef-namespace-create-response.json0000666000175100017510000000227513245511421027226 0ustar zuulzuul00000000000000{ "description": "Choose capabilities that should be provided by the Compute Host. 
This provides the ability to fine tune the hardware specification required when a new vm is requested.", "display_name": "Hypervisor Selection", "namespace": "OS::Compute::Hypervisor", "properties": { "hypervisor_type": { "description": "The hypervisor type.", "enum": [ "xen", "qemu", "kvm", "lxc", "uml", "vmware", "hyperv" ], "title": "Hypervisor Type", "type": "string" }, "vm_mode": { "description": "The virtual machine mode.", "enum": [ "hvm", "xen", "uml", "exe" ], "title": "VM Mode", "type": "string" } }, "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" } ], "schema": "/v2/schemas/metadefs/namespace", "self": "/v2/metadefs/namespaces/OS::Compute::Hypervisor", "visibility": "public" } glance-16.0.0/api-ref/source/v2/samples/image-members-list-response.json0000666000175100017510000000122613245511421026104 0ustar zuulzuul00000000000000{ "members": [ { "created_at": "2013-10-07T17:58:03Z", "image_id": "dbc999e3-c52f-4200-bedd-3b18fe7f87fe", "member_id": "123456789", "schema": "/v2/schemas/member", "status": "pending", "updated_at": "2013-10-07T17:58:03Z" }, { "created_at": "2013-10-07T17:58:55Z", "image_id": "dbc999e3-c52f-4200-bedd-3b18fe7f87fe", "member_id": "987654321", "schema": "/v2/schemas/member", "status": "accepted", "updated_at": "2013-10-08T12:08:55Z" } ], "schema": "/v2/schemas/members" } glance-16.0.0/api-ref/source/v2/samples/metadef-object-update-request.json0000666000175100017510000000117313245511421026405 0ustar zuulzuul00000000000000{ "description": "You can configure the CPU limits with control parameters.", "name": "CPU Limits", "properties": { "quota:cpu_shares": { "description": "Specifies the proportional weighted share for the domain. If this element is omitted, the service defaults to the OS provided defaults. There is no unit for the value; it is a relative measure based on the setting of other VMs. 
For example, a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024.", "title": "Quota: CPU Shares", "type": "integer" } }, "required": [] } glance-16.0.0/api-ref/source/v2/samples/metadef-tags-list-response.json0000666000175100017510000000027413245511421025735 0ustar zuulzuul00000000000000{ "tags": [ { "name": "sample-tag1" }, { "name": "sample-tag2" }, { "name": "sample-tag3" } ] } glance-16.0.0/api-ref/source/v2/samples/schemas-metadef-tag-show-response.json0000666000175100017510000000113213245511421027172 0ustar zuulzuul00000000000000{ "additionalProperties": false, "name": "tag", "properties": { "created_at": { "description": "Date and time of tag creation", "format": "date-time", "readOnly": true, "type": "string" }, "name": { "maxLength": 255, "type": "string" }, "updated_at": { "description": "Date and time of the last tag modification", "format": "date-time", "readOnly": true, "type": "string" } }, "required": [ "name" ] } glance-16.0.0/api-ref/source/v2/samples/image-details-deactivate-response.json0000666000175100017510000000125613245511421027240 0ustar zuulzuul00000000000000{ "status": "deactivated", "name": "cirros-0.3.2-x86_64-disk", "tags": [], "container_format": "bare", "created_at": "2014-05-05T17:15:10Z", "disk_format": "qcow2", "updated_at": "2014-05-05T17:15:11Z", "visibility": "public", "self": "/v2/images/1bea47ed-f6a9-463b-b423-14b9cca9ad27", "min_disk": 0, "protected": false, "id": "1bea47ed-f6a9-463b-b423-14b9cca9ad27", "file": "/v2/images/1bea47ed-f6a9-463b-b423-14b9cca9ad27/file", "checksum": "64d7c1cd2b6f60c92c14662941cb7913", "owner": "5ef70662f8b34079a6eddb8da9d75fe8", "size": 13167616, "min_ram": 0, "schema": "/v2/schemas/image", "virtual_size": null } glance-16.0.0/api-ref/source/v2/samples/metadef-tag-update-request.json0000666000175100017510000000003713245511421025710 0ustar zuulzuul00000000000000{ "name": "new-tag-name" } 
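The ``quota:cpu_shares`` description in the metadef object sample above says a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024, because shares are a relative weight rather than an absolute unit. As a purely illustrative sketch (this helper is not part of any Glance or Nova API), the weights translate into CPU-time fractions like this:

```python
# Illustrative only: convert relative quota:cpu_shares weights into the
# fraction of CPU time each VM would receive. Each VM's fraction is its
# share value divided by the sum of all share values.

def cpu_time_fractions(shares_by_vm):
    """Map {vm_name: cpu_shares} to {vm_name: fraction_of_cpu_time}."""
    total = sum(shares_by_vm.values())
    return {vm: shares / total for vm, shares in shares_by_vm.items()}

fractions = cpu_time_fractions({"vm_a": 2048, "vm_b": 1024, "vm_c": 1024})
# vm_a carries half of the total weight (2048 of 4096), so it receives
# twice the CPU time of vm_b or vm_c.
```

This is only a model of the proportional semantics described in the property text; the actual scheduling is done by the hypervisor.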
glance-16.0.0/api-ref/source/v2/samples/schemas-image-members-list-response.json0000666000175100017510000000331613245511421027527 0ustar zuulzuul00000000000000{ "links": [ { "href": "{schema}", "rel": "describedby" } ], "name": "members", "properties": { "members": { "items": { "name": "member", "properties": { "created_at": { "description": "Date and time of image member creation", "type": "string" }, "image_id": { "description": "An identifier for the image", "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "type": "string" }, "member_id": { "description": "An identifier for the image member (tenantId)", "type": "string" }, "schema": { "readOnly": true, "type": "string" }, "status": { "description": "The status of this image member", "enum": [ "pending", "accepted", "rejected" ], "type": "string" }, "updated_at": { "description": "Date and time of last modification of image member", "type": "string" } } }, "type": "array" }, "schema": { "type": "string" } } } glance-16.0.0/api-ref/source/v2/samples/image-member-details-response.json0000666000175100017510000000040113245511421026365 0ustar zuulzuul00000000000000{ "status": "pending", "created_at": "2013-11-26T07:21:21Z", "updated_at": "2013-11-26T07:21:21Z", "image_id": "0ae74cc5-5147-4239-9ce2-b0c580f7067e", "member_id": "8989447062e04a818baf9e073fd04fa7", "schema": "/v2/schemas/member" } glance-16.0.0/api-ref/source/v2/metadefs-index.rst0000666000175100017510000000476313245511421021666 0ustar zuulzuul00000000000000.. Copyright 2010 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. :tocdepth: 3 ============================================= Metadata Definitions Service API v2 (CURRENT) ============================================= .. rest_expand_all:: Metadefs ******** General information ~~~~~~~~~~~~~~~~~~~ The Metadata Definitions Service ("metadefs", for short) provides a common API for vendors, operators, administrators, services, and users to meaningfully define available key:value pairs that can be used on different types of cloud resources (for example, images, artifacts, volumes, flavors, aggregates, and other resources). To get you started, Glance contains a default catalog of metadefs that may be installed at your site; see the `README `_ in the code repository for details. Once a common catalog of metadata definitions has been created, the catalog is available for querying through the API. Note that this service stores only the *catalog*, because metadefs are meta-metadata. Metadefs provide information *about* resource metadata, but do not themselves serve as actual metadata. Actual key:value pairs are stored on the resources to which they apply using the metadata facilities provided by the appropriate API. (For example, the Images API would be used to put specific key:value pairs on a virtual machine image.) A metadefs definition includes a property’s key, its description, its constraints, and the resource types to which it can be associated. See `Metadata Definition Concepts `_ in the Glance Developer documentation for more information. .. include:: metadefs-namespaces.inc .. include:: metadefs-resourcetypes.inc .. 
include:: metadefs-namespaces-objects.inc .. include:: metadefs-namespaces-properties.inc .. include:: metadefs-namespaces-tags.inc .. include:: metadefs-schemas.inc glance-16.0.0/api-ref/source/conf.py0000666000175100017510000001726313245511421017206 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # glance api-ref build config file, copied from: # nova documentation build configuration file, created by # sphinx-quickstart on Sat May 1 15:17:47 2010. # # This file is execfile()d with the current directory set to # its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import subprocess import sys import warnings import openstackdocstheme extensions = [ 'os_api_ref', ] html_theme = 'openstackdocs' html_theme_path = [openstackdocstheme.get_html_theme_path()] html_theme_options = { "sidebar_mode": "toc", } # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. 
sys.path.insert(0, os.path.abspath('../../')) sys.path.insert(0, os.path.abspath('../')) sys.path.insert(0, os.path.abspath('./')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # # source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Image Service API Reference' copyright = u'2010-present, OpenStack Foundation' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # from glance.version import version_info # The full version, including alpha/beta/rc tags. release = version_info.release_string() # The short X.Y version. version = version_info.version_string() # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # The reST default role (used for this markup: `text`) to use # for all documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = False # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. show_authors = False # The name of the Pygments (syntax highlighting) style to use. 
pygments_style = 'sphinx' # Config logABug feature # source tree giturl = ( u'https://git.openstack.org/cgit/openstack/glance/tree/api-ref/source') # html_context allows us to pass arbitrary values into the html template html_context = {'bug_tag': 'api-ref', 'giturl': giturl, 'bug_project': 'glance'} # -- Options for man page output ---------------------------------------------- # Grouping the document tree for man pages. # List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". # html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. 
# html_last_updated_fmt = '%b %d, %Y' git_cmd = [ "git", "log", "--pretty=format:'%ad, commit %h'", "--date=local", "-n1" ] try: html_last_updated_fmt = subprocess.check_output(git_cmd).decode('utf-8') except Exception: warnings.warn('Cannot get last updated time from git repository. ' 'Not setting "html_last_updated_fmt".') # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_use_modindex = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'glancedoc' # -- Options for LaTeX output ------------------------------------------------- # The paper size ('letter' or 'a4'). # latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). # latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', 'Glance.tex', u'OpenStack Image Service API Documentation', u'OpenStack Foundation', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. 
# latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # Additional stuff for the LaTeX preamble. # latex_preamble = '' # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_use_modindex = True glance-16.0.0/api-ref/source/heading-level-guide.txt0000666000175100017510000000107113245511421022235 0ustar zuulzuul00000000000000=============== Heading level 1 =============== ReStructured Text doesn't care what markers you use for headings, but it does require you to be consistent. Here's what we are using in the Image API reference documents. Level 1 is mostly used in the .rst files. For the .inc files, the top-level heading will most likely be a Level 2. Heading level 2 *************** Heading level 3 ~~~~~~~~~~~~~~~ Heading level 4 --------------- Heading level 5 +++++++++++++++ Heading level 6 ############### Heading level 7 """"""""""""""" Heading level 8 ''''''''''''''' glance-16.0.0/api-ref/source/index.rst0000666000175100017510000000153113245511421017537 0ustar zuulzuul00000000000000.. Copyright 2010 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================== Image Service APIs ================== API content can be searched using the :ref:`search`. .. 
toctree:: :maxdepth: 2 versions/index v2/index v2/metadefs-index v1/index glance-16.0.0/api-ref/source/v1/0000775000175100017510000000000013245511661016230 5ustar zuulzuul00000000000000glance-16.0.0/api-ref/source/v1/images-sharing-v1.inc0000666000175100017510000000676113245511421022153 0ustar zuulzuul00000000000000.. -*- rst -*- Sharing ******* Image sharing provides a means for one tenant (the "producer") to make a private image available to other tenants (the "consumers"). This ability can unfortunately be misused to spam tenants' image lists, so these calls may not be exposed in some deployments. (The Images v2 API has a more sophisticated sharing scheme that contains an anti-spam provision.) Add member to image ~~~~~~~~~~~~~~~~~~~ .. rest_method:: PUT /v1/images/{image_id}/members/{member_id} Adds the tenant whose tenant ID is ``member_id`` as a member of the image denoted by ``image_id``. By default, an image member cannot further share the image with other tenants. This behavior can be overridden by supplying a request body with the call that specifies ``can_share`` as ``true``. Thus: - If you omit the request body, this call adds the specified tenant as a member of the image with the ``can_share`` attribute set to ``false``. - If you include a request body, the ``can_share`` attribute will be set to the appropriate boolean value you have supplied in the request body. - If the specified tenant is already a member, and there is no request body, the membership (including the ``can_share`` attribute) remains unmodified. - If the specified tenant is already a member and the request includes a body, the ``can_share`` attribute of the tenant will be set to whatever value is specified in the request body. Normal response codes: 204 Error response codes: 404 Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id-in-path - member_id: member_id-in-path - can_share: can_share - member_id: member_id Request Example --------------- .. 
literalinclude:: samples/image-member-add-request.json
   :language: json


Replace membership list for an image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. rest_method::  PUT /v1/images/{image_id}/members

Replaces the membership list for an image so that the tenants whose tenant
IDs are listed in the member objects comprising the request body become all
and only the members of the image denoted by ``image_id``.

If the ``can_share`` attribute is omitted from any member object:

- If the member already exists on the image, that member's ``can_share``
  setting remains unchanged.

- If the member did not already exist on the image, that member's
  ``can_share`` attribute is set to ``false``.

Normal response codes: 204

Error response codes: 404

Request
-------

.. rest_parameters:: parameters.yaml

   - image_id: image_id-in-path
   - memberships: memberships

Request Example
---------------

.. literalinclude:: samples/image-members-add-request.json
   :language: json


Remove member
~~~~~~~~~~~~~

.. rest_method::  DELETE /v1/images/{image_id}/members/{member_id}

Removes a member from an image.

Normal response codes: 204

Error response codes: 404

Request
-------

.. rest_parameters:: parameters.yaml

   - image_id: image_id-in-path
   - member_id: member_id-in-path


List shared images
~~~~~~~~~~~~~~~~~~

.. rest_method::  GET /v1/shared-images/{owner_id}

Lists the VM images that an owner shares. The ``owner_id`` is the tenant ID
of the image owner.

Normal response codes: 200

Error response codes: 404

Request
-------

.. rest_parameters:: parameters.yaml

   - owner_id: owner_id-in-path

Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - shared_images: shared_images

Response Example
----------------

.. literalinclude:: samples/shared-images-list-response.json
   :language: json

glance-16.0.0/api-ref/source/v1/parameters.yaml

# variables in header
location:
  description: |
    A URI location for the image record.
  format: uri
  in: header
  required: true
  type: string
x-image-meta-container_format:
  description: |
    The image ``container_format`` property. (Optional when only reserving
    an image.)

    A container format defines the file format of the file that contains
    the image and metadata about the actual VM. For a VM image with a
    ``bare`` container format, the image is a blob of unstructured data.
    You can set the container format to one of these values:

    - ``aki`` - Amazon kernel image.
    - ``ami`` - Amazon machine image.
    - ``ari`` - Amazon ramdisk image.
    - ``bare`` - No container or metadata envelope for the image.
    - ``docker`` - Docker tar archive of the container filesystem.
    - ``ova`` - OVA container format.
    - ``ovf`` - OVF container format.
  in: header
  required: true
  type: enum
x-image-meta-disk_format:
  description: |
    The image ``disk_format`` property. (Optional when only reserving an
    image.)

    The disk format of a VM image is the format of the underlying disk
    image. Virtual appliance vendors have different formats for laying out
    the information contained in a VM disk image. You can set the disk
    format for your image to one of these values:

    - ``aki`` - An Amazon kernel image.
    - ``ami`` - An Amazon machine image.
    - ``ari`` - An Amazon ramdisk image.
    - ``iso`` - An archive format for the data contents of an optical
      disc, such as CDROM.
    - ``qcow2`` - Supported by the QEMU emulator that can expand
      dynamically and supports Copy on Write.
    - ``raw`` - Unstructured disk image format.
    - ``vhd`` - VHD disk format, a common disk format used by hypervisors
      from VMware, Xen, Microsoft, VirtualBox, and others.
    - ``vdi`` - Supported by VirtualBox VM monitor and the QEMU emulator.
    - ``vmdk`` - A common disk format that is supported by many
      hypervisors.
  in: header
  required: true
  type: enum
x-image-meta-name:
  description: |
    The image ``name`` property. (Optional when only reserving an image.)
An image name is not required to be unique, though of course it will be easier to tell your images apart if you give them distinct descriptive names. Names are limited to 255 chars. in: header required: true type: string x-openstack-request-id: description: | Request identifier passed through by the various OpenStack services. in: header required: false type: string # variables in path image_id-in-path: description: | Image ID stored through the image API. Typically a UUID. in: path required: true type: string member_id-in-path: description: | The tenant ID of the tenant with whom an image is shared, that is, the tenant ID of the image member. in: path required: true type: string owner_id-in-path: description: | Owner ID, which is the tenant ID. in: path required: true type: string # variables in query changes-since: description: | Filters the image list to those images that have changed since a time stamp value. in: query required: false type: string container_format-in-query: description: | Filters the image list by a container format. A valid value is ``aki``, ``ami``, ``ari``, ``bare``, ``docker``, ``ova``, or ``ovf``. in: query required: false type: string disk_format-in-query: description: | Filters the image list by a disk format. A valid value is ``aki``, ``ami``, ``ari``, ``iso``, ``qcow2``, ``raw``, ``vhd``, ``vdi``, or ``vmdk``. in: query required: false type: string name-in-query: description: | Filters the image list by an image name, in string format. in: query required: false type: string size_max: description: | Filters the image list by a maximum image size, in bytes. in: query required: false type: int size_min: description: | Filters the image list by a minimum image size, in bytes. in: query required: false type: int status-in-query: description: | Filters the image list by a status. A valid value is ``queued``, ``saving``, ``active``, ``killed``, ``deleted``, or ``pending_delete``. 
in: query required: false type: string # variables in body can_share: description: | Indicates whether the image member whose tenant ID is ``member_id`` is authorized to share the image. If the member can share the image, this value is ``true``. Otherwise, this value is ``false``. in: body required: false type: boolean createImage: description: | The virtual machine image data. Do not include this if you are only reserving an image. in: body required: true type: binary image-object: description: | A JSON representation of the image. Includes all metadata fields. in: body required: true type: object images-detail-list: description: | A list of image objects. Each object contains the following fields: - ``checksum`` - The MD5 checksum of the image data. - ``container_format`` - The container format. - ``created_at`` - Timestamp of image record creation. - ``deleted`` - ``true`` if the image is deleted, ``false`` otherwise. - ``deleted_at`` - Timestamp when the image went to ``deleted`` status. - ``disk_format`` - The disk format. - ``id`` - The image ID, typically a UUID. - ``is_public`` - This is ``true`` if the image is public, ``false`` otherwise. - ``name`` - The name of the image. - ``owner`` - The image owner, usually the tenant_id. - ``properties`` - A dict of user-specified key:value pairs (that is, custom image metadata). - ``protected`` - A boolean value that must be ``false`` or the image cannot be deleted. Default value is ``false``. - ``size`` - The size of the stored image data in bytes. - ``status`` - The image status. - ``updated_at`` - Timestamp of when the image record was most recently modified. - ``virtual_size`` - The size of the virtual machine image (the virtual disk itself, not the containing package, if any) in bytes. in: body required: true type: array images-list: description: | A list of image objects in a sparse representation. Each object contains the following fields: - ``checksum`` - The MD5 checksum of the image data. 
- ``container_format`` - The container format. - ``disk_format`` - The disk format. - ``id`` - The image ID, typically a UUID. - ``name`` - The name of the image. - ``size`` - The size of the image in bytes. in: body required: true type: array member_id: description: | The tenant ID of the tenant with whom an image is shared, that is, the tenant ID of the image member. in: body required: true type: string memberships: description: | List of image member objects. in: body required: true type: array next: description: | Show the next item in the list. format: uri in: body required: false type: string previous: description: | Show the previous item in the list. format: uri in: body required: false type: string shared_images: description: | A list of objects, each of which contains an ``image_id`` and a ``can_share`` field. If all the members of the image are such that ``can_share`` is ``true`` for each member, then the ``can_share`` value in this object will be ``true``, otherwise it will be ``false``. in: body required: true type: array glance-16.0.0/api-ref/source/v1/images-images-v1.inc0000666000175100017510000001735713245511421021770 0ustar zuulzuul00000000000000.. -*- rst -*- Images ****** Create image ~~~~~~~~~~~~ .. rest_method:: POST /v1/images Creates a metadata record of a virtual machine (VM) image and optionally stores the image data. Image metadata fields are passed as HTTP headers prefixed with one of the strings ``x-image-meta-`` or ``x-image-meta-property-``. See the API documentation for details. If there is no request body, an image record will be created in status ``queued``. This is called *reserving an image*. The image data can be uploaded later using the `Update image`_ call. 
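Because the v1 API passes all image metadata as HTTP headers prefixed with ``x-image-meta-`` (standard properties) or ``x-image-meta-property-`` (custom properties), a client must translate its metadata into that header form before issuing the request. The following is a minimal client-side sketch of that translation; the helper name and sample values are invented for this example, and it only builds the header dictionary rather than performing the HTTP request:

```python
# Sketch: build the x-image-meta-* request headers for a v1 image create
# or update call. Standard properties get the "x-image-meta-" prefix;
# custom (user-defined) properties get "x-image-meta-property-".

def build_image_meta_headers(standard, custom=None):
    """Return a dict of HTTP headers for the given image metadata."""
    headers = {"x-image-meta-%s" % key: str(value)
               for key, value in standard.items()}
    for key, value in (custom or {}).items():
        headers["x-image-meta-property-%s" % key] = str(value)
    return headers

# Example: the three fields required when uploading data with the request,
# plus one custom property. Values here are illustrative only.
headers = build_image_meta_headers(
    {"name": "cirros-0.3.2", "disk_format": "qcow2",
     "container_format": "bare"},
    custom={"os_distro": "cirros"},
)
```

These headers would then be attached to the ``POST /v1/images`` (or ``PUT /v1/images/{image_id}``) request alongside the image data, if any.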
If image data will be uploaded as part of this request, then the following
image metadata must be included among the request headers:

- ``name``
- ``disk_format``
- ``container_format``

Additionally, if image data is uploaded as part of this request, the API
will return a 400 under the following circumstances:

- The ``x-image-meta-size`` header is present and the length in bytes of
  the request body does not match the value of this header.

- The ``x-image-meta-checksum`` header is present and the MD5 checksum
  generated by the backend store while storing the data does not match the
  value of this header.

Normal response codes: 201

Error response codes: 400, 409

Request
-------

.. rest_parameters:: parameters.yaml

   - image data: createImage
   - x-image-meta-name: x-image-meta-name
   - x-image-meta-container_format: x-image-meta-container_format
   - x-image-meta-disk_format: x-image-meta-disk_format

Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - location: location
   - image: image-object

Response Example (create with data)
-----------------------------------

::

   HTTP/1.1 100 Continue

   HTTP/1.1 201 Created
   Content-Type: application/json
   Content-Length: 491
   Location: http://glance.openstack.example.org/v1/images/de2f2211-3ac7-4260-9142-41db0ecfb425
   Etag: 7b1b10607acc1319506185e7227ca30d
   X-Openstack-Request-Id: req-70adeab4-740c-4db3-a002-fd1559ecf40f
   Date: Tue, 10 May 2016 21:41:41 GMT

.. literalinclude:: samples/images-create-with-data-response.json
   :language: json

Response Example (reserve an image)
-----------------------------------

This is an extreme example of reserving an image. It was created by a POST
with no headers specified and no data passed. Here's the response:

::

   HTTP/1.1 201 Created
   Content-Type: application/json
   Content-Length: 447
   Location: http://glance.openstack.example.org/v1/images/6b3ecfca-d445-4946-a8d1-c4938352b251
   X-Openstack-Request-Id: req-db1ff3c7-3d4f-451f-9ef1-c414343f809d
   Date: Tue, 10 May 2016 21:35:14 GMT

..
literalinclude:: samples/images-create-reserve-response.json :language: json List images ~~~~~~~~~~~ .. rest_method:: GET /v1/images Lists all VM images available to the user making the call. This list will include all public images, any images owned by the requestor, and any images shared with the requestor. Various query filters can be applied to the URL to restrict the content of the response. Normal response codes: 200 Error response codes: 400, 403 .. note:: need to add info about sorting and pagination Request ------- .. rest_parameters:: parameters.yaml - name: name-in-query - container_format: container_format-in-query - disk_format: disk_format-in-query - status: status-in-query - size_min: size_min - size_max: size_max - changes-since: changes-since Response Parameters ------------------- .. rest_parameters:: parameters.yaml - images: images-list Response Example ---------------- .. literalinclude:: samples/images-list-response.json :language: json List images with details ~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v1/images/detail Lists all available images with details. Various query filters can be applied to the URL to restrict the content of the response. Normal response codes: 200 Error response codes: 400, 403 .. note:: need to add info about sorting and pagination Request ------- .. rest_parameters:: parameters.yaml - name: name-in-query - container_format: container_format-in-query - disk_format: disk_format-in-query - status: status-in-query - size_min: size_min - size_max: size_max - changes-since: changes-since Response Parameters ------------------- .. rest_parameters:: parameters.yaml - images: images-detail-list - previous: previous - next: next Response Example ---------------- .. literalinclude:: samples/images-list-details-response.json :language: json Update image ~~~~~~~~~~~~ .. rest_method:: PUT /v1/images/{image_id} Updates the metadata for an image or uploads an image file. 
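A PUT that carries image data is validated the same way as a create with data: the request body must agree with any ``x-image-meta-size`` and ``x-image-meta-checksum`` headers supplied. A minimal sketch of that consistency check, purely illustrative and not Glance's actual implementation:

```python
import hashlib

def validate_upload(body, declared_size=None, declared_checksum=None):
    # Reject the upload (HTTP 400) when the declared size does not match
    # the number of bytes received, or when the declared MD5 checksum does
    # not match the digest computed while storing the data.
    if declared_size is not None and len(body) != declared_size:
        return 400
    digest = hashlib.md5(body).hexdigest()
    if declared_checksum is not None and digest != declared_checksum:
        return 400
    return 200
```
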
Image metadata is updated by passing HTTP headers prefixed with one of the strings ``x-image-meta-`` or ``x-image-meta-property-``. See the API documentation for details. If the image is in ``queued`` status, image data may be added by including it in the request body. Otherwise, attempting to add data will result in a 409 Conflict response. If the request contains a body, the API will return a 400 under the following circumstances: - The ``x-image-meta-size`` header is present and the length in bytes of the request body does not match the value of this header. - The ``x-image-meta-checksum`` header is present and the MD5 checksum generated by the backend store while storing the data does not match the value of this header. Normal response codes: 200 Error response codes: 400, 404, 409 Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id-in-path Response Parameters ------------------- .. rest_parameters:: parameters.yaml - image: image-object Response Example ---------------- .. literalinclude:: samples/image-update-response.json :language: json Show image details and image data ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. rest_method:: GET /v1/images/{image_id} Returns the image metadata as headers; the image data is returned in the body of the response. Standard image properties are returned in headers prefixed by ``x-image-meta-`` (for example, ``x-image-meta-name``). Custom image properties are returned in headers prefixed by the string ``x-image-meta-property-`` (for example, ``x-image-meta-property-foo``). Normal response codes: 200 Error response codes: 404, 403 Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id-in-path Show image metadata ~~~~~~~~~~~~~~~~~~~ .. rest_method:: HEAD /v1/images/{image_id} Returns the image metadata information as response headers. The Image system does not return a response body for the HEAD operation. If the request succeeds, the operation returns the ``200`` response code.
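Reading metadata back is the mirror image of sending it: a client walks the ``X-Image-Meta-*`` response headers and rebuilds a metadata dict. The helper below is a hedged sketch of that client-side step — its name is illustrative, not a Glance API:

```python
def headers_to_metadata(headers):
    # Split response headers into standard fields and custom properties,
    # mirroring the ``x-image-meta-`` / ``x-image-meta-property-`` prefixes.
    meta, props = {}, {}
    prop_prefix = 'x-image-meta-property-'
    std_prefix = 'x-image-meta-'
    for name, value in headers.items():
        key = name.lower()
        if key.startswith(prop_prefix):
            props[key[len(prop_prefix):]] = value
        elif key.startswith(std_prefix):
            meta[key[len(std_prefix):]] = value
    meta['properties'] = props
    return meta

meta = headers_to_metadata({
    'X-Image-Meta-Name': 'cirros-0.3.4-x86_64-uec-kernel',
    'X-Image-Meta-Size': '4979632',
    'X-Image-Meta-Property-Foo': 'bar',
})
```

Note that header names are case-insensitive, so the sketch normalizes them with ``lower()`` before matching prefixes.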
Normal response codes: 200 Error response codes: 404, 409 Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id-in-path Response Example ---------------- :: X-Image-Meta-Checksum: 8a40c862b5735975d82605c1dd395796 X-Image-Meta-Container_format: aki X-Image-Meta-Created_at: 2016-01-06T03:22:20.000000 X-Image-Meta-Deleted: false X-Image-Meta-Disk_format: aki X-Image-Meta-Id: 03bc0a8b-659c-4de9-b6bd-13c6e86e6455 X-Image-Meta-Is_public: true X-Image-Meta-Min_disk: 0 X-Image-Meta-Min_ram: 0 X-Image-Meta-Name: cirros-0.3.4-x86_64-uec-kernel X-Image-Meta-Owner: 13cc6052265b41529e2fd0fc461fa8ef X-Image-Meta-Protected: false X-Image-Meta-Size: 4979632 X-Image-Meta-Status: deactivated X-Image-Meta-Updated_at: 2016-02-25T03:02:05.000000 X-Openstack-Request-Id: req-d5208320-28ed-4c22-b628-12dc6456d983 Delete image ~~~~~~~~~~~~ .. rest_method:: DELETE /v1/images/{image_id} Deletes an image. Normal response codes: 204 Error response codes: 404, 403 Request ------- .. rest_parameters:: parameters.yaml - image_id: image_id-in-path glance-16.0.0/api-ref/source/v1/index.rst0000666000175100017510000000153413245511421020070 0ustar zuulzuul00000000000000.. Copyright 2010 OpenStack Foundation All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. :tocdepth: 3 ================================= Image Service API v1 (DEPRECATED) ================================= .. rest_expand_all:: .. include:: images-images-v1.inc .. 
include:: images-sharing-v1.inc glance-16.0.0/api-ref/source/v1/samples/0000775000175100017510000000000013245511661017674 5ustar zuulzuul00000000000000glance-16.0.0/api-ref/source/v1/samples/image-member-add-request.json0000666000175100017510000000012013245511421025317 0ustar zuulzuul00000000000000{ "member_id": "eb5d80bd5f1e49f1818988148c70eabf", "can_share": false } glance-16.0.0/api-ref/source/v1/samples/image-members-add-request.json0000666000175100017510000000040013245511421025503 0ustar zuulzuul00000000000000{ "memberships": [ { "member_id": "eb5d80bd5f1e49f1818988148c70eabf", "can_share": false }, { "member_id": "8f450f44647d4080a0e7ca505057b5ca", "can_share": false } ] } glance-16.0.0/api-ref/source/v1/samples/image-memberships-list-response.json0000666000175100017510000000024413245511421026766 0ustar zuulzuul00000000000000{ "memberships": [ { "member_id": "tenant1", "can_share": false }, { "...": "..." } ] } glance-16.0.0/api-ref/source/v1/samples/image-update-response.json0000666000175100017510000000133013245511421024756 0ustar zuulzuul00000000000000{ "image": { "checksum": "eb9139e4942121f22bbc2afc0400b2a4", "container_format": "bare", "created_at": "2016-03-15T15:09:07.000000", "deleted": false, "deleted_at": null, "disk_format": "vmdk", "id": "1086fa65-8c63-4081-9a0a-ddf7e88e485b", "is_public": false, "min_disk": 22, "min_ram": 11, "name": "Silas Marner", "owner": "c60b1d57c5034e0d86902aedf8c49be0", "properties": { "foo": "bar", "qe_status": "approved" }, "protected": false, "size": 25165824, "status": "active", "updated_at": "2016-05-10T21:14:04.000000", "virtual_size": null } } glance-16.0.0/api-ref/source/v1/samples/images-list-details-response.json0000666000175100017510000000155713245511421026270 0ustar zuulzuul00000000000000{ "images": [ { "checksum": "eb9139e4942121f22bbc2afc0400b2a4", "container_format": "bare", "created_at": "2016-03-15T15:09:07.000000", "deleted": false, "deleted_at": null, "disk_format": "vmdk", "id": 
"1086fa65-8c63-4081-9a0a-ddf7e88e485b", "is_public": false, "min_disk": 22, "min_ram": 11, "name": "Silas Marner", "owner": "c60b1d57c5034e0d86902aedf8c49be0", "properties": { "foo": "bar", "qe_status": "approved" }, "protected": false, "size": 25165824, "status": "active", "updated_at": "2016-05-10T21:14:04.000000", "virtual_size": null }, { "...": "..." } ] } glance-16.0.0/api-ref/source/v1/samples/images-create-reserve-response.json0000666000175100017510000000113413245511421026575 0ustar zuulzuul00000000000000{ "image": { "checksum": null, "container_format": null, "created_at": "2016-05-10T21:35:15.000000", "deleted": false, "deleted_at": null, "disk_format": null, "id": "6b3ecfca-d445-4946-a8d1-c4938352b251", "is_public": false, "min_disk": 0, "min_ram": 0, "name": null, "owner": "c60b1d57c5034e0d86902aedf8c49be0", "properties": {}, "protected": false, "size": 0, "status": "queued", "updated_at": "2016-05-10T21:35:15.000000", "virtual_size": null } } glance-16.0.0/api-ref/source/v1/samples/shared-images-list-response.json0000666000175100017510000000046513245511421026106 0ustar zuulzuul00000000000000{ "shared_images": [ { "can_share": false, "image_id": "008cc101-c3ee-40dd-8477-cd8d99dcbf3d" }, { "can_share": true, "image_id": "de2f2211-3ac7-4260-9142-41db0ecfb425" }, { "...": "..." } ] } glance-16.0.0/api-ref/source/v1/samples/images-list-response.json0000666000175100017510000000052613245511421024640 0ustar zuulzuul00000000000000{ "images": [ { "checksum": "eb9139e4942121f22bbc2afc0400b2a4", "container_format": "ovf", "disk_format": "vmdk", "id": "008cc101-c3ee-40dd-8477-cd8d99dcbf3d", "name": "Harry", "size": 25165824 }, { "...": "..." 
} ] } glance-16.0.0/api-ref/source/v1/samples/images-create-with-data-response.json0000666000175100017510000000121013245511421026777 0ustar zuulzuul00000000000000{ "image": { "checksum": "7b1b10607acc1319506185e7227ca30d", "container_format": "bare", "created_at": "2016-05-10T21:41:41.000000", "deleted": false, "deleted_at": null, "disk_format": "raw", "id": "de2f2211-3ac7-4260-9142-41db0ecfb425", "is_public": false, "min_disk": 0, "min_ram": 0, "name": "Fake Image", "owner": "c60b1d57c5034e0d86902aedf8c49be0", "properties": {}, "protected": false, "size": 3908, "status": "active", "updated_at": "2016-05-10T21:41:41.000000", "virtual_size": null } } glance-16.0.0/bindep.txt0000666000175100017510000000146513245511421015063 0ustar zuulzuul00000000000000# This is a cross-platform list tracking distribution packages needed for install and tests; # see http://docs.openstack.org/infra/bindep/ for additional information. build-essential [platform:dpkg test] gcc [platform:rpm test] gettext [!platform:suse] gettext-runtime [platform:suse] libffi-dev [platform:dpkg] libffi-devel [platform:redhat] libffi48-devel [platform:suse] virtual/libffi [platform:gentoo] locales [platform:debian] mariadb [platform:rpm] mariadb-server [platform:redhat] mariadb-devel [platform:redhat] libmysqlclient-dev [platform:dpkg] libmysqlclient-devel [platform:suse] mysql-client [platform:dpkg] mysql-server [platform:dpkg] postgresql postgresql-client [platform:dpkg] postgresql-devel [platform:rpm] postgresql-server [platform:rpm] glance-16.0.0/setup.cfg0000666000175100017510000000641413245511661014707 0ustar zuulzuul00000000000000[metadata] name = glance summary = OpenStack Image Service description-file = README.rst author = OpenStack author-email = openstack-dev@lists.openstack.org home-page = https://docs.openstack.org/glance/latest/ classifier = Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache 
Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.7 [files] data_files = etc/glance = etc/glance-api.conf etc/glance-cache.conf etc/glance-manage.conf etc/glance-registry.conf etc/glance-scrubber.conf etc/glance-api-paste.ini etc/glance-registry-paste.ini etc/policy.json etc/rootwrap.conf etc/glance/metadefs = etc/metadefs/* packages = glance [entry_points] console_scripts = glance-api = glance.cmd.api:main glance-cache-prefetcher = glance.cmd.cache_prefetcher:main glance-cache-pruner = glance.cmd.cache_pruner:main glance-cache-manage = glance.cmd.cache_manage:main glance-cache-cleaner = glance.cmd.cache_cleaner:main glance-control = glance.cmd.control:main glance-manage = glance.cmd.manage:main glance-registry = glance.cmd.registry:main glance-replicator = glance.cmd.replicator:main glance-scrubber = glance.cmd.scrubber:main wsgi_scripts = glance-wsgi-api = glance.common.wsgi_app:init_app glance.common.image_location_strategy.modules = location_order_strategy = glance.common.location_strategy.location_order store_type_strategy = glance.common.location_strategy.store_type oslo.config.opts = glance.api = glance.opts:list_api_opts glance.registry = glance.opts:list_registry_opts glance.scrubber = glance.opts:list_scrubber_opts glance.cache= glance.opts:list_cache_opts glance.manage = glance.opts:list_manage_opts glance = glance.opts:list_image_import_opts oslo.config.opts.defaults = glance.api = glance.common.config:set_cors_middleware_defaults glance.database.migration_backend = sqlalchemy = oslo_db.sqlalchemy.migration glance.database.metadata_backend = sqlalchemy = glance.db.sqlalchemy.metadata glance.flows = api_image_import = glance.async.flows.api_image_import:get_flow import = glance.async.flows.base_import:get_flow glance.flows.import = convert = glance.async.flows.convert:get_flow introspect = glance.async.flows.introspect:get_flow ovf_process = 
glance.async.flows.ovf_process:get_flow glance.image_import.plugins = no_op = glance.async.flows.plugins.no_op:get_flow inject_image_metadata=glance.async.flows.plugins.inject_image_metadata:get_flow glance.image_import.internal_plugins = web_download = glance.async.flows._internal_plugins.web_download:get_flow [build_sphinx] builder = html man all_files = 1 build-dir = doc/build source-dir = doc/source warning-is-error = 1 [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 [compile_catalog] directory = glance/locale domain = glance [update_catalog] domain = glance output_dir = glance/locale input_file = glance/locale/glance.pot [extract_messages] keywords = _ gettext ngettext l_ lazy_gettext mapping_file = babel.cfg output_file = glance/locale/glance.pot [pbr] autodoc_index_modules = True autodoc_exclude_modules = glance.tests.* glance.db.sqlalchemy.* api_doc_dir = contributor/api glance-16.0.0/babel.cfg0000666000175100017510000000002013245511421014571 0ustar zuulzuul00000000000000[python: **.py] glance-16.0.0/glance.egg-info/0000775000175100017510000000000013245511661016002 5ustar zuulzuul00000000000000glance-16.0.0/glance.egg-info/SOURCES.txt0000664000175100017510000010064213245511661017671 0ustar zuulzuul00000000000000.coveragerc .mailmap .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE README.rst babel.cfg bandit.yaml bindep.txt pylintrc requirements.txt setup.cfg setup.py test-requirements.txt tox.ini api-ref/source/conf.py api-ref/source/heading-level-guide.txt api-ref/source/index.rst api-ref/source/v1/images-images-v1.inc api-ref/source/v1/images-sharing-v1.inc api-ref/source/v1/index.rst api-ref/source/v1/parameters.yaml api-ref/source/v1/samples/image-member-add-request.json api-ref/source/v1/samples/image-members-add-request.json api-ref/source/v1/samples/image-memberships-list-response.json api-ref/source/v1/samples/image-update-response.json api-ref/source/v1/samples/images-create-reserve-response.json 
api-ref/source/v1/samples/images-create-with-data-response.json api-ref/source/v1/samples/images-list-details-response.json api-ref/source/v1/samples/images-list-response.json api-ref/source/v1/samples/shared-images-list-response.json api-ref/source/v2/images-data.inc api-ref/source/v2/images-images-v2.inc api-ref/source/v2/images-import.inc api-ref/source/v2/images-parameters-descriptions.inc api-ref/source/v2/images-parameters.yaml api-ref/source/v2/images-schemas.inc api-ref/source/v2/images-sharing-v2.inc api-ref/source/v2/images-tags.inc api-ref/source/v2/index.rst api-ref/source/v2/metadefs-index.rst api-ref/source/v2/metadefs-namespaces-objects.inc api-ref/source/v2/metadefs-namespaces-properties.inc api-ref/source/v2/metadefs-namespaces-tags.inc api-ref/source/v2/metadefs-namespaces.inc api-ref/source/v2/metadefs-parameters.yaml api-ref/source/v2/metadefs-resourcetypes.inc api-ref/source/v2/metadefs-schemas.inc api-ref/source/v2/tasks-parameters.yaml api-ref/source/v2/tasks-schemas.inc api-ref/source/v2/tasks.inc api-ref/source/v2/samples/image-create-request.json api-ref/source/v2/samples/image-create-response.json api-ref/source/v2/samples/image-details-deactivate-response.json api-ref/source/v2/samples/image-import-g-d-request.json api-ref/source/v2/samples/image-import-w-d-request.json api-ref/source/v2/samples/image-info-import-response.json api-ref/source/v2/samples/image-member-create-request.json api-ref/source/v2/samples/image-member-create-response.json api-ref/source/v2/samples/image-member-details-response.json api-ref/source/v2/samples/image-member-update-request.json api-ref/source/v2/samples/image-member-update-response.json api-ref/source/v2/samples/image-members-list-response.json api-ref/source/v2/samples/image-show-response.json api-ref/source/v2/samples/image-update-request.json api-ref/source/v2/samples/image-update-response.json api-ref/source/v2/samples/images-list-response.json 
api-ref/source/v2/samples/metadef-namespace-create-request-simple.json api-ref/source/v2/samples/metadef-namespace-create-request.json api-ref/source/v2/samples/metadef-namespace-create-response-simple.json api-ref/source/v2/samples/metadef-namespace-create-response.json api-ref/source/v2/samples/metadef-namespace-details-response.json api-ref/source/v2/samples/metadef-namespace-details-with-rt-response.json api-ref/source/v2/samples/metadef-namespace-update-request.json api-ref/source/v2/samples/metadef-namespace-update-response.json api-ref/source/v2/samples/metadef-namespaces-list-response.json api-ref/source/v2/samples/metadef-object-create-request.json api-ref/source/v2/samples/metadef-object-create-response.json api-ref/source/v2/samples/metadef-object-details-response.json api-ref/source/v2/samples/metadef-object-update-request.json api-ref/source/v2/samples/metadef-object-update-response.json api-ref/source/v2/samples/metadef-objects-list-response.json api-ref/source/v2/samples/metadef-properties-list-response.json api-ref/source/v2/samples/metadef-property-create-request.json api-ref/source/v2/samples/metadef-property-create-response.json api-ref/source/v2/samples/metadef-property-details-response.json api-ref/source/v2/samples/metadef-property-update-request.json api-ref/source/v2/samples/metadef-property-update-response.json api-ref/source/v2/samples/metadef-resource-type-assoc-create-response.json api-ref/source/v2/samples/metadef-resource-type-create-request.json api-ref/source/v2/samples/metadef-resource-types-list-response.json api-ref/source/v2/samples/metadef-tag-create-response.json api-ref/source/v2/samples/metadef-tag-details-response.json api-ref/source/v2/samples/metadef-tag-update-request.json api-ref/source/v2/samples/metadef-tag-update-response.json api-ref/source/v2/samples/metadef-tags-create-request.json api-ref/source/v2/samples/metadef-tags-create-response.json api-ref/source/v2/samples/metadef-tags-list-response.json 
api-ref/source/v2/samples/schemas-image-member-show-response.json api-ref/source/v2/samples/schemas-image-members-list-response.json api-ref/source/v2/samples/schemas-image-show-response.json api-ref/source/v2/samples/schemas-images-list-response.json api-ref/source/v2/samples/schemas-metadef-namespace-show-response.json api-ref/source/v2/samples/schemas-metadef-namespaces-list-response.json api-ref/source/v2/samples/schemas-metadef-object-show-response.json api-ref/source/v2/samples/schemas-metadef-objects-list-response.json api-ref/source/v2/samples/schemas-metadef-properties-list-response.json api-ref/source/v2/samples/schemas-metadef-property-show-response.json api-ref/source/v2/samples/schemas-metadef-resource-type-association-show-response.json api-ref/source/v2/samples/schemas-metadef-resource-type-associations-list-response.json api-ref/source/v2/samples/schemas-metadef-tag-show-response.json api-ref/source/v2/samples/schemas-metadef-tags-list-response.json api-ref/source/v2/samples/schemas-task-show-response.json api-ref/source/v2/samples/schemas-tasks-list-response.json api-ref/source/v2/samples/task-create-request.json api-ref/source/v2/samples/task-create-response.json api-ref/source/v2/samples/task-show-failure-response.json api-ref/source/v2/samples/task-show-processing-response.json api-ref/source/v2/samples/task-show-success-response.json api-ref/source/v2/samples/tasks-list-response.json api-ref/source/versions/index.rst api-ref/source/versions/versions.inc api-ref/source/versions/samples/image-versions-response.json doc/source/conf.py doc/source/deprecate-registry.inc doc/source/deprecation-note.inc doc/source/glossary.rst doc/source/index.rst doc/source/_static/.placeholder doc/source/admin/apache-httpd.rst doc/source/admin/authentication.rst doc/source/admin/cache.rst doc/source/admin/controllingservers.rst doc/source/admin/db-sqlalchemy-migrate.rst doc/source/admin/db.rst doc/source/admin/flows.rst doc/source/admin/index.rst 
doc/source/admin/interoperable-image-import.rst doc/source/admin/manage-images.rst doc/source/admin/notifications.rst doc/source/admin/policies.rst doc/source/admin/property-protections.rst doc/source/admin/requirements.rst doc/source/admin/rollingupgrades.rst doc/source/admin/tasks.rst doc/source/admin/troubleshooting.rst doc/source/admin/zero-downtime-db-upgrade.rst doc/source/cli/footer.txt doc/source/cli/general_options.txt doc/source/cli/glanceapi.rst doc/source/cli/glancecachecleaner.rst doc/source/cli/glancecachemanage.rst doc/source/cli/glancecacheprefetcher.rst doc/source/cli/glancecachepruner.rst doc/source/cli/glancecontrol.rst doc/source/cli/glancemanage.rst doc/source/cli/glanceregistry.rst doc/source/cli/glancereplicator.rst doc/source/cli/glancescrubber.rst doc/source/cli/header.txt doc/source/cli/index.rst doc/source/cli/openstack_options.txt doc/source/configuration/configuring.rst doc/source/configuration/glance_api.rst doc/source/configuration/glance_cache.rst doc/source/configuration/glance_manage.rst doc/source/configuration/glance_registry.rst doc/source/configuration/glance_scrubber.rst doc/source/configuration/index.rst doc/source/configuration/sample-configuration.rst doc/source/contributor/architecture.rst doc/source/contributor/blueprints.rst doc/source/contributor/database_architecture.rst doc/source/contributor/database_migrations.rst doc/source/contributor/documentation.rst doc/source/contributor/domain_implementation.rst doc/source/contributor/domain_model.rst doc/source/contributor/index.rst doc/source/contributor/minor-code-changes.rst doc/source/contributor/modules.rst doc/source/contributor/refreshing-configs.rst doc/source/contributor/release-cpl.rst doc/source/images/architecture.png doc/source/images/glance_db.png doc/source/images/glance_layers.png doc/source/images/image_status_transition.png doc/source/images/instance-life-1.png doc/source/images/instance-life-2.png doc/source/images/instance-life-3.png 
doc/source/images_src/architecture.graphml doc/source/images_src/glance_db.graphml doc/source/images_src/glance_layers.graphml doc/source/images_src/image_status_transition.dot doc/source/install/get-started.rst doc/source/install/index.rst doc/source/install/install-debian.rst doc/source/install/install-obs.rst doc/source/install/install-rdo.rst doc/source/install/install-ubuntu.rst doc/source/install/install.rst doc/source/install/note_configuration_vary_by_distribution.txt doc/source/install/verify.rst doc/source/user/common-image-properties.rst doc/source/user/formats.rst doc/source/user/glanceapi.rst doc/source/user/glanceclient.rst doc/source/user/glancemetadefcatalogapi.rst doc/source/user/identifiers.rst doc/source/user/index.rst doc/source/user/metadefs-concepts.rst doc/source/user/signature.rst doc/source/user/statuses.rst etc/glance-api-paste.ini etc/glance-api.conf etc/glance-cache.conf etc/glance-image-import.conf.sample etc/glance-manage.conf etc/glance-registry-paste.ini etc/glance-registry.conf etc/glance-scrubber.conf etc/glance-swift.conf.sample etc/ovf-metadata.json.sample etc/policy.json etc/property-protections-policies.conf.sample etc/property-protections-roles.conf.sample etc/rootwrap.conf etc/schema-image.json etc/metadefs/README etc/metadefs/cim-processor-allocation-setting-data.json etc/metadefs/cim-resource-allocation-setting-data.json etc/metadefs/cim-storage-allocation-setting-data.json etc/metadefs/cim-virtual-system-setting-data.json etc/metadefs/compute-aggr-disk-filter.json etc/metadefs/compute-aggr-iops-filter.json etc/metadefs/compute-aggr-num-instances.json etc/metadefs/compute-cpu-pinning.json etc/metadefs/compute-guest-memory-backing.json etc/metadefs/compute-guest-shutdown.json etc/metadefs/compute-host-capabilities.json etc/metadefs/compute-hypervisor.json etc/metadefs/compute-instance-data.json etc/metadefs/compute-libvirt-image.json etc/metadefs/compute-libvirt.json etc/metadefs/compute-quota.json 
etc/metadefs/compute-randomgen.json etc/metadefs/compute-trust.json etc/metadefs/compute-vcputopology.json etc/metadefs/compute-vmware-flavor.json etc/metadefs/compute-vmware-quota-flavor.json etc/metadefs/compute-vmware.json etc/metadefs/compute-watchdog.json etc/metadefs/compute-xenapi.json etc/metadefs/glance-common-image-props.json etc/metadefs/image-signature-verification.json etc/metadefs/operating-system.json etc/metadefs/software-databases.json etc/metadefs/software-runtimes.json etc/metadefs/software-webservers.json etc/metadefs/storage-volume-type.json etc/oslo-config-generator/glance-api.conf etc/oslo-config-generator/glance-cache.conf etc/oslo-config-generator/glance-image-import.conf etc/oslo-config-generator/glance-manage.conf etc/oslo-config-generator/glance-registry.conf etc/oslo-config-generator/glance-scrubber.conf glance/__init__.py glance/context.py glance/gateway.py glance/i18n.py glance/location.py glance/notifier.py glance/opts.py glance/schema.py glance/scrubber.py glance/version.py glance.egg-info/PKG-INFO glance.egg-info/SOURCES.txt glance.egg-info/dependency_links.txt glance.egg-info/entry_points.txt glance.egg-info/not-zip-safe glance.egg-info/pbr.json glance.egg-info/requires.txt glance.egg-info/top_level.txt glance/api/__init__.py glance/api/authorization.py glance/api/cached_images.py glance/api/common.py glance/api/policy.py glance/api/property_protections.py glance/api/versions.py glance/api/middleware/__init__.py glance/api/middleware/cache.py glance/api/middleware/cache_manage.py glance/api/middleware/context.py glance/api/middleware/gzip.py glance/api/middleware/version_negotiation.py glance/api/v1/__init__.py glance/api/v1/controller.py glance/api/v1/filters.py glance/api/v1/images.py glance/api/v1/members.py glance/api/v1/router.py glance/api/v1/upload_utils.py glance/api/v2/__init__.py glance/api/v2/discovery.py glance/api/v2/image_actions.py glance/api/v2/image_data.py glance/api/v2/image_members.py 
glance/api/v2/image_tags.py glance/api/v2/images.py glance/api/v2/metadef_namespaces.py glance/api/v2/metadef_objects.py glance/api/v2/metadef_properties.py glance/api/v2/metadef_resource_types.py glance/api/v2/metadef_tags.py glance/api/v2/router.py glance/api/v2/schemas.py glance/api/v2/tasks.py glance/api/v2/model/__init__.py glance/api/v2/model/metadef_namespace.py glance/api/v2/model/metadef_object.py glance/api/v2/model/metadef_property_item_type.py glance/api/v2/model/metadef_property_type.py glance/api/v2/model/metadef_resource_type.py glance/api/v2/model/metadef_tag.py glance/async/__init__.py glance/async/taskflow_executor.py glance/async/utils.py glance/async/flows/__init__.py glance/async/flows/api_image_import.py glance/async/flows/base_import.py glance/async/flows/convert.py glance/async/flows/introspect.py glance/async/flows/ovf_process.py glance/async/flows/_internal_plugins/__init__.py glance/async/flows/_internal_plugins/web_download.py glance/async/flows/plugins/__init__.py glance/async/flows/plugins/inject_image_metadata.py glance/async/flows/plugins/no_op.py glance/async/flows/plugins/plugin_opts.py glance/cmd/__init__.py glance/cmd/api.py glance/cmd/cache_cleaner.py glance/cmd/cache_manage.py glance/cmd/cache_prefetcher.py glance/cmd/cache_pruner.py glance/cmd/control.py glance/cmd/manage.py glance/cmd/registry.py glance/cmd/replicator.py glance/cmd/scrubber.py glance/common/__init__.py glance/common/auth.py glance/common/client.py glance/common/config.py glance/common/crypt.py glance/common/exception.py glance/common/property_utils.py glance/common/rpc.py glance/common/store_utils.py glance/common/swift_store_utils.py glance/common/timeutils.py glance/common/trust_auth.py glance/common/utils.py glance/common/wsgi.py glance/common/wsgi_app.py glance/common/wsme_utils.py glance/common/location_strategy/__init__.py glance/common/location_strategy/location_order.py glance/common/location_strategy/store_type.py glance/common/scripts/__init__.py 
glance/common/scripts/utils.py glance/common/scripts/api_image_import/__init__.py glance/common/scripts/api_image_import/main.py glance/common/scripts/image_import/__init__.py glance/common/scripts/image_import/main.py glance/db/__init__.py glance/db/metadata.py glance/db/migration.py glance/db/utils.py glance/db/registry/__init__.py glance/db/registry/api.py glance/db/simple/__init__.py glance/db/simple/api.py glance/db/sqlalchemy/__init__.py glance/db/sqlalchemy/api.py glance/db/sqlalchemy/metadata.py glance/db/sqlalchemy/models.py glance/db/sqlalchemy/models_metadef.py glance/db/sqlalchemy/alembic_migrations/README glance/db/sqlalchemy/alembic_migrations/__init__.py glance/db/sqlalchemy/alembic_migrations/add_artifacts_tables.py glance/db/sqlalchemy/alembic_migrations/add_images_tables.py glance/db/sqlalchemy/alembic_migrations/add_metadefs_tables.py glance/db/sqlalchemy/alembic_migrations/add_tasks_tables.py glance/db/sqlalchemy/alembic_migrations/alembic.ini glance/db/sqlalchemy/alembic_migrations/env.py glance/db/sqlalchemy/alembic_migrations/migrate.cfg glance/db/sqlalchemy/alembic_migrations/script.py.mako glance/db/sqlalchemy/alembic_migrations/data_migrations/__init__.py glance/db/sqlalchemy/alembic_migrations/data_migrations/ocata_migrate01_community_images.py glance/db/sqlalchemy/alembic_migrations/data_migrations/pike_migrate01_empty.py glance/db/sqlalchemy/alembic_migrations/data_migrations/queens_migrate01_empty.py glance/db/sqlalchemy/alembic_migrations/versions/__init__.py glance/db/sqlalchemy/alembic_migrations/versions/liberty_initial.py glance/db/sqlalchemy/alembic_migrations/versions/mitaka01_add_image_created_updated_idx.py glance/db/sqlalchemy/alembic_migrations/versions/mitaka02_update_metadef_os_nova_server.py glance/db/sqlalchemy/alembic_migrations/versions/ocata_contract01_drop_is_public.py glance/db/sqlalchemy/alembic_migrations/versions/ocata_expand01_add_visibility.py 
glance/db/sqlalchemy/alembic_migrations/versions/pike_contract01_drop_artifacts_tables.py glance/db/sqlalchemy/alembic_migrations/versions/pike_expand01_empty.py glance/db/sqlalchemy/alembic_migrations/versions/queens_contract01_empty.py glance/db/sqlalchemy/alembic_migrations/versions/queens_expand01_empty.py glance/db/sqlalchemy/metadef_api/__init__.py glance/db/sqlalchemy/metadef_api/namespace.py glance/db/sqlalchemy/metadef_api/object.py glance/db/sqlalchemy/metadef_api/property.py glance/db/sqlalchemy/metadef_api/resource_type.py glance/db/sqlalchemy/metadef_api/resource_type_association.py glance/db/sqlalchemy/metadef_api/tag.py glance/db/sqlalchemy/metadef_api/utils.py glance/db/sqlalchemy/migrate_repo/README glance/db/sqlalchemy/migrate_repo/__init__.py glance/db/sqlalchemy/migrate_repo/manage.py glance/db/sqlalchemy/migrate_repo/migrate.cfg glance/db/sqlalchemy/migrate_repo/schema.py glance/db/sqlalchemy/migrate_repo/versions/001_add_images_table.py glance/db/sqlalchemy/migrate_repo/versions/002_add_image_properties_table.py glance/db/sqlalchemy/migrate_repo/versions/003_add_disk_format.py glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_upgrade.sql glance/db/sqlalchemy/migrate_repo/versions/004_add_checksum.py glance/db/sqlalchemy/migrate_repo/versions/005_size_big_integer.py glance/db/sqlalchemy/migrate_repo/versions/006_key_to_name.py glance/db/sqlalchemy/migrate_repo/versions/006_mysql_upgrade.sql glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_upgrade.sql glance/db/sqlalchemy/migrate_repo/versions/007_add_owner.py glance/db/sqlalchemy/migrate_repo/versions/008_add_image_members_table.py glance/db/sqlalchemy/migrate_repo/versions/009_add_mindisk_and_minram.py glance/db/sqlalchemy/migrate_repo/versions/010_default_update_at.py glance/db/sqlalchemy/migrate_repo/versions/011_make_mindisk_and_minram_notnull.py glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_upgrade.sql glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py 
glance/db/sqlalchemy/migrate_repo/versions/013_add_protected.py glance/db/sqlalchemy/migrate_repo/versions/014_add_image_tags_table.py glance/db/sqlalchemy/migrate_repo/versions/015_quote_swift_credentials.py glance/db/sqlalchemy/migrate_repo/versions/016_add_status_image_member.py glance/db/sqlalchemy/migrate_repo/versions/017_quote_encrypted_swift_credentials.py glance/db/sqlalchemy/migrate_repo/versions/018_add_image_locations_table.py glance/db/sqlalchemy/migrate_repo/versions/019_migrate_image_locations.py glance/db/sqlalchemy/migrate_repo/versions/020_drop_images_table_location.py glance/db/sqlalchemy/migrate_repo/versions/021_set_engine_mysql_innodb.py glance/db/sqlalchemy/migrate_repo/versions/022_image_member_index.py glance/db/sqlalchemy/migrate_repo/versions/023_placeholder.py glance/db/sqlalchemy/migrate_repo/versions/024_placeholder.py glance/db/sqlalchemy/migrate_repo/versions/025_placeholder.py glance/db/sqlalchemy/migrate_repo/versions/026_add_location_storage_information.py glance/db/sqlalchemy/migrate_repo/versions/027_checksum_index.py glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py glance/db/sqlalchemy/migrate_repo/versions/033_add_location_status.py glance/db/sqlalchemy/migrate_repo/versions/034_add_virtual_size.py glance/db/sqlalchemy/migrate_repo/versions/035_add_metadef_tables.py glance/db/sqlalchemy/migrate_repo/versions/036_rename_metadef_schema_columns.py glance/db/sqlalchemy/migrate_repo/versions/037_add_changes_to_satisfy_models.py glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_upgrade.sql glance/db/sqlalchemy/migrate_repo/versions/038_add_metadef_tags_table.py 
glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py glance/db/sqlalchemy/migrate_repo/versions/040_add_changes_to_satisfy_metadefs_tags.py glance/db/sqlalchemy/migrate_repo/versions/041_add_artifact_tables.py glance/db/sqlalchemy/migrate_repo/versions/042_add_changes_to_reinstall_unique_metadef_constraints.py glance/db/sqlalchemy/migrate_repo/versions/043_add_image_created_updated_idx.py glance/db/sqlalchemy/migrate_repo/versions/044_update_metadef_os_nova_server.py glance/db/sqlalchemy/migrate_repo/versions/045_add_visibility.py glance/db/sqlalchemy/migrate_repo/versions/045_sqlite_upgrade.sql glance/db/sqlalchemy/migrate_repo/versions/__init__.py glance/domain/__init__.py glance/domain/proxy.py glance/hacking/__init__.py glance/hacking/checks.py glance/image_cache/__init__.py glance/image_cache/base.py glance/image_cache/cleaner.py glance/image_cache/client.py glance/image_cache/prefetcher.py glance/image_cache/pruner.py glance/image_cache/drivers/__init__.py glance/image_cache/drivers/base.py glance/image_cache/drivers/sqlite.py glance/image_cache/drivers/xattr.py glance/locale/de/LC_MESSAGES/glance.po glance/locale/en_GB/LC_MESSAGES/glance.po glance/locale/es/LC_MESSAGES/glance.po glance/locale/fr/LC_MESSAGES/glance.po glance/locale/it/LC_MESSAGES/glance.po glance/locale/ja/LC_MESSAGES/glance.po glance/locale/ko_KR/LC_MESSAGES/glance.po glance/locale/pt_BR/LC_MESSAGES/glance.po glance/locale/ru/LC_MESSAGES/glance.po glance/locale/tr_TR/LC_MESSAGES/glance.po glance/locale/zh_CN/LC_MESSAGES/glance.po glance/locale/zh_TW/LC_MESSAGES/glance.po glance/quota/__init__.py glance/registry/__init__.py glance/registry/api/__init__.py glance/registry/api/v1/__init__.py glance/registry/api/v1/images.py glance/registry/api/v1/members.py glance/registry/api/v2/__init__.py glance/registry/api/v2/rpc.py glance/registry/client/__init__.py glance/registry/client/v1/__init__.py glance/registry/client/v1/api.py glance/registry/client/v1/client.py 
glance/registry/client/v2/__init__.py glance/registry/client/v2/api.py glance/registry/client/v2/client.py glance/tests/__init__.py glance/tests/stubs.py glance/tests/test_hacking.py glance/tests/utils.py glance/tests/etc/glance-swift.conf glance/tests/etc/policy.json glance/tests/etc/property-protections-policies.conf glance/tests/etc/property-protections.conf glance/tests/etc/schema-image.json glance/tests/functional/__init__.py glance/tests/functional/store_utils.py glance/tests/functional/test_api.py glance/tests/functional/test_bin_glance_cache_manage.py glance/tests/functional/test_cache_middleware.py glance/tests/functional/test_client_exceptions.py glance/tests/functional/test_client_redirects.py glance/tests/functional/test_cors_middleware.py glance/tests/functional/test_glance_manage.py glance/tests/functional/test_glance_replicator.py glance/tests/functional/test_gzip_middleware.py glance/tests/functional/test_healthcheck_middleware.py glance/tests/functional/test_logging.py glance/tests/functional/test_reload.py glance/tests/functional/test_scrubber.py glance/tests/functional/test_sqlite.py glance/tests/functional/test_ssl.py glance/tests/functional/test_wsgi.py glance/tests/functional/db/__init__.py glance/tests/functional/db/base.py glance/tests/functional/db/base_metadef.py glance/tests/functional/db/test_migrations.py glance/tests/functional/db/test_registry.py glance/tests/functional/db/test_rpc_endpoint.py glance/tests/functional/db/test_simple.py glance/tests/functional/db/test_sqlalchemy.py glance/tests/functional/db/migrations/__init__.py glance/tests/functional/db/migrations/test_mitaka01.py glance/tests/functional/db/migrations/test_mitaka02.py glance/tests/functional/db/migrations/test_ocata_contract01.py glance/tests/functional/db/migrations/test_ocata_expand01.py glance/tests/functional/db/migrations/test_ocata_migrate01.py glance/tests/functional/db/migrations/test_pike_contract01.py 
glance/tests/functional/db/migrations/test_pike_expand01.py glance/tests/functional/db/migrations/test_pike_migrate01.py glance/tests/functional/v1/__init__.py glance/tests/functional/v1/test_api.py glance/tests/functional/v1/test_copy_to_file.py glance/tests/functional/v1/test_misc.py glance/tests/functional/v1/test_multiprocessing.py glance/tests/functional/v2/__init__.py glance/tests/functional/v2/registry_data_api.py glance/tests/functional/v2/test_images.py glance/tests/functional/v2/test_metadef_namespaces.py glance/tests/functional/v2/test_metadef_objects.py glance/tests/functional/v2/test_metadef_properties.py glance/tests/functional/v2/test_metadef_resourcetypes.py glance/tests/functional/v2/test_metadef_tags.py glance/tests/functional/v2/test_schemas.py glance/tests/functional/v2/test_tasks.py glance/tests/integration/__init__.py glance/tests/integration/legacy_functional/__init__.py glance/tests/integration/legacy_functional/base.py glance/tests/integration/legacy_functional/test_v1_api.py glance/tests/integration/v2/__init__.py glance/tests/integration/v2/base.py glance/tests/integration/v2/test_property_quota_violations.py glance/tests/integration/v2/test_tasks_api.py glance/tests/unit/__init__.py glance/tests/unit/base.py glance/tests/unit/fake_rados.py glance/tests/unit/fixtures.py glance/tests/unit/test_auth.py glance/tests/unit/test_cache_middleware.py glance/tests/unit/test_cached_images.py glance/tests/unit/test_context.py glance/tests/unit/test_context_middleware.py glance/tests/unit/test_data_migration_framework.py glance/tests/unit/test_db.py glance/tests/unit/test_db_metadef.py glance/tests/unit/test_domain.py glance/tests/unit/test_domain_proxy.py glance/tests/unit/test_glance_manage.py glance/tests/unit/test_glance_replicator.py glance/tests/unit/test_image_cache.py glance/tests/unit/test_image_cache_client.py glance/tests/unit/test_manage.py glance/tests/unit/test_misc.py glance/tests/unit/test_notifier.py glance/tests/unit/test_policy.py 
glance/tests/unit/test_quota.py glance/tests/unit/test_schema.py glance/tests/unit/test_scrubber.py glance/tests/unit/test_store_image.py glance/tests/unit/test_store_location.py glance/tests/unit/test_versions.py glance/tests/unit/utils.py glance/tests/unit/api/__init__.py glance/tests/unit/api/test_cmd.py glance/tests/unit/api/test_cmd_cache_manage.py glance/tests/unit/api/test_common.py glance/tests/unit/api/test_property_protections.py glance/tests/unit/api/middleware/__init__.py glance/tests/unit/api/middleware/test_cache_manage.py glance/tests/unit/async/__init__.py glance/tests/unit/async/test_async.py glance/tests/unit/async/test_taskflow_executor.py glance/tests/unit/async/flows/__init__.py glance/tests/unit/async/flows/test_convert.py glance/tests/unit/async/flows/test_import.py glance/tests/unit/async/flows/test_introspect.py glance/tests/unit/async/flows/test_ovf_process.py glance/tests/unit/async/flows/plugins/__init__.py glance/tests/unit/async/flows/plugins/test_inject_image_metadata.py glance/tests/unit/common/__init__.py glance/tests/unit/common/test_client.py glance/tests/unit/common/test_config.py glance/tests/unit/common/test_exception.py glance/tests/unit/common/test_location_strategy.py glance/tests/unit/common/test_property_utils.py glance/tests/unit/common/test_rpc.py glance/tests/unit/common/test_scripts.py glance/tests/unit/common/test_swift_store_utils.py glance/tests/unit/common/test_timeutils.py glance/tests/unit/common/test_utils.py glance/tests/unit/common/test_wsgi.py glance/tests/unit/common/scripts/__init__.py glance/tests/unit/common/scripts/test_scripts_utils.py glance/tests/unit/common/scripts/image_import/__init__.py glance/tests/unit/common/scripts/image_import/test_main.py glance/tests/unit/image_cache/__init__.py glance/tests/unit/image_cache/drivers/__init__.py glance/tests/unit/image_cache/drivers/test_sqlite.py glance/tests/unit/v1/__init__.py glance/tests/unit/v1/test_api.py glance/tests/unit/v1/test_registry_api.py 
glance/tests/unit/v1/test_registry_client.py glance/tests/unit/v1/test_upload_utils.py glance/tests/unit/v2/__init__.py glance/tests/unit/v2/test_discovery_image_import.py glance/tests/unit/v2/test_image_actions_resource.py glance/tests/unit/v2/test_image_data_resource.py glance/tests/unit/v2/test_image_members_resource.py glance/tests/unit/v2/test_image_tags_resource.py glance/tests/unit/v2/test_images_resource.py glance/tests/unit/v2/test_metadef_resources.py glance/tests/unit/v2/test_registry_api.py glance/tests/unit/v2/test_registry_client.py glance/tests/unit/v2/test_schemas_resource.py glance/tests/unit/v2/test_tasks_resource.py glance/tests/var/ca.crt glance/tests/var/ca.key glance/tests/var/certificate.crt glance/tests/var/privatekey.key glance/tests/var/testserver-bad-ovf.ova glance/tests/var/testserver-no-disk.ova glance/tests/var/testserver-no-ovf.ova glance/tests/var/testserver-not-tar.ova glance/tests/var/testserver.ova httpd/README httpd/glance-api-uwsgi.ini httpd/uwsgi-glance-api.conf rally-jobs/README.rst rally-jobs/glance.yaml rally-jobs/extra/README.rst rally-jobs/extra/fake.img rally-jobs/plugins/README.rst releasenotes/notes/.placeholder releasenotes/notes/Prevent-removing-last-image-location-d5ee3e00efe14f34.yaml releasenotes/notes/add-cpu-thread-pinning-metadata-09b1866b875c4647.yaml releasenotes/notes/add-ploop-format-fdd583849504ab15.yaml releasenotes/notes/add-processlimits-to-qemu-img-c215f5d90f741d8a.yaml releasenotes/notes/add-vhdx-format-2be99354ad320cca.yaml releasenotes/notes/alembic-migrations-902b31edae7a5d7d.yaml releasenotes/notes/api-2-6-current-9eeb83b7ecc0a562.yaml releasenotes/notes/api-minor-ver-bump-2-6-aa3591fc58f08055.yaml releasenotes/notes/api-minor-version-bump-bbd69dc457fc731c.yaml releasenotes/notes/bp-inject-image-metadata-0a08af539bcce7f2.yaml releasenotes/notes/bug-1537903-54b2822eac6cfc09.yaml releasenotes/notes/bug-1593177-8ef35458d29ec93c.yaml releasenotes/notes/bug-1719252-name-validation-443a2e2a36be2cec.yaml 
releasenotes/notes/bump-api-2-4-efa266aef0928e04.yaml releasenotes/notes/clean-up-acceptable-values-store_type_preference-39081e4045894731.yaml releasenotes/notes/consistent-store-names-57374b9505d530d0.yaml releasenotes/notes/deprecate-glance-api-opts-23bdbd1ad7625999.yaml releasenotes/notes/deprecate-registry-ff286df90df793f0.yaml releasenotes/notes/deprecate-show-multiple-location-9890a1e961def2f6.yaml releasenotes/notes/deprecate-v1-api-6c7dbefb90fd8772.yaml releasenotes/notes/exp-emc-mig-fix-a7e28d547ac38f9e.yaml releasenotes/notes/glare-ectomy-72a1f80f306f2e3b.yaml releasenotes/notes/image-visibility-changes-fa5aa18dc67244c4.yaml releasenotes/notes/implement-lite-spec-db-sync-check-3e2e147aec0ae82b.yaml releasenotes/notes/improved-config-options-221c58a8c37602ba.yaml releasenotes/notes/location-add-status-checks-b70db66100bc96b7.yaml releasenotes/notes/lock_path_config_option-2771feaa649e4563.yaml releasenotes/notes/make-task-api-admin-only-by-default-7def996262e18f7a.yaml releasenotes/notes/new_image_filters-c888361e6ecf495c.yaml releasenotes/notes/newton-1-release-065334d464f78fc5.yaml releasenotes/notes/newton-bugs-06ed3727b973c271.yaml releasenotes/notes/oslo-log-use-stderr-changes-07f5daf3e6abdcd6.yaml releasenotes/notes/pike-metadefs-changes-95b54e0bf8bbefd6.yaml releasenotes/notes/pike-rc-1-a5d3f6e8877b52c6.yaml releasenotes/notes/pike-rc-2-acc173005045e16a.yaml releasenotes/notes/queens-metadefs-changes-daf02bef18d049f4.yaml releasenotes/notes/queens-release-b6a9f9882c794c24.yaml releasenotes/notes/queens-uwsgi-issues-4cee9e4fdf62c646.yaml releasenotes/notes/range-header-request-83cf11eebf865fb1.yaml releasenotes/notes/remove-db-downgrade-0d1cc45b97605775.yaml releasenotes/notes/remove-osprofiler-paste-ini-options-c620dedc8f9728ff.yaml releasenotes/notes/remove-s3-driver-639c60b71761eb6f.yaml releasenotes/notes/reordered-store-config-opts-newton-3a6575b5908c0e0f.yaml releasenotes/notes/restrict_location_updates-05454bb765a8c92c.yaml 
releasenotes/notes/scrubber-exit-e5d77f6f1a38ffb7.yaml releasenotes/notes/scrubber-refactor-73ddbd61ebbf1e86.yaml releasenotes/notes/soft_delete-tasks-43ea983695faa565.yaml releasenotes/notes/trust-support-registry-cfd17a6a9ab21d70.yaml releasenotes/notes/update-show_multiple_locations-helptext-7fa692642b6b6d52.yaml releasenotes/notes/use-cursive-c6b15d94845232da.yaml releasenotes/notes/virtuozzo-hypervisor-fada477b64ae829d.yaml releasenotes/notes/wsgi-containerization-369880238a5e793d.yaml releasenotes/source/conf.py releasenotes/source/index.rst releasenotes/source/liberty.rst releasenotes/source/mitaka.rst releasenotes/source/newton.rst releasenotes/source/ocata.rst releasenotes/source/pike.rst releasenotes/source/unreleased.rst releasenotes/source/_static/.placeholder releasenotes/source/_templates/.placeholder tools/test-setup.shglance-16.0.0/glance.egg-info/dependency_links.txt0000664000175100017510000000000113245511657022055 0ustar zuulzuul00000000000000 glance-16.0.0/glance.egg-info/entry_points.txt0000664000175100017510000000347413245511657021315 0ustar zuulzuul00000000000000[console_scripts] glance-api = glance.cmd.api:main glance-cache-cleaner = glance.cmd.cache_cleaner:main glance-cache-manage = glance.cmd.cache_manage:main glance-cache-prefetcher = glance.cmd.cache_prefetcher:main glance-cache-pruner = glance.cmd.cache_pruner:main glance-control = glance.cmd.control:main glance-manage = glance.cmd.manage:main glance-registry = glance.cmd.registry:main glance-replicator = glance.cmd.replicator:main glance-scrubber = glance.cmd.scrubber:main [glance.common.image_location_strategy.modules] location_order_strategy = glance.common.location_strategy.location_order store_type_strategy = glance.common.location_strategy.store_type [glance.database.metadata_backend] sqlalchemy = glance.db.sqlalchemy.metadata [glance.database.migration_backend] sqlalchemy = oslo_db.sqlalchemy.migration [glance.flows] api_image_import = glance.async.flows.api_image_import:get_flow 
import = glance.async.flows.base_import:get_flow [glance.flows.import] convert = glance.async.flows.convert:get_flow introspect = glance.async.flows.introspect:get_flow ovf_process = glance.async.flows.ovf_process:get_flow [glance.image_import.internal_plugins] web_download = glance.async.flows._internal_plugins.web_download:get_flow [glance.image_import.plugins] inject_image_metadata = glance.async.flows.plugins.inject_image_metadata:get_flow no_op = glance.async.flows.plugins.no_op:get_flow [oslo.config.opts] glance = glance.opts:list_image_import_opts glance.api = glance.opts:list_api_opts glance.cache = glance.opts:list_cache_opts glance.manage = glance.opts:list_manage_opts glance.registry = glance.opts:list_registry_opts glance.scrubber = glance.opts:list_scrubber_opts [oslo.config.opts.defaults] glance.api = glance.common.config:set_cors_middleware_defaults [wsgi_scripts] glance-wsgi-api = glance.common.wsgi_app:init_app glance-16.0.0/glance.egg-info/not-zip-safe0000664000175100017510000000000113245511560020226 0ustar zuulzuul00000000000000 glance-16.0.0/glance.egg-info/requires.txt0000664000175100017510000000151013245511657020404 0ustar zuulzuul00000000000000pbr!=2.1.0,>=2.0.0 defusedxml>=0.5.0 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 PasteDeploy>=1.5.0 Routes>=2.3.1 WebOb>=1.7.1 sqlalchemy-migrate>=0.11.0 sqlparse>=0.2.2 alembic>=0.8.10 httplib2>=0.9.1 oslo.config>=5.1.0 oslo.concurrency>=3.25.0 oslo.context>=2.19.2 oslo.utils>=3.33.0 stevedore>=1.20.0 futurist>=1.2.0 taskflow>=2.16.0 keystoneauth1>=3.3.0 keystonemiddleware>=4.17.0 WSME>=0.8.0 PrettyTable<0.8,>=0.7.1 Paste>=2.0.2 jsonschema<3.0.0,>=2.6.0 python-keystoneclient>=3.8.0 pyOpenSSL>=16.2.0 six>=1.10.0 oslo.db>=4.27.0 oslo.i18n>=3.15.3 oslo.log>=3.36.0 oslo.messaging>=5.29.0 oslo.middleware>=3.31.0 oslo.policy>=1.30.0 retrying!=1.3.0,>=1.2.3 osprofiler>=1.4.0 glance-store>=0.22.0 debtcollector>=1.2.0 cryptography!=2.0,>=1.9 cursive>=0.2.1 
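The entry_points.txt above wires console script names (and plugin names such as those under ``[glance.image_import.plugins]``) to ``module:callable`` targets. As a minimal illustration of the ``name = module:attr`` format — the actual resolution is performed by setuptools/importlib.metadata at install time, not by code like this:

```python
# Minimal sketch of how a setuptools entry-point line such as
#   "glance-api = glance.cmd.api:main"
# splits into a script name, a module path, and a callable name.
# Illustrative only; real lookup is done by setuptools/importlib.metadata.

def parse_entry_point(line):
    """Split an entry-point line into (name, module, attr)."""
    name, _, target = (part.strip() for part in line.partition("="))
    module, _, attr = (part.strip() for part in target.partition(":"))
    return name, module, attr

# parse_entry_point("glance-api = glance.cmd.api:main")
# -> ('glance-api', 'glance.cmd.api', 'main')
```

The same format covers every group in the file, e.g. ``no_op = glance.async.flows.plugins.no_op:get_flow`` yields the module ``glance.async.flows.plugins.no_op`` and the attribute ``get_flow``.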
iso8601>=0.1.11 monotonic>=0.6 glance-16.0.0/glance.egg-info/pbr.json0000664000175100017510000000005613245511657017466 0ustar zuulzuul00000000000000{"git_version": "ceb8b9a", "is_release": true}glance-16.0.0/glance.egg-info/top_level.txt0000664000175100017510000000000713245511657020536 0ustar zuulzuul00000000000000glance glance-16.0.0/glance.egg-info/PKG-INFO0000664000175100017510000000737613245511657017121 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: glance Version: 16.0.0 Summary: OpenStack Image Service Home-page: https://docs.openstack.org/glance/latest/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN Description-Content-Type: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: http://governance.openstack.org/badges/glance.svg :target: http://governance.openstack.org/reference/tags/index.html :alt: The following tags have been asserted for the Glance project: "project:official", "tc:approved-release", "stable:follows-policy", "tc:starter-kit:compute", "vulnerability:managed", "team:diverse-affiliation", "assert:supports-upgrade", "assert:follows-standard-deprecation". Follow the link for an explanation of these tags. .. NOTE(rosmaita): the alt text above will have to be updated when additional tags are asserted for Glance. (The SVG in the governance repo is updated automatically.) .. Change things from this point on ====== Glance ====== Glance is a project that provides services and associated libraries to store, browse, share, distribute and manage bootable disk images, other data closely associated with initializing compute resources, and metadata definitions. Use the following resources to learn more: API --- To learn how to use Glance's API, consult the documentation available online at: * `Image Service APIs `_ Developers ---------- For information on how to contribute to Glance, please see the contents of the CONTRIBUTING.rst in this repository. 
Any new code must follow the development guidelines detailed in the HACKING.rst file, and pass all unit tests. Further developer focused documentation is available at: * `Official Glance documentation `_ * `Official Client documentation `_ Operators --------- To learn how to deploy and configure OpenStack Glance, consult the documentation available online at: * `Openstack Glance `_ In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. You can raise bugs here: * `Bug Tracker `_ Other Information ----------------- During each design summit, we agree on what the whole community wants to focus on for the upcoming release. You can see image service plans: * `Image Service Plans `_ For more information about the Glance project please see: * `Glance Project `_ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 glance-16.0.0/rally-jobs/0000775000175100017510000000000013245511661015135 5ustar zuulzuul00000000000000glance-16.0.0/rally-jobs/glance.yaml0000666000175100017510000000172313245511421017251 0ustar zuulzuul00000000000000--- version: 2 title: Task used by gate-rally-dsvm-glance-ubuntu-xenial-nv and gate-rally-dsvm-py35-glance-nv subtasks: - title: Test Glance upload and list image performance scenario: GlanceImages.create_and_list_image: image_location: "~/.rally/extra/fake.img" container_format: "bare" disk_format: "qcow2" runner: constant: times: 700 concurrency: 7 contexts: users: tenants: 1 users_per_tenant: 1 - title: Test Glance upload and delete image performance scenario: GlanceImages.create_and_delete_image: image_location: 
"http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img" container_format: "bare" disk_format: "qcow2" runner: constant: times: 20 concurrency: 5 contexts: users: tenants: 5 users_per_tenant: 2 glance-16.0.0/rally-jobs/plugins/0000775000175100017510000000000013245511661016616 5ustar zuulzuul00000000000000glance-16.0.0/rally-jobs/plugins/README.rst0000666000175100017510000000061213245511421020300 0ustar zuulzuul00000000000000Rally plugins ============= All ``*.py`` modules from this directory will be auto-loaded by Rally and all plugins will be discoverable. There is no need of any extra configuration and there is no difference between writing them here and in rally code base. Note that it is better to push all interesting and useful benchmarks to Rally code base, this simplifies administration for Operators. glance-16.0.0/rally-jobs/README.rst0000666000175100017510000000176113245511421016625 0ustar zuulzuul00000000000000Rally job related files ======================= This directory contains rally tasks and plugins that are run by OpenStack CI. Structure --------- * plugins - directory where you can add rally plugins. Almost everything in Rally is a plugin. Benchmark context, Benchmark scenario, SLA checks, Generic cleanup resources, .... * extra - all files from this directory will be copy pasted to gates, so you are able to use absolute paths in rally tasks. 
Files will be located in ~/.rally/extra/* * glance.yaml is a task that is run in gates against OpenStack (nova network) deployed by DevStack Useful links ------------ * More about Rally: https://rally.readthedocs.org/en/latest/ * Rally release notes: https://rally.readthedocs.org/en/latest/release_notes.html * How to add rally-gates: https://rally.readthedocs.org/en/latest/gates.html * About plugins: https://rally.readthedocs.org/en/latest/plugins.html * Plugin samples: https://github.com/openstack/rally/tree/master/samples/plugins glance-16.0.0/rally-jobs/extra/0000775000175100017510000000000013245511661016260 5ustar zuulzuul00000000000000glance-16.0.0/rally-jobs/extra/fake.img0000666000175100017510000000000013245511421017646 0ustar zuulzuul00000000000000glance-16.0.0/rally-jobs/extra/README.rst0000666000175100017510000000025413245511421017744 0ustar zuulzuul00000000000000Extra files =========== All files from this directory will be copy pasted to gates, so you are able to use absolute path in rally tasks. Files will be in ~/.rally/extra/* glance-16.0.0/etc/0000775000175100017510000000000013245511661013632 5ustar zuulzuul00000000000000glance-16.0.0/etc/ovf-metadata.json.sample0000666000175100017510000000017613245511421020355 0ustar zuulzuul00000000000000{ "cim_pasd": [ "ProcessorArchitecture", "InstructionSet", "InstructionSetExtensionName" ] } glance-16.0.0/etc/glance-image-import.conf.sample0000666000175100017510000001545513245511421021610 0ustar zuulzuul00000000000000[DEFAULT] [image_import_opts] # # From glance # # # Image import plugins to be enabled for task processing. # # Provide list of strings reflecting to the task Objects # that should be included to the Image Import flow. The # task objects needs to be defined in the 'glance/async/ # flows/plugins/*' and may be implemented by OpenStack # Glance project team, deployer or 3rd party. 
# # By default no plugins are enabled and to take advantage # of the plugin model the list of plugins must be set # explicitly in the glance-image-import.conf file. # # The allowed values for this option is comma separated # list of object names in between ``[`` and ``]``. # # Possible values: # * no_op (only logs debug level message that the # plugin has been executed) # * Any provided Task object name to be included # in to the flow. # (list value) #image_import_plugins = [no_op] [import_filtering_opts] # # From glance # # # Specify the "whitelist" of allowed url schemes for web-download. # # This option provides whitelisting of uri schemes that will be allowed when # an end user imports an image using the web-download import method. The # whitelist has priority such that if there is also a blacklist defined for # schemes, the blacklist will be ignored. Host and port filtering, however, # will be applied. # # See the Glance Administration Guide for more information. # # Possible values: # * List containing normalized url schemes as they are returned from # urllib.parse. For example ['ftp','https'] # * Hint: leave the whitelist empty if you want the disallowed_schemes # blacklist to be processed # # Related options: # * disallowed_schemes # * allowed_hosts # * disallowed_hosts # * allowed_ports # * disallowed_ports # # (list value) #allowed_schemes = http,https # # Specify the "blacklist" of uri schemes disallowed for web-download. # # This option provides blacklisting of uri schemes that will be rejected when # an end user imports an image using the web-download import method. Note # that if a scheme whitelist is defined using the 'allowed_schemes' option, # *this option will be ignored*. Host and port filtering, however, will be # applied. # # See the Glance Administration Guide for more information. # # Possible values: # * List containing normalized url schemes as they are returned from # urllib.parse. 
For example ['ftp','https'] # * By default the list is empty # # Related options: # * allowed_schemes # * allowed_hosts # * disallowed_hosts # * allowed_ports # * disallowed_ports # # (list value) #disallowed_schemes = # # Specify the "whitelist" of allowed target hosts for web-download. # # This option provides whitelisting of hosts that will be allowed when an end # user imports an image using the web-download import method. The whitelist # has priority such that if there is also a blacklist defined for hosts, the # blacklist will be ignored. The uri must have already passed scheme # filtering before this host filter will be applied. If the uri passes, port # filtering will then be applied. # # See the Glance Administration Guide for more information. # # Possible values: # * List containing normalized hostname or ip like it would be returned # in the urllib.parse netloc without the port # * By default the list is empty # * Hint: leave the whitelist empty if you want the disallowed_hosts # blacklist to be processed # # Related options: # * allowed_schemes # * disallowed_schemes # * disallowed_hosts # * allowed_ports # * disallowed_ports # # (list value) #allowed_hosts = # # Specify the "blacklist" of hosts disallowed for web-download. # # This option provides blacklisting of hosts that will be rejected when an end # user imports an image using the web-download import method. Note that if a # host whitelist is defined using the 'allowed_hosts' option, *this option # will be ignored*. # # The uri must have already passed scheme filtering before this host filter # will be applied. If the uri passes, port filtering will then be applied. # # See the Glance Administration Guide for more information. 
# # Possible values: # * List containing normalized hostname or ip like it would be returned # in the urllib.parse netloc without the port # * By default the list is empty # # Related options: # * allowed_schemes # * disallowed_schemes # * allowed_hosts # * allowed_ports # * disallowed_ports # # (list value) #disallowed_hosts = # # Specify the "whitelist" of allowed ports for web-download. # # This option provides whitelisting of ports that will be allowed when an end # user imports an image using the web-download import method. The whitelist # has priority such that if there is also a blacklist defined for ports, the # blacklist will be ignored. Note that scheme and host filtering have already # been applied by the time a uri hits the port filter. # # See the Glance Administration Guide for more information. # # Possible values: # * List containing ports as they are returned from urllib.parse netloc # field. Thus the value is a list of integer values, for example # [80, 443] # * Hint: leave the whitelist empty if you want the disallowed_ports # blacklist to be processed # # Related options: # * allowed_schemes # * disallowed_schemes # * allowed_hosts # * disallowed_hosts # * disallowed_ports # (list value) #allowed_ports = 80,443 # # Specify the "blacklist" of disallowed ports for web-download. # # This option provides blacklisting of target ports that will be rejected when # an end user imports an image using the web-download import method. Note # that if a port whitelist is defined using the 'allowed_ports' option, *this # option will be ignored*. Note that scheme and host filtering have already # been applied by the time a uri hits the port filter. # # See the Glance Administration Guide for more information. # # Possible values: # * List containing ports as they are returned from urllib.parse netloc # field. 
Thus the value is a list of integer values, for example # [22, 88] # * By default this list is empty # # Related options: # * allowed_schemes # * disallowed_schemes # * allowed_hosts # * disallowed_hosts # * allowed_ports # # (list value) #disallowed_ports = [inject_metadata_properties] # # From glance # # # Specify name of user roles to be ignored for injecting metadata # properties in the image. # # Possible values: # * List containing user roles. For example: [admin,member] # # (list value) #ignore_user_roles = admin # # Dictionary contains metadata properties to be injected in image. # # Possible values: # * Dictionary containing key/value pairs. Key characters # length should be <= 255. For example: k1:v1,k2:v2 # # # (dict value) #inject = glance-16.0.0/etc/glance-registry-paste.ini0000666000175100017510000000232113245511421020536 0ustar zuulzuul00000000000000# Use this pipeline for no auth - DEFAULT [pipeline:glance-registry] pipeline = healthcheck osprofiler unauthenticated-context registryapp # Use this pipeline for keystone auth [pipeline:glance-registry-keystone] pipeline = healthcheck osprofiler authtoken context registryapp # Use this pipeline for authZ only. This means that the registry will treat a # user as authenticated without making requests to keystone to reauthenticate # the user. 
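The web-download filtering options above (schemes, hosts, ports) all follow the same documented precedence: a non-empty whitelist takes priority and causes the corresponding blacklist to be ignored, while filtering proceeds scheme, then host, then port. A small sketch of those semantics — this is not Glance's actual implementation, just the behavior the option help text describes, using ``urllib.parse``:

```python
# Illustrative sketch of the documented web-download filtering rules:
# a non-empty whitelist wins; the blacklist is only consulted when no
# whitelist is configured. Not Glance code.
from urllib.parse import urlparse

def _allowed(value, whitelist, blacklist):
    if whitelist:
        return value in whitelist
    return value not in blacklist

def uri_passes(uri, allowed_schemes=(), disallowed_schemes=(),
               allowed_hosts=(), disallowed_hosts=(),
               allowed_ports=(), disallowed_ports=()):
    parsed = urlparse(uri)
    port = parsed.port  # None when the URI carries no explicit port
    return (_allowed(parsed.scheme, allowed_schemes, disallowed_schemes)
            and _allowed(parsed.hostname, allowed_hosts, disallowed_hosts)
            and (port is None
                 or _allowed(port, allowed_ports, disallowed_ports)))
```

For example, with ``allowed_schemes=("http", "https")`` an ``ftp://`` URI is rejected regardless of any ``disallowed_schemes`` setting, matching the "whitelist has priority" wording above.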
[pipeline:glance-registry-trusted-auth] pipeline = healthcheck osprofiler context registryapp [app:registryapp] paste.app_factory = glance.registry.api:API.factory [filter:healthcheck] paste.filter_factory = oslo_middleware:Healthcheck.factory backends = disable_by_file disable_by_file_path = /etc/glance/healthcheck_disable [filter:context] paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory [filter:unauthenticated-context] paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory [filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory [filter:osprofiler] paste.filter_factory = osprofiler.web:WsgiMiddleware.factory hmac_keys = SECRET_KEY #DEPRECATED enabled = yes #DEPRECATED glance-16.0.0/etc/policy.json0000666000175100017510000000255413245511421016026 0ustar zuulzuul00000000000000{ "context_is_admin": "role:admin", "default": "role:admin", "add_image": "", "delete_image": "", "get_image": "", "get_images": "", "modify_image": "", "publicize_image": "role:admin", "communitize_image": "", "copy_from": "", "download_image": "", "upload_image": "", "delete_image_location": "", "get_image_location": "", "set_image_location": "", "add_member": "", "delete_member": "", "get_member": "", "get_members": "", "modify_member": "", "manage_image_cache": "role:admin", "get_task": "", "get_tasks": "", "add_task": "", "modify_task": "", "tasks_api_access": "role:admin", "deactivate": "", "reactivate": "", "get_metadef_namespace": "", "get_metadef_namespaces":"", "modify_metadef_namespace":"", "add_metadef_namespace":"", "get_metadef_object":"", "get_metadef_objects":"", "modify_metadef_object":"", "add_metadef_object":"", "list_metadef_resource_types":"", "get_metadef_resource_type":"", "add_metadef_resource_type_association":"", "get_metadef_property":"", "get_metadef_properties":"", "modify_metadef_property":"", "add_metadef_property":"", "get_metadef_tag":"", "get_metadef_tags":"", 
"modify_metadef_tag":"", "add_metadef_tag":"", "add_metadef_tags":"" } glance-16.0.0/etc/glance-registry.conf0000666000175100017510000024310013245511426017601 0ustar zuulzuul00000000000000[DEFAULT] # # From glance.registry # # # Set the image owner to tenant or the authenticated user. # # Assign a boolean value to determine the owner of an image. When set to # True, the owner of the image is the tenant. When set to False, the # owner of the image will be the authenticated user issuing the request. # Setting it to False makes the image private to the associated user and # sharing with other users within the same tenant (or "project") # requires explicit image sharing via image membership. # # Possible values: # * True # * False # # Related options: # * None # # (boolean value) #owner_is_tenant = true # # Role used to identify an authenticated user as administrator. # # Provide a string value representing a Keystone role to identify an # administrative user. Users with this role will be granted # administrative privileges. The default value for this option is # 'admin'. # # Possible values: # * A string value which is a valid Keystone role # # Related options: # * None # # (string value) #admin_role = admin # # Allow limited access to unauthenticated users. # # Assign a boolean to determine API access for unathenticated # users. When set to False, the API cannot be accessed by # unauthenticated users. When set to True, unauthenticated users can # access the API with read-only privileges. This however only applies # when using ContextMiddleware. # # Possible values: # * True # * False # # Related options: # * None # # (boolean value) #allow_anonymous_access = false # # Limit the request ID length. # # Provide an integer value to limit the length of the request ID to # the specified length. The default value is 64. Users can change this # to any ineteger value between 0 and 16384 however keeping in mind that # a larger value may flood the logs. 
# # Possible values: # * Integer value between 0 and 16384 # # Related options: # * None # # (integer value) # Minimum value: 0 #max_request_id_length = 64 # # Allow users to add additional/custom properties to images. # # Glance defines a standard set of properties (in its schema) that # appear on every image. These properties are also known as # ``base properties``. In addition to these properties, Glance # allows users to add custom properties to images. These are known # as ``additional properties``. # # By default, this configuration option is set to ``True`` and users # are allowed to add additional properties. The number of additional # properties that can be added to an image can be controlled via the # ``image_property_quota`` configuration option. # # Possible values: # * True # * False # # Related options: # * image_property_quota # # (boolean value) #allow_additional_image_properties = true # # Maximum number of image members per image. # # This limits the maximum number of users an image can be shared with. Any negative # value is interpreted as unlimited. # # Related options: # * None # # (integer value) #image_member_quota = 128 # # Maximum number of properties allowed on an image. # # This enforces an upper limit on the number of additional properties an image # can have. Any negative value is interpreted as unlimited. # # NOTE: This won't have any impact if additional properties are disabled. Please # refer to ``allow_additional_image_properties``. # # Related options: # * ``allow_additional_image_properties`` # # (integer value) #image_property_quota = 128 # # Maximum number of tags allowed on an image. # # Any negative value is interpreted as unlimited. # # Related options: # * None # # (integer value) #image_tag_quota = 128 # # Maximum number of locations allowed on an image. # # Any negative value is interpreted as unlimited. # # Related options: # * None # # (integer value) #image_location_quota = 10 # DEPRECATED: # Python module path of the data access API.
# # Specifies the path to the API to use for accessing the data model. # This option determines how the image catalog data will be accessed. # # Possible values: # * glance.db.sqlalchemy.api # * glance.db.registry.api # * glance.db.simple.api # # If this option is set to ``glance.db.sqlalchemy.api`` then the image # catalog data is stored in and read from the database via the # SQLAlchemy Core and ORM APIs. # # Setting this option to ``glance.db.registry.api`` will force all # database access requests to be routed through the Registry service. # This avoids data access from the Glance API nodes for an added layer # of security, scalability and manageability. # # NOTE: In v2 OpenStack Images API, the registry service is optional. # In order to use the Registry API in v2, the option # ``enable_v2_registry`` must be set to ``True``. # # Finally, when this configuration option is set to # ``glance.db.simple.api``, image catalog data is stored in and read # from an in-memory data structure. This is primarily used for testing. # # Related options: # * enable_v2_api # * enable_v2_registry # # (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #data_api = glance.db.sqlalchemy.api # # The default number of results to return for a request. # # Responses to certain API requests, like list images, may return # multiple items. The number of results returned can be explicitly # controlled by specifying the ``limit`` parameter in the API request. # However, if a ``limit`` parameter is not specified, this # configuration value will be used as the default number of results to # be returned for any API request. 
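To make the relationship concrete, a deployment that wants 50 results per unpaginated list request would set the following (an illustrative value, which must not exceed ``api_limit_max``):

```ini
# Return 50 results by default when no 'limit' parameter is given
# (illustrative value; must be <= api_limit_max)
limit_param_default = 50
```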
# # NOTES: # * The value of this configuration option may not be greater than # the value specified by ``api_limit_max``. # * Setting this to a very large value may slow down database # queries and increase response times. Setting this to a # very low value may result in poor user experience. # # Possible values: # * Any positive integer # # Related options: # * api_limit_max # # (integer value) # Minimum value: 1 #limit_param_default = 25 # # Maximum number of results that could be returned by a request. # # As described in the help text of ``limit_param_default``, some # requests may return multiple results. The number of results to be # returned are governed either by the ``limit`` parameter in the # request or the ``limit_param_default`` configuration option. # The value in either case, can't be greater than the absolute maximum # defined by this configuration option. Anything greater than this # value is trimmed down to the maximum value defined here. # # NOTE: Setting this to a very large value may slow down database # queries and increase response times. Setting this to a # very low value may result in poor user experience. # # Possible values: # * Any positive integer # # Related options: # * limit_param_default # # (integer value) # Minimum value: 1 #api_limit_max = 1000 # # Show direct image location when returning an image. # # This configuration option indicates whether to show the direct image # location when returning image details to the user. The direct image # location is where the image data is stored in backend storage. This # image location is shown under the image property ``direct_url``. # # When multiple image locations exist for an image, the best location # is displayed based on the location strategy indicated by the # configuration option ``location_strategy``. # # NOTES: # * Revealing image locations can present a GRAVE SECURITY RISK as # image locations can sometimes include credentials. Hence, this # is set to ``False`` by default. 
Set this to ``True`` with # EXTREME CAUTION and ONLY IF you know what you are doing! # * If an operator wishes to avoid showing any image location(s) # to the user, then both this option and # ``show_multiple_locations`` MUST be set to ``False``. # # Possible values: # * True # * False # # Related options: # * show_multiple_locations # * location_strategy # # (boolean value) #show_image_direct_url = false # DEPRECATED: # Show all image locations when returning an image. # # This configuration option indicates whether to show all the image # locations when returning image details to the user. When multiple # image locations exist for an image, the locations are ordered based # on the location strategy indicated by the configuration opt # ``location_strategy``. The image locations are shown under the # image property ``locations``. # # NOTES: # * Revealing image locations can present a GRAVE SECURITY RISK as # image locations can sometimes include credentials. Hence, this # is set to ``False`` by default. Set this to ``True`` with # EXTREME CAUTION and ONLY IF you know what you are doing! # * If an operator wishes to avoid showing any image location(s) # to the user, then both this option and # ``show_image_direct_url`` MUST be set to ``False``. # # Possible values: # * True # * False # # Related options: # * show_image_direct_url # * location_strategy # # (boolean value) # This option is deprecated for removal since Newton. # Its value may be silently ignored in the future. # Reason: This option will be removed in the Pike release or later because the # same functionality can be achieved with greater granularity by using policies. # Please see the Newton release notes for more information. #show_multiple_locations = false # # Maximum size of image a user can upload in bytes. # # An image upload greater than the size mentioned here would result # in an image creation failure. This configuration option defaults to # 1099511627776 bytes (1 TiB). 
# # NOTES: # * This value should only be increased after careful # consideration and must be set less than or equal to # 8 EiB (9223372036854775808). # * This value must be set with careful consideration of the # backend storage capacity. Setting this to a very low value # may result in a large number of image failures. And, setting # this to a very large value may result in faster consumption # of storage. Hence, this must be set according to the nature of # images created and storage capacity available. # # Possible values: # * Any positive number less than or equal to 9223372036854775808 # # (integer value) # Minimum value: 1 # Maximum value: 9223372036854775808 #image_size_cap = 1099511627776 # # Maximum amount of image storage per tenant. # # This enforces an upper limit on the cumulative storage consumed by all images # of a tenant across all stores. This is a per-tenant limit. # # The default unit for this configuration option is Bytes. However, storage # units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``, # ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and # TeraBytes respectively. Note that there should not be any space between the # value and unit. Value ``0`` signifies no quota enforcement. Negative values # are invalid and result in errors. # # Possible values: # * A string that is a valid concatenation of a non-negative integer # representing the storage value and an optional string literal # representing storage units as mentioned above. # # Related options: # * None # # (string value) #user_storage_quota = 0 # # Deploy the v1 OpenStack Images API. # # When this option is set to ``True``, Glance service will respond to # requests on registered endpoints conforming to the v1 OpenStack # Images API. # # NOTES: # * If this option is enabled, then ``enable_v1_registry`` must # also be set to ``True`` to enable mandatory usage of Registry # service with v1 API. 
# # * If this option is disabled, then the ``enable_v1_registry`` # option, which is enabled by default, is also recommended # to be disabled. # # * This option is separate from ``enable_v2_api``, both v1 and v2 # OpenStack Images API can be deployed independent of each # other. # # * If deploying only the v2 Images API, this option, which is # enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v1_registry # * enable_v2_api # # (boolean value) #enable_v1_api = true # # Deploy the v2 OpenStack Images API. # # When this option is set to ``True``, Glance service will respond # to requests on registered endpoints conforming to the v2 OpenStack # Images API. # # NOTES: # * If this option is disabled, then the ``enable_v2_registry`` # option, which is enabled by default, is also recommended # to be disabled. # # * This option is separate from ``enable_v1_api``, both v1 and v2 # OpenStack Images API can be deployed independent of each # other. # # * If deploying only the v1 Images API, this option, which is # enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v2_registry # * enable_v1_api # # (boolean value) #enable_v2_api = true # # Deploy the v1 API Registry service. # # When this option is set to ``True``, the Registry service # will be enabled in Glance for v1 API requests. # # NOTES: # * Use of Registry is mandatory in v1 API, so this option must # be set to ``True`` if the ``enable_v1_api`` option is enabled. # # * If deploying only the v2 OpenStack Images API, this option, # which is enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v1_api # # (boolean value) #enable_v1_registry = true # DEPRECATED: # Deploy the v2 API Registry service. # # When this option is set to ``True``, the Registry service # will be enabled in Glance for v2 API requests. 
# # NOTES: # * Use of Registry is optional in v2 API, so this option # must only be enabled if both ``enable_v2_api`` is set to # ``True`` and the ``data_api`` option is set to # ``glance.db.registry.api``. # # * If deploying only the v1 OpenStack Images API, this option, # which is enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v2_api # * data_api # # (boolean value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #enable_v2_registry = true # # Host address of the pydev server. # # Provide a string value representing the hostname or IP of the # pydev server to use for debugging. The pydev server listens for # debug connections on this address, facilitating remote debugging # in Glance. # # Possible values: # * Valid hostname # * Valid IP address # # Related options: # * None # # (unknown value) #pydev_worker_debug_host = localhost # # Port number that the pydev server will listen on. # # Provide a port number to bind the pydev server to. The pydev # process accepts debug connections on this port and facilitates # remote debugging in Glance. # # Possible values: # * A valid port number # # Related options: # * None # # (port value) # Minimum value: 0 # Maximum value: 65535 #pydev_worker_debug_port = 5678 # # AES key for encrypting store location metadata. # # Provide a string value representing the AES cipher to use for # encrypting Glance store metadata. # # NOTE: The AES key to use must be set to a random string of length # 16, 24 or 32 bytes. 
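As a sketch, a suitable 32-character key can be produced with ``openssl rand -hex 16``; the key below is illustrative only and must never be reused from a published example:

```ini
# Illustrative 32-character AES key (generate your own, e.g. with
# 'openssl rand -hex 16')
metadata_encryption_key = 8c2f37eb0a3d41f5b02a5cc14d6e9f71
```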
# # Possible values: # * String value representing a valid AES key # # Related options: # * None # # (string value) #metadata_encryption_key = # # Digest algorithm to use for digital signature. # # Provide a string value representing the digest algorithm to # use for generating digital signatures. By default, ``sha256`` # is used. # # To get a list of the available algorithms supported by the version # of OpenSSL on your platform, run the command: # ``openssl list-message-digest-algorithms``. # Examples are 'sha1', 'sha256', and 'sha512'. # # NOTE: ``digest_algorithm`` is not related to Glance's image signing # and verification. It is only used to sign the universally unique # identifier (UUID) as a part of the certificate file and key file # validation. # # Possible values: # * An OpenSSL message digest algorithm identifier # # Related options: # * None # # (string value) #digest_algorithm = sha256 # # The URL that provides the location where temporary data will be stored # # This option is for Glance internal use only. Glance will save the # image data uploaded by the user to the 'staging' endpoint during the # image import process. # # This option does not change the 'staging' API endpoint by any means. # # NOTE: It is discouraged to use the same path as [task]/work_dir # # NOTE: 'file://' is the only option the # api_image_import flow will support for now. # # NOTE: The staging path must be on a shared filesystem available to all # Glance API nodes. # # Possible values: # * String starting with 'file://' followed by an absolute FS path # # Related options: # * [task]/work_dir # * [DEFAULT]/enable_image_import (*deprecated*) # # (string value) #node_staging_uri = file:///tmp/staging/ # DEPRECATED: # Enables the Image Import workflow introduced in Pike # # As '[DEFAULT]/node_staging_uri' is required for the Image # Import, it's disabled by default in Pike, enabled by # default in Queens and removed in Rocky.
This allows Glance to # operate with previous version configs upon upgrade. # # Setting this option to False will disable the endpoints related # to Image Import Refactoring work. # # Related options: # * [DEFAULT]/node_staging_uri (boolean value) # This option is deprecated for removal since Pike. # Its value may be silently ignored in the future. # Reason: # This option is deprecated for removal in Rocky. # # It was introduced to make sure that the API is not enabled # before the '[DEFAULT]/node_staging_uri' is defined and is # long term redundant. #enable_image_import = true # # List of enabled Image Import Methods # # Both 'glance-direct' and 'web-download' are enabled by default. # # Related options: # * [DEFAULT]/node_staging_uri # * [DEFAULT]/enable_image_import (list value) #enabled_import_methods = glance-direct,web-download # # IP address to bind the glance servers to. # # Provide an IP address to bind the glance server to. The default # value is ``0.0.0.0``. # # Edit this option to enable the server to listen on one particular # IP address on the network card. This facilitates selection of a # particular network interface for the server. # # Possible values: # * A valid IPv4 address # * A valid IPv6 address # # Related options: # * None # # (unknown value) #bind_host = 0.0.0.0 # # Port number on which the server will listen. # # Provide a valid port number to bind the server's socket to. This # port is then set to identify processes and forward network messages # that arrive at the server. The default bind_port value for the API # server is 9292 and for the registry server is 9191. # # Possible values: # * A valid port number (0 to 65535) # # Related options: # * None # # (port value) # Minimum value: 0 # Maximum value: 65535 #bind_port = # # Set the number of incoming connection requests. # # Provide a positive integer value to limit the number of requests in # the backlog queue. The default queue size is 4096. 
# # An incoming connection to a TCP listener socket is queued before a # connection can be established with the server. Setting the backlog # for a TCP socket ensures a limited queue size for incoming traffic. # # Possible values: # * Positive integer # # Related options: # * None # # (integer value) # Minimum value: 1 #backlog = 4096 # # Set the wait time before a connection recheck. # # Provide a positive integer value representing time in seconds which # is set as the idle wait time before a TCP keep alive packet can be # sent to the host. The default value is 600 seconds. # # Setting ``tcp_keepidle`` helps verify at regular intervals that a # connection is intact and prevents frequent TCP connection # reestablishment. # # Possible values: # * Positive integer value representing time in seconds # # Related options: # * None # # (integer value) # Minimum value: 1 #tcp_keepidle = 600 # # Absolute path to the CA file. # # Provide a string value representing a valid absolute path to # the Certificate Authority file to use for client authentication. # # A CA file typically contains necessary trusted certificates to # use for the client authentication. This is essential to ensure # that a secure connection is established to the server via the # internet. # # Possible values: # * Valid absolute path to the CA file # # Related options: # * None # # (string value) #ca_file = /etc/ssl/cafile # # Absolute path to the certificate file. # # Provide a string value representing a valid absolute path to the # certificate file which is required to start the API service # securely. # # A certificate file typically is a public key container and includes # the server's public key, server name, server information and the # signature which was a result of the verification process using the # CA certificate. This is required for a secure connection # establishment. 
# # Possible values: # * Valid absolute path to the certificate file # # Related options: # * None # # (string value) #cert_file = /etc/ssl/certs # # Absolute path to a private key file. # # Provide a string value representing a valid absolute path to a # private key file which is required to establish the client-server # connection. # # Possible values: # * Absolute path to the private key file # # Related options: # * None # # (string value) #key_file = /etc/ssl/key/key-file.pem # DEPRECATED: The HTTP header used to determine the scheme for the original # request, even if it was removed by an SSL terminating proxy. Typical value is # "HTTP_X_FORWARDED_PROTO". (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Use the http_proxy_to_wsgi middleware instead. #secure_proxy_ssl_header = # # Number of Glance worker processes to start. # # Provide a non-negative integer value to set the number of child # process workers to service requests. By default, the number of CPUs # available is set as the value for ``workers``, limited to 8. For # example, if the processor count is 6, 6 workers will be used; if the # processor count is 24, only 8 workers will be used. The limit only # applies to the default value; if 24 workers are configured, 24 are used. # # Each worker process is made to listen on the port set in the # configuration file and contains a greenthread pool of size 1000. # # NOTE: Setting the number of workers to zero triggers the creation # of a single API process with a greenthread pool of size 1000. # # Possible values: # * 0 # * Positive integer value (typically equal to the number of CPUs) # # Related options: # * None # # (integer value) # Minimum value: 0 #workers = # # Maximum line size of message headers. # # Provide an integer value representing a length to limit the size of # message headers. The default value is 16384.
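For instance, a deployment hitting header-size errors with large Keystone v3 tokens might double the limit (an illustrative value):

```ini
# Allow larger headers, e.g. for big Keystone v3 service catalogs
# (illustrative value)
max_header_line = 32768
```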
# # NOTE: ``max_header_line`` may need to be increased when using large # tokens (typically those generated by the Keystone v3 API with big # service catalogs). However, it is to be kept in mind that larger # values for ``max_header_line`` would flood the logs. # # Setting ``max_header_line`` to 0 sets no limit for the line size of # message headers. # # Possible values: # * 0 # * Positive integer # # Related options: # * None # # (integer value) # Minimum value: 0 #max_header_line = 16384 # # Set keep alive option for HTTP over TCP. # # Provide a boolean value to determine sending of keep alive packets. # If set to ``False``, the server returns the header # "Connection: close". If set to ``True``, the server returns a # "Connection: Keep-Alive" in its responses. This enables retention of # the same TCP connection for HTTP conversations instead of opening a # new one with each new request. # # This option must be set to ``False`` if the client socket connection # needs to be closed explicitly after the response is received and # read successfully by the client. # # Possible values: # * True # * False # # Related options: # * None # # (boolean value) #http_keepalive = true # # Timeout for client connections' socket operations. # # Provide a valid integer value representing time in seconds to set # the period of wait before an incoming connection can be closed. The # default value is 900 seconds. # # The value zero implies wait forever. # # Possible values: # * Zero # * Positive integer # # Related options: # * None # # (integer value) # Minimum value: 0 #client_socket_timeout = 900 # # From oslo.log # # If set to true, the logging level will be set to DEBUG instead of the default # INFO level. (boolean value) # Note: This option can be changed without restarting. #debug = false # The name of a logging configuration file. This file is appended to any # existing logging configuration files. 
For details about logging configuration # files, see the Python logging module documentation. Note that when logging # configuration files are used, all logging configuration is set in the # configuration file and other logging configuration options are ignored (for # example, logging_context_format_string). (string value) # Note: This option can be changed without restarting. # Deprecated group/name - [DEFAULT]/log_config #log_config_append = # Defines the format string for %%(asctime)s in log records. Default: # %(default)s . This option is ignored if log_config_append is set. (string # value) #log_date_format = %Y-%m-%d %H:%M:%S # (Optional) Name of log file to send logging output to. If no default is set, # logging will go to stderr as defined by use_stderr. This option is ignored if # log_config_append is set. (string value) # Deprecated group/name - [DEFAULT]/logfile #log_file = # (Optional) The base directory used for relative log_file paths. This option # is ignored if log_config_append is set. (string value) # Deprecated group/name - [DEFAULT]/logdir #log_dir = # Uses a logging handler designed to watch the file system. When the log file is moved # or removed, this handler will open a new log file with the specified path # instantaneously. It makes sense only if the log_file option is specified and a Linux # platform is used. This option is ignored if log_config_append is set. (boolean # value) #watch_log_file = false # Use syslog for logging. Existing syslog format is DEPRECATED and will be # changed later to honor RFC5424. This option is ignored if log_config_append is # set. (boolean value) #use_syslog = false # Enable journald for logging. If running in a systemd environment you may wish # to enable journal support. Doing so will use the journal native protocol which # includes structured metadata in addition to log messages. This option is # ignored if log_config_append is set. (boolean value) #use_journal = false # Syslog facility to receive log lines.
This option is ignored if # log_config_append is set. (string value) #syslog_log_facility = LOG_USER # Use JSON formatting for logging. This option is ignored if log_config_append # is set. (boolean value) #use_json = false # Log output to standard error. This option is ignored if log_config_append is # set. (boolean value) #use_stderr = false # Format string to use for log messages with context. (string value) #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s # Format string to use for log messages when context is undefined. (string # value) #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s # Additional data to append to log message when logging level for the message is # DEBUG. (string value) #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d # Prefix each line of exception output with this format. (string value) #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s # Defines the format string for %(user_identity)s that is used in # logging_context_format_string. (string value) #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s # List of package logging levels in logger=LEVEL pairs. This option is ignored # if log_config_append is set. (list value) #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO # Enables or disables publication of error events. 
(boolean value) #publish_errors = false # The format for an instance that is passed with the log message. (string value) #instance_format = "[instance: %(uuid)s] " # The format for an instance UUID that is passed with the log message. (string # value) #instance_uuid_format = "[instance: %(uuid)s] " # Interval, number of seconds, of log rate limiting. (integer value) #rate_limit_interval = 0 # Maximum number of logged messages per rate_limit_interval. (integer value) #rate_limit_burst = 0 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or # empty string. Logs with level greater or equal to rate_limit_except_level are # not filtered. An empty string means that all levels are filtered. (string # value) #rate_limit_except_level = CRITICAL # Enables or disables fatal status of deprecations. (boolean value) #fatal_deprecations = false # # From oslo.messaging # # Size of RPC connection pool. (integer value) #rpc_conn_pool_size = 30 # The pool size limit for connections expiration policy (integer value) #conn_pool_min_size = 2 # The time-to-live in sec of idle connections in the pool (integer value) #conn_pool_ttl = 1200 # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. # The "host" option should point or resolve to this address. (string value) #rpc_zmq_bind_address = * # MatchMaker driver. (string value) # Possible values: # redis - # sentinel - # dummy - #rpc_zmq_matchmaker = redis # Number of ZeroMQ contexts, defaults to 1. (integer value) #rpc_zmq_contexts = 1 # Maximum number of ingress messages to locally buffer per topic. Default is # unlimited. (integer value) #rpc_zmq_topic_backlog = # Directory for holding IPC sockets. (string value) #rpc_zmq_ipc_dir = /var/run/openstack # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match # "host" option, if running Nova. 
(string value) #rpc_zmq_host = localhost # Number of seconds to wait before all pending messages will be sent after # closing a socket. The default value of -1 specifies an infinite linger period. # The value of 0 specifies no linger period. Pending messages shall be discarded # immediately when the socket is closed. Positive values specify an upper bound # for the linger period. (integer value) # Deprecated group/name - [DEFAULT]/rpc_cast_timeout #zmq_linger = -1 # The default number of seconds that poll should wait. Poll raises timeout # exception when timeout expired. (integer value) #rpc_poll_timeout = 1 # Expiration timeout in seconds of a name service record about existing target ( # < 0 means no timeout). (integer value) #zmq_target_expire = 300 # Update period in seconds of a name service record about existing target. # (integer value) #zmq_target_update = 180 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean # value) #use_pub_sub = false # Use ROUTER remote proxy. (boolean value) #use_router_proxy = false # This option makes direct connections dynamic or static. It makes sense only # with use_router_proxy=False which means to use direct connections for direct # message types (ignored otherwise). (boolean value) #use_dynamic_connections = false # How many additional connections to a host will be made for failover reasons. # This option is actual only in dynamic connections mode. (integer value) #zmq_failover_connections = 2 # Minimal port number for random ports range. (port value) # Minimum value: 0 # Maximum value: 65535 #rpc_zmq_min_port = 49153 # Maximal port number for random ports range. (integer value) # Minimum value: 1 # Maximum value: 65536 #rpc_zmq_max_port = 65536 # Number of retries to find free port number before fail with ZMQBindError. 
# (integer value)
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Possible values:
# json -
# msgpack -
#rpc_zmq_serialization = json

# This option configures round-robin mode in zmq socket. True means not keeping
# a queue when server side disconnects. False means to keep queue and messages
# even if the server is disconnected; when the server reappears we send all
# accumulated messages to it. (boolean value)
#zmq_immediate = true

# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
# other negative value) means to skip any overrides and leave it to OS default;
# 0 and 1 (or any other positive value) mean to disable and enable the option
# respectively. (integer value)
#zmq_tcp_keepalive = -1

# The duration between two keepalive transmissions in idle condition. The unit
# is platform dependent, for example, seconds in Linux, milliseconds in Windows
# etc. The default value of -1 (or any other negative value and 0) means to skip
# any overrides and leave it to OS default. (integer value)
#zmq_tcp_keepalive_idle = -1

# The number of retransmissions to be carried out before declaring that remote
# end is not available. The default value of -1 (or any other negative value and
# 0) means to skip any overrides and leave it to OS default. (integer value)
#zmq_tcp_keepalive_cnt = -1

# The duration between two successive keepalive retransmissions, if
# acknowledgement to the previous keepalive transmission is not received. The
# unit is platform dependent, for example, seconds in Linux, milliseconds in
# Windows etc. The default value of -1 (or any other negative value and 0) means
# to skip any overrides and leave it to OS default. (integer value)
#zmq_tcp_keepalive_intvl = -1

# Maximum number of (green) threads to work concurrently.
# (integer value)
#rpc_thread_pool_size = 100

# Expiration timeout in seconds of a sent/received message after which it is not
# tracked anymore by a client/server. (integer value)
#rpc_message_ttl = 300

# Wait for message acknowledgements from receivers. This mechanism works only
# via proxy without PUB/SUB. (boolean value)
#rpc_use_acks = false

# Number of seconds to wait for an ack from a cast/call. After each retry
# attempt this timeout is multiplied by the specified multiplier. (integer
# value)
#rpc_ack_timeout_base = 15

# Number to multiply base ack timeout by after each retry attempt. (integer
# value)
#rpc_ack_timeout_multiplier = 2

# Default number of message sending attempts in case any problems occur:
# positive value N means at most N retries, 0 means no retries, None or -1 (or
# any other negative values) mean to retry forever. This option is used only if
# acknowledgments are enabled. (integer value)
#rpc_retry_attempts = 3

# List of publisher hosts SubConsumer can subscribe on. This option has higher
# priority than the default publishers list taken from the matchmaker. (list
# value)
#subscribe_on =

# Size of executor thread pool when executor is threading or eventlet. (integer
# value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# The network address and optional user credentials for connecting to the
# messaging backend, in URL format. The expected format is:
#
# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
#
# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
#
# For full details on the fields in the URL see the documentation of
# oslo_messaging.TransportURL at
# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
# (string value)
#transport_url =

# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack


[database]

#
# From oslo.db
#

# If True, SQLite uses synchronous mode. (boolean value)
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection =

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection =

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# If True, transparently enables support for handling MySQL Cluster (NDB).
# (boolean value)
#mysql_enable_ndb = false

# Connections which have been present in the connection pool longer than this
# number of seconds will be replaced with a new one the next time they are
# checked out from the pool. (integer value)
# Deprecated group/name - [DATABASE]/idle_timeout
# Deprecated group/name - [database]/idle_timeout
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#connection_recycle_time = 3600

# Minimum number of SQL connections to keep open in a pool.
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout =

# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval.
# (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20

#
# From oslo.db.concurrency
#

# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false


[keystone_authtoken]

#
# From keystonemiddleware.auth_token
#

# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users. Unauthenticated
# clients are redirected to this endpoint to authenticate. Although this
# endpoint should ideally be unversioned, client support in the wild varies. If
# you're using a versioned v2 endpoint here, then this should *not* be the same
# endpoint the service user utilizes for validating tokens, because normal end
# users may not be able to reach that endpoint. (string value)
# Deprecated group/name - [keystone_authtoken]/auth_uri
#www_authenticate_uri =

# DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
# be an "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies. If you're using a versioned v2 endpoint here, then this should
# *not* be the same endpoint the service user utilizes for validating tokens,
# because normal end users may not be able to reach that endpoint. This option
# is deprecated in favor of www_authenticate_uri and will be removed in the S
# release. (string value)
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and
# will be removed in the S release.
#auth_uri =

# API version of the admin Identity API endpoint. (string value)
#auth_version =

# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false

# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout =

# How many times to try to reconnect when communicating with the Identity API
# Server. (integer value)
#http_request_max_retries = 3

# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache =

# Required if identity server requires client certificate (string value)
#certfile =

# Required if identity server requires client certificate (string value)
#keyfile =

# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile =

# Verify HTTPS connections. (boolean value)
#insecure = false

# The region in which the identity server can be found. (string value)
#region_name =

# DEPRECATED: Directory used to cache files related to PKI tokens. This option
# has been deprecated in the Ocata release and will be removed in the P release.
# (string value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#signing_dir =

# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process.
# (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers =

# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set to
# -1 to disable caching completely. (integer value)
#token_cache_time = 300

# DEPRECATED: Determines the frequency at which the list of revoked tokens is
# retrieved from the Identity service (in seconds). A high number of revocation
# events combined with a low cache duration may significantly reduce
# performance. Only valid for PKI tokens. This option has been deprecated in the
# Ocata release and will be removed in the P release. (integer value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#revocation_cache_time = 10

# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Possible values:
# None -
# MAC -
# ENCRYPT -
#memcache_security_strategy = None

# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key =

# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300

# (Optional) Maximum total number of open connections to every memcached server.
# (integer value)
#memcache_pool_maxsize = 10

# (Optional) Socket timeout in seconds for communicating with a memcached
# server.
# (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding. "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it if
# not. "strict" like "permissive" but if the bind type is unknown the token will
# be rejected. "required" any form of token binding is needed to be allowed.
# Finally the name of a binding method that must be present in tokens. (string
# value)
#enforce_token_bind = permissive

# DEPRECATED: If true, the revocation list will be checked for cached tokens.
# This requires that PKI tokens are configured on the identity server. (boolean
# value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#check_revocations_for_cached = false

# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
# single algorithm or multiple. The algorithms are those supported by Python
# standard hashlib.new(). The hashes will be tried in the order given, so put
# the preferred one first for performance. The result of the first hash will be
# stored in the cache.
# This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#hash_algorithms = md5

# A choice of roles that must be present in a service token. Service tokens are
# allowed to request that an expired token can be used and so this check should
# tightly control that only actual services should be sending this token. Roles
# here are applied as an ANY check so any role in this list must be present. For
# backwards compatibility reasons this currently only affects the allow_expired
# check. (list value)
#service_token_roles = service

# For backwards compatibility reasons we must let valid service tokens pass that
# don't pass the service_token_roles check as valid. Setting this true will
# become the default in a future release and should be enabled if possible.
# (boolean value)
#service_token_roles_required = false

# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type =

# Config Section from which to load plugin specific options (string value)
#auth_section =


[matchmaker_redis]

#
# From oslo.messaging
#

# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1

# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379

# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =

# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =

# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq

# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000

# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000

# Timeout in ms on blocking socket operations. (integer value)
#socket_timeout = 10000


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a generated
# UUID (string value)
#container_name =

# Timeout for inactive connections (in seconds) (integer value)
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
#trace = false

# Attempt to connect via SSL. If no other ssl-related parameters are given, it
# will use the system's CA-bundle to verify the server's certificate. (boolean
# value)
#ssl = false

# CA certificate PEM file used to verify the server's certificate (string value)
#ssl_ca_file =

# Self-identifying certificate PEM file for client authentication (string value)
#ssl_cert_file =

# Private key PEM file used to sign ssl_cert_file certificate (optional) (string
# value)
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
#ssl_key_password =

# By default SSL checks that the name in the server's certificate matches the
# hostname in the transport_url. In some configurations it may be preferable to
# use the virtual hostname instead, for example if the server uses the Server
# Name Indication TLS extension (rfc6066) to provide a certificate per virtual
# host.
# Set ssl_verify_vhost to True if the server's SSL certificate uses the
# virtual host name instead of the DNS name. (boolean value)
#ssl_verify_vhost = false

# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Not applicable - not a SSL server
#allow_insecure_clients = false

# Space separated list of acceptable SASL mechanisms (string value)
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string value)
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
#sasl_config_name =

# SASL realm to use if no realm present in username (string value)
#sasl_default_realm =

# DEPRECATED: User name for message broker authentication (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use configuration option transport_url to provide the username.
#username =

# DEPRECATED: Password for message broker authentication (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use configuration option transport_url to provide the password.
#password =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The maximum number of attempts to re-send a reply message which failed due to
# a recoverable error.
# (integer value)
# Minimum value: -1
#default_reply_retry = 0

# The deadline for an rpc reply message delivery. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# The duration to schedule a purge of idle sender links. Detach link after
# expiry. (integer value)
# Minimum value: 1
#default_sender_link_timeout = 600

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not support routing
#              otherwise use routable addressing (string value)
#addressing_mode = dynamic

# Enable virtual host support for those message buses that do not natively
# support virtual hosting (such as qpidd). When set to true the virtual host
# name will be added to all message bus addresses, effectively creating a
# private 'subnet' per virtual host. Set to False if the message bus supports
# virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
# as the name of the virtual host.
# (boolean value)
#pseudo_vhost = true

# address prefix used when sending to a specific server (string value)
#server_request_prefix = exclusive

# address prefix used when broadcasting to all servers (string value)
#broadcast_prefix = broadcast

# address prefix when sending to any server in group (string value)
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-robin
# fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange =

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange =

# Window size for incoming RPC Reply messages.
# (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100

# Send messages of this type pre-settled.
# Pre-settled messages will not receive acknowledgement
# from the peer. Note well: pre-settled messages may be
# silently discarded if the delivery fails.
# Permitted values:
# 'rpc-call' - send RPC Calls pre-settled
# 'rpc-reply' - send RPC Replies pre-settled
# 'rpc-cast' - Send RPC Casts pre-settled
# 'notify' - Send Notifications pre-settled
# (multi valued)
#pre_settled = rpc-cast
#pre_settled = rpc-reply


[oslo_messaging_kafka]

#
# From oslo.messaging
#

# DEPRECATED: Default Kafka broker Host (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#kafka_default_host = localhost

# DEPRECATED: Default Kafka broker Port (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#kafka_default_port = 9092

# Max fetch bytes of Kafka consumer (integer value)
#kafka_max_fetch_bytes = 1048576

# Default timeout(s) for Kafka consumers (floating point value)
#kafka_consumer_timeout = 1.0

# DEPRECATED: Pool Size for Kafka Consumers (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Driver no longer uses connection pool.
#pool_size = 10

# DEPRECATED: The pool size limit for connections expiration policy (integer
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Driver no longer uses connection pool.
#conn_pool_min_size = 2

# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Driver no longer uses connection pool.
#conn_pool_ttl = 1200

# Group id for Kafka consumer. Consumers in one group will coordinate message
# consumption (string value)
#consumer_group = oslo_messaging_consumer

# Upper bound on the delay for KafkaProducer batching in seconds (floating point
# value)
#producer_batch_timeout = 0.0

# Size of batch for the producer async send (integer value)
#producer_batch_size = 16384


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are messaging,
# messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url =

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications

# The maximum number of attempts to re-send a notification message which failed
# to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
# (integer value)
#retry = -1


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
#amqp_auto_delete = false

# Enable SSL (boolean value)
#ssl =

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23.
# SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
#ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
#ssl_key_file =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
#ssl_cert_file =

# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
#ssl_ca_file =

# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
# be used. This option may not be available in future versions. (string value)
#kombu_compression =

# How long to wait for a missing client before abandoning the attempt to send it
# its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than one
# RabbitMQ node is provided in config. (string value)
# Possible values:
# round-robin -
# shuffle -
#kombu_failover_strategy = round-robin

# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost

# DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672

# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port

# DEPRECATED: The RabbitMQ userid. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest

# DEPRECATED: The RabbitMQ password. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest

# The RabbitMQ login method. (string value)
# Possible values:
# PLAIN -
# AMQPLAIN -
# RABBIT-CR-DEMO -
#rabbit_login_method = AMQPLAIN

# DEPRECATED: The RabbitMQ virtual host. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
If # you just want to make sure that all queues (except those with auto-generated # names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA # '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value) #rabbit_ha_queues = false # Positive integer representing duration in seconds for queue TTL (x-expires). # Queues which are unused for the duration of the TTL are automatically deleted. # The parameter affects only reply and fanout queues. (integer value) # Minimum value: 1 #rabbit_transient_queues_ttl = 1800 # Specifies the number of messages to prefetch. Setting to zero allows unlimited # messages. (integer value) #rabbit_qos_prefetch_count = 0 # Number of seconds after which the Rabbit broker is considered down if # heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer # value) #heartbeat_timeout_threshold = 60 # How often times during the heartbeat_timeout_threshold we check the heartbeat. # (integer value) #heartbeat_rate = 2 # Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value) #fake_rabbit = false # Maximum number of channels to allow (integer value) #channel_max = # The maximum byte size for an AMQP frame (integer value) #frame_max = # How often to send heartbeats for consumer's connections (integer value) #heartbeat_interval = 3 # Arguments passed to ssl.wrap_socket (dict value) #ssl_options = # Set socket timeout in seconds for connection's socket (floating point value) #socket_timeout = 0.25 # Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point value) #tcp_user_timeout = 0.25 # Set delay for reconnection to some host which has connection error (floating # point value) #host_connection_reconnect_delay = 0.25 # Connection factory implementation (string value) # Possible values: # new - # single - # read_write - #connection_factory = single # Maximum number of connections to keep queued. 
(integer value)
#pool_max_size = 30

# Maximum number of connections to create above `pool_max_size`. (integer value)
#pool_max_overflow = 0

# Default number of seconds to wait for a connection to become available
# (integer value)
#pool_timeout = 30

# Lifetime of a connection (since creation) in seconds or None for no recycling.
# Expired connections are closed on acquire. (integer value)
#pool_recycle = 600

# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Possible values:
# json -
# msgpack -
#default_serializer_type = json

# Persist notification messages. (boolean value)
#notification_persistence = false

# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

# Max number of unacknowledged messages which RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25

# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60

# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc

# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply

# Max number of unacknowledged messages which RabbitMQ can send to the rpc
# listener.
(integer value)
#rpc_listener_prefetch_count = 100

# Max number of unacknowledged messages which RabbitMQ can send to the rpc reply
# listener. (integer value)
#rpc_reply_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending reply.
# -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending reply.
# (floating point value)
#rpc_reply_retry_delay = 0.25

# Reconnecting retry count in case of connectivity problem during sending RPC
# message, -1 means infinite retry. If the actual number of retry attempts is
# not 0, the rpc request could be processed more than once (integer value)
#default_rpc_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25


[oslo_messaging_zmq]

#
# From oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Possible values:
# redis -
# sentinel -
# dummy -
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
#rpc_zmq_topic_backlog =

# Directory for holding IPC sockets. (string value)
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
#rpc_zmq_host = localhost

# Number of seconds to wait before all pending messages will be sent after
# closing a socket. The default value of -1 specifies an infinite linger period.
# The value of 0 specifies no linger period. Pending messages shall be discarded
# immediately when the socket is closed.
Positive values specify an upper bound # for the linger period. (integer value) # Deprecated group/name - [DEFAULT]/rpc_cast_timeout #zmq_linger = -1 # The default number of seconds that poll should wait. Poll raises timeout # exception when timeout expired. (integer value) #rpc_poll_timeout = 1 # Expiration timeout in seconds of a name service record about existing target ( # < 0 means no timeout). (integer value) #zmq_target_expire = 300 # Update period in seconds of a name service record about existing target. # (integer value) #zmq_target_update = 180 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean # value) #use_pub_sub = false # Use ROUTER remote proxy. (boolean value) #use_router_proxy = false # This option makes direct connections dynamic or static. It makes sense only # with use_router_proxy=False which means to use direct connections for direct # message types (ignored otherwise). (boolean value) #use_dynamic_connections = false # How many additional connections to a host will be made for failover reasons. # This option is actual only in dynamic connections mode. (integer value) #zmq_failover_connections = 2 # Minimal port number for random ports range. (port value) # Minimum value: 0 # Maximum value: 65535 #rpc_zmq_min_port = 49153 # Maximal port number for random ports range. (integer value) # Minimum value: 1 # Maximum value: 65536 #rpc_zmq_max_port = 65536 # Number of retries to find free port number before fail with ZMQBindError. # (integer value) #rpc_zmq_bind_port_retries = 100 # Default serialization mechanism for serializing/deserializing # outgoing/incoming messages (string value) # Possible values: # json - # msgpack - #rpc_zmq_serialization = json # This option configures round-robin mode in zmq socket. True means not keeping # a queue when server side disconnects. False means to keep queue and messages # even if server is disconnected, when the server appears we send all # accumulated messages to it. 
(boolean value) #zmq_immediate = true # Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any # other negative value) means to skip any overrides and leave it to OS default; # 0 and 1 (or any other positive value) mean to disable and enable the option # respectively. (integer value) #zmq_tcp_keepalive = -1 # The duration between two keepalive transmissions in idle condition. The unit # is platform dependent, for example, seconds in Linux, milliseconds in Windows # etc. The default value of -1 (or any other negative value and 0) means to skip # any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_idle = -1 # The number of retransmissions to be carried out before declaring that remote # end is not available. The default value of -1 (or any other negative value and # 0) means to skip any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_cnt = -1 # The duration between two successive keepalive retransmissions, if # acknowledgement to the previous keepalive transmission is not received. The # unit is platform dependent, for example, seconds in Linux, milliseconds in # Windows etc. The default value of -1 (or any other negative value and 0) means # to skip any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_intvl = -1 # Maximum number of (green) threads to work concurrently. (integer value) #rpc_thread_pool_size = 100 # Expiration timeout in seconds of a sent/received message after which it is not # tracked anymore by a client/server. (integer value) #rpc_message_ttl = 300 # Wait for message acknowledgements from receivers. This mechanism works only # via proxy without PUB/SUB. (boolean value) #rpc_use_acks = false # Number of seconds to wait for an ack from a cast/call. After each retry # attempt this timeout is multiplied by some specified multiplier. (integer # value) #rpc_ack_timeout_base = 15 # Number to multiply base ack timeout by after each retry attempt. 
(integer
# value)
#rpc_ack_timeout_multiplier = 2

# Default number of message sending attempts in case any problems occur:
# positive value N means at most N retries, 0 means no retries, None or -1 (or
# any other negative values) mean to retry forever. This option is used only if
# acknowledgments are enabled. (integer value)
#rpc_retry_attempts = 3

# List of publisher hosts SubConsumer can subscribe on. This option has higher
# priority than the default publishers list taken from the matchmaker. (list
# value)
#subscribe_on =


[oslo_policy]

#
# From oslo.policy
#

# This option controls whether or not to enforce scope when evaluating policies.
# If ``True``, the scope of the token used in the request is compared to the
# ``scope_types`` of the policy being enforced. If the scopes do not match, an
# ``InvalidScope`` exception will be raised. If ``False``, a message will be
# logged informing operators that policies are being invoked with mismatching
# scope. (boolean value)
#enforce_scope = false

# The file that defines policies. (string value)
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored.
(multi
# valued)
#policy_dirs = policy.d

# Content Type to send and receive data for REST based policy check (string
# value)
# Possible values:
# application/x-www-form-urlencoded -
# application/json -
#remote_content_type = application/x-www-form-urlencoded

# Server identity verification for REST based policy check (boolean value)
#remote_ssl_verify_server_crt = false

# Absolute path to ca cert file for REST based policy check (string value)
#remote_ssl_ca_crt_file =

# Absolute path to client cert for REST based policy check (string value)
#remote_ssl_client_crt_file =

# Absolute path to client key file for REST based policy check (string value)
#remote_ssl_client_key_file =


[paste_deploy]

#
# From glance.registry
#

#
# Deployment flavor to use in the server application pipeline.
#
# Provide a string value representing the appropriate deployment
# flavor used in the server application pipeline. This is typically
# the partial name of a pipeline in the paste configuration file with
# the service name removed.
#
# For example, if your paste section name in the paste configuration
# file is [pipeline:glance-api-keystone], set ``flavor`` to
# ``keystone``.
#
# Possible values:
# * String value representing a partial pipeline name.
#
# Related Options:
# * config_file
#
# (string value)
#flavor = keystone

#
# Name of the paste configuration file.
#
# Provide a string value representing the name of the paste
# configuration file to use for configuring pipelines for
# server application deployments.
#
# NOTES:
# * Provide the name or the path relative to the glance directory
# for the paste configuration file and not the absolute path.
# * The sample paste configuration file shipped with Glance need
# not be edited in most cases as it comes with ready-made
# pipelines for all common deployment flavors.
#
# If no value is specified for this option, the ``paste.ini`` file
# with the prefix of the corresponding Glance service's configuration
# file name will be searched for in the known configuration
# directories. (For example, if this option is missing from or has no
# value set in ``glance-api.conf``, the service will look for a file
# named ``glance-api-paste.ini``.) If the paste configuration file is
# not found, the service will not start.
#
# Possible values:
# * A string value representing the name of the paste configuration
# file.
#
# Related Options:
# * flavor
#
# (string value)
#config_file = glance-api-paste.ini


[profiler]

#
# From glance.registry
#

#
# Enables the profiling for all services on this node. Default value is False
# (fully disable the profiling feature).
#
# Possible values:
#
# * True: Enables the feature
# * False: Disables the feature. The profiling cannot be started via this
# project operations. If the profiling is triggered by another project, this
# project part will be empty.
# (boolean value)
# Deprecated group/name - [profiler]/profiler_enabled
#enabled = false

#
# Enables SQL requests profiling in services. Default value is False (SQL
# requests won't be traced).
#
# Possible values:
#
# * True: Enables SQL requests profiling. Each SQL query will be part of the
# trace and can then be analyzed by how much time was spent for that.
# * False: Disables SQL requests profiling. The spent time is only shown on a
# higher level of operations. Single SQL queries cannot be analyzed this
# way.
# (boolean value)
#trace_sqlalchemy = false

#
# Secret key(s) to use for encrypting context data for performance profiling.
# This string value should have the following format: <key1>[,<key2>,...<keyn>],
# where each key is some random string. A user who triggers the profiling via
# the REST API has to set one of these keys in the headers of the REST API call
# to include profiling results of this node for this particular project.
# # Both "enabled" flag and "hmac_keys" config options should be set to enable # profiling. Also, to generate correct profiling information across all services # at least one key needs to be consistent between OpenStack projects. This # ensures it can be used from client side to generate the trace, containing # information from all possible resources. (string value) #hmac_keys = SECRET_KEY # # Connection string for a notifier backend. Default value is messaging:// which # sets the notifier to oslo_messaging. # # Examples of possible values: # # * messaging://: use oslo_messaging driver for sending notifications. # * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications. # * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending # notifications. # (string value) #connection_string = messaging:// # # Document type for notification indexing in elasticsearch. # (string value) #es_doc_type = notification # # This parameter is a time value parameter (for example: es_scroll_time=2m), # indicating for how long the nodes that participate in the search will maintain # relevant resources in order to continue and support it. # (string value) #es_scroll_time = 2m # # Elasticsearch splits large requests in batches. This parameter defines # maximum size of each batch (for example: es_scroll_size=10000). # (integer value) #es_scroll_size = 10000 # # Redissentinel provides a timeout option on the connections. # This parameter defines that timeout (for example: socket_timeout=0.1). # (floating point value) #socket_timeout = 0.1 # # Redissentinel uses a service name to identify a master redis service. # This parameter defines the name (for example: # sentinal_service_name=mymaster). 
# (string value)
#sentinel_service_name = mymaster

glance-16.0.0/etc/glance-api-paste.ini

# Use this pipeline for no auth or image caching - DEFAULT
[pipeline:glance-api]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context rootapp

# Use this pipeline for image caching and no auth
[pipeline:glance-api-caching]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context cache rootapp

# Use this pipeline for caching w/ management interface but no auth
[pipeline:glance-api-cachemanagement]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context cache cachemanage rootapp

# Use this pipeline for keystone auth
[pipeline:glance-api-keystone]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context rootapp

# Use this pipeline for keystone auth with image caching
[pipeline:glance-api-keystone+caching]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context cache rootapp

# Use this pipeline for keystone auth with caching and cache management
[pipeline:glance-api-keystone+cachemanagement]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context cache cachemanage rootapp

# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-api-trusted-auth]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler context rootapp

# Use this pipeline for authZ only.
This means that the registry will treat a # user as authenticated without making requests to keystone to reauthenticate # the user and uses cache management [pipeline:glance-api-trusted-auth+cachemanagement] pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler context cache cachemanage rootapp [composite:rootapp] paste.composite_factory = glance.api:root_app_factory /: apiversions /v1: apiv1app /v2: apiv2app [app:apiversions] paste.app_factory = glance.api.versions:create_resource [app:apiv1app] paste.app_factory = glance.api.v1.router:API.factory [app:apiv2app] paste.app_factory = glance.api.v2.router:API.factory [filter:healthcheck] paste.filter_factory = oslo_middleware:Healthcheck.factory backends = disable_by_file disable_by_file_path = /etc/glance/healthcheck_disable [filter:versionnegotiation] paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory [filter:cache] paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory [filter:cachemanage] paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory [filter:context] paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory [filter:unauthenticated-context] paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory [filter:authtoken] paste.filter_factory = keystonemiddleware.auth_token:filter_factory delay_auth_decision = true [filter:gzip] paste.filter_factory = glance.api.middleware.gzip:GzipMiddleware.factory [filter:osprofiler] paste.filter_factory = osprofiler.web:WsgiMiddleware.factory hmac_keys = SECRET_KEY #DEPRECATED enabled = yes #DEPRECATED [filter:cors] paste.filter_factory = oslo_middleware.cors:filter_factory oslo_config_project = glance oslo_config_program = glance-api [filter:http_proxy_to_wsgi] paste.filter_factory = oslo_middleware:HTTPProxyToWSGI.factory 
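
The pipelines above are selected through the ``flavor`` option in the ``[paste_deploy]`` section of the service configuration file, which the sample configuration describes as the pipeline section name with the service name prefix removed. As a minimal sketch (the choice of ``keystone+cachemanagement`` here is only an example flavor, not a recommendation):

```ini
# glance-api.conf (example): selects [pipeline:glance-api-keystone+cachemanagement]
# from glance-api-paste.ini above
[paste_deploy]
flavor = keystone+cachemanagement
config_file = glance-api-paste.ini
```

Because the service name (``glance-api``) is stripped, the flavor is simply the suffix of the desired pipeline section name.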
glance-16.0.0/etc/glance-cache.conf

[DEFAULT]

#
# From glance.cache
#

#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via
# ``image_property_quota`` configuration option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * image_property_quota
#
# (boolean value)
#allow_additional_image_properties = true

#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with. Any
# negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_member_quota = 128

#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
# * ``allow_additional_image_properties``
#
# (integer value)
#image_property_quota = 128

#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_tag_quota = 128

#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_location_quota = 10

# DEPRECATED:
# Python module path of data access API.
# # Specifies the path to the API to use for accessing the data model. # This option determines how the image catalog data will be accessed. # # Possible values: # * glance.db.sqlalchemy.api # * glance.db.registry.api # * glance.db.simple.api # # If this option is set to ``glance.db.sqlalchemy.api`` then the image # catalog data is stored in and read from the database via the # SQLAlchemy Core and ORM APIs. # # Setting this option to ``glance.db.registry.api`` will force all # database access requests to be routed through the Registry service. # This avoids data access from the Glance API nodes for an added layer # of security, scalability and manageability. # # NOTE: In v2 OpenStack Images API, the registry service is optional. # In order to use the Registry API in v2, the option # ``enable_v2_registry`` must be set to ``True``. # # Finally, when this configuration option is set to # ``glance.db.simple.api``, image catalog data is stored in and read # from an in-memory data structure. This is primarily used for testing. # # Related options: # * enable_v2_api # * enable_v2_registry # # (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #data_api = glance.db.sqlalchemy.api # # The default number of results to return for a request. # # Responses to certain API requests, like list images, may return # multiple items. The number of results returned can be explicitly # controlled by specifying the ``limit`` parameter in the API request. # However, if a ``limit`` parameter is not specified, this # configuration value will be used as the default number of results to # be returned for any API request. 
# # NOTES: # * The value of this configuration option may not be greater than # the value specified by ``api_limit_max``. # * Setting this to a very large value may slow down database # queries and increase response times. Setting this to a # very low value may result in poor user experience. # # Possible values: # * Any positive integer # # Related options: # * api_limit_max # # (integer value) # Minimum value: 1 #limit_param_default = 25 # # Maximum number of results that could be returned by a request. # # As described in the help text of ``limit_param_default``, some # requests may return multiple results. The number of results to be # returned are governed either by the ``limit`` parameter in the # request or the ``limit_param_default`` configuration option. # The value in either case, can't be greater than the absolute maximum # defined by this configuration option. Anything greater than this # value is trimmed down to the maximum value defined here. # # NOTE: Setting this to a very large value may slow down database # queries and increase response times. Setting this to a # very low value may result in poor user experience. # # Possible values: # * Any positive integer # # Related options: # * limit_param_default # # (integer value) # Minimum value: 1 #api_limit_max = 1000 # # Show direct image location when returning an image. # # This configuration option indicates whether to show the direct image # location when returning image details to the user. The direct image # location is where the image data is stored in backend storage. This # image location is shown under the image property ``direct_url``. # # When multiple image locations exist for an image, the best location # is displayed based on the location strategy indicated by the # configuration option ``location_strategy``. # # NOTES: # * Revealing image locations can present a GRAVE SECURITY RISK as # image locations can sometimes include credentials. Hence, this # is set to ``False`` by default. 
Set this to ``True`` with # EXTREME CAUTION and ONLY IF you know what you are doing! # * If an operator wishes to avoid showing any image location(s) # to the user, then both this option and # ``show_multiple_locations`` MUST be set to ``False``. # # Possible values: # * True # * False # # Related options: # * show_multiple_locations # * location_strategy # # (boolean value) #show_image_direct_url = false # DEPRECATED: # Show all image locations when returning an image. # # This configuration option indicates whether to show all the image # locations when returning image details to the user. When multiple # image locations exist for an image, the locations are ordered based # on the location strategy indicated by the configuration opt # ``location_strategy``. The image locations are shown under the # image property ``locations``. # # NOTES: # * Revealing image locations can present a GRAVE SECURITY RISK as # image locations can sometimes include credentials. Hence, this # is set to ``False`` by default. Set this to ``True`` with # EXTREME CAUTION and ONLY IF you know what you are doing! # * If an operator wishes to avoid showing any image location(s) # to the user, then both this option and # ``show_image_direct_url`` MUST be set to ``False``. # # Possible values: # * True # * False # # Related options: # * show_image_direct_url # * location_strategy # # (boolean value) # This option is deprecated for removal since Newton. # Its value may be silently ignored in the future. # Reason: This option will be removed in the Pike release or later because the # same functionality can be achieved with greater granularity by using policies. # Please see the Newton release notes for more information. #show_multiple_locations = false # # Maximum size of image a user can upload in bytes. # # An image upload greater than the size mentioned here would result # in an image creation failure. This configuration option defaults to # 1099511627776 bytes (1 TiB). 
# # NOTES: # * This value should only be increased after careful # consideration and must be set less than or equal to # 8 EiB (9223372036854775808). # * This value must be set with careful consideration of the # backend storage capacity. Setting this to a very low value # may result in a large number of image failures. And, setting # this to a very large value may result in faster consumption # of storage. Hence, this must be set according to the nature of # images created and storage capacity available. # # Possible values: # * Any positive number less than or equal to 9223372036854775808 # # (integer value) # Minimum value: 1 # Maximum value: 9223372036854775808 #image_size_cap = 1099511627776 # # Maximum amount of image storage per tenant. # # This enforces an upper limit on the cumulative storage consumed by all images # of a tenant across all stores. This is a per-tenant limit. # # The default unit for this configuration option is Bytes. However, storage # units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``, # ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and # TeraBytes respectively. Note that there should not be any space between the # value and unit. Value ``0`` signifies no quota enforcement. Negative values # are invalid and result in errors. # # Possible values: # * A string that is a valid concatenation of a non-negative integer # representing the storage value and an optional string literal # representing storage units as mentioned above. # # Related options: # * None # # (string value) #user_storage_quota = 0 # # Deploy the v1 OpenStack Images API. # # When this option is set to ``True``, Glance service will respond to # requests on registered endpoints conforming to the v1 OpenStack # Images API. # # NOTES: # * If this option is enabled, then ``enable_v1_registry`` must # also be set to ``True`` to enable mandatory usage of Registry # service with v1 API. 
# # * If this option is disabled, then the ``enable_v1_registry`` # option, which is enabled by default, is also recommended # to be disabled. # # * This option is separate from ``enable_v2_api``, both v1 and v2 # OpenStack Images API can be deployed independent of each # other. # # * If deploying only the v2 Images API, this option, which is # enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v1_registry # * enable_v2_api # # (boolean value) #enable_v1_api = true # # Deploy the v2 OpenStack Images API. # # When this option is set to ``True``, Glance service will respond # to requests on registered endpoints conforming to the v2 OpenStack # Images API. # # NOTES: # * If this option is disabled, then the ``enable_v2_registry`` # option, which is enabled by default, is also recommended # to be disabled. # # * This option is separate from ``enable_v1_api``, both v1 and v2 # OpenStack Images API can be deployed independent of each # other. # # * If deploying only the v1 Images API, this option, which is # enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v2_registry # * enable_v1_api # # (boolean value) #enable_v2_api = true # # Deploy the v1 API Registry service. # # When this option is set to ``True``, the Registry service # will be enabled in Glance for v1 API requests. # # NOTES: # * Use of Registry is mandatory in v1 API, so this option must # be set to ``True`` if the ``enable_v1_api`` option is enabled. # # * If deploying only the v2 OpenStack Images API, this option, # which is enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v1_api # # (boolean value) #enable_v1_registry = true # DEPRECATED: # Deploy the v2 API Registry service. # # When this option is set to ``True``, the Registry service # will be enabled in Glance for v2 API requests. 
# # NOTES: # * Use of Registry is optional in v2 API, so this option # must only be enabled if both ``enable_v2_api`` is set to # ``True`` and the ``data_api`` option is set to # ``glance.db.registry.api``. # # * If deploying only the v1 OpenStack Images API, this option, # which is enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v2_api # * data_api # # (boolean value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #enable_v2_registry = true # # Host address of the pydev server. # # Provide a string value representing the hostname or IP of the # pydev server to use for debugging. The pydev server listens for # debug connections on this address, facilitating remote debugging # in Glance. # # Possible values: # * Valid hostname # * Valid IP address # # Related options: # * None # # (unknown value) #pydev_worker_debug_host = localhost # # Port number that the pydev server will listen on. # # Provide a port number to bind the pydev server to. The pydev # process accepts debug connections on this port and facilitates # remote debugging in Glance. # # Possible values: # * A valid port number # # Related options: # * None # # (port value) # Minimum value: 0 # Maximum value: 65535 #pydev_worker_debug_port = 5678 # # AES key for encrypting store location metadata. # # Provide a string value representing the AES cipher to use for # encrypting Glance store metadata. # # NOTE: The AES key to use must be set to a random string of length # 16, 24 or 32 bytes. 
#
# Possible values:
#     * String value representing a valid AES key
#
# Related options:
#     * None
#
# (string value)
#metadata_encryption_key =

#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
#     * An OpenSSL message digest algorithm identifier
#
# Related options:
#     * None
#
# (string value)
#digest_algorithm = sha256

#
# The URL providing the location where the temporary data will be stored
#
# This option is for Glance internal use only. Glance will save the
# image data uploaded by the user to the 'staging' endpoint during the
# image import process.
#
# This option does not change the 'staging' API endpoint by any means.
#
# NOTE: Using the same path as [task]/work_dir is discouraged.
#
# NOTE: 'file://' is the only scheme the api_image_import flow
# supports for now.
#
# NOTE: The staging path must be on a shared filesystem available to
# all Glance API nodes.
#
# Possible values:
#     * String starting with 'file://' followed by absolute FS path
#
# Related options:
#     * [task]/work_dir
#     * [DEFAULT]/enable_image_import (*deprecated*)
#
# (string value)
#node_staging_uri = file:///tmp/staging/

# DEPRECATED:
# Enables the Image Import workflow introduced in Pike
#
# As '[DEFAULT]/node_staging_uri' is required for the Image
# Import, it's disabled by default in Pike, enabled by
# default in Queens and removed in Rocky.
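# Taken together, the import staging options described above might be
# set as follows. The paths are illustrative placeholders (any shared
# filesystem mount visible to all glance-api nodes will do), not
# recommended values:

```ini
[DEFAULT]
# Expose the EXPERIMENTAL interoperable image import API
enable_image_import = true
# Staging area on a shared filesystem visible to all glance-api nodes;
# illustrative path -- should differ from [task]/work_dir below
node_staging_uri = file:///var/lib/glance/staging/
enabled_import_methods = glance-direct,web-download

[task]
# Illustrative path for the task engine's working directory
work_dir = /var/lib/glance/tasks/
```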
# This allows Glance to
# operate with previous version configs upon upgrade.
#
# Setting this option to False will disable the endpoints related
# to Image Import Refactoring work.
#
# Related options:
#     * [DEFAULT]/node_staging_uri
#
# (boolean value)
# This option is deprecated for removal since Pike.
# Its value may be silently ignored in the future.
# Reason:
# This option is deprecated for removal in Rocky.
#
# It was introduced to make sure that the API is not enabled
# before the '[DEFAULT]/node_staging_uri' is defined, and it is
# redundant in the long term.
#enable_image_import = true

#
# List of enabled Image Import Methods
#
# Both 'glance-direct' and 'web-download' are enabled by default.
#
# Related options:
#     * [DEFAULT]/node_staging_uri
#     * [DEFAULT]/enable_image_import
#
# (list value)
#enabled_import_methods = glance-direct,web-download

#
# The relative path to the sqlite file database that will be used for
# image cache management.
#
# This is a relative path to the sqlite file database that tracks the
# age and usage statistics of the image cache. The path is relative to
# the image cache base directory, specified by the configuration
# option ``image_cache_dir``.
#
# This is a lightweight database with just one table.
#
# Possible values:
#     * A valid relative path to sqlite file database
#
# Related options:
#     * ``image_cache_dir``
#
# (string value)
#image_cache_sqlite_db = cache.db

#
# The driver to use for image cache management.
#
# This configuration option provides the flexibility to choose between
# the different image-cache drivers available. An image-cache driver
# is responsible for providing the essential functions of image-cache,
# such as writing images to and reading images from the cache,
# tracking the age and usage of cached images, providing a list of
# cached images, fetching the size of the cache, queuing images for
# caching, and cleaning up the cache.
#
# The essential functions of a driver are defined in the base class
# ``glance.image_cache.drivers.base.Driver``.
# All image-cache drivers (existing
# and prospective) must implement this interface. Currently available
# drivers are ``sqlite`` and ``xattr``. These drivers primarily differ
# in the way they store the information about cached images:
#
#     * The ``sqlite`` driver uses a sqlite database (which sits on
#       every glance node locally) to track the usage of cached
#       images.
#     * The ``xattr`` driver uses the extended attributes of files to
#       store this information. It also requires a filesystem that
#       sets ``atime`` on the files when accessed.
#
# Possible values:
#     * sqlite
#     * xattr
#
# Related options:
#     * None
#
# (string value)
# Possible values:
# sqlite -
# xattr -
#image_cache_driver = sqlite

#
# The upper limit on cache size, in bytes, after which the
# cache-pruner cleans up the image cache.
#
# NOTE: This is just a threshold for cache-pruner to act upon. It is
# NOT a hard limit beyond which the image cache would never grow. In
# fact, depending on how often the cache-pruner runs and how quickly
# the cache fills, the image cache can easily far exceed the size
# specified here. Hence, care must be taken to schedule the
# cache-pruner appropriately and to set this limit judiciously.
#
# Glance caches an image when it is downloaded. Consequently, the size
# of the image cache grows over time as the number of downloads
# increases. To keep the cache size from becoming unmanageable, it is
# recommended to run the cache-pruner as a periodic task. When the
# cache-pruner is kicked off, it compares the current size of the
# image cache and triggers a cleanup if the image cache grew beyond
# the size specified here. After the cleanup, the size of the cache is
# less than or equal to the size specified here.
#
# Possible values:
#     * Any non-negative integer
#
# Related options:
#     * None
#
# (integer value)
# Minimum value: 0
#image_cache_max_size = 10737418240

#
# The amount of time, in seconds, an incomplete image remains in the
# cache.
#
# Incomplete images are images for which download is in progress.
# Please see the description of the configuration option
# ``image_cache_dir`` for more detail. Sometimes, due to various
# reasons, it is possible the download may hang and the incompletely
# downloaded image remains in the ``incomplete`` directory. This
# configuration option sets a time limit on how long the incomplete
# images should remain in the ``incomplete`` directory before they are
# cleaned up. Once an incomplete image spends more time than is
# specified here, it'll be removed by cache-cleaner on its next run.
#
# It is recommended to run cache-cleaner as a periodic task on the
# Glance API nodes to keep the incomplete images from occupying disk
# space.
#
# Possible values:
#     * Any non-negative integer
#
# Related options:
#     * None
#
# (integer value)
# Minimum value: 0
#image_cache_stall_time = 86400

#
# Base directory for image cache.
#
# This is the location where image data is cached and served out of.
# All cached images are stored directly under this directory. This
# directory also contains three subdirectories, namely, ``incomplete``,
# ``invalid`` and ``queue``.
#
# The ``incomplete`` subdirectory is the staging area for downloading
# images. An image is first downloaded to this directory. When the
# image download is successful it is moved to the base directory.
# However, if the download fails, the partially downloaded image file
# is moved to the ``invalid`` subdirectory.
#
# The ``queue`` subdirectory is used for queuing images for download.
# This is used primarily by the cache-prefetcher, which can be
# scheduled as a periodic task like cache-pruner and cache-cleaner, to
# cache images ahead of their usage.
# Upon receiving the request to cache an image, Glance touches a file
# in the ``queue`` directory with the image id as the file name. The
# cache-prefetcher, when running, polls for the files in the ``queue``
# directory and starts downloading them in the order they were
# created.
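# As a sketch, the cache options discussed above could be combined as
# follows. Sizes and the directory path are illustrative only; the
# periodic cache-pruner and cache-cleaner runs are scheduled outside
# this file (e.g. via cron):

```ini
[DEFAULT]
# Illustrative cache layout; glance keeps incomplete/, invalid/ and
# queue/ under this directory
image_cache_dir = /var/lib/glance/image-cache/
image_cache_driver = sqlite
image_cache_sqlite_db = cache.db
# Prune when the cache exceeds ~10 GiB (a soft threshold, not a hard cap)
image_cache_max_size = 10737418240
# Remove incomplete downloads older than one day
image_cache_stall_time = 86400
```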
When the download is # successful, the zero-sized file is deleted from the ``queue`` directory. # If the download fails, the zero-sized file remains and it'll be retried the # next time cache-prefetcher runs. # # Possible values: # * A valid path # # Related options: # * ``image_cache_sqlite_db`` # # (string value) #image_cache_dir = # DEPRECATED: # Address the registry server is hosted on. # # Possible values: # * A valid IP or hostname # # Related options: # * None # # (unknown value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #registry_host = 0.0.0.0 # DEPRECATED: # Port the registry server is listening on. # # Possible values: # * A valid port number # # Related options: # * None # # (port value) # Minimum value: 0 # Maximum value: 65535 # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #registry_port = 9191 # DEPRECATED: # Protocol to use for communication with the registry server. # # Provide a string value representing the protocol to use for # communication with the registry server. By default, this option is # set to ``http`` and the connection is not secure. # # This option can be set to ``https`` to establish a secure connection # to the registry server. In this case, provide a key to use for the # SSL connection using the ``registry_client_key_file`` option. Also # include the CA file and cert file using the options # ``registry_client_ca_file`` and ``registry_client_cert_file`` # respectively. 
# # Possible values: # * http # * https # # Related options: # * registry_client_key_file # * registry_client_cert_file # * registry_client_ca_file # # (string value) # Possible values: # http - # https - # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #registry_client_protocol = http # DEPRECATED: # Absolute path to the private key file. # # Provide a string value representing a valid absolute path to the # private key file to use for establishing a secure connection to # the registry server. # # NOTE: This option must be set if ``registry_client_protocol`` is # set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE # environment variable may be set to a filepath of the key file. # # Possible values: # * String value representing a valid absolute path to the key # file. # # Related options: # * registry_client_protocol # # (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #registry_client_key_file = /etc/ssl/key/key-file.pem # DEPRECATED: # Absolute path to the certificate file. # # Provide a string value representing a valid absolute path to the # certificate file to use for establishing a secure connection to # the registry server. # # NOTE: This option must be set if ``registry_client_protocol`` is # set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE # environment variable may be set to a filepath of the certificate # file. 
# # Possible values: # * String value representing a valid absolute path to the # certificate file. # # Related options: # * registry_client_protocol # # (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #registry_client_cert_file = /etc/ssl/certs/file.crt # DEPRECATED: # Absolute path to the Certificate Authority file. # # Provide a string value representing a valid absolute path to the # certificate authority file to use for establishing a secure # connection to the registry server. # # NOTE: This option must be set if ``registry_client_protocol`` is # set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE # environment variable may be set to a filepath of the CA file. # This option is ignored if the ``registry_client_insecure`` option # is set to ``True``. # # Possible values: # * String value representing a valid absolute path to the CA # file. # # Related options: # * registry_client_protocol # * registry_client_insecure # # (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #registry_client_ca_file = /etc/ssl/cafile/file.ca # DEPRECATED: # Set verification of the registry server certificate. # # Provide a boolean value to determine whether or not to validate # SSL connections to the registry server. By default, this option # is set to ``False`` and the SSL connections are validated. 
#
# If set to ``True``, the connection to the registry server is not
# validated via a certifying authority and the
# ``registry_client_ca_file`` option is ignored. This is the
# registry's equivalent of specifying --insecure on the command line
# using glanceclient for the API.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * registry_client_protocol
#     * registry_client_ca_file
#
# (boolean value)
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_client_insecure = false

# DEPRECATED:
# Timeout value for registry requests.
#
# Provide an integer value representing the period of time in seconds
# that the API server will wait for a registry request to complete.
# The default value is 600 seconds.
#
# A value of 0 implies that a request will never timeout.
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related options:
#     * None
#
# (integer value)
# Minimum value: 0
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_client_timeout = 600

# DEPRECATED: Whether to pass through the user token when making
# requests to the registry. To prevent failures with token expiration
# during the upload of big files, it is recommended to set this
# parameter to False. If "use_user_token" is not in effect, then admin
# credentials can be specified. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated
# in the M release. It will be removed in the O release. For more
# information read OSSN-0060. Related functionality with uploading big
# images has been implemented with Keystone trusts support.
#use_user_token = true

# DEPRECATED: The administrator's user name. If "use_user_token" is
# not in effect, then admin credentials can be specified. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated
# in the M release. It will be removed in the O release. For more
# information read OSSN-0060. Related functionality with uploading big
# images has been implemented with Keystone trusts support.
#admin_user =

# DEPRECATED: The administrator's password. If "use_user_token" is not
# in effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated
# in the M release. It will be removed in the O release. For more
# information read OSSN-0060. Related functionality with uploading big
# images has been implemented with Keystone trusts support.
#admin_password =

# DEPRECATED: The tenant name of the administrative user. If
# "use_user_token" is not in effect, then the admin tenant name can be
# specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated
# in the M release. It will be removed in the O release. For more
# information read OSSN-0060. Related functionality with uploading big
# images has been implemented with Keystone trusts support.
#admin_tenant_name =

# DEPRECATED: The URL to the keystone service. If "use_user_token" is
# not in effect and using keystone auth, then the URL of keystone can
# be specified.
(string # value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: This option was considered harmful and has been deprecated in M # release. It will be removed in O release. For more information read OSSN-0060. # Related functionality with uploading big images has been implemented with # Keystone trusts support. #auth_url = # DEPRECATED: The strategy to use for authentication. If "use_user_token" is not # in effect, then auth strategy can be specified. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: This option was considered harmful and has been deprecated in M # release. It will be removed in O release. For more information read OSSN-0060. # Related functionality with uploading big images has been implemented with # Keystone trusts support. #auth_strategy = noauth # DEPRECATED: The region for the authentication service. If "use_user_token" is # not in effect and using keystone auth, then region name can be specified. # (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: This option was considered harmful and has been deprecated in M # release. It will be removed in O release. For more information read OSSN-0060. # Related functionality with uploading big images has been implemented with # Keystone trusts support. #auth_region = # # From oslo.log # # If set to true, the logging level will be set to DEBUG instead of the default # INFO level. (boolean value) # Note: This option can be changed without restarting. #debug = false # The name of a logging configuration file. This file is appended to any # existing logging configuration files. For details about logging configuration # files, see the Python logging module documentation. 
# Note that when logging
# configuration files are used then all logging configuration is set
# in the configuration file and other logging configuration options
# are ignored (for example, logging_context_format_string). (string
# value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append =

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default
# is set, logging will go to stderr as defined by use_stderr. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file =

# (Optional) The base directory used for relative log_file paths. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir =

# Uses logging handler designed to watch file system. When the log
# file is moved or removed, this handler will open a new log file with
# the specified path instantaneously. It makes sense only if the
# log_file option is specified and the Linux platform is used. This
# option is ignored if log_config_append is set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and
# will be changed later to honor RFC5424. This option is ignored if
# log_config_append is set. (boolean value)
#use_syslog = false

# Enable journald for logging. If running in a systemd environment you
# may wish to enable journal support. Doing so will use the journal
# native protocol which includes structured metadata in addition to
# log messages. This option is ignored if log_config_append is set.
# (boolean value)
#use_journal = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set.
(string value) #syslog_log_facility = LOG_USER # Use JSON formatting for logging. This option is ignored if log_config_append # is set. (boolean value) #use_json = false # Log output to standard error. This option is ignored if log_config_append is # set. (boolean value) #use_stderr = false # Format string to use for log messages with context. (string value) #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s # Format string to use for log messages when context is undefined. (string # value) #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s # Additional data to append to log message when logging level for the message is # DEBUG. (string value) #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d # Prefix each line of exception output with this format. (string value) #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s # Defines the format string for %(user_identity)s that is used in # logging_context_format_string. (string value) #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s # List of package logging levels in logger=LEVEL pairs. This option is ignored # if log_config_append is set. (list value) #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO # Enables or disables publication of error events. (boolean value) #publish_errors = false # The format for an instance that is passed with the log message. 
(string value) #instance_format = "[instance: %(uuid)s] " # The format for an instance UUID that is passed with the log message. (string # value) #instance_uuid_format = "[instance: %(uuid)s] " # Interval, number of seconds, of log rate limiting. (integer value) #rate_limit_interval = 0 # Maximum number of logged messages per rate_limit_interval. (integer value) #rate_limit_burst = 0 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or # empty string. Logs with level greater or equal to rate_limit_except_level are # not filtered. An empty string means that all levels are filtered. (string # value) #rate_limit_except_level = CRITICAL # Enables or disables fatal status of deprecations. (boolean value) #fatal_deprecations = false [glance_store] # # From glance.store # # # List of enabled Glance stores. # # Register the storage backends to use for storing disk images # as a comma separated list. The default stores enabled for # storing disk images with Glance are ``file`` and ``http``. # # Possible values: # * A comma separated list that could include: # * file # * http # * swift # * rbd # * sheepdog # * cinder # * vmware # # Related Options: # * default_store # # (list value) #stores = file,http # # The default scheme to use for storing images. # # Provide a string value representing the default scheme to use for # storing images. If not set, Glance uses ``file`` as the default # scheme to store images with the ``file`` store. # # NOTE: The value given for this configuration option must be a valid # scheme for a store registered with the ``stores`` configuration # option. 
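# For instance, a deployment that keeps images on local disk while
# still allowing http references might use the following (the values
# shown are the defaults, repeated here purely for illustration):

```ini
[glance_store]
stores = file,http
# Must be a scheme belonging to a store registered in 'stores'
default_store = file
```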
#
# Possible values:
#     * file
#     * filesystem
#     * http
#     * https
#     * swift
#     * swift+http
#     * swift+https
#     * swift+config
#     * rbd
#     * sheepdog
#     * cinder
#     * vsphere
#
# Related Options:
#     * stores
#
# (string value)
# Possible values:
# file -
# filesystem -
# http -
# https -
# swift -
# swift+http -
# swift+https -
# swift+config -
# rbd -
# sheepdog -
# cinder -
# vsphere -
#default_store = file

#
# Minimum interval in seconds between updates of the dynamic storage
# capabilities based on current backend status.
#
# Provide an integer value representing time in seconds to set the
# minimum interval before an update of dynamic storage capabilities
# for a storage backend can be attempted. Setting
# ``store_capabilities_update_min_interval`` does not mean updates
# occur periodically based on the set interval. Rather, the update is
# performed when the set interval has elapsed, if a store operation is
# triggered.
#
# By default, this option is set to zero and is disabled. Provide an
# integer value greater than zero to enable this option.
#
# NOTE: For more information on store capabilities and their updates,
# please visit:
# https://specs.openstack.org/openstack/glance-specs/specs/kilo
# /store-capabilities.html
#
# For more information on setting up a particular store in your
# deployment and help with the usage of this feature, please contact
# the storage driver maintainers listed here:
# http://docs.openstack.org/developer/glance_store/drivers/index.html
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related Options:
#     * None
#
# (integer value)
# Minimum value: 0
#store_capabilities_update_min_interval = 0

#
# Information to match when looking for cinder in the service catalog.
#
# When the ``cinder_endpoint_template`` is not set and any of
# ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, ``cinder_store_password`` is not set,
# cinder store uses this information to look up the cinder endpoint
# from the service catalog in the current context.
# ``cinder_os_region_name``, if set, is taken into consideration to
# fetch the appropriate endpoint.
#
# The service catalog can be listed by the ``openstack catalog list``
# command.
#
# Possible values:
#     * A string of the following form:
#       ``<service_type>:<service_name>:<interface>``
#       At least ``service_type`` and ``interface`` should be
#       specified. ``service_name`` can be omitted.
#
# Related options:
#     * cinder_os_region_name
#     * cinder_endpoint_template
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#     * cinder_store_password
#
# (string value)
#cinder_catalog_info = volumev2::publicURL

#
# Override service catalog lookup with template for cinder endpoint.
#
# When this option is set, this value is used to generate the cinder
# endpoint, instead of looking it up from the service catalog.
# This value is ignored if ``cinder_store_auth_address``,
# ``cinder_store_user_name``, ``cinder_store_project_name``, and
# ``cinder_store_password`` are specified.
#
# If this configuration option is set, ``cinder_catalog_info`` will
# be ignored.
#
# Possible values:
#     * URL template string for cinder endpoint, where ``%%(tenant)s``
#       is replaced with the current tenant (project) name.
#       For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s``
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#     * cinder_store_password
#     * cinder_catalog_info
#
# (string value)
#cinder_endpoint_template =

#
# Region name to look up the cinder service from the service catalog.
#
# This is used only when ``cinder_catalog_info`` is used for
# determining the endpoint.
# If set, the lookup for the cinder endpoint by this node is filtered
# to the specified region. It is useful when multiple regions are
# listed in the catalog. If this is not set, the endpoint is looked up
# from every region.
#
# Possible values:
#     * A string that is a valid region name.
#
# Related options:
#     * cinder_catalog_info
#
# (string value)
# Deprecated group/name - [glance_store]/os_region_name
#cinder_os_region_name =

#
# Location of a CA certificates file used for cinder client requests.
#
# The specified CA certificates file, if set, is used to verify cinder
# connections via HTTPS endpoint. If the endpoint is HTTP, this value
# is ignored.
# ``cinder_api_insecure`` must be set to ``True`` to enable the
# verification.
#
# Possible values:
#     * Path to a ca certificates file
#
# Related options:
#     * cinder_api_insecure
#
# (string value)
#cinder_ca_certificates_file =

#
# Number of cinderclient retries on failed http calls.
#
# When a call fails with any error, cinderclient will retry the call
# up to the specified number of times after sleeping for a few
# seconds.
#
# Possible values:
#     * A positive integer
#
# Related options:
#     * None
#
# (integer value)
# Minimum value: 0
#cinder_http_retries = 3

#
# Time period, in seconds, to wait for a cinder volume transition to
# complete.
#
# When the cinder volume is created, deleted, or attached to the
# glance node to read/write the volume data, the volume's state is
# changed. For example, the newly created volume status changes from
# ``creating`` to ``available`` after the creation process is
# completed. This specifies the maximum time to wait for the status
# change. If a timeout occurs while waiting, or the status is changed
# to an unexpected value (e.g. ``error``), the image creation fails.
#
# Possible values:
#     * A positive integer
#
# Related options:
#     * None
#
# (integer value)
# Minimum value: 0
#cinder_state_transition_timeout = 300

#
# Allow insecure SSL requests to cinder.
#
# If this option is set to True, the HTTPS endpoint connection is not verified.
# If it is set to False, the connection is verified using the CA certificates
# file specified by the ``cinder_ca_certificates_file`` option, or the default
# CA truststore if that option is not set.
#
# Possible values:
# * True
# * False
#
# Related options:
# * cinder_ca_certificates_file
#
# (boolean value)
#cinder_api_insecure = false

#
# The address where the cinder authentication service is listening.
#
# When all of the ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, and ``cinder_store_password`` options are
# specified, the specified values are always used for the authentication.
# This is useful to hide the image volumes from users by storing them in a
# project/tenant specific to the image service. It also enables users to share
# the image volume among other projects under the control of glance's ACL.
#
# If any of these options is not set, the cinder endpoint is looked up
# from the service catalog, and the current context's user and project are
# used.
#
# Possible values:
# * A valid authentication service address, for example:
#   ``http://openstack.example.org/identity/v2.0``
#
# Related options:
# * cinder_store_user_name
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_auth_address =

#
# User name to authenticate against cinder.
#
# This must be used together with all of the following related options. If any
# of these are not specified, the user of the current context is used.
#
# Possible values:
# * A valid user name
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_user_name =

#
# Password for the user authenticating against cinder.
#
# This must be used together with all of the following related options. If any
# of these are not specified, the user of the current context is used.
#
# Possible values:
# * A valid password for the user specified by ``cinder_store_user_name``
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
#
# (string value)
#cinder_store_password =

#
# Project name where the image volume is stored in cinder.
#
# This must be used together with all of the following related options. If any
# of these are not specified, the project of the current context is used.
#
# Possible values:
# * A valid project name
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_password
#
# (string value)
#cinder_store_project_name =

#
# Path to the rootwrap configuration file to use for running commands as root.
#
# The cinder store requires root privileges to operate the image volumes (for
# connecting to iSCSI/FC volumes and reading/writing the volume data, etc.).
# The configuration file should allow the required commands by the cinder
# store and the os-brick library.
#
# Possible values:
# * Path to the rootwrap config file
#
# Related options:
# * None
#
# (string value)
#rootwrap_config = /etc/glance/rootwrap.conf

#
# Volume type that will be used for volume creation in cinder.
#
# Some cinder backends can have several volume types to optimize storage usage.
# Adding this option allows an operator to choose a specific volume type
# in cinder that can be optimized for images.
#
# If this is not set, then the default volume type specified in the cinder
# configuration will be used for volume creation.
#
# Possible values:
# * A valid volume type from cinder
#
# Related options:
# * None
#
# (string value)
#cinder_volume_type =

#
# Directory to which the filesystem backend store writes images.
#
# Upon start up, Glance creates the directory if it doesn't already
# exist and verifies write access to the user under which
# ``glance-api`` runs.
# If the write access isn't available, a
# ``BadStoreConfiguration`` exception is raised and the filesystem
# store may not be available for adding new images.
#
# NOTE: This directory is used only when the filesystem store is used as a
# storage backend. Either the ``filesystem_store_datadir`` or the
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
# * A valid path to a directory
#
# Related options:
# * ``filesystem_store_datadirs``
# * ``filesystem_store_file_perm``
#
# (string value)
#filesystem_store_datadir = /var/lib/glance/images

#
# List of directories and their priorities to which the filesystem
# backend store writes images.
#
# The filesystem store can be configured to store images in multiple
# directories as opposed to using a single directory specified by the
# ``filesystem_store_datadir`` configuration option. When using
# multiple directories, each directory can be given an optional
# priority to specify the preference order in which they should
# be used. Priority is an integer that is appended to the
# directory path with a colon, where a higher value indicates higher
# priority. When two directories have the same priority, the directory
# with the most free space is used. When no priority is specified, it
# defaults to zero.
#
# More information on configuring the filesystem store with multiple store
# directories can be found at
# http://docs.openstack.org/developer/glance/configuring.html
#
# NOTE: This directory is used only when the filesystem store is used as a
# storage backend. Either the ``filesystem_store_datadir`` or the
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
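#
# Example (illustrative values only, not part of the generated defaults):
# two store directories where the first is preferred until it runs out
# of free space, because of its higher priority:
#
#   filesystem_store_datadirs = /mnt/ssd/glance/images/:200
#   filesystem_store_datadirs = /srv/glance/images/:100
#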
#
# Possible values:
# * List of strings of the following form:
#   * ``<a valid directory path>:<optional integer priority>``
#
# Related options:
# * ``filesystem_store_datadir``
# * ``filesystem_store_file_perm``
#
# (multi valued)
#filesystem_store_datadirs =

#
# Filesystem store metadata file.
#
# The path to a file which contains the metadata to be returned with
# any location associated with the filesystem store. The file must
# contain a valid JSON object. The object should contain the keys
# ``id`` and ``mountpoint``. The value for both keys should be a
# string.
#
# Possible values:
# * A valid path to the store metadata file
#
# Related options:
# * None
#
# (string value)
#filesystem_store_metadata_file =

#
# File access permissions for the image files.
#
# Set the intended file access permissions for image data. This provides
# a way to enable other services, e.g. Nova, to consume images directly
# from the filesystem store. The users running the services that are
# intended to be given access could be made members of the group
# that owns the files created. Assigning a value less than or equal to
# zero for this configuration option signifies that no changes be made
# to the default permissions. This value will be decoded as an octal
# digit.
#
# For more information, please refer to the documentation at
# http://docs.openstack.org/developer/glance/configuring.html
#
# Possible values:
# * A valid file access permission
# * Zero
# * Any negative integer
#
# Related options:
# * None
#
# (integer value)
#filesystem_store_file_perm = 0

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the remote server certificate. If
# this option is set, the ``https_insecure`` option will be ignored and
# the CA file specified will be used to authenticate the server
# certificate and establish a secure connection to the server.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * https_insecure
#
# (string value)
#https_ca_certificates_file =

#
# Set verification of the remote server certificate.
#
# This configuration option takes in a boolean value to determine
# whether or not to verify the remote server certificate. If set to
# True, the remote server certificate is not verified. If the option is
# set to False, then the default CA truststore is used for verification.
#
# This option is ignored if ``https_ca_certificates_file`` is set.
# The remote server certificate will then be verified using the file
# specified using the ``https_ca_certificates_file`` option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * https_ca_certificates_file
#
# (boolean value)
#https_insecure = true

#
# The http/https proxy information to be used to connect to the remote
# server.
#
# This configuration option specifies the http/https proxy information
# that should be used to connect to the remote server. The proxy
# information should be a key value pair of the scheme and proxy, for
# example, http:10.0.0.1:3128. You can also specify proxies for multiple
# schemes by separating the key value pairs with a comma, for example,
# http:10.0.0.1:3128, https:10.0.0.1:1080.
#
# Possible values:
# * A comma separated list of scheme:proxy pairs as described above
#
# Related options:
# * None
#
# (dict value)
#http_proxy_information =

#
# Size, in megabytes, to chunk RADOS images into.
#
# Provide an integer value representing the size in megabytes to chunk
# Glance images into. The default chunk size is 8 megabytes. For optimal
# performance, the value should be a power of two.
#
# When Ceph's RBD object storage system is used as the storage backend
# for storing Glance images, the images are chunked into objects of the
# size set using this option. These chunked objects are then stored
# across the distributed block data store for use by Glance.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#rbd_store_chunk_size = 8

#
# RADOS pool in which images are stored.
#
# When RBD is used as the storage backend for storing Glance images, the
# images are stored by means of logical grouping of the objects (chunks
# of images) into a ``pool``. Each pool is defined with the number of
# placement groups it can contain. The default pool that is used is
# 'images'.
#
# More information on the RBD storage backend can be found here:
# http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
#
# Possible Values:
# * A valid pool name
#
# Related options:
# * None
#
# (string value)
#rbd_store_pool = images

#
# RADOS user to authenticate as.
#
# This configuration option takes in the RADOS user to authenticate as.
# This is only needed when RADOS authentication is enabled and is
# applicable only if the user is using Cephx authentication. If the
# value for this option is not set by the user or is set to None, a
# default value will be chosen, which will be based on the client.
# section in rbd_store_ceph_conf.
#
# Possible Values:
# * A valid RADOS user
#
# Related options:
# * rbd_store_ceph_conf
#
# (string value)
#rbd_store_user =

#
# Ceph configuration file path.
#
# This configuration option takes in the path to the Ceph configuration
# file to be used. If the value for this option is not set by the user
# or is set to None, librados will locate the default configuration file
# which is located at /etc/ceph/ceph.conf. If using Cephx
# authentication, this file should include a reference to the right
# keyring in a client. section.
#
# Possible Values:
# * A valid path to a configuration file
#
# Related options:
# * rbd_store_user
#
# (string value)
#rbd_store_ceph_conf = /etc/ceph/ceph.conf

#
# Timeout value for connecting to Ceph cluster.
#
# This configuration option takes in the timeout value in seconds used
# when connecting to the Ceph cluster i.e. it sets the time to wait for
# glance-api before closing the connection. This prevents glance-api
# hangups during the connection to RBD. If the value for this option
# is set to less than or equal to 0, no timeout is set and the default
# librados value is used.
#
# Possible Values:
# * Any integer value
#
# Related options:
# * None
#
# (integer value)
#rados_connect_timeout = 0

#
# Chunk size for images to be stored in Sheepdog data store.
#
# Provide an integer value representing the size in mebibytes
# (1048576 bytes) to chunk Glance images into. The default
# chunk size is 64 mebibytes.
#
# When using the Sheepdog distributed storage system, the images are
# chunked into objects of this size and then stored across the
# distributed data store for use by Glance.
#
# Chunk sizes, if a power of two, help avoid fragmentation and
# enable improved performance.
#
# Possible values:
# * Positive integer value representing size in mebibytes.
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 1
#sheepdog_store_chunk_size = 64

#
# Port number on which the sheep daemon will listen.
#
# Provide an integer value representing a valid port number on
# which the Sheepdog daemon should listen. The default
# port is 7000.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages it receives on
# the port number set using the ``sheepdog_store_port`` option to store
# chunks of Glance images.
#
# Possible values:
# * A valid port number (0 to 65535)
#
# Related Options:
# * sheepdog_store_address
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#sheepdog_store_port = 7000

#
# Address to bind the Sheepdog daemon to.
#
# Provide a string value representing the address to bind the
# Sheepdog daemon to.
# The default address set for the 'sheep'
# is 127.0.0.1.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages directed to the
# address set using the ``sheepdog_store_address`` option to store
# chunks of Glance images.
#
# Possible values:
# * A valid IPv4 address
# * A valid IPv6 address
# * A valid hostname
#
# Related Options:
# * sheepdog_store_port
#
# (unknown value)
#sheepdog_store_address = 127.0.0.1

#
# Set verification of the server certificate.
#
# This boolean determines whether or not to verify the server
# certificate. If this option is set to True, swiftclient won't check
# for a valid SSL certificate when authenticating. If the option is set
# to False, then the default CA truststore is used for verification.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_cacert
#
# (boolean value)
#swift_store_auth_insecure = false

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to specify the path to
# a custom Certificate Authority file for SSL verification when
# connecting to Swift.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * swift_store_auth_insecure
#
# (string value)
#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt

#
# The region of the Swift endpoint to use by Glance.
#
# Provide a string value representing a Swift region where Glance
# can connect to for image storage. By default, there is no region
# set.
#
# When Glance uses Swift as the storage backend to store images
# for a specific tenant that has multiple endpoints, setting a
# Swift region with ``swift_store_region`` allows Glance to connect
# to Swift in the specified region as opposed to a single region
# connectivity.
#
# This option can be configured for both single-tenant and
# multi-tenant storage.
#
# NOTE: Setting the region with ``swift_store_region`` is
# tenant-specific and is necessary ``only if`` the tenant has
# multiple endpoints across different regions.
#
# Possible values:
# * A string value representing a valid Swift region.
#
# Related Options:
# * None
#
# (string value)
#swift_store_region = RegionTwo

#
# The URL endpoint to use for Swift backend storage.
#
# Provide a string value representing the URL endpoint to use for
# storing Glance images in Swift store. By default, an endpoint
# is not set and the storage URL returned by ``auth`` is used.
# Setting an endpoint with ``swift_store_endpoint`` overrides the
# storage URL and is used for Glance image storage.
#
# NOTE: The URL should include the path up to, but excluding the
# container. The location of an object is obtained by appending
# the container and object to the configured URL.
#
# Possible values:
# * String value representing a valid URL path up to a Swift container
#
# Related Options:
# * None
#
# (string value)
#swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name

#
# Endpoint Type of Swift service.
#
# This string value indicates the endpoint type to use to fetch the
# Swift endpoint. The endpoint type determines the actions the user will
# be allowed to perform, for instance, reading and writing to the Store.
# This setting is only used if swift_store_auth_version is greater than
# 1.
#
# Possible values:
# * publicURL
# * adminURL
# * internalURL
#
# Related options:
# * swift_store_endpoint
#
# (string value)
# Possible values:
# publicURL -
# adminURL -
# internalURL -
#swift_store_endpoint_type = publicURL

#
# Type of Swift service to use.
#
# Provide a string value representing the service type to use for
# storing images while using Swift backend storage. The default
# service type is set to ``object-store``.
#
# NOTE: If ``swift_store_auth_version`` is set to 2, the value for
# this configuration option needs to be ``object-store``. If using
# a higher version of Keystone or a different auth scheme, this
# option may be modified.
#
# Possible values:
# * A string representing a valid service type for Swift storage.
#
# Related Options:
# * None
#
# (string value)
#swift_store_service_type = object-store

#
# Name of single container to store images/name prefix for multiple containers
#
# When a single container is being used to store images, this configuration
# option indicates the container within the Glance account to be used for
# storing all images. When multiple containers are used to store images, this
# will be the name prefix for all containers. Usage of single/multiple
# containers can be controlled using the configuration option
# ``swift_store_multiple_containers_seed``.
#
# When using multiple containers, the containers will be named after the value
# set for this configuration option with the first N chars of the image UUID
# as the suffix delimited by an underscore (where N is specified by
# ``swift_store_multiple_containers_seed``).
#
# Example: if the seed is set to 3 and swift_store_container = ``glance``, then
# an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in
# the container ``glance_fda``. All dashes in the UUID are included when
# creating the container name but do not count toward the character limit, so
# when N=10 the container name would be ``glance_fdae39a1-ba``.
#
# Possible values:
# * If using a single container, this configuration option can be any string
#   that is a valid swift container name in Glance's Swift account
# * If using multiple containers, this configuration option can be any
#   string as long as it satisfies the container naming rules enforced by
#   Swift. The value of ``swift_store_multiple_containers_seed`` should be
#   taken into account as well.
#
# Related options:
# * ``swift_store_multiple_containers_seed``
# * ``swift_store_multi_tenant``
# * ``swift_store_create_container_on_put``
#
# (string value)
#swift_store_container = glance

#
# The size threshold, in MB, after which Glance will start segmenting image
# data.
#
# Swift has an upper limit on the size of a single uploaded object. By default,
# this is 5GB. To upload objects bigger than this limit, objects are segmented
# into multiple smaller objects that are tied together with a manifest file.
# For more detail, refer to
# http://docs.openstack.org/developer/swift/overview_large_objects.html
#
# This configuration option specifies the size threshold over which the Swift
# driver will start segmenting image data into multiple smaller files.
# Currently, the Swift driver only supports creating Dynamic Large Objects.
#
# NOTE: This should be set by taking into account the large object limit
# enforced by the Swift cluster in question.
#
# Possible values:
# * A positive integer that is less than or equal to the large object limit
#   enforced by the Swift cluster in question.
#
# Related options:
# * ``swift_store_large_object_chunk_size``
#
# (integer value)
# Minimum value: 1
#swift_store_large_object_size = 5120

#
# The maximum size, in MB, of the segments when image data is segmented.
#
# When image data is segmented to upload images that are larger than the limit
# enforced by the Swift cluster, image data is broken into segments that are no
# bigger than the size specified by this configuration option.
# Refer to ``swift_store_large_object_size`` for more detail.
#
# For example: if ``swift_store_large_object_size`` is 5GB and
# ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be
# segmented into 7 segments where the first six segments will be 1GB in size and
# the seventh segment will be 0.2GB.
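#
# Example (illustrative values only, not part of the generated defaults):
# with the settings below, a 6.2GB image is uploaded as seven segments,
# six of 1024 MB each and a final one of roughly 0.2GB:
#
#   swift_store_large_object_size = 5120
#   swift_store_large_object_chunk_size = 1024
#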
#
# Possible values:
# * A positive integer that is less than or equal to the large object limit
#   enforced by the Swift cluster in question.
#
# Related options:
# * ``swift_store_large_object_size``
#
# (integer value)
# Minimum value: 1
#swift_store_large_object_chunk_size = 200

#
# Create container, if it doesn't already exist, when uploading image.
#
# At the time of uploading an image, if the corresponding container doesn't
# exist, it will be created provided this configuration option is set to True.
# By default, it won't be created. This behavior is applicable for both single
# and multiple containers mode.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#swift_store_create_container_on_put = false

#
# Store images in tenant's Swift account.
#
# This enables multi-tenant storage mode which causes Glance images to be stored
# in tenant specific Swift accounts. If this is disabled, Glance stores all
# images in its own account. More details about the multi-tenant store can be
# found at
# https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
#
# NOTE: If using the multi-tenant swift store, please make sure
# that you do not set a swift configuration file with the
# 'swift_store_config_file' option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_config_file
#
# (boolean value)
#swift_store_multi_tenant = false

#
# Seed indicating the number of containers to use for storing images.
#
# When using a single-tenant store, images can be stored in one or more
# containers. When set to 0, all images will be stored in one single container.
# When set to an integer value between 1 and 32, multiple containers will be
# used to store images. This configuration option will determine how many
# containers are created. The total number of containers that will be used is
# equal to 16^N, so if this config option is set to 2, then 16^2=256 containers
# will be used to store images.
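#
# Example (illustrative values only, not part of the generated defaults):
# with a seed of 2, images are spread across 16^2 = 256 containers named
# after the first two characters of the image UUID, e.g. ``glance_fd``:
#
#   swift_store_container = glance
#   swift_store_multiple_containers_seed = 2
#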
#
# Please refer to ``swift_store_container`` for more detail on the naming
# convention. More detail about using multiple containers can be found at
# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html
#
# NOTE: This is used only when swift_store_multi_tenant is disabled.
#
# Possible values:
# * A non-negative integer less than or equal to 32
#
# Related options:
# * ``swift_store_container``
# * ``swift_store_multi_tenant``
# * ``swift_store_create_container_on_put``
#
# (integer value)
# Minimum value: 0
# Maximum value: 32
#swift_store_multiple_containers_seed = 0

#
# List of tenants that will be granted admin access.
#
# This is a list of tenants that will be granted read/write access on
# all Swift containers created by Glance in multi-tenant mode. The
# default value is an empty list.
#
# Possible values:
# * A comma separated list of strings representing UUIDs of Keystone
#   projects/tenants
#
# Related options:
# * None
#
# (list value)
#swift_store_admin_tenants =

#
# SSL layer compression for HTTPS Swift requests.
#
# Provide a boolean value to determine whether or not to compress
# HTTPS Swift requests for images at the SSL layer. By default,
# compression is enabled.
#
# When using Swift as the backend store for Glance image storage,
# SSL layer compression of HTTPS Swift requests can be set using
# this option. If set to False, SSL layer compression of HTTPS
# Swift requests is disabled. Disabling this option may improve
# performance for images which are already in a compressed format,
# for example, qcow2.
#
# Possible values:
# * True
# * False
#
# Related Options:
# * None
#
# (boolean value)
#swift_store_ssl_compression = true

#
# The number of times a Swift download will be retried before the
# request fails.
#
# Provide an integer value representing the number of times an image
# download must be retried before erroring out. The default value is
# zero (no retry on a failed image download).
# When set to a positive
# integer value, ``swift_store_retry_get_count`` ensures that the
# download is attempted this many more times upon a download failure
# before sending an error message.
#
# Possible values:
# * Zero
# * Positive integer value
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#swift_store_retry_get_count = 0

#
# Time in seconds defining the size of the window in which a new
# token may be requested before the current token is due to expire.
#
# Typically, the Swift storage driver fetches a new token upon the
# expiration of the current token to ensure continued access to
# Swift. However, some Swift transactions (like uploading image
# segments) may not recover well if the token expires on the fly.
#
# Hence, by fetching a new token before the current token expires,
# we make sure that the token does not expire or come close to expiry
# before a transaction is attempted. By default, the Swift storage
# driver requests a new token 60 seconds or less before the
# current token expiration.
#
# Possible values:
# * Zero
# * Positive integer value
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#swift_store_expire_soon_interval = 60

#
# Use trusts for multi-tenant Swift store.
#
# This option instructs the Swift store to create a trust for each
# add/get request when the multi-tenant store is in use. Using trusts
# allows the Swift store to avoid problems that can be caused by an
# authentication token expiring during the upload or download of data.
#
# By default, ``swift_store_use_trusts`` is set to ``True`` (use of
# trusts is enabled). If set to ``False``, a user token is used for
# the Swift connection instead, eliminating the overhead of trust
# creation.
#
# NOTE: This option is considered only when
# ``swift_store_multi_tenant`` is set to ``True``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_multi_tenant
#
# (boolean value)
#swift_store_use_trusts = true

#
# Buffer image segments before upload to Swift.
#
# Provide a boolean value to indicate whether or not Glance should
# buffer image data to disk while uploading to swift. This enables
# Glance to resume uploads on error.
#
# NOTES:
# When enabling this option, one should take great care as this
# increases disk usage on the API node. Be aware that depending
# upon how the file system is configured, the disk space used
# for buffering may decrease the actual disk space available for
# the glance image cache. Disk utilization will cap according to
# the following equation:
# (``swift_store_large_object_chunk_size`` * ``workers`` * 1000)
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_upload_buffer_dir
#
# (boolean value)
#swift_buffer_on_upload = false

#
# Reference to default Swift account/backing store parameters.
#
# Provide a string value representing a reference to the default set
# of parameters required for using the swift account/backing store for
# image storage. The default reference value for this configuration
# option is 'ref1'. This configuration option dereferences the
# parameters and facilitates image storage in the Swift storage backend
# every time a new image is added.
#
# Possible values:
# * A valid string value
#
# Related options:
# * None
#
# (string value)
#default_swift_reference = ref1

# DEPRECATED: Version of the authentication service to use. Valid versions are 2
# and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_version' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_version = 2

# DEPRECATED: The address where the Swift authentication service is listening.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_address' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_address =

# DEPRECATED: The user to authenticate against the Swift authentication service.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'user' in the Swift back-end configuration file is set instead.
#swift_store_user =

# DEPRECATED: Auth key for the user authenticating against the Swift
# authentication service. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'key' in the Swift back-end configuration file is used
# to set the authentication key instead.
#swift_store_key =

#
# Absolute path to the file containing the swift account(s)
# configurations.
#
# Include a string value representing the path to a configuration
# file that has references for each of the configured Swift
# account(s)/backing stores. By default, no file path is specified
# and customized Swift referencing is disabled. Configuring this
# option is highly recommended while using the Swift storage backend for
# image storage as it avoids storage of credentials in the database.
#
# NOTE: Please do not configure this option if you have set
# ``swift_store_multi_tenant`` to ``True``.
#
# Possible values:
# * String value representing an absolute path on the glance-api
#   node
#
# Related options:
# * swift_store_multi_tenant
#
# (string value)
#swift_store_config_file =

#
# Directory to buffer image segments before upload to Swift.
#
# Provide a string value representing the absolute path to the
# directory on the glance node where image segments will be
# buffered briefly before they are uploaded to swift.
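#
# Example (illustrative values only, not part of the generated defaults):
# enable upload buffering and stage segments under a dedicated directory:
#
#   swift_buffer_on_upload = true
#   swift_upload_buffer_dir = /var/lib/glance/swift_buffer
#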
#
# NOTES:
# * This is required only when the configuration option
#   ``swift_buffer_on_upload`` is set to True.
# * This directory should be provisioned keeping in mind the
#   ``swift_store_large_object_chunk_size`` and the maximum
#   number of images that could be uploaded simultaneously by
#   a given glance node.
#
# Possible values:
# * String value representing an absolute directory path
#
# Related options:
# * swift_buffer_on_upload
# * swift_store_large_object_chunk_size
#
# (string value)
#swift_upload_buffer_dir =

#
# Address of the ESX/ESXi or vCenter Server target system.
#
# This configuration option sets the address of the ESX/ESXi or vCenter
# Server target system. This option is required when using the VMware
# storage backend. The address can contain an IP address (127.0.0.1) or
# a DNS name (www.my-domain.com).
#
# Possible Values:
# * A valid IPv4 or IPv6 address
# * A valid DNS name
#
# Related options:
# * vmware_server_username
# * vmware_server_password
#
# (unknown value)
#vmware_server_host = 127.0.0.1

#
# Server username.
#
# This configuration option takes the username for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
# * Any string that is the username for a user with appropriate
#   privileges
#
# Related options:
# * vmware_server_host
# * vmware_server_password
#
# (string value)
#vmware_server_username = root

#
# Server password.
#
# This configuration option takes the password for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
# * Any string that is a password corresponding to the username
#   specified using the "vmware_server_username" option
#
# Related options:
# * vmware_server_host
# * vmware_server_username
#
# (string value)
#vmware_server_password = vmware

#
# The number of VMware API retries.
#
# This configuration option specifies the number of times the VMware
# ESX/VC server API must be retried upon connection related issues or
# server API call overload. It is not possible to specify 'retry
# forever'.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_api_retry_count = 10

#
# Interval in seconds used for polling remote tasks invoked on VMware
# ESX/VC server.
#
# This configuration option sets the sleep time in seconds for polling an
# on-going async task as part of the VMware ESX/VC server API call.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_task_poll_interval = 5

#
# The directory where the glance images will be stored in the datastore.
#
# This configuration option specifies the path to the directory where the
# glance images will be stored in the VMware datastore. If this option
# is not set, the default directory where the glance images are stored
# is openstack_glance.
#
# Possible Values:
# * Any string that is a valid path to a directory
#
# Related options:
# * None
#
# (string value)
#vmware_store_image_dir = /openstack_glance

#
# Set verification of the ESX/vCenter server certificate.
#
# This configuration option takes a boolean value to determine
# whether or not to verify the ESX/vCenter server certificate. If this
# option is set to True, the ESX/vCenter server certificate is not
# verified. If this option is set to False, then the default CA
# truststore is used for verification.
#
# This option is ignored if the "vmware_ca_file" option is set. In that
# case, the ESX/vCenter server certificate will then be verified using
# the file specified using the "vmware_ca_file" option.
#
# Possible Values:
# * True
# * False
#
# Related options:
# * vmware_ca_file
#
# (boolean value)
# Deprecated group/name - [glance_store]/vmware_api_insecure
#vmware_insecure = false

#
# Absolute path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the ESX/vCenter certificate.
#
# If this option is set, the "vmware_insecure" option will be ignored
# and the CA file specified will be used to authenticate the ESX/vCenter
# server certificate and establish a secure connection to the server.
#
# Possible Values:
# * Any string that is a valid absolute path to a CA file
#
# Related options:
# * vmware_insecure
#
# (string value)
#vmware_ca_file = /etc/ssl/certs/ca-certificates.crt

#
# The datastores where the image can be stored.
#
# This configuration option specifies the datastores where the image can
# be stored in the VMware store backend. This option may be specified
# multiple times for specifying multiple datastores. The datastore name
# should be specified after its datacenter path, separated by ":". An
# optional weight may be given after the datastore name, separated again
# by ":" to specify the priority. Thus, the required format becomes
# <datacenter_path>:<datastore_name>:<optional_weight>.
#
# When adding an image, the datastore with highest weight will be
# selected, unless there is not enough free space available in cases
# where the image size is already known. If no weight is given, it is
# assumed to be zero and the directory will be considered for selection
# last. If multiple datastores have the same weight, then the one with
# the most free space available is selected.
#
# Possible Values:
# * Any string of the format:
#   <datacenter_path>:<datastore_name>:<optional_weight>
#
# Related options:
# * None
#
# (multi valued)
#vmware_datastores =

[oslo_policy]

#
# From oslo.policy
#

# This option controls whether or not to enforce scope when evaluating policies.
# If ``True``, the scope of the token used in the request is compared to the
# ``scope_types`` of the policy being enforced. If the scopes do not match, an
# ``InvalidScope`` exception will be raised. If ``False``, a message will be
# logged informing operators that policies are being invoked with mismatching
# scope. (boolean value)
#enforce_scope = false

# The file that defines policies. (string value)
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
#policy_dirs = policy.d

# Content Type to send and receive data for REST based policy check (string
# value)
# Possible values:
# application/x-www-form-urlencoded -
# application/json -
#remote_content_type = application/x-www-form-urlencoded

# Server identity verification for REST based policy check (boolean value)
#remote_ssl_verify_server_crt = false

# Absolute path to ca cert file for REST based policy check (string value)
#remote_ssl_ca_crt_file =

# Absolute path to client cert for REST based policy check (string value)
#remote_ssl_client_crt_file =

# Absolute path to client key file for REST based policy check (string value)
#remote_ssl_client_key_file =

# property-protections-policies.conf.sample
#
# This file is an example config file for when
# property_protection_rule_format=policies is enabled.
#
# Specify regular expression for which properties will be protected in []
# For each section, specify CRUD permissions. You may refer to policies defined
# in policy.json.
# The property rules will be applied in the order specified. Once
# a match is found the remaining property rules will not be applied.
#
# WARNING:
# * If the reg ex specified below does not compile, then
# the glance-api service fails to start. (Guide for reg ex python compiler
# used:
# http://docs.python.org/2/library/re.html#regular-expression-syntax)
# * If an operation (create, read, update, delete) is not specified or misspelt
# then the glance-api service fails to start.
# So, remember, with GREAT POWER comes GREAT RESPONSIBILITY!
#
# NOTE: Only one policy can be specified per action. If multiple policies are
# specified, then the glance-api service fails to start.

[^x_.*]
create = default
read = default
update = default
delete = default

[.*]
create = context_is_admin
read = context_is_admin
update = context_is_admin
delete = context_is_admin

# Configuration for glance-rootwrap
# This file should be owned by (and only-writable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/glance/rootwrap.d,/usr/share/glance/rootwrap

# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR

# glance-swift.conf.sample
#
# This file is an example config file for when
# multiple swift accounts/backing stores are enabled.
#
# Specify the reference name in []
# For each section, specify the auth_address, user and key.
#
# WARNING:
# * If any of auth_address, user or key is not specified,
# the glance-api's swift store will fail to configure

[ref1]
user = tenant:user1
key = key1
auth_version = 2
auth_address = http://localhost:5000/v2.0

[ref2]
user = project_name:user_name2
key = key2
user_domain_id = default
project_domain_id = default
auth_version = 3
auth_address = http://localhost:5000/v3

# property-protections-roles.conf.sample
#
# This file is an example config file for when
# property_protection_rule_format=roles is enabled.
#
# Specify regular expression for which properties will be protected in []
# For each section, specify CRUD permissions.
# The property rules will be applied in the order specified. Once
# a match is found the remaining property rules will not be applied.
#
# WARNING:
# * If the reg ex specified below does not compile, then
# glance-api service will not start. (Guide for reg ex python compiler used:
# http://docs.python.org/2/library/re.html#regular-expression-syntax)
# * If an operation (create, read, update, delete) is not specified or misspelt
# then the glance-api service will not start.
# So, remember, with GREAT POWER comes GREAT RESPONSIBILITY!
#
# NOTE: Multiple roles can be specified for a given operation. These roles must
# be comma separated.
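# As an additional illustration of the rule format just described (the
# "provider_" property prefix below is hypothetical, not part of the
# shipped sample): a rule that lets only admins create or modify
# provider-prefixed properties, while members may read them, could be
# written as:
#
# [^provider_.*]
# create = admin
# read = admin,member
# update = admin
# delete = admin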
[^x_.*]
create = admin,member,_member_
read = admin,member,_member_
update = admin,member,_member_
delete = admin,member,_member_

[.*]
create = admin
read = admin
update = admin
delete = admin

# glance-manage.conf
[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append =

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file =

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir =

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and Linux
# platform is used.
# This option is ignored if log_config_append is set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false

# Enable journald for logging. If running in a systemd environment you may wish
# to enable journal support. Doing so will use the journal native protocol which
# includes structured metadata in addition to log messages. This option is
# ignored if log_config_append is set. (boolean value)
#use_journal = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Use JSON formatting for logging. This option is ignored if log_config_append
# is set. (boolean value)
#use_json = false

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = false

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string.
(string value) #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s # List of package logging levels in logger=LEVEL pairs. This option is ignored # if log_config_append is set. (list value) #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO # Enables or disables publication of error events. (boolean value) #publish_errors = false # The format for an instance that is passed with the log message. (string value) #instance_format = "[instance: %(uuid)s] " # The format for an instance UUID that is passed with the log message. (string # value) #instance_uuid_format = "[instance: %(uuid)s] " # Interval, number of seconds, of log rate limiting. (integer value) #rate_limit_interval = 0 # Maximum number of logged messages per rate_limit_interval. (integer value) #rate_limit_burst = 0 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or # empty string. Logs with level greater or equal to rate_limit_except_level are # not filtered. An empty string means that all levels are filtered. (string # value) #rate_limit_except_level = CRITICAL # Enables or disables fatal status of deprecations. (boolean value) #fatal_deprecations = false [database] # # From oslo.db # # If True, SQLite uses synchronous mode. (boolean value) #sqlite_synchronous = true # The back end to use for the database. (string value) # Deprecated group/name - [DEFAULT]/db_backend #backend = sqlalchemy # The SQLAlchemy connection string to use to connect to the database. 
(string # value) # Deprecated group/name - [DEFAULT]/sql_connection # Deprecated group/name - [DATABASE]/sql_connection # Deprecated group/name - [sql]/connection #connection = # The SQLAlchemy connection string to use to connect to the slave database. # (string value) #slave_connection = # The SQL mode to be used for MySQL sessions. This option, including the # default, overrides any server-set SQL mode. To use whatever SQL mode is set by # the server configuration, set this to no value. Example: mysql_sql_mode= # (string value) #mysql_sql_mode = TRADITIONAL # If True, transparently enables support for handling MySQL Cluster (NDB). # (boolean value) #mysql_enable_ndb = false # Connections which have been present in the connection pool longer than this # number of seconds will be replaced with a new one the next time they are # checked out from the pool. (integer value) # Deprecated group/name - [DATABASE]/idle_timeout # Deprecated group/name - [database]/idle_timeout # Deprecated group/name - [DEFAULT]/sql_idle_timeout # Deprecated group/name - [DATABASE]/sql_idle_timeout # Deprecated group/name - [sql]/idle_timeout #connection_recycle_time = 3600 # Minimum number of SQL connections to keep open in a pool. (integer value) # Deprecated group/name - [DEFAULT]/sql_min_pool_size # Deprecated group/name - [DATABASE]/sql_min_pool_size #min_pool_size = 1 # Maximum number of SQL connections to keep open in a pool. Setting a value of 0 # indicates no limit. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_pool_size # Deprecated group/name - [DATABASE]/sql_max_pool_size #max_pool_size = 5 # Maximum number of database connection retries during startup. Set to -1 to # specify an infinite retry count. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_retries # Deprecated group/name - [DATABASE]/sql_max_retries #max_retries = 10 # Interval between retries of opening a SQL connection. 
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout =

# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20

#
# From oslo.db.concurrency
#

# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false

# glance-scrubber.conf
[DEFAULT]

#
# From glance.scrubber
#

#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via the
# ``image_property_quota`` configuration option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * image_property_quota
#
# (boolean value)
#allow_additional_image_properties = true

#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with. Any
# negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_member_quota = 128

#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
# * ``allow_additional_image_properties``
#
# (integer value)
#image_property_quota = 128

#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_tag_quota = 128

#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_location_quota = 10

# DEPRECATED:
# Python module path of data access API.
#
# Specifies the path to the API to use for accessing the data model.
# This option determines how the image catalog data will be accessed.
# # Possible values: # * glance.db.sqlalchemy.api # * glance.db.registry.api # * glance.db.simple.api # # If this option is set to ``glance.db.sqlalchemy.api`` then the image # catalog data is stored in and read from the database via the # SQLAlchemy Core and ORM APIs. # # Setting this option to ``glance.db.registry.api`` will force all # database access requests to be routed through the Registry service. # This avoids data access from the Glance API nodes for an added layer # of security, scalability and manageability. # # NOTE: In v2 OpenStack Images API, the registry service is optional. # In order to use the Registry API in v2, the option # ``enable_v2_registry`` must be set to ``True``. # # Finally, when this configuration option is set to # ``glance.db.simple.api``, image catalog data is stored in and read # from an in-memory data structure. This is primarily used for testing. # # Related options: # * enable_v2_api # * enable_v2_registry # # (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #data_api = glance.db.sqlalchemy.api # # The default number of results to return for a request. # # Responses to certain API requests, like list images, may return # multiple items. The number of results returned can be explicitly # controlled by specifying the ``limit`` parameter in the API request. # However, if a ``limit`` parameter is not specified, this # configuration value will be used as the default number of results to # be returned for any API request. # # NOTES: # * The value of this configuration option may not be greater than # the value specified by ``api_limit_max``. # * Setting this to a very large value may slow down database # queries and increase response times. 
Setting this to a # very low value may result in poor user experience. # # Possible values: # * Any positive integer # # Related options: # * api_limit_max # # (integer value) # Minimum value: 1 #limit_param_default = 25 # # Maximum number of results that could be returned by a request. # # As described in the help text of ``limit_param_default``, some # requests may return multiple results. The number of results to be # returned are governed either by the ``limit`` parameter in the # request or the ``limit_param_default`` configuration option. # The value in either case, can't be greater than the absolute maximum # defined by this configuration option. Anything greater than this # value is trimmed down to the maximum value defined here. # # NOTE: Setting this to a very large value may slow down database # queries and increase response times. Setting this to a # very low value may result in poor user experience. # # Possible values: # * Any positive integer # # Related options: # * limit_param_default # # (integer value) # Minimum value: 1 #api_limit_max = 1000 # # Show direct image location when returning an image. # # This configuration option indicates whether to show the direct image # location when returning image details to the user. The direct image # location is where the image data is stored in backend storage. This # image location is shown under the image property ``direct_url``. # # When multiple image locations exist for an image, the best location # is displayed based on the location strategy indicated by the # configuration option ``location_strategy``. # # NOTES: # * Revealing image locations can present a GRAVE SECURITY RISK as # image locations can sometimes include credentials. Hence, this # is set to ``False`` by default. Set this to ``True`` with # EXTREME CAUTION and ONLY IF you know what you are doing! 
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_multiple_locations`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_multiple_locations
# * location_strategy
#
# (boolean value)
#show_image_direct_url = false

# DEPRECATED:
# Show all image locations when returning an image.
#
# This configuration option indicates whether to show all the image
# locations when returning image details to the user. When multiple
# image locations exist for an image, the locations are ordered based
# on the location strategy indicated by the configuration option
# ``location_strategy``. The image locations are shown under the
# image property ``locations``.
#
# NOTES:
# * Revealing image locations can present a GRAVE SECURITY RISK as
# image locations can sometimes include credentials. Hence, this
# is set to ``False`` by default. Set this to ``True`` with
# EXTREME CAUTION and ONLY IF you know what you are doing!
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_image_direct_url`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_image_direct_url
# * location_strategy
#
# (boolean value)
# This option is deprecated for removal since Newton.
# Its value may be silently ignored in the future.
# Reason: This option will be removed in the Pike release or later because the
# same functionality can be achieved with greater granularity by using policies.
# Please see the Newton release notes for more information.
#show_multiple_locations = false

#
# Maximum size of image a user can upload in bytes.
#
# An image upload greater than the size mentioned here would result
# in an image creation failure. This configuration option defaults to
# 1099511627776 bytes (1 TiB).
# # NOTES: # * This value should only be increased after careful # consideration and must be set less than or equal to # 8 EiB (9223372036854775808). # * This value must be set with careful consideration of the # backend storage capacity. Setting this to a very low value # may result in a large number of image failures. And, setting # this to a very large value may result in faster consumption # of storage. Hence, this must be set according to the nature of # images created and storage capacity available. # # Possible values: # * Any positive number less than or equal to 9223372036854775808 # # (integer value) # Minimum value: 1 # Maximum value: 9223372036854775808 #image_size_cap = 1099511627776 # # Maximum amount of image storage per tenant. # # This enforces an upper limit on the cumulative storage consumed by all images # of a tenant across all stores. This is a per-tenant limit. # # The default unit for this configuration option is Bytes. However, storage # units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``, # ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and # TeraBytes respectively. Note that there should not be any space between the # value and unit. Value ``0`` signifies no quota enforcement. Negative values # are invalid and result in errors. # # Possible values: # * A string that is a valid concatenation of a non-negative integer # representing the storage value and an optional string literal # representing storage units as mentioned above. # # Related options: # * None # # (string value) #user_storage_quota = 0 # # Deploy the v1 OpenStack Images API. # # When this option is set to ``True``, Glance service will respond to # requests on registered endpoints conforming to the v1 OpenStack # Images API. # # NOTES: # * If this option is enabled, then ``enable_v1_registry`` must # also be set to ``True`` to enable mandatory usage of Registry # service with v1 API. 
# # * If this option is disabled, then the ``enable_v1_registry`` # option, which is enabled by default, is also recommended # to be disabled. # # * This option is separate from ``enable_v2_api``, both v1 and v2 # OpenStack Images API can be deployed independent of each # other. # # * If deploying only the v2 Images API, this option, which is # enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v1_registry # * enable_v2_api # # (boolean value) #enable_v1_api = true # # Deploy the v2 OpenStack Images API. # # When this option is set to ``True``, Glance service will respond # to requests on registered endpoints conforming to the v2 OpenStack # Images API. # # NOTES: # * If this option is disabled, then the ``enable_v2_registry`` # option, which is enabled by default, is also recommended # to be disabled. # # * This option is separate from ``enable_v1_api``, both v1 and v2 # OpenStack Images API can be deployed independent of each # other. # # * If deploying only the v1 Images API, this option, which is # enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v2_registry # * enable_v1_api # # (boolean value) #enable_v2_api = true # # Deploy the v1 API Registry service. # # When this option is set to ``True``, the Registry service # will be enabled in Glance for v1 API requests. # # NOTES: # * Use of Registry is mandatory in v1 API, so this option must # be set to ``True`` if the ``enable_v1_api`` option is enabled. # # * If deploying only the v2 OpenStack Images API, this option, # which is enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v1_api # # (boolean value) #enable_v1_registry = true # DEPRECATED: # Deploy the v2 API Registry service. # # When this option is set to ``True``, the Registry service # will be enabled in Glance for v2 API requests. 
# # NOTES: # * Use of Registry is optional in v2 API, so this option # must only be enabled if both ``enable_v2_api`` is set to # ``True`` and the ``data_api`` option is set to # ``glance.db.registry.api``. # # * If deploying only the v1 OpenStack Images API, this option, # which is enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v2_api # * data_api # # (boolean value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #enable_v2_registry = true # # Host address of the pydev server. # # Provide a string value representing the hostname or IP of the # pydev server to use for debugging. The pydev server listens for # debug connections on this address, facilitating remote debugging # in Glance. # # Possible values: # * Valid hostname # * Valid IP address # # Related options: # * None # # (unknown value) #pydev_worker_debug_host = localhost # # Port number that the pydev server will listen on. # # Provide a port number to bind the pydev server to. The pydev # process accepts debug connections on this port and facilitates # remote debugging in Glance. # # Possible values: # * A valid port number # # Related options: # * None # # (port value) # Minimum value: 0 # Maximum value: 65535 #pydev_worker_debug_port = 5678 # # AES key for encrypting store location metadata. # # Provide a string value representing the AES cipher to use for # encrypting Glance store metadata. # # NOTE: The AES key to use must be set to a random string of length # 16, 24 or 32 bytes. 
#
# Possible values:
# * String value representing a valid AES key
#
# Related options:
# * None
#
# (string value)
#metadata_encryption_key =

#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
# * An OpenSSL message digest algorithm identifier
#
# Related options:
# * None
#
# (string value)
#digest_algorithm = sha256

#
# The URL that provides the location where temporary data will be stored.
#
# This option is for Glance internal use only. Glance will save the
# image data uploaded by the user to the 'staging' endpoint during the
# image import process.
#
# This option does not change the 'staging' API endpoint by any means.
#
# NOTE: It is discouraged to use the same path as [task]/work_dir
#
# NOTE: 'file://' is the only option the api_image_import flow will
# support for now.
#
# NOTE: The staging path must be on a shared filesystem available to
# all Glance API nodes.
#
# Possible values:
# * String starting with 'file://' followed by an absolute FS path
#
# Related options:
# * [task]/work_dir
# * [DEFAULT]/enable_image_import (*deprecated*)
#
# (string value)
#node_staging_uri = file:///tmp/staging/

# DEPRECATED:
# Enables the Image Import workflow introduced in Pike.
#
# As '[DEFAULT]/node_staging_uri' is required for Image Import, this
# option is disabled by default in Pike, enabled by default in Queens,
# and removed in Rocky.
This allows Glance to # operate with previous version configs upon upgrade. # # Setting this option to False will disable the endpoints related # to Image Import Refactoring work. # # Related options: # * [DEFAULT]/node_staging_uri (boolean value) # This option is deprecated for removal since Pike. # Its value may be silently ignored in the future. # Reason: # This option is deprecated for removal in Rocky. # # It was introduced to make sure that the API is not enabled # before the '[DEFAULT]/node_staging_uri' is defined and is # long term redundant. #enable_image_import = true # # List of enabled Image Import Methods # # Both 'glance-direct' and 'web-download' are enabled by default. # # Related options: # * [DEFAULT]/node_staging_uri # * [DEFAULT]/enable_image_import (list value) #enabled_import_methods = glance-direct,web-download # # The amount of time, in seconds, to delay image scrubbing. # # When delayed delete is turned on, an image is put into ``pending_delete`` # state upon deletion until the scrubber deletes its image data. Typically, soon # after the image is put into ``pending_delete`` state, it is available for # scrubbing. However, scrubbing can be delayed until a later point using this # configuration option. This option denotes the time period an image spends in # ``pending_delete`` state before it is available for scrubbing. # # It is important to realize that this has storage implications. The larger the # ``scrub_time``, the longer the time to reclaim backend storage from deleted # images. # # Possible values: # * Any non-negative integer # # Related options: # * ``delayed_delete`` # # (integer value) # Minimum value: 0 #scrub_time = 0 # # The size of thread pool to be used for scrubbing images. # # When there are a large number of images to scrub, it is beneficial to scrub # images in parallel so that the scrub queue stays in control and the backend # storage is reclaimed in a timely fashion. 
This configuration option denotes # the maximum number of images to be scrubbed in parallel. The default value is # one, which signifies serial scrubbing. Any value above one indicates parallel # scrubbing. # # Possible values: # * Any non-zero positive integer # # Related options: # * ``delayed_delete`` # # (integer value) # Minimum value: 1 #scrub_pool_size = 1 # # Turn on/off delayed delete. # # Typically when an image is deleted, the ``glance-api`` service puts the image # into ``deleted`` state and deletes its data at the same time. Delayed delete # is a feature in Glance that delays the actual deletion of image data until a # later point in time (as determined by the configuration option # ``scrub_time``). # When delayed delete is turned on, the ``glance-api`` service puts the image # into ``pending_delete`` state upon deletion and leaves the image data in the # storage backend for the image scrubber to delete at a later time. The image # scrubber will move the image into ``deleted`` state upon successful deletion # of image data. # # NOTE: When delayed delete is turned on, image scrubber MUST be running as a # periodic task to prevent the backend storage from filling up with undesired # usage. # # Possible values: # * True # * False # # Related options: # * ``scrub_time`` # * ``wakeup_time`` # * ``scrub_pool_size`` # # (boolean value) #delayed_delete = false # # Time interval, in seconds, between scrubber runs in daemon mode. # # Scrubber can be run either as a cron job or daemon. When run as a daemon, this # configuration time specifies the time period between two runs. When the # scrubber wakes up, it fetches and scrubs all ``pending_delete`` images that # are available for scrubbing after taking ``scrub_time`` into consideration. # # If the wakeup time is set to a large number, there may be a large number of # images to be scrubbed for each run. Also, this impacts how quickly the backend # storage is reclaimed. 
# # Possible values: # * Any non-negative integer # # Related options: # * ``daemon`` # * ``delayed_delete`` # # (integer value) # Minimum value: 0 #wakeup_time = 300 # # Run scrubber as a daemon. # # This boolean configuration option indicates whether scrubber should # run as a long-running process that wakes up at regular intervals to # scrub images. The wake up interval can be specified using the # configuration option ``wakeup_time``. # # If this configuration option is set to ``False``, which is the # default value, scrubber runs once to scrub images and exits. In this # case, if the operator wishes to implement continuous scrubbing of # images, scrubber needs to be scheduled as a cron job. # # Possible values: # * True # * False # # Related options: # * ``wakeup_time`` # # (boolean value) #daemon = false # # From oslo.log # # If set to true, the logging level will be set to DEBUG instead of the default # INFO level. (boolean value) # Note: This option can be changed without restarting. #debug = false # The name of a logging configuration file. This file is appended to any # existing logging configuration files. For details about logging configuration # files, see the Python logging module documentation. Note that when logging # configuration files are used then all logging configuration is set in the # configuration file and other logging configuration options are ignored (for # example, logging_context_format_string). (string value) # Note: This option can be changed without restarting. # Deprecated group/name - [DEFAULT]/log_config #log_config_append = # Defines the format string for %%(asctime)s in log records. Default: # %(default)s . This option is ignored if log_config_append is set. (string # value) #log_date_format = %Y-%m-%d %H:%M:%S # (Optional) Name of log file to send logging output to. If no default is set, # logging will go to stderr as defined by use_stderr. This option is ignored if # log_config_append is set. 
(string value) # Deprecated group/name - [DEFAULT]/logfile #log_file = # (Optional) The base directory used for relative log_file paths. This option # is ignored if log_config_append is set. (string value) # Deprecated group/name - [DEFAULT]/logdir #log_dir = # Uses logging handler designed to watch file system. When log file is moved or # removed this handler will open a new log file with specified path # instantaneously. It makes sense only if log_file option is specified and Linux # platform is used. This option is ignored if log_config_append is set. (boolean # value) #watch_log_file = false # Use syslog for logging. Existing syslog format is DEPRECATED and will be # changed later to honor RFC5424. This option is ignored if log_config_append is # set. (boolean value) #use_syslog = false # Enable journald for logging. If running in a systemd environment you may wish # to enable journal support. Doing so will use the journal native protocol which # includes structured metadata in addition to log messages.This option is # ignored if log_config_append is set. (boolean value) #use_journal = false # Syslog facility to receive log lines. This option is ignored if # log_config_append is set. (string value) #syslog_log_facility = LOG_USER # Use JSON formatting for logging. This option is ignored if log_config_append # is set. (boolean value) #use_json = false # Log output to standard error. This option is ignored if log_config_append is # set. (boolean value) #use_stderr = false # Format string to use for log messages with context. (string value) #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s # Format string to use for log messages when context is undefined. 
(string # value) #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s # Additional data to append to log message when logging level for the message is # DEBUG. (string value) #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d # Prefix each line of exception output with this format. (string value) #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s # Defines the format string for %(user_identity)s that is used in # logging_context_format_string. (string value) #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s # List of package logging levels in logger=LEVEL pairs. This option is ignored # if log_config_append is set. (list value) #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO # Enables or disables publication of error events. (boolean value) #publish_errors = false # The format for an instance that is passed with the log message. (string value) #instance_format = "[instance: %(uuid)s] " # The format for an instance UUID that is passed with the log message. (string # value) #instance_uuid_format = "[instance: %(uuid)s] " # Interval, number of seconds, of log rate limiting. (integer value) #rate_limit_interval = 0 # Maximum number of logged messages per rate_limit_interval. (integer value) #rate_limit_burst = 0 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or # empty string. Logs with level greater or equal to rate_limit_except_level are # not filtered. 
An empty string means that all levels are filtered. (string # value) #rate_limit_except_level = CRITICAL # Enables or disables fatal status of deprecations. (boolean value) #fatal_deprecations = false [database] # # From oslo.db # # If True, SQLite uses synchronous mode. (boolean value) #sqlite_synchronous = true # The back end to use for the database. (string value) # Deprecated group/name - [DEFAULT]/db_backend #backend = sqlalchemy # The SQLAlchemy connection string to use to connect to the database. (string # value) # Deprecated group/name - [DEFAULT]/sql_connection # Deprecated group/name - [DATABASE]/sql_connection # Deprecated group/name - [sql]/connection #connection = # The SQLAlchemy connection string to use to connect to the slave database. # (string value) #slave_connection = # The SQL mode to be used for MySQL sessions. This option, including the # default, overrides any server-set SQL mode. To use whatever SQL mode is set by # the server configuration, set this to no value. Example: mysql_sql_mode= # (string value) #mysql_sql_mode = TRADITIONAL # If True, transparently enables support for handling MySQL Cluster (NDB). # (boolean value) #mysql_enable_ndb = false # Connections which have been present in the connection pool longer than this # number of seconds will be replaced with a new one the next time they are # checked out from the pool. (integer value) # Deprecated group/name - [DATABASE]/idle_timeout # Deprecated group/name - [database]/idle_timeout # Deprecated group/name - [DEFAULT]/sql_idle_timeout # Deprecated group/name - [DATABASE]/sql_idle_timeout # Deprecated group/name - [sql]/idle_timeout #connection_recycle_time = 3600 # Minimum number of SQL connections to keep open in a pool. (integer value) # Deprecated group/name - [DEFAULT]/sql_min_pool_size # Deprecated group/name - [DATABASE]/sql_min_pool_size #min_pool_size = 1 # Maximum number of SQL connections to keep open in a pool. Setting a value of 0 # indicates no limit. 
(integer value) # Deprecated group/name - [DEFAULT]/sql_max_pool_size # Deprecated group/name - [DATABASE]/sql_max_pool_size #max_pool_size = 5 # Maximum number of database connection retries during startup. Set to -1 to # specify an infinite retry count. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_retries # Deprecated group/name - [DATABASE]/sql_max_retries #max_retries = 10 # Interval between retries of opening a SQL connection. (integer value) # Deprecated group/name - [DEFAULT]/sql_retry_interval # Deprecated group/name - [DATABASE]/reconnect_interval #retry_interval = 10 # If set, use this value for max_overflow with SQLAlchemy. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_overflow # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow #max_overflow = 50 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer # value) # Minimum value: 0 # Maximum value: 100 # Deprecated group/name - [DEFAULT]/sql_connection_debug #connection_debug = 0 # Add Python stack traces to SQL as comment strings. (boolean value) # Deprecated group/name - [DEFAULT]/sql_connection_trace #connection_trace = false # If set, use this value for pool_timeout with SQLAlchemy. (integer value) # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout #pool_timeout = # Enable the experimental use of database reconnect on connection lost. (boolean # value) #use_db_reconnect = false # Seconds between retries of a database transaction. (integer value) #db_retry_interval = 1 # If True, increases the interval between retries of a database operation up to # db_max_retry_interval. (boolean value) #db_inc_retry_interval = true # If db_inc_retry_interval is set, the maximum seconds between retries of a # database operation. (integer value) #db_max_retry_interval = 10 # Maximum retries in case of connection error or deadlock error before error is # raised. Set to -1 to specify an infinite retry count. 
(integer value) #db_max_retries = 20 # # From oslo.db.concurrency # # Enable the experimental use of thread pooling for all DB API calls (boolean # value) # Deprecated group/name - [DEFAULT]/dbapi_use_tpool #use_tpool = false [glance_store] # # From glance.store # # # List of enabled Glance stores. # # Register the storage backends to use for storing disk images # as a comma separated list. The default stores enabled for # storing disk images with Glance are ``file`` and ``http``. # # Possible values: # * A comma separated list that could include: # * file # * http # * swift # * rbd # * sheepdog # * cinder # * vmware # # Related Options: # * default_store # # (list value) #stores = file,http # # The default scheme to use for storing images. # # Provide a string value representing the default scheme to use for # storing images. If not set, Glance uses ``file`` as the default # scheme to store images with the ``file`` store. # # NOTE: The value given for this configuration option must be a valid # scheme for a store registered with the ``stores`` configuration # option. # # Possible values: # * file # * filesystem # * http # * https # * swift # * swift+http # * swift+https # * swift+config # * rbd # * sheepdog # * cinder # * vsphere # # Related Options: # * stores # # (string value) # Possible values: # file - # filesystem - # http - # https - # swift - # swift+http - # swift+https - # swift+config - # rbd - # sheepdog - # cinder - # vsphere - #default_store = file # # Minimum interval in seconds to execute updating dynamic storage # capabilities based on current backend status. # # Provide an integer value representing time in seconds to set the # minimum interval before an update of dynamic storage capabilities # for a storage backend can be attempted. Setting # ``store_capabilities_update_min_interval`` does not mean updates # occur periodically based on the set interval. 
Rather, the update
# is performed after this interval has elapsed, if an operation
# of the store is triggered.
#
# By default, this option is set to zero and is disabled. Provide an
# integer value greater than zero to enable this option.
#
# NOTE: For more information on store capabilities and their updates,
# please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo
# /store-capabilities.html
#
# For more information on setting up a particular store in your
# deployment and help with the usage of this feature, please contact
# the storage driver maintainers listed here:
# http://docs.openstack.org/developer/glance_store/drivers/index.html
#
# Possible values:
# * Zero
# * Positive integer
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#store_capabilities_update_min_interval = 0

#
# Information to match when looking for cinder in the service catalog.
#
# When the ``cinder_endpoint_template`` is not set and any of
# ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, ``cinder_store_password`` is not set,
# the cinder store uses this information to look up the cinder endpoint
# from the service catalog in the current context.
# ``cinder_os_region_name``, if set, is taken into consideration to
# fetch the appropriate endpoint.
#
# The service catalog can be listed by the ``openstack catalog list`` command.
#
# Possible values:
# * A string of the following form:
#   ``<service_type>:<service_name>:<interface>``
#   At least ``service_type`` and ``interface`` should be specified.
#   ``service_name`` can be omitted.
#
# Related options:
# * cinder_os_region_name
# * cinder_endpoint_template
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
# * cinder_store_password
#
# (string value)
#cinder_catalog_info = volumev2::publicURL

#
# Override service catalog lookup with template for cinder endpoint.
# # When this option is set, this value is used to generate cinder endpoint, # instead of looking up from the service catalog. # This value is ignored if ``cinder_store_auth_address``, # ``cinder_store_user_name``, ``cinder_store_project_name``, and # ``cinder_store_password`` are specified. # # If this configuration option is set, ``cinder_catalog_info`` will be ignored. # # Possible values: # * URL template string for cinder endpoint, where ``%%(tenant)s`` is # replaced with the current tenant (project) name. # For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s`` # # Related options: # * cinder_store_auth_address # * cinder_store_user_name # * cinder_store_project_name # * cinder_store_password # * cinder_catalog_info # # (string value) #cinder_endpoint_template = # # Region name to lookup cinder service from the service catalog. # # This is used only when ``cinder_catalog_info`` is used for determining the # endpoint. If set, the lookup for cinder endpoint by this node is filtered to # the specified region. It is useful when multiple regions are listed in the # catalog. If this is not set, the endpoint is looked up from every region. # # Possible values: # * A string that is a valid region name. # # Related options: # * cinder_catalog_info # # (string value) # Deprecated group/name - [glance_store]/os_region_name #cinder_os_region_name = # # Location of a CA certificates file used for cinder client requests. # # The specified CA certificates file, if set, is used to verify cinder # connections via HTTPS endpoint. If the endpoint is HTTP, this value is # ignored. # ``cinder_api_insecure`` must be set to ``True`` to enable the verification. # # Possible values: # * Path to a ca certificates file # # Related options: # * cinder_api_insecure # # (string value) #cinder_ca_certificates_file = # # Number of cinderclient retries on failed http calls. 
#
# When a call fails with any error, cinderclient will retry the call
# up to the specified number of times, sleeping a few seconds between
# attempts.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_http_retries = 3

#
# Time period, in seconds, to wait for a cinder volume transition to
# complete.
#
# When the cinder volume is created, deleted, or attached to the glance node to
# read/write the volume data, the volume's state is changed. For example, the
# newly created volume status changes from ``creating`` to ``available`` after
# the creation process is completed. This specifies the maximum time to wait for
# the status change. If a timeout occurs while waiting, or the status is changed
# to an unexpected value (e.g. ``error``), the image creation fails.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_state_transition_timeout = 300

#
# Allow to perform insecure SSL requests to cinder.
#
# If this option is set to True, HTTPS endpoint connection is verified using the
# CA certificates file specified by ``cinder_ca_certificates_file`` option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * cinder_ca_certificates_file
#
# (boolean value)
#cinder_api_insecure = false

#
# The address where the cinder authentication service is listening.
#
# When all of ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, and ``cinder_store_password`` options are
# specified, the specified values are always used for the authentication.
# This is useful to hide the image volumes from users by storing them in a
# project/tenant specific to the image service. It also enables users to share
# the image volume among other projects under the control of glance's ACL.
# # If either of these options are not set, the cinder endpoint is looked up # from the service catalog, and current context's user and project are used. # # Possible values: # * A valid authentication service address, for example: # ``http://openstack.example.org/identity/v2.0`` # # Related options: # * cinder_store_user_name # * cinder_store_password # * cinder_store_project_name # # (string value) #cinder_store_auth_address = # # User name to authenticate against cinder. # # This must be used with all the following related options. If any of these are # not specified, the user of the current context is used. # # Possible values: # * A valid user name # # Related options: # * cinder_store_auth_address # * cinder_store_password # * cinder_store_project_name # # (string value) #cinder_store_user_name = # # Password for the user authenticating against cinder. # # This must be used with all the following related options. If any of these are # not specified, the user of the current context is used. # # Possible values: # * A valid password for the user specified by ``cinder_store_user_name`` # # Related options: # * cinder_store_auth_address # * cinder_store_user_name # * cinder_store_project_name # # (string value) #cinder_store_password = # # Project name where the image volume is stored in cinder. # # If this configuration option is not set, the project in current context is # used. # # This must be used with all the following related options. If any of these are # not specified, the project of the current context is used. # # Possible values: # * A valid project name # # Related options: # * ``cinder_store_auth_address`` # * ``cinder_store_user_name`` # * ``cinder_store_password`` # # (string value) #cinder_store_project_name = # # Path to the rootwrap configuration file to use for running commands as root. # # The cinder store requires root privileges to operate the image volumes (for # connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). 
# The configuration file should allow the required commands by cinder store and # os-brick library. # # Possible values: # * Path to the rootwrap config file # # Related options: # * None # # (string value) #rootwrap_config = /etc/glance/rootwrap.conf # # Volume type that will be used for volume creation in cinder. # # Some cinder backends can have several volume types to optimize storage usage. # Adding this option allows an operator to choose a specific volume type # in cinder that can be optimized for images. # # If this is not set, then the default volume type specified in the cinder # configuration will be used for volume creation. # # Possible values: # * A valid volume type from cinder # # Related options: # * None # # (string value) #cinder_volume_type = # # Directory to which the filesystem backend store writes images. # # Upon start up, Glance creates the directory if it doesn't already # exist and verifies write access to the user under which # ``glance-api`` runs. If the write access isn't available, a # ``BadStoreConfiguration`` exception is raised and the filesystem # store may not be available for adding new images. # # NOTE: This directory is used only when filesystem store is used as a # storage backend. Either ``filesystem_store_datadir`` or # ``filesystem_store_datadirs`` option must be specified in # ``glance-api.conf``. If both options are specified, a # ``BadStoreConfiguration`` will be raised and the filesystem store # may not be available for adding new images. # # Possible values: # * A valid path to a directory # # Related options: # * ``filesystem_store_datadirs`` # * ``filesystem_store_file_perm`` # # (string value) #filesystem_store_datadir = /var/lib/glance/images # # List of directories and their priorities to which the filesystem # backend store writes images. 
#
# The filesystem store can be configured to store images in multiple
# directories as opposed to using a single directory specified by the
# ``filesystem_store_datadir`` configuration option. When using
# multiple directories, each directory can be given an optional
# priority to specify the preference order in which they should
# be used. Priority is an integer that is concatenated to the
# directory path with a colon where a higher value indicates higher
# priority. When two directories have the same priority, the directory
# with most free space is used. When no priority is specified, it
# defaults to zero.
#
# More information on configuring filesystem store with multiple store
# directories can be found at
# http://docs.openstack.org/developer/glance/configuring.html
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
# * List of strings of the following form:
#   * ``<a valid directory path>:<optional integer priority>``
#
# Related options:
# * ``filesystem_store_datadir``
# * ``filesystem_store_file_perm``
#
# (multi valued)
#filesystem_store_datadirs =

#
# Filesystem store metadata file.
#
# The path to a file which contains the metadata to be returned with
# any location associated with the filesystem store. The file must
# contain a valid JSON object. The object should contain the keys
# ``id`` and ``mountpoint``. The value for both keys should be a
# string.
#
# Possible values:
# * A valid path to the store metadata file
#
# Related options:
# * None
#
# (string value)
#filesystem_store_metadata_file =

#
# File access permissions for the image files.
#
# Set the intended file access permissions for image data. This provides
# a way to enable other services, e.g. Nova, to consume images directly
# from the filesystem store. The users running the services that are
# intended to be given access could be made members of the group that
# owns the files created. Assigning a value less than or equal to zero
# for this configuration option signifies that no changes will be made
# to the default permissions. This value will be decoded as an octal
# digit.
#
# For more information, please refer to the documentation at
# http://docs.openstack.org/developer/glance/configuring.html
#
# Possible values:
# * A valid file access permission
# * Zero
# * Any negative integer
#
# Related options:
# * None
#
# (integer value)
#filesystem_store_file_perm = 0

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the remote server certificate. If
# this option is set, the ``https_insecure`` option will be ignored and
# the CA file specified will be used to authenticate the server
# certificate and establish a secure connection to the server.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * https_insecure
#
# (string value)
#https_ca_certificates_file =

#
# Set verification of the remote server certificate.
#
# This configuration option takes in a boolean value to determine
# whether or not to verify the remote server certificate. If set to
# True, the remote server certificate is not verified. If the option is
# set to False, then the default CA truststore is used for verification.
#
# This option is ignored if ``https_ca_certificates_file`` is set.
# The remote server certificate will then be verified using the file
# specified using the ``https_ca_certificates_file`` option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * https_ca_certificates_file
#
# (boolean value)
#https_insecure = true

#
# The http/https proxy information to be used to connect to the remote
# server.
# # This configuration option specifies the http/https proxy information # that should be used to connect to the remote server. The proxy # information should be a key value pair of the scheme and proxy, for # example, http:10.0.0.1:3128. You can also specify proxies for multiple # schemes by separating the key value pairs with a comma, for example, # http:10.0.0.1:3128, https:10.0.0.1:1080. # # Possible values: # * A comma separated list of scheme:proxy pairs as described above # # Related options: # * None # # (dict value) #http_proxy_information = # # Size, in megabytes, to chunk RADOS images into. # # Provide an integer value representing the size in megabytes to chunk # Glance images into. The default chunk size is 8 megabytes. For optimal # performance, the value should be a power of two. # # When Ceph's RBD object storage system is used as the storage backend # for storing Glance images, the images are chunked into objects of the # size set using this option. These chunked objects are then stored # across the distributed block data store to use for Glance. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #rbd_store_chunk_size = 8 # # RADOS pool in which images are stored. # # When RBD is used as the storage backend for storing Glance images, the # images are stored by means of logical grouping of the objects (chunks # of images) into a ``pool``. Each pool is defined with the number of # placement groups it can contain. The default pool that is used is # 'images'. # # More information on the RBD storage backend can be found here: # http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ # # Possible Values: # * A valid pool name # # Related options: # * None # # (string value) #rbd_store_pool = images # # RADOS user to authenticate as. # # This configuration option takes in the RADOS user to authenticate as. 
# This is only needed when RADOS authentication is enabled and is # applicable only if the user is using Cephx authentication. If the # value for this option is not set by the user or is set to None, a # default value will be chosen, which will be based on the client. # section in rbd_store_ceph_conf. # # Possible Values: # * A valid RADOS user # # Related options: # * rbd_store_ceph_conf # # (string value) #rbd_store_user = # # Ceph configuration file path. # # This configuration option takes in the path to the Ceph configuration # file to be used. If the value for this option is not set by the user # or is set to None, librados will locate the default configuration file # which is located at /etc/ceph/ceph.conf. If using Cephx # authentication, this file should include a reference to the right # keyring in a client. section # # Possible Values: # * A valid path to a configuration file # # Related options: # * rbd_store_user # # (string value) #rbd_store_ceph_conf = /etc/ceph/ceph.conf # # Timeout value for connecting to Ceph cluster. # # This configuration option takes in the timeout value in seconds used # when connecting to the Ceph cluster i.e. it sets the time to wait for # glance-api before closing the connection. This prevents glance-api # hangups during the connection to RBD. If the value for this option # is set to less than or equal to 0, no timeout is set and the default # librados value is used. # # Possible Values: # * Any integer value # # Related options: # * None # # (integer value) #rados_connect_timeout = 0 # # Chunk size for images to be stored in Sheepdog data store. # # Provide an integer value representing the size in mebibyte # (1048576 bytes) to chunk Glance images into. The default # chunk size is 64 mebibytes. # # When using Sheepdog distributed storage system, the images are # chunked into objects of this size and then stored across the # distributed data store to use for Glance. 
# # Chunk sizes, if a power of two, help avoid fragmentation and # enable improved performance. # # Possible values: # * Positive integer value representing size in mebibytes. # # Related Options: # * None # # (integer value) # Minimum value: 1 #sheepdog_store_chunk_size = 64 # # Port number on which the sheep daemon will listen. # # Provide an integer value representing a valid port number on # which you want the Sheepdog daemon to listen on. The default # port is 7000. # # The Sheepdog daemon, also called 'sheep', manages the storage # in the distributed cluster by writing objects across the storage # network. It identifies and acts on the messages it receives on # the port number set using ``sheepdog_store_port`` option to store # chunks of Glance images. # # Possible values: # * A valid port number (0 to 65535) # # Related Options: # * sheepdog_store_address # # (port value) # Minimum value: 0 # Maximum value: 65535 #sheepdog_store_port = 7000 # # Address to bind the Sheepdog daemon to. # # Provide a string value representing the address to bind the # Sheepdog daemon to. The default address set for the 'sheep' # is 127.0.0.1. # # The Sheepdog daemon, also called 'sheep', manages the storage # in the distributed cluster by writing objects across the storage # network. It identifies and acts on the messages directed to the # address set using ``sheepdog_store_address`` option to store # chunks of Glance images. # # Possible values: # * A valid IPv4 address # * A valid IPv6 address # * A valid hostname # # Related Options: # * sheepdog_store_port # # (unknown value) #sheepdog_store_address = 127.0.0.1 # # Set verification of the server certificate. # # This boolean determines whether or not to verify the server # certificate. If this option is set to True, swiftclient won't check # for a valid SSL certificate when authenticating. If the option is set # to False, then the default CA truststore is used for verification. 
# # Possible values: # * True # * False # # Related options: # * swift_store_cacert # # (boolean value) #swift_store_auth_insecure = false # # Path to the CA bundle file. # # This configuration option enables the operator to specify the path to # a custom Certificate Authority file for SSL verification when # connecting to Swift. # # Possible values: # * A valid path to a CA file # # Related options: # * swift_store_auth_insecure # # (string value) #swift_store_cacert = /etc/ssl/certs/ca-certificates.crt # # The region of Swift endpoint to use by Glance. # # Provide a string value representing a Swift region where Glance # can connect to for image storage. By default, there is no region # set. # # When Glance uses Swift as the storage backend to store images # for a specific tenant that has multiple endpoints, setting of a # Swift region with ``swift_store_region`` allows Glance to connect # to Swift in the specified region as opposed to a single region # connectivity. # # This option can be configured for both single-tenant and # multi-tenant storage. # # NOTE: Setting the region with ``swift_store_region`` is # tenant-specific and is necessary ``only if`` the tenant has # multiple endpoints across different regions. # # Possible values: # * A string value representing a valid Swift region. # # Related Options: # * None # # (string value) #swift_store_region = RegionTwo # # The URL endpoint to use for Swift backend storage. # # Provide a string value representing the URL endpoint to use for # storing Glance images in Swift store. By default, an endpoint # is not set and the storage URL returned by ``auth`` is used. # Setting an endpoint with ``swift_store_endpoint`` overrides the # storage URL and is used for Glance image storage. # # NOTE: The URL should include the path up to, but excluding the # container. The location of an object is obtained by appending # the container and object to the configured URL. 
# # Possible values: # * String value representing a valid URL path up to a Swift container # # Related Options: # * None # # (string value) #swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name # # Endpoint Type of Swift service. # # This string value indicates the endpoint type to use to fetch the # Swift endpoint. The endpoint type determines the actions the user will # be allowed to perform, for instance, reading and writing to the Store. # This setting is only used if swift_store_auth_version is greater than # 1. # # Possible values: # * publicURL # * adminURL # * internalURL # # Related options: # * swift_store_endpoint # # (string value) # Possible values: # publicURL - # adminURL - # internalURL - #swift_store_endpoint_type = publicURL # # Type of Swift service to use. # # Provide a string value representing the service type to use for # storing images while using Swift backend storage. The default # service type is set to ``object-store``. # # NOTE: If ``swift_store_auth_version`` is set to 2, the value for # this configuration option needs to be ``object-store``. If using # a higher version of Keystone or a different auth scheme, this # option may be modified. # # Possible values: # * A string representing a valid service type for Swift storage. # # Related Options: # * None # # (string value) #swift_store_service_type = object-store # # Name of single container to store images/name prefix for multiple containers # # When a single container is being used to store images, this configuration # option indicates the container within the Glance account to be used for # storing all images. When multiple containers are used to store images, this # will be the name prefix for all containers. Usage of single/multiple # containers can be controlled using the configuration option # ``swift_store_multiple_containers_seed``. 
# # When using multiple containers, the containers will be named after the value # set for this configuration option with the first N chars of the image UUID # as the suffix delimited by an underscore (where N is specified by # ``swift_store_multiple_containers_seed``). # # Example: if the seed is set to 3 and swift_store_container = ``glance``, then # an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in # the container ``glance_fda``. All dashes in the UUID are included when # creating the container name but do not count toward the character limit, so # when N=10 the container name would be ``glance_fdae39a1-ba.`` # # Possible values: # * If using single container, this configuration option can be any string # that is a valid swift container name in Glance's Swift account # * If using multiple containers, this configuration option can be any # string as long as it satisfies the container naming rules enforced by # Swift. The value of ``swift_store_multiple_containers_seed`` should be # taken into account as well. # # Related options: # * ``swift_store_multiple_containers_seed`` # * ``swift_store_multi_tenant`` # * ``swift_store_create_container_on_put`` # # (string value) #swift_store_container = glance # # The size threshold, in MB, after which Glance will start segmenting image # data. # # Swift has an upper limit on the size of a single uploaded object. By default, # this is 5GB. To upload objects bigger than this limit, objects are segmented # into multiple smaller objects that are tied together with a manifest file. # For more detail, refer to # http://docs.openstack.org/developer/swift/overview_large_objects.html # # This configuration option specifies the size threshold over which the Swift # driver will start segmenting image data into multiple smaller files. # Currently, the Swift driver only supports creating Dynamic Large Objects. 
# # NOTE: This should be set by taking into account the large object limit # enforced by the Swift cluster in consideration. # # Possible values: # * A positive integer that is less than or equal to the large object limit # enforced by the Swift cluster in consideration. # # Related options: # * ``swift_store_large_object_chunk_size`` # # (integer value) # Minimum value: 1 #swift_store_large_object_size = 5120 # # The maximum size, in MB, of the segments when image data is segmented. # # When image data is segmented to upload images that are larger than the limit # enforced by the Swift cluster, image data is broken into segments that are no # bigger than the size specified by this configuration option. # Refer to ``swift_store_large_object_size`` for more detail. # # For example: if ``swift_store_large_object_size`` is 5GB and # ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be # segmented into 7 segments where the first six segments will be 1GB in size and # the seventh segment will be 0.2GB. # # Possible values: # * A positive integer that is less than or equal to the large object limit # enforced by Swift cluster in consideration. # # Related options: # * ``swift_store_large_object_size`` # # (integer value) # Minimum value: 1 #swift_store_large_object_chunk_size = 200 # # Create container, if it doesn't already exist, when uploading image. # # At the time of uploading an image, if the corresponding container doesn't # exist, it will be created provided this configuration option is set to True. # By default, it won't be created. This behavior is applicable for both single # and multiple containers mode. # # Possible values: # * True # * False # # Related options: # * None # # (boolean value) #swift_store_create_container_on_put = false # # Store images in tenant's Swift account. # # This enables multi-tenant storage mode which causes Glance images to be stored # in tenant specific Swift accounts. 
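The segmentation arithmetic just described for ``swift_store_large_object_size`` and ``swift_store_large_object_chunk_size`` can be sketched as follows (sizes in MB; an illustrative helper, not the Swift driver's code):

```python
def plan_segments(image_size_mb, chunk_size_mb):
    # Break the image into segments no larger than the chunk size;
    # e.g. a 6200 MB image with 1000 MB chunks yields six full
    # segments plus one 200 MB remainder: seven segments in total,
    # matching the 6.2 GB example in the option description above.
    full, remainder = divmod(image_size_mb, chunk_size_mb)
    segments = [chunk_size_mb] * full
    if remainder:
        segments.append(remainder)
    return segments
```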
If this is disabled, Glance stores all # images in its own account. More details about the multi-tenant store can be found at # https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage # # NOTE: If using multi-tenant swift store, please make sure # that you do not set a swift configuration file with the # 'swift_store_config_file' option. # # Possible values: # * True # * False # # Related options: # * swift_store_config_file # # (boolean value) #swift_store_multi_tenant = false # # Seed indicating the number of containers to use for storing images. # # When using a single-tenant store, images can be stored in one or more # containers. When set to 0, all images will be stored in a single container. # When set to an integer value between 1 and 32, multiple containers will be # used to store images. This configuration option will determine how many # containers are created. The total number of containers that will be used is # equal to 16^N, so if this config option is set to 2, then 16^2=256 containers # will be used to store images. # # Please refer to ``swift_store_container`` for more detail on the naming # convention. More detail about using multiple containers can be found at # https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store- # multiple-containers.html # # NOTE: This is used only when swift_store_multi_tenant is disabled. # # Possible values: # * A non-negative integer less than or equal to 32 # # Related options: # * ``swift_store_container`` # * ``swift_store_multi_tenant`` # * ``swift_store_create_container_on_put`` # # (integer value) # Minimum value: 0 # Maximum value: 32 #swift_store_multiple_containers_seed = 0 # # List of tenants that will be granted admin access. # # This is a list of tenants that will be granted read/write access on # all Swift containers created by Glance in multi-tenant mode. The # default value is an empty list.
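The container-naming convention described above for multiple containers (``swift_store_container`` as prefix, the first N UUID characters as suffix, where dashes are kept but do not count toward N) can be sketched as a hypothetical helper (not Glance's implementation):

```python
def container_for_image(image_uuid, prefix="glance", seed=3):
    # Consume UUID characters until `seed` non-dash characters have
    # been taken; dashes are copied through but not counted, per the
    # naming rule described for swift_store_container above.
    suffix_chars = []
    counted = 0
    for ch in image_uuid:
        if counted == seed:
            break
        suffix_chars.append(ch)
        if ch != "-":
            counted += 1
    return "%s_%s" % (prefix, "".join(suffix_chars))
```

With seed 3 and the UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` this reproduces the documented container name ``glance_fda``, and with seed 10 it reproduces ``glance_fdae39a1-ba``.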
# # Possible values: # * A comma separated list of strings representing UUIDs of Keystone # projects/tenants # # Related options: # * None # # (list value) #swift_store_admin_tenants = # # SSL layer compression for HTTPS Swift requests. # # Provide a boolean value to determine whether or not to compress # HTTPS Swift requests for images at the SSL layer. By default, # compression is enabled. # # When using Swift as the backend store for Glance image storage, # SSL layer compression of HTTPS Swift requests can be set using # this option. If set to False, SSL layer compression of HTTPS # Swift requests is disabled. Disabling this option may improve # performance for images which are already in a compressed format, # for example, qcow2. # # Possible values: # * True # * False # # Related Options: # * None # # (boolean value) #swift_store_ssl_compression = true # # The number of times a Swift download will be retried before the # request fails. # # Provide an integer value representing the number of times an image # download must be retried before erroring out. The default value is # zero (no retry on a failed image download). When set to a positive # integer value, ``swift_store_retry_get_count`` ensures that the # download is attempted this many more times upon a download failure # before sending an error message. # # Possible values: # * Zero # * Positive integer value # # Related Options: # * None # # (integer value) # Minimum value: 0 #swift_store_retry_get_count = 0 # # Time in seconds defining the size of the window in which a new # token may be requested before the current token is due to expire. # # Typically, the Swift storage driver fetches a new token upon the # expiration of the current token to ensure continued access to # Swift. However, some Swift transactions (like uploading image # segments) may not recover well if the token expires on the fly. 
# # Hence, by fetching a new token before the current token expiration, # we make sure that the token does not expire or is close to expiry # before a transaction is attempted. By default, the Swift storage # driver requests for a new token 60 seconds or less before the # current token expiration. # # Possible values: # * Zero # * Positive integer value # # Related Options: # * None # # (integer value) # Minimum value: 0 #swift_store_expire_soon_interval = 60 # # Use trusts for multi-tenant Swift store. # # This option instructs the Swift store to create a trust for each # add/get request when the multi-tenant store is in use. Using trusts # allows the Swift store to avoid problems that can be caused by an # authentication token expiring during the upload or download of data. # # By default, ``swift_store_use_trusts`` is set to ``True``(use of # trusts is enabled). If set to ``False``, a user token is used for # the Swift connection instead, eliminating the overhead of trust # creation. # # NOTE: This option is considered only when # ``swift_store_multi_tenant`` is set to ``True`` # # Possible values: # * True # * False # # Related options: # * swift_store_multi_tenant # # (boolean value) #swift_store_use_trusts = true # # Buffer image segments before upload to Swift. # # Provide a boolean value to indicate whether or not Glance should # buffer image data to disk while uploading to swift. This enables # Glance to resume uploads on error. # # NOTES: # When enabling this option, one should take great care as this # increases disk usage on the API node. Be aware that depending # upon how the file system is configured, the disk space used # for buffering may decrease the actual disk space available for # the glance image cache. 
Disk utilization will cap according to # the following equation: # (``swift_store_large_object_chunk_size`` * ``workers`` * 1000) # # Possible values: # * True # * False # # Related options: # * swift_upload_buffer_dir # # (boolean value) #swift_buffer_on_upload = false # # Reference to default Swift account/backing store parameters. # # Provide a string value representing a reference to the default set # of parameters required for using swift account/backing store for # image storage. The default reference value for this configuration # option is 'ref1'. This configuration option dereferences the # parameters and facilitates image storage in Swift storage backend # every time a new image is added. # # Possible values: # * A valid string value # # Related options: # * None # # (string value) #default_swift_reference = ref1 # DEPRECATED: Version of the authentication service to use. Valid versions are 2 # and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'auth_version' in the Swift back-end configuration file is # used instead. #swift_store_auth_version = 2 # DEPRECATED: The address where the Swift authentication service is listening. # (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'auth_address' in the Swift back-end configuration file is # used instead. #swift_store_auth_address = # DEPRECATED: The user to authenticate against the Swift authentication service. # (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'user' in the Swift back-end configuration file is set instead. #swift_store_user = # DEPRECATED: Auth key for the user authenticating against the Swift # authentication service. (string value) # This option is deprecated for removal. 
# Its value may be silently ignored in the future. # Reason: # The option 'key' in the Swift back-end configuration file is used # to set the authentication key instead. #swift_store_key = # # Absolute path to the file containing the swift account(s) # configurations. # # Include a string value representing the path to a configuration # file that has references for each of the configured Swift # account(s)/backing stores. By default, no file path is specified # and customized Swift referencing is disabled. Configuring this # option is highly recommended while using Swift storage backend for # image storage as it avoids storage of credentials in the database. # # NOTE: Please do not configure this option if you have set # ``swift_store_multi_tenant`` to ``True``. # # Possible values: # * String value representing an absolute path on the glance-api # node # # Related options: # * swift_store_multi_tenant # # (string value) #swift_store_config_file = # # Directory to buffer image segments before upload to Swift. # # Provide a string value representing the absolute path to the # directory on the glance node where image segments will be # buffered briefly before they are uploaded to swift. # # NOTES: # * This is required only when the configuration option # ``swift_buffer_on_upload`` is set to True. # * This directory should be provisioned keeping in mind the # ``swift_store_large_object_chunk_size`` and the maximum # number of images that could be uploaded simultaneously by # a given glance node. # # Possible values: # * String value representing an absolute directory path # # Related options: # * swift_buffer_on_upload # * swift_store_large_object_chunk_size # # (string value) #swift_upload_buffer_dir = # # Address of the ESX/ESXi or vCenter Server target system. # # This configuration option sets the address of the ESX/ESXi or vCenter # Server target system. This option is required when using the VMware # storage backend. 
The address can contain an IP address (127.0.0.1) or # a DNS name (www.my-domain.com). # # Possible Values: # * A valid IPv4 or IPv6 address # * A valid DNS name # # Related options: # * vmware_server_username # * vmware_server_password # # (unknown value) #vmware_server_host = 127.0.0.1 # # Server username. # # This configuration option takes the username for authenticating with # the VMware ESX/ESXi or vCenter Server. This option is required when # using the VMware storage backend. # # Possible Values: # * Any string that is the username for a user with appropriate # privileges # # Related options: # * vmware_server_host # * vmware_server_password # # (string value) #vmware_server_username = root # # Server password. # # This configuration option takes the password for authenticating with # the VMware ESX/ESXi or vCenter Server. This option is required when # using the VMware storage backend. # # Possible Values: # * Any string that is a password corresponding to the username # specified using the "vmware_server_username" option # # Related options: # * vmware_server_host # * vmware_server_username # # (string value) #vmware_server_password = vmware # # The number of VMware API retries. # # This configuration option specifies the number of times the VMware # ESX/VC server API must be retried upon connection related issues or # server API call overload. It is not possible to specify 'retry # forever'. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #vmware_api_retry_count = 10 # # Interval in seconds used for polling remote tasks invoked on VMware # ESX/VC server. # # This configuration option takes in the sleep time in seconds for polling an # on-going async task as part of the VMWare ESX/VC server API call. 
# # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #vmware_task_poll_interval = 5 # # The directory where the glance images will be stored in the datastore. # # This configuration option specifies the path to the directory where the # glance images will be stored in the VMware datastore. If this option # is not set, the default directory where the glance images are stored # is openstack_glance. # # Possible Values: # * Any string that is a valid path to a directory # # Related options: # * None # # (string value) #vmware_store_image_dir = /openstack_glance # # Set verification of the ESX/vCenter server certificate. # # This configuration option takes a boolean value to determine # whether or not to verify the ESX/vCenter server certificate. If this # option is set to True, the ESX/vCenter server certificate is not # verified. If this option is set to False, then the default CA # truststore is used for verification. # # This option is ignored if the "vmware_ca_file" option is set. In that # case, the ESX/vCenter server certificate will then be verified using # the file specified using the "vmware_ca_file" option. # # Possible Values: # * True # * False # # Related options: # * vmware_ca_file # # (boolean value) # Deprecated group/name - [glance_store]/vmware_api_insecure #vmware_insecure = false # # Absolute path to the CA bundle file. # # This configuration option enables the operator to use a custom # Certificate Authority file to verify the ESX/vCenter certificate. # # If this option is set, the "vmware_insecure" option will be ignored # and the CA file specified will be used to authenticate the ESX/vCenter # server certificate and establish a secure connection to the server.
# # Possible Values: # * Any string that is a valid absolute path to a CA file # # Related options: # * vmware_insecure # # (string value) #vmware_ca_file = /etc/ssl/certs/ca-certificates.crt # # The datastores where the image can be stored. # # This configuration option specifies the datastores where the image can # be stored in the VMware store backend. This option may be specified # multiple times for specifying multiple datastores. The datastore name # should be specified after its datacenter path, separated by ":". An # optional weight may be given after the datastore name, separated again # by ":" to specify the priority. Thus, the required format becomes # <datacenter_path>:<datastore_name>:<optional_weight>. # # When adding an image, the datastore with the highest weight will be # selected, unless there is not enough free space available in cases # where the image size is already known. If no weight is given, it is # assumed to be zero and the directory will be considered for selection # last. If multiple datastores have the same weight, then the one with # the most free space available is selected. # # Possible Values: # * Any string of the format: # <datacenter_path>:<datastore_name>:<optional_weight> # # Related options: # * None # # (multi valued) #vmware_datastores = [oslo_concurrency] # # From oslo.concurrency # # Enables or disables inter-process locks. (boolean value) #disable_process_locking = false # Directory to use for lock files. For security, the specified directory should # only be writable by the user running the processes that need locking. Defaults # to environment variable OSLO_LOCK_PATH. If external locks are used, a lock # path must be set. (string value) #lock_path = [oslo_policy] # # From oslo.policy # # This option controls whether or not to enforce scope when evaluating policies. # If ``True``, the scope of the token used in the request is compared to the # ``scope_types`` of the policy being enforced. If the scopes do not match, an # ``InvalidScope`` exception will be raised. If ``False``, a message will be
If ``False``, a message will be # logged informing operators that policies are being invoked with mismatching # scope. (boolean value) #enforce_scope = false # The file that defines policies. (string value) #policy_file = policy.json # Default rule. Enforced when a requested rule is not found. (string value) #policy_default_rule = default # Directories where policy configuration files are stored. They can be relative # to any directory in the search path defined by the config_dir option, or # absolute paths. The file defined by policy_file must exist for these # directories to be searched. Missing or empty directories are ignored. (multi # valued) #policy_dirs = policy.d # Content Type to send and receive data for REST based policy check (string # value) # Possible values: # application/x-www-form-urlencoded - # application/json - #remote_content_type = application/x-www-form-urlencoded # server identity verification for REST based policy check (boolean value) #remote_ssl_verify_server_crt = false # Absolute path to ca cert file for REST based policy check (string value) #remote_ssl_ca_crt_file = # Absolute path to client cert for REST based policy check (string value) #remote_ssl_client_crt_file = # Absolute path client key file REST based policy check (string value) #remote_ssl_client_key_file = glance-16.0.0/etc/schema-image.json0000666000175100017510000000253013245511421017041 0ustar zuulzuul00000000000000{ "kernel_id": { "type": ["null", "string"], "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "description": "ID of image stored in Glance that should be used as the kernel when booting an AMI-style image." }, "ramdisk_id": { "type": ["null", "string"], "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "description": "ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image." 
}, "instance_uuid": { "type": "string", "description": "Metadata which can be used to record which instance this image is associated with. (Informational only, does not create an instance snapshot.)" }, "architecture": { "description": "Operating system architecture as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html", "type": "string" }, "os_distro": { "description": "Common name of operating system distribution as specified in https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html", "type": "string" }, "os_version": { "description": "Operating system version as specified by the distributor", "type": "string" } } glance-16.0.0/etc/metadefs/0000775000175100017510000000000013245511661015422 5ustar zuulzuul00000000000000glance-16.0.0/etc/metadefs/compute-vmware.json0000666000175100017510000001765513245511421021302 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::VMware", "display_name": "VMware Driver Options", "description": "The VMware compute driver options. \n\nThese are properties specific to VMWare compute drivers and will only have an effect if the VMWare compute driver is enabled in Nova. For a list of all hypervisors, see here: https://wiki.openstack.org/wiki/HypervisorSupportMatrix.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" } ], "properties": { "img_linked_clone":{ "title": "Linked Clone", "description": "By default, the VMware compute driver creates linked clones when possible (though this can be turned off by the operator). You can use this image property on a per-image basis to control whether virtual machines booted from the image are treated as full clones (value: false) or linked clones (value: true). Please refer to VMware documentation for information about full vs. 
linked clones.", "type": "boolean" }, "vmware_adaptertype": { "title": "Disk Adapter Type", "description": "The virtual SCSI or IDE controller used by the hypervisor.", "type": "string", "enum": [ "lsiLogic", "lsiLogicsas", "paraVirtual", "busLogic", "ide" ], "default" : "lsiLogic" }, "vmware_disktype": { "title": "Disk Provisioning Type", "description": "When performing operations such as creating a virtual disk, cloning, or migrating, the disk provisioning type may be specified. Please refer to VMware documentation for more.", "type": "string", "enum": [ "streamOptimized", "sparse", "preallocated" ], "default" : "preallocated" }, "vmware_ostype": { "title": "OS Type", "description": "A VMware GuestID which describes the operating system installed in the image. This value is passed to the hypervisor when creating a virtual machine. If not specified, the key defaults to otherGuest. See thinkvirt.com.", "type": "string", "enum": [ "asianux3_64Guest", "asianux3Guest", "asianux4_64Guest", "asianux4Guest", "asianux5_64Guest", "asianux7_64Guest", "centos64Guest", "centosGuest", "centos6Guest", "centos6_64Guest", "centos7_64Guest", "coreos64Guest", "darwin10_64Guest", "darwin10Guest", "darwin11_64Guest", "darwin11Guest", "darwin12_64Guest", "darwin13_64Guest", "darwin14_64Guest", "darwin15_64Guest", "darwin16_64Guest", "darwin64Guest", "darwinGuest", "debian4_64Guest", "debian4Guest", "debian5_64Guest", "debian5Guest", "debian6_64Guest", "debian6Guest", "debian7_64Guest", "debian7Guest", "debian8_64Guest", "debian8Guest", "debian9_64Guest", "debian9Guest", "debian10_64Guest", "debian10Guest", "dosGuest", "eComStation2Guest", "eComStationGuest", "fedora64Guest", "fedoraGuest", "freebsd64Guest", "freebsdGuest", "genericLinuxGuest", "mandrakeGuest", "mandriva64Guest", "mandrivaGuest", "netware4Guest", "netware5Guest", "netware6Guest", "nld9Guest", "oesGuest", "openServer5Guest", "openServer6Guest", "opensuse64Guest", "opensuseGuest", "oracleLinux64Guest", 
"oracleLinuxGuest", "oracleLinux6Guest", "oracleLinux6_64Guest", "oracleLinux7_64Guest", "os2Guest", "other24xLinux64Guest", "other24xLinuxGuest", "other26xLinux64Guest", "other26xLinuxGuest", "other3xLinux64Guest", "other3xLinuxGuest", "otherGuest", "otherGuest64", "otherLinux64Guest", "otherLinuxGuest", "redhatGuest", "rhel2Guest", "rhel3_64Guest", "rhel3Guest", "rhel4_64Guest", "rhel4Guest", "rhel5_64Guest", "rhel5Guest", "rhel6_64Guest", "rhel6Guest", "rhel7_64Guest", "rhel7Guest", "sjdsGuest", "sles10_64Guest", "sles10Guest", "sles11_64Guest", "sles11Guest", "sles12_64Guest", "sles12Guest", "sles64Guest", "slesGuest", "solaris10_64Guest", "solaris10Guest", "solaris11_64Guest", "solaris6Guest", "solaris7Guest", "solaris8Guest", "solaris9Guest", "turboLinux64Guest", "turboLinuxGuest", "ubuntu64Guest", "ubuntuGuest", "unixWare7Guest", "vmkernel5Guest", "vmkernel6Guest", "vmkernel65Guest", "vmkernelGuest", "vmwarePhoton64Guest", "win2000AdvServGuest", "win2000ProGuest", "win2000ServGuest", "win31Guest", "win95Guest", "win98Guest", "windows7_64Guest", "windows7Guest", "windows7Server64Guest", "windows8_64Guest", "windows8Guest", "windows8Server64Guest", "windows9_64Guest", "windows9Guest", "windows9Server64Guest", "windowsHyperVGuest", "winLonghorn64Guest", "winLonghornGuest", "winMeGuest", "winNetBusinessGuest", "winNetDatacenter64Guest", "winNetDatacenterGuest", "winNetEnterprise64Guest", "winNetEnterpriseGuest", "winNetStandard64Guest", "winNetStandardGuest", "winNetWebGuest", "winNTGuest", "winVista64Guest", "winVistaGuest", "winXPHomeGuest", "winXPPro64Guest", "winXPProGuest" ], "default": "otherGuest" }, "hw_vif_model": { "title": "Virtual Network Interface", "description": "Specifies the model of virtual network interface device to use. The valid options depend on the hypervisor. 
VMware driver supported options: e1000, e1000e, VirtualE1000, VirtualE1000e, VirtualPCNet32, VirtualSriovEthernetCard, and VirtualVmxnet.", "type": "string", "enum": [ "e1000", "e1000e", "VirtualE1000", "VirtualE1000e", "VirtualPCNet32", "VirtualSriovEthernetCard", "VirtualVmxnet", "VirtualVmxnet3" ], "default" : "e1000" } }, "objects": [] } glance-16.0.0/etc/metadefs/compute-hypervisor.json0000666000175100017510000000426513245511421022204 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::Hypervisor", "display_name": "Hypervisor Selection", "description": "OpenStack Compute supports many hypervisors, although most installations use only one hypervisor. For installations with multiple supported hypervisors, you can schedule different hypervisors using the ImagePropertiesFilter. This filters compute nodes that satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" } ], "properties": { "hypervisor_type": { "title": "Hypervisor Type", "description": "Hypervisor type required by the image. Used with the ImagePropertiesFilter. \n\n KVM - Kernel-based Virtual Machine. LXC - Linux Containers (through libvirt). QEMU - Quick EMUlator. UML - User Mode Linux. hyperv - Microsoft® hyperv. vmware - VMware® vsphere. Baremetal - physical provisioning. VZ - Virtuozzo OS Containers and Virtual Machines (through libvirt). For more information, see: http://docs.openstack.org/trunk/config-reference/content/section_compute-hypervisors.html", "type": "string", "enum": [ "baremetal", "hyperv", "kvm", "lxc", "qemu", "uml", "vmware", "vz", "xen" ] }, "vm_mode": { "title": "VM Mode", "description": "The virtual machine mode. This represents the host/guest ABI (application binary interface) used for the virtual machine. Used with the ImagePropertiesFilter. 
\n\n hvm — Fully virtualized - This is the virtual machine mode (vm_mode) used by QEMU and KVM. \n\n xen - Xen 3.0 paravirtualized. \n\n uml — User Mode Linux paravirtualized. \n\n exe — Executables in containers. This is the mode used by LXC.", "type": "string", "enum": [ "hvm", "xen", "uml", "exe" ] } }, "objects": [] } glance-16.0.0/etc/metadefs/operating-system.json0000666000175100017510000000244213245511421021625 0ustar zuulzuul00000000000000{ "display_name": "Common Operating System Properties", "namespace": "OS::OperatingSystem", "description": "Details of the operating system contained within this image as well as common operating system properties that can be set on a VM instance created from this image.", "protected": true, "resource_type_associations" : [ { "name": "OS::Glance::Image" }, { "name": "OS::Cinder::Volume", "properties_target": "image" } ], "properties": { "os_distro": { "title": "OS Distro", "description": "The common name of the operating system distribution in lowercase (uses the same data vocabulary as the libosinfo project). Specify only a recognized value for this field. Deprecated values are listed to assist you in searching for the recognized value.", "type": "string" }, "os_version": { "title": "OS Version", "description": "Operating system version as specified by the distributor. (for example, '11.10')", "type": "string" }, "os_admin_user": { "title": "OS Admin User", "description": "The name of the user with admin privileges.", "type": "string" } } } glance-16.0.0/etc/metadefs/compute-aggr-iops-filter.json0000666000175100017510000000202313245511421023133 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::AggregateIoOpsFilter", "display_name": "IO Ops per Host", "description": "Properties related to the Nova scheduler filter AggregateIoOpsFilter. Filters aggregate hosts based on the number of instances currently changing state. Hosts in the aggregate with too many instances changing state will be filtered out. 
The filter must be enabled in the Nova scheduler to use these properties.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Aggregate" } ], "properties": { "max_io_ops_per_host": { "title": "Maximum IO Operations per Host", "description": "Prevents hosts in the aggregate that have this many or more instances currently in build, resize, snapshot, migrate, rescue or unshelve from being scheduled for new instances.", "type": "integer", "readonly": false, "default": 8, "minimum": 1 } }, "objects": [] } glance-16.0.0/etc/metadefs/compute-randomgen.json0000666000175100017510000000202613245511421021735 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::RandomNumberGenerator", "display_name": "Random Number Generator", "description": "If a random-number generator device has been added to the instance through its image properties, the device can be enabled and configured.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Flavor" } ], "properties": { "hw_rng:allowed": { "title": "Random Number Generator Allowed", "description": "", "type": "boolean" }, "hw_rng:rate_bytes": { "title": "Random number generator limits.", "description": "Allowed number of bytes that the guest can read from the host's entropy per period.", "type": "integer" }, "hw_rng:rate_period": { "title": "Random number generator read period.", "description": "Duration of the read period in seconds.", "type": "integer" } } }glance-16.0.0/etc/metadefs/cim-resource-allocation-setting-data.json0000666000175100017510000001561313245511421025421 0ustar zuulzuul00000000000000{ "namespace": "CIM::ResourceAllocationSettingData", "display_name": "CIM Resource Allocation Setting Data", "description": "Properties from Common Information Model (CIM) schema (http://www.dmtf.org/standards/cim) that represent settings specifically related to an allocated resource that are outside the scope of the CIM class typically used to
represent the resource itself. These properties may be specified to volume, host aggregate and flavor. For each property details, please refer to http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Cinder::Volume", "prefix": "CIM_RASD_", "properties_target": "image" }, { "name": "OS::Nova::Aggregate", "prefix": "CIM_RASD_" }, { "name": "OS::Nova::Flavor", "prefix": "CIM_RASD_" } ], "properties": { "Address": { "title": "Address", "description": "The address of the resource.", "type": "string" }, "AddressOnParent": { "title": "Address On Parent", "description": "Describes the address of this resource in the context of the Parent.", "type": "string" }, "AllocationUnits": { "title": "Allocation Units", "description": "This property specifies the units of allocation used by the Reservation and Limit properties.", "type": "string" }, "AutomaticAllocation": { "title": "Automatic Allocation", "description": "This property specifies if the resource will be automatically allocated.", "type": "boolean" }, "AutomaticDeallocation": { "title": "Automatic Deallocation", "description": "This property specifies if the resource will be automatically de-allocated.", "type": "boolean" }, "ConsumerVisibility": { "title": "Consumer Visibility", "description": "Describes the consumers visibility to the allocated resource.", "operators": [""], "type": "string", "enum": [ "Unknown", "Passed-Through", "Virtualized", "Not represented", "DMTF reserved", "Vendor Reserved" ] }, "Limit": { "title": "Limit", "description": "This property specifies the upper bound, or maximum amount of resource that will be granted for this allocation.", "type": "string" }, "MappingBehavior": { "title": "Mapping Behavior", "description": "Specifies how this resource maps to underlying resources. 
If the HostResource array contains any entries, this property reflects how the resource maps to those specific resources.", "operators": [""], "type": "string", "enum": [ "Unknown", "Not Supported", "Dedicated", "Soft Affinity", "Hard Affinity", "DMTF Reserved", "Vendor Reserved" ] }, "OtherResourceType": { "title": "Other Resource Type", "description": "A string that describes the resource type when a well defined value is not available and ResourceType has the value 'Other'.", "type": "string" }, "Parent": { "title": "Parent", "description": "The Parent of the resource.", "type": "string" }, "PoolID": { "title": "Pool ID", "description": "This property specifies which ResourcePool the resource is currently allocated from, or which ResourcePool the resource will be allocated from when the allocation occurs.", "type": "string" }, "Reservation": { "title": "Reservation", "description": "This property specifies the amount of resource guaranteed to be available for this allocation.", "type": "string" }, "ResourceSubType": { "title": "Resource Sub Type", "description": "A string describing an implementation specific sub-type for this resource.", "type": "string" }, "ResourceType": { "title": "Resource Type", "description": "The type of resource this allocation setting represents.", "operators": [""], "type": "string", "enum": [ "Other", "Computer System", "Processor", "Memory", "IDE Controller", "Parallel SCSI HBA", "FC HBA", "iSCSI HBA", "IB HCA", "Ethernet Adapter", "Other Network Adapter", "I/O Slot", "I/O Device", "Floppy Drive", "CD Drive", "DVD drive", "Disk Drive", "Tape Drive", "Storage Extent", "Other storage device", "Serial port", "Parallel port", "USB Controller", "Graphics controller", "IEEE 1394 Controller", "Partitionable Unit", "Base Partitionable Unit", "Power", "Cooling Capacity", "Ethernet Switch Port", "Logical Disk", "Storage Volume", "Ethernet Connection", "DMTF reserved", "Vendor Reserved" ] }, "VirtualQuantity": { "title": "Virtual Quantity", 
"description": "This property specifies the quantity of resources presented to the consumer.", "type": "string" }, "VirtualQuantityUnits": { "title": "Virtual Quantity Units", "description": "This property specifies the units used by the VirtualQuantity property.", "type": "string" }, "Weight": { "title": "Weight", "description": "This property specifies a relative priority for this allocation in relation to other allocations from the same ResourcePool.", "type": "string" }, "Connection": { "title": "Connection", "description": "The thing to which this resource is connected.", "type": "string" }, "HostResource": { "title": "Host Resource", "description": "This property exposes specific assignment of resources.", "type": "string" } }, "objects": [] } glance-16.0.0/etc/metadefs/storage-volume-type.json0000666000175100017510000000211413245511421022237 0ustar zuulzuul00000000000000{ "namespace": "OS::Cinder::Volumetype", "display_name": "Cinder Volume Type", "description": "The Cinder volume type configuration option. Volume type assignment provides a mechanism not only to provide scheduling to a specific storage back-end, but also can be used to specify specific information for a back-end storage device to act upon.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image", "prefix": "cinder_" } ], "properties": { "img_volume_type": { "title": "Image Volume Type", "description": "Specifies the volume type that should be applied during new volume creation with a image. This value is passed to Cinder when creating a new volume. Priority of volume type related parameters are 1.volume_type(via API or CLI), 2.cinder_img_volume_type, 3.default_volume_type(via cinder.conf). 
If not specified, volume_type or default_volume_type will be referred based on their priority.", "type": "string" } } } glance-16.0.0/etc/metadefs/compute-libvirt-image.json0000666000175100017510000001221113245511421022513 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::LibvirtImage", "display_name": "libvirt Driver Options for Images", "description": "The libvirt Compute Driver Options for Glance Images. \n\nThese are properties specific to compute drivers. For a list of all hypervisors, see here: https://wiki.openstack.org/wiki/HypervisorSupportMatrix.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" } ], "properties": { "hw_disk_bus": { "title": "Disk Bus", "description": "Specifies the type of disk controller to attach disk devices to.", "type": "string", "enum": [ "scsi", "virtio", "uml", "xen", "ide", "usb", "fdc", "sata" ] }, "hw_rng_model": { "title": "Random Number Generator Device", "description": "Adds a random-number generator device to the image's instances. The cloud administrator can enable and control device behavior by configuring the instance's flavor. By default: The generator device is disabled. /dev/random is used as the default entropy source. To specify a physical HW RNG device, use the following option in the nova.conf file: rng_dev_path=/dev/hwrng", "type": "string", "default": "virtio" }, "hw_machine_type": { "title": "Machine Type", "description": "Enables booting an ARM system using the specified machine type. By default, if an ARM image is used and its type is not specified, Compute uses vexpress-a15 (for ARMv7) or virt (for AArch64) machine types. 
Valid types can be viewed by using the virsh capabilities command (machine types are displayed in the machine tag).", "type": "string" }, "hw_scsi_model": { "title": "SCSI Model", "description": "Enables the use of VirtIO SCSI (virtio-scsi) to provide block device access for compute instances; by default, instances use VirtIO Block (virtio-blk). VirtIO SCSI is a para-virtualized SCSI controller device that provides improved scalability and performance, and supports advanced SCSI hardware.", "type": "string", "default": "virtio-scsi" }, "hw_video_model": { "title": "Video Model", "description": "The video image driver used.", "type": "string", "enum": [ "vga", "cirrus", "vmvga", "xen", "qxl" ] }, "hw_video_ram": { "title": "Max Video Ram", "description": "Maximum RAM (unit: MB) for the video image. Used only if a hw_video:ram_max_mb value has been set in the flavor's extra_specs and that value is higher than the value set in hw_video_ram.", "type": "integer", "minimum": 0 }, "os_command_line": { "title": "Kernel Command Line", "description": "The kernel command line to be used by the libvirt driver, instead of the default. For linux containers (LXC), the value is used as arguments for initialization. This key is valid only for Amazon kernel, ramdisk, or machine images (aki, ari, or ami).", "type": "string" }, "hw_vif_model": { "title": "Virtual Network Interface", "description": "Specifies the model of virtual network interface device to use. The valid options depend on the hypervisor configuration. libvirt driver options: KVM and QEMU: e1000, ne2k_pci, pcnet, rtl8139, spapr-vlan, and virtio. Xen: e1000, netfront, ne2k_pci, pcnet, and rtl8139.", "type": "string", "enum": [ "e1000", "e1000e", "ne2k_pci", "netfront", "pcnet", "rtl8139", "spapr-vlan", "virtio" ] }, "hw_qemu_guest_agent": { "title": "QEMU Guest Agent", "description": "This is a background process which helps management applications execute guest OS level commands. 
For example, freezing and thawing filesystems, entering suspend. However, guest agent (GA) is not bullet proof, and hostile guest OS can send spurious replies.", "type": "string", "enum": ["yes", "no"] }, "hw_pointer_model": { "title": "Pointer Model", "description": "Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement. Currently only supported by the KVM/QEMU hypervisor configuration and VNC or SPICE consoles must be enabled.", "type": "string", "enum": ["usbtablet"] }, "img_hide_hypervisor_id": { "title": "Hide hypervisor id", "description": "Enables hiding the host hypervisor signature in the guest OS.", "type": "string", "enum": ["yes", "no"] } }, "objects": [] } glance-16.0.0/etc/metadefs/compute-cpu-pinning.json0000666000175100017510000000241713245511421022216 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::CPUPinning", "display_name": "CPU Pinning", "description": "This provides the preferred CPU pinning and CPU thread pinning policy to be used when pinning vCPU of the guest to pCPU of the host. 
See http://docs.openstack.org/admin-guide/compute-numa-cpu-pinning.html", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image", "prefix": "hw_" }, { "name": "OS::Cinder::Volume", "prefix": "hw_", "properties_target": "image" }, { "name": "OS::Nova::Flavor", "prefix": "hw:" } ], "properties": { "cpu_policy": { "title": "CPU Pinning policy", "description": "Type of CPU pinning policy.", "type": "string", "enum": [ "shared", "dedicated" ] }, "cpu_thread_policy": { "title": "CPU Thread Pinning Policy.", "description": "Type of CPU thread pinning policy.", "type": "string", "enum": [ "isolate", "prefer", "require" ] } } } glance-16.0.0/etc/metadefs/compute-vmware-flavor.json0000666000175100017510000000335513245511421022561 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::VMwareFlavor", "display_name": "VMware Driver Options for Flavors", "description": "VMware Driver Options for Flavors may be used to customize and manage Nova Flavors. These are properties specific to VMWare compute drivers and will only have an effect if the VMWare compute driver is enabled in Nova. See: http://docs.openstack.org/admin-guide/compute-flavors.html", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Flavor" } ], "properties": { "vmware:hw_version": { "title": "VMware Hardware Version", "description": "Specifies the hardware version VMware uses to create images. If the hardware version needs to be compatible with a cluster version, for backward compatibility or other circumstances, the vmware:hw_version key specifies a virtual machine hardware version. 
In the event that a cluster has mixed host version types, the key will enable the vCenter to place the cluster on the correct host.", "type": "string", "enum": [ "vmx-13", "vmx-11", "vmx-10", "vmx-09", "vmx-08", "vmx-07", "vmx-04", "vmx-03" ] }, "vmware:storage_policy": { "title": "VMware Storage Policy", "description": "Specifies the storage policy to be applied for newly created instance. If not provided, the default storage policy specified in config file will be used. If Storage Policy Based Management (SPBM) is not enabled in config file, this value won't be used.", "type": "string" } } } glance-16.0.0/etc/metadefs/cim-virtual-system-setting-data.json0000666000175100017510000001216713245511421024460 0ustar zuulzuul00000000000000{ "namespace": "CIM::VirtualSystemSettingData", "display_name": "CIM Virtual System Setting Data", "description": "A set of virtualization specific properties from Common Information Model (CIM) schema (http://www.dmtf.org/standards/cim), which define the virtual aspects of a virtual system. These properties may be specified to host aggregate and flavor. 
For each property details, please refer to http://schemas.dmtf.org/wbem/cim-html/2/CIM_VirtualSystemSettingData.html.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Aggregate", "prefix": "CIM_VSSD_" }, { "name": "OS::Nova::Flavor", "prefix": "CIM_VSSD_" } ], "properties": { "AutomaticRecoveryAction": { "title": "Automatic Recovery Action", "description": "Action to take for the virtual system when the software executed by the virtual system fails.", "operators": [""], "type": "string", "enum": [ "None", "Restart", "Revert to snapshot", "DMTF Reserved" ] }, "AutomaticShutdownAction": { "title": "Automatic Shutdown Action", "description": "Action to take for the virtual system when the host is shut down.", "operators": [""], "type": "string", "enum": [ "Turn Off", "Save state", "Shutdown", "DMTF Reserved" ] }, "AutomaticStartupAction": { "title": "Automatic Startup Action", "description": "Action to take for the virtual system when the host is started.", "operators": [""], "type": "string", "enum": [ "None", "Restart if previously active", "Always startup", "DMTF Reserved" ] }, "AutomaticStartupActionDelay": { "title": "Automatic Startup Action Delay", "description": "Delay applicable to startup action.", "type": "string" }, "AutomaticStartupActionSequenceNumber": { "title": "Automatic Startup Action Sequence Number", "description": "Number indicating the relative sequence of virtual system activation when the host system is started.", "type": "string" }, "ConfigurationDataRoot": { "title": "Configuration Data Root", "description": "Filepath of a directory where information about the virtual system configuration is stored.", "type": "string" }, "ConfigurationFile": { "title": "Configuration File", "description": "Filepath of a file where information about the virtual system configuration is stored.", "type": "string" }, "ConfigurationID": { "title": "Configuration ID", "description": "Unique id of the virtual system 
"configuration.", "type": "string" }, "CreationTime": { "title": "Creation Time", "description": "Time when the virtual system configuration was created.", "type": "string" }, "LogDataRoot": { "title": "Log Data Root", "description": "Filepath of a directory where log information about the virtual system is stored.", "type": "string" }, "RecoveryFile": { "title": "Recovery File", "description": "Filepath of a file where recovery related information of the virtual system is stored.", "type": "string" }, "SnapshotDataRoot": { "title": "Snapshot Data Root", "description": "Filepath of a directory where information about virtual system snapshots is stored.", "type": "string" }, "SuspendDataRoot": { "title": "Suspend Data Root", "description": "Filepath of a directory where suspend related information about the virtual system is stored.", "type": "string" }, "SwapFileDataRoot": { "title": "Swap File Data Root", "description": "Filepath of a directory where swapfiles of the virtual system are stored.", "type": "string" }, "VirtualSystemIdentifier": { "title": "Virtual System Identifier", "description": "VirtualSystemIdentifier shall reflect a unique name for the system as it is used within the virtualization platform.", "type": "string" }, "VirtualSystemType": { "title": "Virtual System Type", "description": "VirtualSystemType shall reflect a particular type of virtual system.", "type": "string" }, "Notes": { "title": "Notes", "description": "End-user supplied notes that are related to the virtual system.", "type": "string" } }, "objects": [] } glance-16.0.0/etc/metadefs/compute-aggr-disk-filter.json0000666000175100017510000000210213245511421023113 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::AggregateDiskFilter", "display_name": "Disk Allocation per Host", "description": "Properties related to the Nova scheduler filter AggregateDiskFilter. Filters aggregate hosts based on the available disk space compared to the requested disk space.
Hosts in the aggregate with not enough usable disk will be filtered out. The filter must be enabled in the Nova scheduler to use these properties.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Aggregate" } ], "properties": { "disk_allocation_ratio": { "title": "Disk Subscription Ratio", "description": "Allows the host to be under and over subscribed for the amount of disk space requested for an instance. A ratio greater than 1.0 allows for over subscription (hosts may have less usable disk space than requested). A ratio less than 1.0 allows for under subscription.", "type": "number", "readonly": false } }, "objects": [] } glance-16.0.0/etc/metadefs/software-databases.json0000666000175100017510000004445313245511421022102 0ustar zuulzuul00000000000000{ "namespace": "OS::Software::DBMS", "display_name": "Database Software", "description": "A database is an organized collection of data. The data is typically organized to model aspects of reality in a way that supports processes requiring information. Database management systems are computer software applications that interact with the user, other applications, and the database itself to capture and analyze data. (http://en.wikipedia.org/wiki/Database)", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" }, { "name": "OS::Cinder::Volume", "properties_target": "image" }, { "name": "OS::Nova::Server", "properties_target": "metadata" }, { "name": "OS::Trove::Instance" } ], "objects": [ { "name": "MySQL", "description": "MySQL is an object-relational database management system (ORDBMS). The MySQL development project has made its source code available under the terms of the GNU General Public License, as well as under a variety of proprietary agreements. MySQL was owned and sponsored by a single for-profit firm, the Swedish company MySQL AB, now owned by Oracle Corporation. 
MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack (and other 'AMP' stacks). (http://en.wikipedia.org/wiki/MySQL)", "properties": { "sw_database_mysql_version": { "title": "Version", "description": "The specific version of MySQL.", "type": "string" }, "sw_database_mysql_listen_port": { "title": "Listen Port", "description": "The configured TCP/IP port on which MySQL listens for incoming connections.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 3306 }, "sw_database_mysql_admin": { "title": "Admin User", "description": "The primary user with privileges to perform administrative operations.", "type": "string", "default": "root" } } }, { "name": "PostgreSQL", "description": "PostgreSQL, often simply 'Postgres', is an object-relational database management system (ORDBMS) with an emphasis on extensibility and standards-compliance. PostgreSQL is cross-platform and runs on many operating systems. (http://en.wikipedia.org/wiki/PostgreSQL)", "properties": { "sw_database_postgresql_version": { "title": "Version", "description": "The specific version of PostgreSQL.", "type": "string" }, "sw_database_postgresql_listen_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which PostgreSQL is to listen for connections from client applications.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 5432 }, "sw_database_postgresql_admin": { "title": "Admin User", "description": "The primary user with privileges to perform administrative operations.", "type": "string", "default": "postgres" } } }, { "name": "SQL Server", "description": "Microsoft SQL Server is a relational database management system developed by Microsoft.
There are at least a dozen different editions of Microsoft SQL Server aimed at different audiences and for workloads ranging from small single-machine applications to large Internet-facing applications with many concurrent users. Its primary query languages are T-SQL and ANSI SQL. (http://en.wikipedia.org/wiki/Microsoft_SQL_Server)", "properties": { "sw_database_sqlserver_version": { "title": "Version", "description": "The specific version of Microsoft SQL Server.", "type": "string" }, "sw_database_sqlserver_edition": { "title": "Edition", "description": "SQL Server is available in multiple editions, with different feature sets and targeting different users.", "type": "string", "default": "Express", "enum": [ "Datacenter", "Enterprise", "Standard", "Web", "Business Intelligence", "Workgroup", "Express", "Compact (SQL CE)", "Developer", "Embedded (SSEE)", "Fast Track", "LocalDB", "Parallel Data Warehouse (PDW)", "Datawarehouse Appliance Edition" ] }, "sw_database_sqlserver_listen_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which SQL Server is to listen for connections from client applications. The default SQL Server port is 1433, and client ports are assigned a random value between 1024 and 5000.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 1433 }, "sw_database_sqlserver_admin": { "title": "Admin User", "description": "The primary user with privileges to perform administrative operations.", "type": "string", "default": "sa" } } }, { "name": "Oracle", "description": "Oracle Database (commonly referred to as Oracle RDBMS or simply as Oracle) is an object-relational database management system produced and marketed by Oracle Corporation.
(http://en.wikipedia.org/wiki/Oracle_Database)", "properties": { "sw_database_oracle_version": { "title": "Version", "description": "The specific version of Oracle.", "type": "string" }, "sw_database_oracle_edition": { "title": "Edition", "description": "Over and above the different versions of the Oracle database management software developed over time, Oracle Corporation subdivides its product into varying editions.", "type": "string", "default": "Express", "enum": [ "Enterprise", "Standard", "Standard Edition One", "Express (XE)", "Workgroup", "Lite" ] }, "sw_database_oracle_listen_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which Oracle is to listen for connections from client applications.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 1521 } } }, { "name": "DB2", "description": "IBM DB2 is a family of database server products developed by IBM. These products all support the relational model, but in recent years some products have been extended to support object-relational features and non-relational structures, in particular XML. (http://en.wikipedia.org/wiki/IBM_DB2)", "properties": { "sw_database_db2_version": { "title": "Version", "description": "The specific version of DB2.", "type": "string" }, "sw_database_db2_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which DB2 is to listen for connections from client applications.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 50000 }, "sw_database_db2_admin": { "title": "Admin User", "description": "The primary user with privileges to perform administrative operations.", "type": "string" } } }, { "name": "MongoDB", "description": "MongoDB is a cross-platform document-oriented database.
Classified as a NoSQL database, MongoDB uses JSON-like documents with dynamic schemas (MongoDB calls the format BSON), making the integration of data in certain types of applications easier and faster. Released under a combination of the GNU Affero General Public License and the Apache License, MongoDB is free and open-source software. (http://en.wikipedia.org/wiki/MongoDB)", "properties": { "sw_database_mongodb_version": { "title": "Version", "description": "The specific version of MongoDB.", "type": "string" }, "sw_database_mongodb_listen_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which MongoDB is to listen for connections from client applications.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 27017 }, "sw_database_mongodb_admin": { "title": "Admin User", "description": "The primary user with privileges to perform administrative operations.", "type": "string" } } }, { "name": "Couchbase Server", "description": "Couchbase Server, originally known as Membase, is an open source, distributed (shared-nothing architecture) NoSQL document-oriented database that is optimized for interactive applications. These applications must serve many concurrent users by creating, storing, retrieving, aggregating, manipulating and presenting data. In support of these kinds of application needs, Couchbase is designed to provide easy-to-scale key-value or document access with low latency and high sustained throughput. 
(http://en.wikipedia.org/wiki/Couchbase_Server)", "properties": { "sw_database_couchbaseserver_version": { "title": "Version", "description": "The specific version of Couchbase Server.", "type": "string" }, "sw_database_couchbaseserver_listen_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which Couchbase is to listen for connections from client applications.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 11211 }, "sw_database_couchbaseserver_admin": { "title": "Admin User", "description": "The primary user with privileges to perform administrative operations.", "type": "string", "default": "admin" } } }, { "name": "Redis", "description": "Redis is a data structure server (NoSQL). It is open-source, networked, in-memory, and stores keys with optional durability. The development of Redis has been sponsored by Pivotal Software since May 2013; before that, it was sponsored by VMware. The name Redis means REmote DIctionary Server. (http://en.wikipedia.org/wiki/Redis)", "properties": { "sw_database_redis_version": { "title": "Version", "description": "The specific version of Redis.", "type": "string" }, "sw_database_redis_listen_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which Redis is to listen for connections from client applications.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 6379 }, "sw_database_redis_admin": { "title": "Admin User", "description": "The primary user with privileges to perform administrative operations.", "type": "string", "default": "admin" } } }, { "name": "CouchDB", "description": "Apache CouchDB, commonly referred to as CouchDB, is an open source NoSQL database. It is a NoSQL database that uses JSON to store data, JavaScript as its query language using MapReduce, and HTTP for an API. One of its distinguishing features is multi-master replication. 
CouchDB was first released in 2005 and later became an Apache project in 2008. (http://en.wikipedia.org/wiki/CouchDB)", "properties": { "sw_database_couchdb_version": { "title": "Version", "description": "The specific version of CouchDB.", "type": "string" }, "sw_database_couchdb_listen_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which CouchDB is to listen for connections from client applications.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 5984 }, "sw_database_couchdb_admin": { "title": "Admin User", "description": "The primary user with privileges to perform administrative operations.", "type": "string" } } }, { "name": "Apache Cassandra", "description": "Apache Cassandra is an open source distributed NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. (http://en.wikipedia.org/wiki/Apache_Cassandra)", "properties": { "sw_database_cassandra_version": { "title": "Version", "description": "The specific version of Apache Cassandra.", "type": "string" }, "sw_database_cassandra_listen_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which Cassandra is to listen for connections from client applications.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 9160 }, "sw_database_cassandra_admin": { "title": "Admin User", "description": "The primary user with privileges to perform administrative operations.", "type": "string", "default": "cassandra" } } }, { "name": "HBase", "description": "HBase is an open source, non-relational (NoSQL), distributed database modeled after Google's BigTable and written in Java. 
It is developed as part of Apache Software Foundation's Apache Hadoop project and runs on top of HDFS (Hadoop Distributed Filesystem), providing BigTable-like capabilities for Hadoop. (http://en.wikipedia.org/wiki/Apache_HBase)", "properties": { "sw_database_hbase_version": { "title": "Version", "description": "The specific version of HBase.", "type": "string" } } }, { "name": "Hazelcast", "description": "In computing, Hazelcast is an in-memory open source software data grid based on Java. By having multiple nodes form a cluster, data is evenly distributed among the nodes. This allows for horizontal scaling both in terms of available storage space and processing power. Backups are also distributed in a similar fashion to other nodes, based on configuration, thereby protecting against single node failure. (http://en.wikipedia.org/wiki/Hazelcast)", "properties": { "sw_database_hazlecast_version": { "title": "Version", "description": "The specific version of Hazelcast.", "type": "string" }, "sw_database_hazlecast_port": { "title": "Listen Port", "description": "Specifies the TCP/IP port or local Unix domain socket file extension on which Hazelcast is to listen for connections between members.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 5701 } } } ] } glance-16.0.0/etc/metadefs/README0000666000175100017510000000041613245511421016277 0ustar zuulzuul00000000000000This directory contains predefined namespaces for the Glance Metadata Definitions Catalog. Files from this directory can be loaded into the database using the db_load_metadefs command of glance-manage. Similarly, you can unload the definitions using the db_unload_metadefs command. 
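The loading workflow described in the README can be preceded by a small sanity check. The `check_metadef` helper below is purely illustrative (it is not part of glance-manage); it assumes only the JSON shape visible in the files above, verifying that a namespace is declared and that integer properties keep their defaults within the declared bounds:

```python
import json

def check_metadef(text):
    """Sanity-check a metadef namespace file before loading (illustrative).

    Returns a list of problem descriptions; an empty list means the
    file passes these basic checks.
    """
    data = json.loads(text)
    problems = []
    if "namespace" not in data:
        problems.append("missing 'namespace' key")
    for name, prop in data.get("properties", {}).items():
        # Integer properties such as listen ports declare minimum/maximum;
        # the default should fall inside those bounds.
        if prop.get("type") == "integer" and "default" in prop:
            low = prop.get("minimum", float("-inf"))
            high = prop.get("maximum", float("inf"))
            if not low <= prop["default"] <= high:
                problems.append("%s: default %s outside [%s, %s]"
                                % (name, prop["default"], low, high))
    return problems

example = """{
  "namespace": "OS::Example::Database",
  "properties": {
    "sw_database_example_listen_port": {
      "type": "integer", "minimum": 1, "maximum": 65535, "default": 5432
    }
  }
}"""
print(check_metadef(example))  # an empty list means the file looks loadable
```

An operator could run a check like this over each file in etc/metadefs before invoking db_load_metadefs, catching shape errors earlier than a database load would.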
glance-16.0.0/etc/metadefs/cim-processor-allocation-setting-data.json0000666000175100017510000001210513245511421025602 0ustar zuulzuul00000000000000{ "namespace": "CIM::ProcessorAllocationSettingData", "display_name": "CIM Processor Allocation Setting", "description": "Properties related to the resource allocation settings of a processor (CPU) from the Common Information Model (CIM) schema (http://www.dmtf.org/standards/cim). These are properties that identify processor setting data and may be specified for volumes, images, host aggregates, flavors, and Nova servers (as a scheduler hint). For details on each property, please refer to http://schemas.dmtf.org/wbem/cim-html/2/CIM_ProcessorAllocationSettingData.html.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Cinder::Volume", "prefix": "CIM_PASD_", "properties_target": "image" }, { "name": "OS::Glance::Image", "prefix": "CIM_PASD_" }, { "name": "OS::Nova::Aggregate", "prefix": "CIM_PASD_" }, { "name": "OS::Nova::Flavor", "prefix": "CIM_PASD_" }, { "name": "OS::Nova::Server", "properties_target": "scheduler_hint" } ], "properties": { "InstructionSet": { "title": "Instruction Set", "description": "Identifies the instruction set of the processor within a processor architecture.", "operators": [""], "type": "string", "enum": [ "x86:i386", "x86:i486", "x86:i586", "x86:i686", "x86:64", "IA-64:IA-64", "AS/400:TIMI", "Power:Power_2.03", "Power:Power_2.04", "Power:Power_2.05", "Power:Power_2.06", "S/390:ESA/390", "S/390:z/Architecture", "S/390:z/Architecture_2", "PA-RISC:PA-RISC_1.0", "PA-RISC:PA-RISC_2.0", "ARM:A32", "ARM:A64", "MIPS:MIPS_I", "MIPS:MIPS_II", "MIPS:MIPS_III", "MIPS:MIPS_IV", "MIPS:MIPS_V", "MIPS:MIPS32", "MIPS64:MIPS64", "Alpha:Alpha", "SPARC:SPARC_V7", "SPARC:SPARC_V8", "SPARC:SPARC_V9", "SPARC:SPARC_JPS1", "SPARC:UltraSPARC2005", "SPARC:UltraSPARC2007", "68k:68000", "68k:68010", "68k:68020", "68k:68030", "68k:68040", "68k:68060" ] }, "ProcessorArchitecture": { "title": 
"Processor Architecture", "description": "Identifies the architecture of the processor.", "operators": [""], "type": "string", "enum": [ "x86", "IA-64", "AS/400", "Power", "S/390", "PA-RISC", "ARM", "MIPS", "Alpha", "SPARC", "68k" ] }, "InstructionSetExtensionName": { "title": "Instruction Set Extension", "description": "Identifies the instruction set extensions of the processor within a processor architecture.", "operators": ["", ""], "type": "array", "items": { "type": "string", "enum": [ "x86:3DNow", "x86:3DNowExt", "x86:ABM", "x86:AES", "x86:AVX", "x86:AVX2", "x86:BMI", "x86:CX16", "x86:F16C", "x86:FSGSBASE", "x86:LWP", "x86:MMX", "x86:PCLMUL", "x86:RDRND", "x86:SSE2", "x86:SSE3", "x86:SSSE3", "x86:SSE4A", "x86:SSE41", "x86:SSE42", "x86:FMA3", "x86:FMA4", "x86:XOP", "x86:TBM", "x86:VT-d", "x86:VT-x", "x86:EPT", "x86:SVM", "PA-RISC:MAX", "PA-RISC:MAX2", "ARM:DSP", "ARM:Jazelle-DBX", "ARM:Thumb", "ARM:Thumb-2", "ARM:ThumbEE", "ARM:VFP", "ARM:NEON", "ARM:TrustZone", "MIPS:MDMX", "MIPS:MIPS-3D", "Alpha:BWX", "Alpha:FIX", "Alpha:CIX", "Alpha:MVI" ] } } }, "objects": [] } glance-16.0.0/etc/metadefs/compute-aggr-num-instances.json0000666000175100017510000000160113245511421023463 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::AggregateNumInstancesFilter", "display_name": "Instances per Host", "description": "Properties related to the Nova scheduler filter AggregateNumInstancesFilter. Filters aggregate hosts by the number of running instances on them. Hosts in the aggregate with too many instances will be filtered out. 
The filter must be enabled in the Nova scheduler to use these properties.", "visibility": "public", "protected": false, "resource_type_associations": [ { "name": "OS::Nova::Aggregate" } ], "properties": { "max_instances_per_host": { "title": "Max Instances Per Host", "description": "Maximum number of instances allowed to run on a host in the aggregate.", "type": "integer", "readonly": false, "minimum": 0 } }, "objects": [] } glance-16.0.0/etc/metadefs/compute-instance-data.json0000666000175100017510000000350513245511421022501 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::InstanceData", "display_name": "Instance Config Data", "description": "Instances can perform self-configuration based on data made available to the running instance. These properties affect instance configuration.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" }, { "name": "OS::Cinder::Volume", "properties_target": "image" } ], "properties": { "img_config_drive": { "title": "Config Drive", "description": "This property specifies whether or not Nova should use a config drive when booting the image. Mandatory means that Nova will always use a config drive when booting the image. OpenStack can be configured to write metadata to a special configuration drive that will be attached to the instance when it boots. The instance can retrieve any information from the config drive. One use case for the config drive is to pass network configuration information to the instance. See also: http://docs.openstack.org/user-guide/cli_config_drive.html", "type": "string", "enum": [ "optional", "mandatory" ] }, "os_require_quiesce": { "title": "Require Quiescent File system", "description": "This property specifies whether or not the filesystem must be quiesced during snapshot processing. 
For volume-backed and image-backed snapshots, yes means that snapshotting is aborted when quiescing fails, whereas no means quiescing will be skipped and snapshot processing will continue after the quiesce failure.", "type": "string", "enum": [ "yes", "no" ] } } } glance-16.0.0/etc/metadefs/compute-guest-memory-backing.json0000666000175100017510000000217113245511421024015 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::GuestMemoryBacking", "display_name": "Guest Memory Backing", "description": "This provides the preferred backing option for guest RAM. Guest memory can be backed by hugepages to reduce TLB misses. See also: https://wiki.openstack.org/wiki/VirtDriverGuestCPUMemoryPlacement", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Flavor", "prefix": "hw:" }, { "name": "OS::Glance::Image", "prefix": "hw_" }, { "name": "OS::Cinder::Volume", "prefix": "hw_", "properties_target": "image" } ], "properties": { "mem_page_size": { "title": "Size of memory page", "description": "Page size to be used for Guest memory backing. Value can be specified as a size (e.g. 2MB, 1GB) or as 'any', 'small', or 'large'. 
If this property is set in Image metadata then only 'any' and 'large' values are accepted in Flavor metadata by Nova API.", "type": "string" } } }glance-16.0.0/etc/metadefs/image-signature-verification.json0000666000175100017510000000322713245511421024056 0ustar zuulzuul00000000000000{ "namespace": "OS::Glance::Signatures", "display_name": "Image Signature Verification", "description": "Image signature verification allows the user to verify that an image has not been modified prior to booting the image.", "visibility": "public", "protected": false, "resource_type_associations": [ { "name": "OS::Glance::Image" } ], "properties": { "img_signature": { "title": "Image Signature", "description": "The signature of the image data encoded in base64 format.", "type": "string" }, "img_signature_certificate_uuid": { "title": "Image Signature Certificate UUID", "description": "The UUID used to retrieve the certificate from the key manager.", "type": "string" }, "img_signature_hash_method": { "title": "Image Signature Hash Method", "description": "The hash method used in creating the signature.", "type": "string", "enum": [ "SHA-224", "SHA-256", "SHA-384", "SHA-512" ] }, "img_signature_key_type": { "title": "Image Signature Key Type", "description": "The key type used in creating the signature.", "type": "string", "enum": [ "RSA-PSS", "DSA", "ECC_SECT571K1", "ECC_SECT409K1", "ECC_SECT571R1", "ECC_SECT409R1", "ECC_SECP521R1", "ECC_SECP384R1" ] } } } glance-16.0.0/etc/metadefs/glance-common-image-props.json0000666000175100017510000000436113245511421023255 0ustar zuulzuul00000000000000{ "display_name": "Common Image Properties", "namespace": "OS::Glance::CommonImageProperties", "description": "When adding an image to Glance, you may specify some common image properties that may prove useful to consumers of your image.", "protected": true, "resource_type_associations" : [ ], "properties": { "kernel_id": { "title": "Kernel ID", "type": "string", "pattern": 
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "description": "ID of image stored in Glance that should be used as the kernel when booting an AMI-style image." }, "ramdisk_id": { "title": "Ramdisk ID", "type": "string", "pattern": "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$", "description": "ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image." }, "instance_uuid": { "title": "Instance ID", "type": "string", "description": "Metadata which can be used to record which instance this image is associated with. (Informational only, does not create an instance snapshot.)" }, "architecture": { "title": "CPU Architecture", "description": "The CPU architecture that must be supported by the hypervisor. For example, x86_64, arm, or ppc64. Run uname -m to get the architecture of a machine. We strongly recommend using the architecture data vocabulary defined by the libosinfo project for this purpose.", "type": "string" }, "os_distro": { "title": "OS Distro", "description": "The common name of the operating system distribution in lowercase (uses the same data vocabulary as the libosinfo project). Specify only a recognized value for this field. Deprecated values are listed to assist you in searching for the recognized value.", "type": "string" }, "os_version": { "title": "OS Version", "description": "Operating system version as specified by the distributor. 
(for example, '11.10')", "type": "string" } } } glance-16.0.0/etc/metadefs/compute-guest-shutdown.json0000666000175100017510000000166013245511421022766 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::GuestShutdownBehavior", "display_name": "Shutdown Behavior", "description": "These properties allow modifying the shutdown behavior for stop, rescue, resize, and shelve operations.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" } ], "properties": { "os_shutdown_timeout": { "title": "Shutdown timeout", "description": "By default, guests will be given 60 seconds to perform a graceful shutdown. After that, the VM is powered off. This property allows overriding the amount of time (unit: seconds) to allow a guest OS to cleanly shut down before power off. A value of 0 (zero) means the guest will be powered off immediately with no opportunity for guest OS clean-up.", "type": "integer", "minimum": 0 } }, "objects": [] } glance-16.0.0/etc/metadefs/software-webservers.json0000666000175100017510000001321413245511421022331 0ustar zuulzuul00000000000000{ "namespace": "OS::Software::WebServers", "display_name": "Web Servers", "description": "A web server is a computer system that processes requests via HTTP, the basic network protocol used to distribute information on the World Wide Web. The most common use of web servers is to host websites, but there are other uses such as gaming, data storage, running enterprise applications, handling email, FTP, or other web uses. 
(http://en.wikipedia.org/wiki/Web_server)", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" }, { "name": "OS::Cinder::Volume", "properties_target": "image" }, { "name": "OS::Nova::Server", "properties_target": "metadata" } ], "objects": [ { "name": "Apache HTTP Server", "description": "The Apache HTTP Server, colloquially called Apache, is a Web server application notable for playing a key role in the initial growth of the World Wide Web. Apache is developed and maintained by an open community of developers under the auspices of the Apache Software Foundation. Most commonly used on a Unix-like system, the software is available for a wide variety of operating systems, including Unix, FreeBSD, Linux, Solaris, Novell NetWare, OS X, Microsoft Windows, OS/2, TPF, OpenVMS and eComStation. Released under the Apache License, Apache is open-source software. (http://en.wikipedia.org/wiki/Apache_HTTP_Server)", "properties": { "sw_webserver_apache_version": { "title": "Version", "description": "The specific version of Apache.", "type": "string" }, "sw_webserver_apache_http_port": { "title": "HTTP Port", "description": "The configured TCP/IP port on which the web server listens for incoming HTTP connections.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 80 }, "sw_webserver_apache_https_port": { "title": "HTTPS Port", "description": "The configured TCP/IP port on which the web server listens for incoming HTTPS connections.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 443 } } }, { "name": "Nginx", "description": "Nginx (pronounced 'engine-x') is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The nginx project started with a strong focus on high concurrency, high performance and low memory usage. 
It is licensed under the 2-clause BSD-like license and it runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, as well as on other *nix flavors. It also has a proof of concept port for Microsoft Windows. (http://en.wikipedia.org/wiki/Nginx)", "properties": { "sw_webserver_nginx_version": { "title": "Version", "description": "The specific version of Nginx.", "type": "string" }, "sw_webserver_nginx_http_port": { "title": "HTTP Port", "description": "The configured TCP/IP port on which the web server listens for incoming HTTP connections.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 80 }, "sw_webserver_nginx_https_port": { "title": "HTTPS Port", "description": "The configured TCP/IP port on which the web server listens for incoming HTTPS connections.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 443 } } }, { "name": "IIS", "description": "Internet Information Services (IIS, formerly Internet Information Server) is an extensible web server created by Microsoft. IIS supports HTTP, HTTPS, FTP, FTPS, SMTP and NNTP. IIS is not turned on by default when Windows is installed. The IIS Manager is accessed through the Microsoft Management Console or Administrative Tools in the Control Panel. 
(http://en.wikipedia.org/wiki/Internet_Information_Services)", "properties": { "sw_webserver_iis_version": { "title": "Version", "description": "The specific version of IIS.", "type": "string" }, "sw_webserver_iis_http_port": { "title": "HTTP Port", "description": "The configured TCP/IP port on which the web server listens for incoming HTTP connections.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 80 }, "sw_webserver_iis_https_port": { "title": "HTTPS Port", "description": "The configured TCP/IP port on which the web server listens for incoming HTTPS connections.", "type": "integer", "minimum": 1, "maximum": 65535, "default": 443 } } } ] } glance-16.0.0/etc/metadefs/cim-storage-allocation-setting-data.json0000666000175100017510000001206013245511421025227 0ustar zuulzuul00000000000000{ "namespace": "CIM::StorageAllocationSettingData", "display_name": "CIM Storage Allocation Setting Data", "description": "Properties related to the allocation of virtual storage from the Common Information Model (CIM) schema (http://www.dmtf.org/standards/cim). These properties may be specified for volumes, host aggregates, and flavors. 
For details on each property, please refer to http://schemas.dmtf.org/wbem/cim-html/2/CIM_StorageAllocationSettingData.html.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Cinder::Volume", "prefix": "CIM_SASD_" }, { "name": "OS::Nova::Aggregate", "prefix": "CIM_SASD_" }, { "name": "OS::Nova::Flavor", "prefix": "CIM_SASD_" } ], "properties": { "Access": { "title": "Access", "description": "Access describes whether the allocated storage extent is 1 (readable), 2 (writeable), or 3 (both).", "operators": [""], "type": "string", "enum": [ "Unknown", "Readable", "Writeable", "Read/Write Supported", "DMTF Reserved" ] }, "HostExtentName": { "title": "Host Extent Name", "description": "A unique identifier for the host extent.", "type": "string" }, "HostExtentNameFormat": { "title": "Host Extent Name Format", "description": "The HostExtentNameFormat property identifies the format that is used for the value of the HostExtentName property.", "operators": [""], "type": "string", "enum": [ "Unknown", "Other", "SNVM", "NAA", "EUI64", "T10VID", "OS Device Name", "DMTF Reserved" ] }, "HostExtentNameNamespace": { "title": "Host Extent Name Namespace", "description": "If the host extent is a SCSI volume, then the preferred source for SCSI volume names is SCSI VPD Page 83 responses.", "operators": [""], "type": "string", "enum": [ "Unknown", "Other", "VPD83Type3", "VPD83Type2", "VPD83Type1", "VPD80", "NodeWWN", "SNVM", "OS Device Namespace", "DMTF Reserved" ] }, "HostExtentStartingAddress": { "title": "Host Extent Starting Address", "description": "The HostExtentStartingAddress property identifies the starting address on the host storage extent identified by the value of the HostExtentName property that is used for the allocation of the virtual storage extent.", "type": "string" }, "HostResourceBlockSize": { "title": "Host Resource Block Size", "description": "Size in bytes of the blocks that are allocated at the host as the result of this 
storage resource allocation or storage resource allocation request.", "type": "string" }, "Limit": { "title": "Limit", "description": "The maximum amount of blocks that will be granted for this storage resource allocation at the host.", "type": "string" }, "OtherHostExtentNameFormat": { "title": "Other Host Extent Name Format", "description": "A string describing the format of the HostExtentName property if the value of the HostExtentNameFormat property is 1 (Other).", "type": "string" }, "OtherHostExtentNameNamespace": { "title": "Other Host Extent Name Namespace", "description": "A string describing the namespace of the HostExtentName property if the value of the HostExtentNameNamespace matches 1 (Other).", "type": "string" }, "Reservation": { "title": "Reservation", "description": "The amount of blocks that are guaranteed to be available for this storage resource allocation at the host.", "type": "string" }, "VirtualQuantity": { "title": "Virtual Quantity", "description": "Number of blocks that are presented to the consumer.", "type": "string" }, "VirtualQuantityUnits": { "title": "Virtual Quantity Units", "description": "This property specifies the units used by the VirtualQuantity property.", "type": "string" }, "VirtualResourceBlockSize": { "title": "Virtual Resource Block Size", "description": "Size in bytes of the blocks that are presented to the consumer as the result of this storage resource allocation or storage resource allocation request.", "type": "string" } }, "objects": [] } glance-16.0.0/etc/metadefs/compute-trust.json0000666000175100017510000000226013245511421021144 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::Trust", "display_name": "Trusted Compute Pools (Intel® TXT)", "description": "Trusted compute pools with Intel® Trusted Execution Technology (Intel® TXT) support IT compliance by protecting virtualized data centers - private, public, and hybrid clouds against attacks toward hypervisor and BIOS, firmware, and other pre-launch 
software components. The Nova trust scheduling filter must be enabled and configured with the trust attestation service in order to use this feature.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Flavor" } ], "properties": { "trust:trusted_host": { "title": "Intel® TXT attestation", "description": "Select to ensure that node has been attested by Intel® Trusted Execution Technology (Intel® TXT). The Nova trust scheduling filter must be enabled and configured with the trust attestation service in order to use this feature.", "type": "string", "enum": [ "trusted", "untrusted", "unknown" ] } } }glance-16.0.0/etc/metadefs/compute-host-capabilities.json0000666000175100017510000002202413245511421023367 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::HostCapabilities", "display_name": "Compute Host Capabilities", "description": "Hardware capabilities provided by the compute host. This provides the ability to fine tune the hardware specification required when an instance is requested. The ComputeCapabilitiesFilter should be enabled in the Nova scheduler to use these properties. When enabled, this filter checks that the capabilities provided by the compute host satisfy any extra specifications requested. Only hosts that can provide the requested capabilities will be eligible for hosting the instance.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Flavor", "prefix": "capabilities:" }, { "name": "OS::Nova::Aggregate", "prefix": "aggregate_instance_extra_specs:" } ], "properties": { "cpu_info:vendor": { "title": "Vendor", "description": "Specifies the CPU manufacturer.", "operators": [""], "type": "string", "enum": [ "Intel", "AMD" ] }, "cpu_info:model": { "title": "Model", "description": "Specifies the CPU model. 
Use this property to ensure that your VM runs on a specific CPU model.", "operators": [""], "type": "string", "enum": [ "Conroe", "Core2Duo", "Penryn", "Nehalem", "Westmere", "SandyBridge", "IvyBridge", "Haswell", "Broadwell", "Delhi", "Seoul", "Abu Dhabi", "Interlagos", "Kabini", "Valencia", "Zurich", "Budapest", "Barcelona", "Suzuka", "Shanghai", "Istanbul", "Lisbon", "Magny-Cours", "Cortex-A57", "Cortex-A53", "Cortex-A12", "Cortex-A17", "Cortex-A15", "Cortex-A7", "X-Gene" ] }, "cpu_info:arch": { "title": "Architecture", "description": "Specifies the CPU architecture. Use this property to specify the architecture supported by the hypervisor.", "operators": [""], "type": "string", "enum": [ "x86", "x86_64", "i686", "ia64", "ARMv8-A", "ARMv7-A" ] }, "cpu_info:topology:cores": { "title": "cores", "description": "Number of cores.", "type": "integer", "readonly": false, "default": 1 }, "cpu_info:topology:threads": { "title": "threads", "description": "Number of threads.", "type": "integer", "readonly": false, "default": 1 }, "cpu_info:topology:sockets": { "title": "sockets", "description": "Number of sockets.", "type": "integer", "readonly": false, "default": 1 }, "cpu_info:features": { "title": "Features", "description": "Specifies CPU flags/features. 
Using this property you can specify the required set of instructions supported by a vm.", "operators": ["", ""], "type": "array", "items": { "type": "string", "enum": [ "fpu", "vme", "de", "pse", "tsc", "msr", "pae", "mce", "cx8", "apic", "sep", "mtrr", "pge", "mca", "cmov", "pat", "pse36", "pn", "clflush", "dts", "acpi", "mmx", "fxsr", "sse", "sse2", "ss", "ht", "tm", "ia64", "pbe", "syscall", "mp", "nx", "mmxext", "fxsr_opt", "pdpe1gb", "rdtscp", "lm", "3dnowext", "3dnow", "arch_perfmon", "pebs", "bts", "rep_good", "nopl", "xtopology", "tsc_reliable", "nonstop_tsc", "extd_apicid", "amd_dcm", "aperfmperf", "eagerfpu", "nonstop_tsc_s3", "pni", "pclmulqdq", "dtes64", "monitor", "ds_cpl", "vmx", "smx", "est", "tm2", "ssse3", "cid", "fma", "cx16", "xtpr", "pdcm", "pcid", "dca", "sse4_1", "sse4_2", "x2apic", "movbe", "popcnt", "tsc_deadline_timer", "aes", "xsave", "avx", "f16c", "rdrand", "hypervisor", "rng", "rng_en", "ace", "ace_en", "ace2", "ace2_en", "phe", "phe_en", "pmm", "pmm_en", "lahf_lm", "cmp_legacy", "svm", "extapic", "cr8_legacy", "abm", "sse4a", "misalignsse", "3dnowprefetch", "osvw", "ibs", "xop", "skinit", "wdt", "lwp", "fma4", "tce", "nodeid_msr", "tbm", "topoext", "perfctr_core", "perfctr_nb", "bpext", "perfctr_l2", "mwaitx", "ida", "arat", "cpb", "epb", "pln", "pts", "dtherm", "hw_pstate", "proc_feedback", "hwp", "hwp_notify", "hwp_act_window", "hwp_epp", "hwp_pkg_req", "intel_pt", "tpr_shadow", "vnmi", "flexpriority", "ept", "vpid", "npt", "lbrv", "svm_lock", "nrip_save", "tsc_scale", "vmcb_clean", "flushbyasid", "decodeassists", "pausefilter", "pfthreshold", "vmmcall", "fsgsbase", "tsc_adjust", "bmi1", "hle", "avx2", "smep", "bmi2", "erms", "invpcid", "rtm", "cqm", "mpx", "avx512f", "rdseed", "adx", "smap", "pcommit", "clflushopt", "clwb", "avx512pf", "avx512er", "avx512cd", "sha_ni", "xsaveopt", "xsavec", "xgetbv1", "xsaves", "cqm_llc", "cqm_occup_llc", "clzero" ] } } }, "objects": [] } 
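The prefixes declared in the resource_type_associations of the OS::Compute::HostCapabilities namespace determine the key an operator actually sets: the same metadef property surfaces as `capabilities:<name>` on a flavor and `aggregate_instance_extra_specs:<name>` on a host aggregate. The sketch below illustrates that key construction (the `expose_key` helper is hypothetical, not a Glance API; the prefix table is copied from the associations shown above):

```python
# Per-resource-type prefixes, copied from the resource_type_associations
# of the OS::Compute::HostCapabilities namespace.
ASSOCIATIONS = {
    "OS::Nova::Flavor": "capabilities:",
    "OS::Nova::Aggregate": "aggregate_instance_extra_specs:",
}

def expose_key(resource_type, property_name):
    """Build the key an operator would set for a metadef property.

    Unknown resource types get no prefix here; a real deployment
    would treat that as an error instead.
    """
    return ASSOCIATIONS.get(resource_type, "") + property_name

# A flavor extra spec consumed by the ComputeCapabilitiesFilter:
print(expose_key("OS::Nova::Flavor", "cpu_info:vendor"))
# capabilities:cpu_info:vendor

# The same property targeted at a host aggregate:
print(expose_key("OS::Nova::Aggregate", "cpu_info:arch"))
# aggregate_instance_extra_specs:cpu_info:arch
```

This is why the namespace declares no hard-coded property names with prefixes baked in: the association, not the property, decides how the key is spelled for each consumer.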
glance-16.0.0/etc/metadefs/compute-vcputopology.json0000666000175100017510000000370213245511421022537 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::VirtCPUTopology", "display_name": "Virtual CPU Topology", "description": "This provides the preferred socket/core/thread counts for the virtual CPU instance exposed to guests. This enables the ability to avoid hitting limitations on vCPU topologies that OS vendors place on their products. See also: http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/virt-driver-vcpu-topology.rst", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image", "prefix": "hw_" }, { "name": "OS::Cinder::Volume", "prefix": "hw_", "properties_target": "image" }, { "name": "OS::Nova::Flavor", "prefix": "hw:" } ], "properties": { "cpu_sockets": { "title": "vCPU Sockets", "description": "Preferred number of sockets to expose to the guest.", "type": "integer" }, "cpu_cores": { "title": "vCPU Cores", "description": "Preferred number of cores to expose to the guest.", "type": "integer" }, "cpu_threads": { "title": " vCPU Threads", "description": "Preferred number of threads to expose to the guest.", "type": "integer" }, "cpu_maxsockets": { "title": "Max vCPU Sockets", "description": "Maximum number of sockets to expose to the guest.", "type": "integer" }, "cpu_maxcores": { "title": "Max vCPU Cores", "description": "Maximum number of cores to expose to the guest.", "type": "integer" }, "cpu_maxthreads": { "title": "Max vCPU Threads", "description": "Maximum number of threads to expose to the guest.", "type": "integer" } } } glance-16.0.0/etc/metadefs/compute-vmware-quota-flavor.json0000666000175100017510000000315413245511421023705 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::VMwareQuotaFlavor", "display_name": "VMware Quota for Flavors", "description": "The VMware compute driver allows various compute quotas to be specified on flavors. 
When specified, the VMWare driver will ensure that the quota is enforced. These are properties specific to VMWare compute drivers and will only have an effect if the VMWare compute driver is enabled in Nova. For a list of hypervisors, see: https://wiki.openstack.org/wiki/HypervisorSupportMatrix. For flavor customization, see: http://docs.openstack.org/admin-guide/compute-flavors.html", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Flavor" } ], "properties": { "quota:cpu_limit": { "title": "Quota: CPU Limit", "description": "Specifies the upper limit for CPU allocation in MHz. This parameter ensures that a machine never uses more than the defined amount of CPU time. It can be used to enforce a limit on the machine's CPU performance. The value should be a numerical value in MHz. If zero is supplied then the cpu_limit is unlimited.", "type": "integer", "minimum": 0 }, "quota:cpu_reservation": { "title": "Quota: CPU Reservation Limit", "description": "Specifies the guaranteed minimum CPU reservation in MHz. This means that if needed, the machine will definitely get allocated the reserved amount of CPU cycles. The value should be a numerical value in MHz.", "type": "integer", "minimum": 0 } } } glance-16.0.0/etc/metadefs/compute-libvirt.json0000666000175100017510000000301213245511421021432 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::Libvirt", "display_name": "libvirt Driver Options", "description": "The libvirt compute driver options. \n\nThese are properties that affect the libvirt compute driver and may be specified on flavors and images. 
For a list of all hypervisors, see here: https://wiki.openstack.org/wiki/HypervisorSupportMatrix.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image", "prefix": "hw_" }, { "name": "OS::Nova::Flavor", "prefix": "hw:" } ], "properties": { "serial_port_count": { "title": "Serial Port Count", "description": "Specifies the count of serial ports that should be provided. If hw:serial_port_count is not set in the flavor's extra_specs, then any count is permitted. If hw:serial_port_count is set, then this provides the default serial port count. It is permitted to override the default serial port count, but only with a lower value.", "type": "integer", "minimum": 0 }, "boot_menu": { "title": "Boot Menu", "description": "If true, enables the BIOS bootmenu. In cases where both the image metadata and Extra Spec are set, the Extra Spec setting is used. This allows for flexibility in setting/overriding the default behavior as needed.", "type": "string", "enum": ["true", "false"] } }, "objects": [] } glance-16.0.0/etc/metadefs/software-runtimes.json0000666000175100017510000001220413245511421022006 0ustar zuulzuul00000000000000{ "namespace": "OS::Software::Runtimes", "display_name": "Runtime Environment", "description": "Software is written in a specific programming language and the language must execute within a runtime environment. The runtime environment provides an abstraction to utilizing a computer's processor, memory (RAM), and other system resources.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" }, { "name": "OS::Cinder::Volume", "properties_target": "image" }, { "name": "OS::Nova::Server", "properties_target": "metadata" } ], "objects": [ { "name": "PHP", "description": "PHP is a server-side scripting language designed for web development but also used as a general-purpose programming language. 
PHP code can be simply mixed with HTML code, or it can be used in combination with various templating engines and web frameworks. PHP code is usually processed by a PHP interpreter, which is usually implemented as a web server's native module or a Common Gateway Interface (CGI) executable. After the PHP code is interpreted and executed, the web server sends resulting output to its client, usually in form of a part of the generated web page – for example, PHP code can generate a web page's HTML code, an image, or some other data. PHP has also evolved to include a command-line interface (CLI) capability and can be used in standalone graphical applications. (http://en.wikipedia.org/wiki/PHP)", "properties": { "sw_runtime_php_version": { "title": "Version", "description": "The specific version of PHP.", "type": "string" } } }, { "name": "Python", "description": "Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C++ or Java. The language provides constructs intended to enable clear programs on both a small and large scale. Python supports multiple programming paradigms, including object-oriented, imperative and functional programming or procedural styles. It features a dynamic type system and automatic memory management and has a large and comprehensive standard library. (http://en.wikipedia.org/wiki/Python_(programming_language))", "properties": { "sw_runtime_python_version": { "title": "Version", "description": "The specific version of python.", "type": "string" } } }, { "name": "Java", "description": "Java is a functional computer programming language that is concurrent, class-based, object-oriented, and specifically designed to have as few implementation dependencies as possible. 
It is intended to let application developers write once, run anywhere (WORA), meaning that code that runs on one platform does not need to be recompiled to run on another. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. (http://en.wikipedia.org/wiki/Java_(programming_language))", "properties": { "sw_runtime_java_version": { "title": "Version", "description": "The specific version of Java.", "type": "string" } } }, { "name": "Ruby", "description": "Ruby is a dynamic, reflective, object-oriented, general-purpose programming language. It was designed and developed in the mid-1990s by Yukihiro Matsumoto in Japan. According to its authors, Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, and Lisp. It supports multiple programming paradigms, including functional, object-oriented, and imperative. It also has a dynamic type system and automatic memory management. (http://en.wikipedia.org/wiki/Ruby_(programming_language))", "properties": { "sw_runtime_ruby_version": { "title": "Version", "description": "The specific version of Ruby.", "type": "string" } } }, { "name": "Perl", "description": "Perl is a family of high-level, general-purpose, interpreted, dynamic programming languages. The languages in this family include Perl 5 and Perl 6.
Though Perl is not officially an acronym, there are various backronyms in use, the most well-known being Practical Extraction and Reporting Language (http://en.wikipedia.org/wiki/Perl)", "properties": { "sw_runtime_perl_version": { "title": "Version", "description": "The specific version of Perl.", "type": "string" } } } ] } glance-16.0.0/etc/metadefs/compute-quota.json0000666000175100017510000001604413245511421021121 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::Quota", "display_name": "Flavor Quota", "description": "Compute drivers may enable quotas on CPUs available to a VM, disk tuning, bandwidth I/O, and instance VIF traffic control. See: http://docs.openstack.org/admin-guide/compute-flavors.html", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Nova::Flavor" } ], "objects": [ { "name": "CPU Limits", "description": "You can configure the CPU limits with control parameters.", "properties": { "quota:cpu_shares": { "title": "Quota: CPU Shares", "description": "Specifies the proportional weighted share for the domain. If this element is omitted, the service defaults to the OS provided defaults. There is no unit for the value; it is a relative measure based on the setting of other VMs. For example, a VM configured with value 2048 gets twice as much CPU time as a VM configured with value 1024.", "type": "integer" }, "quota:cpu_period": { "title": "Quota: CPU Period", "description": "Specifies the enforcement interval (unit: microseconds) for QEMU and LXC hypervisors. Within a period, each VCPU of the domain is not allowed to consume more than the quota worth of runtime. The value should be in range [1000, 1000000]. A period with value 0 means no value.", "type": "integer", "minimum": 1000, "maximum": 1000000 }, "quota:cpu_quota": { "title": "Quota: CPU Quota", "description": "Specifies the maximum allowed bandwidth (unit: microseconds). 
A domain with a negative-value quota indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be in range [1000, 18446744073709551] or less than 0. A quota with value 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed.", "type": "integer" } } }, { "name": "Disk QoS", "description": "Using disk I/O quotas, you can set maximum disk write to 10 MB per second for a VM user.", "properties": { "quota:disk_read_bytes_sec": { "title": "Quota: Disk read bytes / sec", "description": "Sets disk I/O quota for disk read bytes / sec.", "type": "integer" }, "quota:disk_read_iops_sec": { "title": "Quota: Disk read IOPS / sec", "description": "Sets disk I/O quota for disk read IOPS / sec.", "type": "integer" }, "quota:disk_write_bytes_sec": { "title": "Quota: Disk Write Bytes / sec", "description": "Sets disk I/O quota for disk write bytes / sec.", "type": "integer" }, "quota:disk_write_iops_sec": { "title": "Quota: Disk Write IOPS / sec", "description": "Sets disk I/O quota for disk write IOPS / sec.", "type": "integer" }, "quota:disk_total_bytes_sec": { "title": "Quota: Disk Total Bytes / sec", "description": "Sets disk I/O quota for total disk bytes / sec.", "type": "integer" }, "quota:disk_total_iops_sec": { "title": "Quota: Disk Total IOPS / sec", "description": "Sets disk I/O quota for disk total IOPS / sec.", "type": "integer" } } }, { "name": "Virtual Interface QoS", "description": "Bandwidth QoS tuning for instance virtual interfaces (VIFs) may be specified with these properties. Incoming and outgoing traffic can be shaped independently. If not specified, no quality of service (QoS) is applied on that traffic direction. So, if you want to shape only the network's incoming traffic, use inbound only (and vice versa). The OpenStack Networking service abstracts the physical implementation of the network, allowing plugins to configure and manage physical resources. 
Virtual Interfaces (VIF) in the logical model are analogous to physical network interface cards (NICs). VIFs are typically owned and managed by an external service; for instance, when OpenStack Networking is used for building OpenStack networks, VIFs would be created, owned, and managed in Nova. VIFs are connected to OpenStack Networking networks via ports. A port is analogous to a port on a network switch, and it has an administrative state. When a VIF is attached to a port, the OpenStack Networking API creates an attachment object, which specifies the fact that a VIF with a given identifier is plugged into the port.", "properties": { "quota:vif_inbound_average": { "title": "Quota: VIF Inbound Average", "description": "Network Virtual Interface (VIF) inbound average in kilobytes per second. Specifies average bit rate on the interface being shaped.", "type": "integer" }, "quota:vif_inbound_burst": { "title": "Quota: VIF Inbound Burst", "description": "Network Virtual Interface (VIF) inbound burst in total kilobytes. Specifies the amount of bytes that can be burst at peak speed.", "type": "integer" }, "quota:vif_inbound_peak": { "title": "Quota: VIF Inbound Peak", "description": "Network Virtual Interface (VIF) inbound peak in kilobytes per second. Specifies maximum rate at which an interface can receive data.", "type": "integer" }, "quota:vif_outbound_average": { "title": "Quota: VIF Outbound Average", "description": "Network Virtual Interface (VIF) outbound average in kilobytes per second. Specifies average bit rate on the interface being shaped.", "type": "integer" }, "quota:vif_outbound_burst": { "title": "Quota: VIF Outbound Burst", "description": "Network Virtual Interface (VIF) outbound burst in total kilobytes. Specifies the amount of bytes that can be burst at peak speed.", "type": "integer" }, "quota:vif_outbound_peak": { "title": "Quota: VIF Outbound Peak", "description": "Network Virtual Interface (VIF) outbound peak in kilobytes per second.
Specifies maximum rate at which an interface can send data.", "type": "integer" } } } ] } glance-16.0.0/etc/metadefs/compute-xenapi.json0000666000175100017510000000302713245511421021251 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::XenAPI", "display_name": "XenAPI Driver Options", "description": "The XenAPI compute driver options. \n\nThese are properties specific to compute drivers. For a list of all hypervisors, see here: https://wiki.openstack.org/wiki/HypervisorSupportMatrix.", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" } ], "properties": { "os_type": { "title": "OS Type", "description": "The operating system installed on the image. The XenAPI driver contains logic that takes different actions depending on the value of the os_type parameter of the image. For example, for os_type=windows images, it creates a FAT32-based swap partition instead of a Linux swap partition, and it limits the injected host name to less than 16 characters.", "type": "string", "enum": [ "linux", "windows" ] }, "auto_disk_config": { "title": "Disk Adapter Type", "description": "If true, the root partition on the disk is automatically resized before the instance boots. This value is only taken into account by the Compute service when using a Xen-based hypervisor with the XenAPI driver. The Compute service will only attempt to resize if there is a single partition on the image, and only if the partition is in ext3 or ext4 format.", "type": "boolean" } }, "objects": [] } glance-16.0.0/etc/metadefs/compute-watchdog.json0000666000175100017510000000250213245511421021562 0ustar zuulzuul00000000000000{ "namespace": "OS::Compute::Watchdog", "display_name": "Watchdog Behavior", "description": "Compute drivers may enable watchdog behavior over instances. 
See: http://docs.openstack.org/admin-guide/compute-flavors.html", "visibility": "public", "protected": true, "resource_type_associations": [ { "name": "OS::Glance::Image" }, { "name": "OS::Cinder::Volume", "properties_target": "image" }, { "name": "OS::Nova::Flavor" } ], "properties": { "hw_watchdog_action": { "title": "Watchdog Action", "description": "For the libvirt driver, you can enable and set the behavior of a virtual hardware watchdog device for each flavor. Watchdog devices keep an eye on the guest server, and carry out the configured action, if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled. Watchdog behavior set using a specific image's properties will override behavior set using flavors.", "type": "string", "enum": [ "disabled", "reset", "poweroff", "pause", "none" ] } } } glance-16.0.0/etc/glance-api.conf0000666000175100017510000047417213245511426016521 0ustar zuulzuul00000000000000[DEFAULT] # # From glance.api # # # Set the image owner to tenant or the authenticated user. # # Assign a boolean value to determine the owner of an image. When set to # True, the owner of the image is the tenant. When set to False, the # owner of the image will be the authenticated user issuing the request. # Setting it to False makes the image private to the associated user and # sharing with other users within the same tenant (or "project") # requires explicit image sharing via image membership. # # Possible values: # * True # * False # # Related options: # * None # # (boolean value) #owner_is_tenant = true # # Role used to identify an authenticated user as administrator. # # Provide a string value representing a Keystone role to identify an # administrative user. Users with this role will be granted # administrative privileges. The default value for this option is # 'admin'. 
# # Possible values: # * A string value which is a valid Keystone role # # Related options: # * None # # (string value) #admin_role = admin # # Allow limited access to unauthenticated users. # # Assign a boolean to determine API access for unauthenticated # users. When set to False, the API cannot be accessed by # unauthenticated users. When set to True, unauthenticated users can # access the API with read-only privileges. This, however, only applies # when using ContextMiddleware. # # Possible values: # * True # * False # # Related options: # * None # # (boolean value) #allow_anonymous_access = false # # Limit the request ID length. # # Provide an integer value to limit the length of the request ID to # the specified length. The default value is 64. Users can change this # to any integer value between 0 and 16384, keeping in mind that # a larger value may flood the logs. # # Possible values: # * Integer value between 0 and 16384 # # Related options: # * None # # (integer value) # Minimum value: 0 #max_request_id_length = 64 # # Public URL endpoint to use for Glance versions response. # # This is the public URL endpoint that will appear in the Glance # "versions" response. If no value is specified, the endpoint that is # displayed in the version's response is that of the host running the # API service. Change the endpoint to represent the proxy URL if the # API service is running behind a proxy. If the service is running # behind a load balancer, add the load balancer's URL for this value. # # Possible values: # * None # * Proxy URL # * Load balancer URL # # Related options: # * None # # (string value) #public_endpoint = # # Allow users to add additional/custom properties to images. # # Glance defines a standard set of properties (in its schema) that # appear on every image. These properties are also known as # ``base properties``. In addition to these properties, Glance # allows users to add custom properties to images.
These are known # as ``additional properties``. # # By default, this configuration option is set to ``True`` and users # are allowed to add additional properties. The number of additional # properties that can be added to an image can be controlled via # ``image_property_quota`` configuration option. # # Possible values: # * True # * False # # Related options: # * image_property_quota # # (boolean value) #allow_additional_image_properties = true # # Maximum number of image members per image. # # This limits the maximum of users an image can be shared with. Any negative # value is interpreted as unlimited. # # Related options: # * None # # (integer value) #image_member_quota = 128 # # Maximum number of properties allowed on an image. # # This enforces an upper limit on the number of additional properties an image # can have. Any negative value is interpreted as unlimited. # # NOTE: This won't have any impact if additional properties are disabled. Please # refer to ``allow_additional_image_properties``. # # Related options: # * ``allow_additional_image_properties`` # # (integer value) #image_property_quota = 128 # # Maximum number of tags allowed on an image. # # Any negative value is interpreted as unlimited. # # Related options: # * None # # (integer value) #image_tag_quota = 128 # # Maximum number of locations allowed on an image. # # Any negative value is interpreted as unlimited. # # Related options: # * None # # (integer value) #image_location_quota = 10 # DEPRECATED: # Python module path of data access API. # # Specifies the path to the API to use for accessing the data model. # This option determines how the image catalog data will be accessed. # # Possible values: # * glance.db.sqlalchemy.api # * glance.db.registry.api # * glance.db.simple.api # # If this option is set to ``glance.db.sqlalchemy.api`` then the image # catalog data is stored in and read from the database via the # SQLAlchemy Core and ORM APIs. 
# # Setting this option to ``glance.db.registry.api`` will force all # database access requests to be routed through the Registry service. # This avoids data access from the Glance API nodes for an added layer # of security, scalability and manageability. # # NOTE: In v2 OpenStack Images API, the registry service is optional. # In order to use the Registry API in v2, the option # ``enable_v2_registry`` must be set to ``True``. # # Finally, when this configuration option is set to # ``glance.db.simple.api``, image catalog data is stored in and read # from an in-memory data structure. This is primarily used for testing. # # Related options: # * enable_v2_api # * enable_v2_registry # # (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #data_api = glance.db.sqlalchemy.api # # The default number of results to return for a request. # # Responses to certain API requests, like list images, may return # multiple items. The number of results returned can be explicitly # controlled by specifying the ``limit`` parameter in the API request. # However, if a ``limit`` parameter is not specified, this # configuration value will be used as the default number of results to # be returned for any API request. # # NOTES: # * The value of this configuration option may not be greater than # the value specified by ``api_limit_max``. # * Setting this to a very large value may slow down database # queries and increase response times. Setting this to a # very low value may result in poor user experience. 
# # Possible values: # * Any positive integer # # Related options: # * api_limit_max # # (integer value) # Minimum value: 1 #limit_param_default = 25 # # Maximum number of results that could be returned by a request. # # As described in the help text of ``limit_param_default``, some # requests may return multiple results. The number of results to be # returned are governed either by the ``limit`` parameter in the # request or the ``limit_param_default`` configuration option. # The value in either case, can't be greater than the absolute maximum # defined by this configuration option. Anything greater than this # value is trimmed down to the maximum value defined here. # # NOTE: Setting this to a very large value may slow down database # queries and increase response times. Setting this to a # very low value may result in poor user experience. # # Possible values: # * Any positive integer # # Related options: # * limit_param_default # # (integer value) # Minimum value: 1 #api_limit_max = 1000 # # Show direct image location when returning an image. # # This configuration option indicates whether to show the direct image # location when returning image details to the user. The direct image # location is where the image data is stored in backend storage. This # image location is shown under the image property ``direct_url``. # # When multiple image locations exist for an image, the best location # is displayed based on the location strategy indicated by the # configuration option ``location_strategy``. # # NOTES: # * Revealing image locations can present a GRAVE SECURITY RISK as # image locations can sometimes include credentials. Hence, this # is set to ``False`` by default. Set this to ``True`` with # EXTREME CAUTION and ONLY IF you know what you are doing! # * If an operator wishes to avoid showing any image location(s) # to the user, then both this option and # ``show_multiple_locations`` MUST be set to ``False``. 
# # Possible values: # * True # * False # # Related options: # * show_multiple_locations # * location_strategy # # (boolean value) #show_image_direct_url = false # DEPRECATED: # Show all image locations when returning an image. # # This configuration option indicates whether to show all the image # locations when returning image details to the user. When multiple # image locations exist for an image, the locations are ordered based # on the location strategy indicated by the configuration opt # ``location_strategy``. The image locations are shown under the # image property ``locations``. # # NOTES: # * Revealing image locations can present a GRAVE SECURITY RISK as # image locations can sometimes include credentials. Hence, this # is set to ``False`` by default. Set this to ``True`` with # EXTREME CAUTION and ONLY IF you know what you are doing! # * If an operator wishes to avoid showing any image location(s) # to the user, then both this option and # ``show_image_direct_url`` MUST be set to ``False``. # # Possible values: # * True # * False # # Related options: # * show_image_direct_url # * location_strategy # # (boolean value) # This option is deprecated for removal since Newton. # Its value may be silently ignored in the future. # Reason: This option will be removed in the Pike release or later because the # same functionality can be achieved with greater granularity by using policies. # Please see the Newton release notes for more information. #show_multiple_locations = false # # Maximum size of image a user can upload in bytes. # # An image upload greater than the size mentioned here would result # in an image creation failure. This configuration option defaults to # 1099511627776 bytes (1 TiB). # # NOTES: # * This value should only be increased after careful # consideration and must be set less than or equal to # 8 EiB (9223372036854775808). # * This value must be set with careful consideration of the # backend storage capacity. 
Setting this to a very low value # may result in a large number of image failures. And, setting # this to a very large value may result in faster consumption # of storage. Hence, this must be set according to the nature of # images created and storage capacity available. # # Possible values: # * Any positive number less than or equal to 9223372036854775808 # # (integer value) # Minimum value: 1 # Maximum value: 9223372036854775808 #image_size_cap = 1099511627776 # # Maximum amount of image storage per tenant. # # This enforces an upper limit on the cumulative storage consumed by all images # of a tenant across all stores. This is a per-tenant limit. # # The default unit for this configuration option is Bytes. However, storage # units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``, # ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and # TeraBytes respectively. Note that there should not be any space between the # value and unit. Value ``0`` signifies no quota enforcement. Negative values # are invalid and result in errors. # # Possible values: # * A string that is a valid concatenation of a non-negative integer # representing the storage value and an optional string literal # representing storage units as mentioned above. # # Related options: # * None # # (string value) #user_storage_quota = 0 # # Deploy the v1 OpenStack Images API. # # When this option is set to ``True``, Glance service will respond to # requests on registered endpoints conforming to the v1 OpenStack # Images API. # # NOTES: # * If this option is enabled, then ``enable_v1_registry`` must # also be set to ``True`` to enable mandatory usage of Registry # service with v1 API. # # * If this option is disabled, then the ``enable_v1_registry`` # option, which is enabled by default, is also recommended # to be disabled. # # * This option is separate from ``enable_v2_api``, both v1 and v2 # OpenStack Images API can be deployed independent of each # other. 
# # * If deploying only the v2 Images API, this option, which is # enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v1_registry # * enable_v2_api # # (boolean value) #enable_v1_api = true # # Deploy the v2 OpenStack Images API. # # When this option is set to ``True``, Glance service will respond # to requests on registered endpoints conforming to the v2 OpenStack # Images API. # # NOTES: # * If this option is disabled, then the ``enable_v2_registry`` # option, which is enabled by default, is also recommended # to be disabled. # # * This option is separate from ``enable_v1_api``, both v1 and v2 # OpenStack Images API can be deployed independent of each # other. # # * If deploying only the v1 Images API, this option, which is # enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v2_registry # * enable_v1_api # # (boolean value) #enable_v2_api = true # # Deploy the v1 API Registry service. # # When this option is set to ``True``, the Registry service # will be enabled in Glance for v1 API requests. # # NOTES: # * Use of Registry is mandatory in v1 API, so this option must # be set to ``True`` if the ``enable_v1_api`` option is enabled. # # * If deploying only the v2 OpenStack Images API, this option, # which is enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v1_api # # (boolean value) #enable_v1_registry = true # DEPRECATED: # Deploy the v2 API Registry service. # # When this option is set to ``True``, the Registry service # will be enabled in Glance for v2 API requests. # # NOTES: # * Use of Registry is optional in v2 API, so this option # must only be enabled if both ``enable_v2_api`` is set to # ``True`` and the ``data_api`` option is set to # ``glance.db.registry.api``. 
# # * If deploying only the v1 OpenStack Images API, this option, # which is enabled by default, should be disabled. # # Possible values: # * True # * False # # Related options: # * enable_v2_api # * data_api # # (boolean value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: # Glance registry service is deprecated for removal. # # More information can be found from the spec: # http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance # /deprecate-registry.html #enable_v2_registry = true # # Host address of the pydev server. # # Provide a string value representing the hostname or IP of the # pydev server to use for debugging. The pydev server listens for # debug connections on this address, facilitating remote debugging # in Glance. # # Possible values: # * Valid hostname # * Valid IP address # # Related options: # * None # # (unknown value) #pydev_worker_debug_host = localhost # # Port number that the pydev server will listen on. # # Provide a port number to bind the pydev server to. The pydev # process accepts debug connections on this port and facilitates # remote debugging in Glance. # # Possible values: # * A valid port number # # Related options: # * None # # (port value) # Minimum value: 0 # Maximum value: 65535 #pydev_worker_debug_port = 5678 # # AES key for encrypting store location metadata. # # Provide a string value representing the AES cipher to use for # encrypting Glance store metadata. # # NOTE: The AES key to use must be set to a random string of length # 16, 24 or 32 bytes. # # Possible values: # * String value representing a valid AES key # # Related options: # * None # # (string value) #metadata_encryption_key = # # Digest algorithm to use for digital signature. # # Provide a string value representing the digest algorithm to # use for generating digital signatures. By default, ``sha256`` # is used. 
# # To get a list of the available algorithms supported by the version # of OpenSSL on your platform, run the command: # ``openssl list-message-digest-algorithms``. # Examples are 'sha1', 'sha256', and 'sha512'. # # NOTE: ``digest_algorithm`` is not related to Glance's image signing # and verification. It is only used to sign the universally unique # identifier (UUID) as a part of the certificate file and key file # validation. # # Possible values: # * An OpenSSL message digest algorithm identifier # # Related options: # * None # # (string value) #digest_algorithm = sha256 # # The URL providing the location where the temporary data will be stored # # This option is for Glance internal use only. Glance will save the # image data uploaded by the user to the 'staging' endpoint during the # image import process. # # This option does not change the 'staging' API endpoint by any means. # # NOTE: It is discouraged to use the same path as [task]/work_dir # # NOTE: 'file://' is the only option the # api_image_import flow will support for now. # # NOTE: The staging path must be on a shared filesystem available to all # Glance API nodes. # # Possible values: # * String starting with 'file://' followed by absolute FS path # # Related options: # * [task]/work_dir # * [DEFAULT]/enable_image_import (*deprecated*) # # (string value) #node_staging_uri = file:///tmp/staging/ # DEPRECATED: # Enables the Image Import workflow introduced in Pike # # As '[DEFAULT]/node_staging_uri' is required for the Image # Import, it's disabled by default in Pike, enabled by # default in Queens and removed in Rocky. This allows Glance to # operate with previous version configs upon upgrade. # # Setting this option to False will disable the endpoints related # to Image Import Refactoring work. # # Related options: # * [DEFAULT]/node_staging_uri (boolean value) # This option is deprecated for removal since Pike. # Its value may be silently ignored in the future.
# Reason:
# This option is deprecated for removal in Rocky.
#
# It was introduced to make sure that the API is not enabled
# before the '[DEFAULT]/node_staging_uri' is defined and is
# long term redundant.
#enable_image_import = true

#
# List of enabled Image Import Methods
#
# Both 'glance-direct' and 'web-download' are enabled by default.
#
# Related options:
# * [DEFAULT]/node_staging_uri
# * [DEFAULT]/enable_image_import (list value)
#enabled_import_methods = glance-direct,web-download

#
# Strategy to determine the preference order of image locations.
#
# This configuration option indicates the strategy to determine
# the order in which an image's locations must be accessed to
# serve the image's data. Glance then retrieves the image data
# from the first responsive active location it finds in this list.
#
# This option takes one of two possible values ``location_order``
# and ``store_type``. The default value is ``location_order``,
# which suggests that image data be served by using locations in
# the order they are stored in Glance. The ``store_type`` value
# sets the image location preference based on the order in which
# the storage backends are listed as a comma separated list for
# the configuration option ``store_type_preference``.
#
# Possible values:
# * location_order
# * store_type
#
# Related options:
# * store_type_preference
#
# (string value)
# Possible values:
# location_order -
# store_type -
#location_strategy = location_order

#
# The location of the property protection file.
#
# Provide a valid path to the property protection file which contains
# the rules for property protections and the roles/policies associated
# with them.
#
# A property protection file, when set, restricts the Glance image
# properties to be created, read, updated and/or deleted by a specific
# set of users that are identified by either roles or policies.
# If this configuration option is not set, by default, property
# protections won't be enforced. If a value is specified and the file
# is not found, the glance-api service will fail to start.
# More information on property protections can be found at:
# https://docs.openstack.org/glance/latest/admin/property-protections.html
#
# Possible values:
# * Empty string
# * Valid path to the property protection configuration file
#
# Related options:
# * property_protection_rule_format
#
# (string value)
#property_protection_file =

#
# Rule format for property protection.
#
# Provide the desired way to set property protection on Glance
# image properties. The two permissible values are ``roles``
# and ``policies``. The default value is ``roles``.
#
# If the value is ``roles``, the property protection file must
# contain a comma separated list of user roles indicating
# permissions for each of the CRUD operations on each property
# being protected. If set to ``policies``, a policy defined in
# policy.json is used to express property protections for each
# of the CRUD operations. Examples of how property protections
# are enforced based on ``roles`` or ``policies`` can be found at:
# https://docs.openstack.org/glance/latest/admin/property-
# protections.html#examples
#
# Possible values:
# * roles
# * policies
#
# Related options:
# * property_protection_file
#
# (string value)
# Possible values:
# roles -
# policies -
#property_protection_rule_format = roles

#
# List of allowed exception modules to handle RPC exceptions.
#
# Provide a comma separated list of modules whose exceptions are
# permitted to be recreated upon receiving exception data via an RPC
# call made to Glance. The default list includes
# ``glance.common.exception``, ``builtins``, and ``exceptions``.
#
# The RPC protocol permits interaction with Glance via calls across a
# network or within the same system. Including a list of exception
# namespaces with this option enables RPC to propagate the exceptions
# back to the users.
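#
# For illustration only (this example is not part of the generated
# defaults, and ``myapp.exceptions`` is a hypothetical module name):
# an operator who also wants exceptions from a custom module to be
# recreated would extend the default list like so:
#
#   allowed_rpc_exception_modules = glance.common.exception,builtins,exceptions,myapp.exceptions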
#
# Possible values:
# * A comma separated list of valid exception modules
#
# Related options:
# * None
# (list value)
#allowed_rpc_exception_modules = glance.common.exception,builtins,exceptions

#
# IP address to bind the glance servers to.
#
# Provide an IP address to bind the glance server to. The default
# value is ``0.0.0.0``.
#
# Edit this option to enable the server to listen on one particular
# IP address on the network card. This facilitates selection of a
# particular network interface for the server.
#
# Possible values:
# * A valid IPv4 address
# * A valid IPv6 address
#
# Related options:
# * None
#
# (unknown value)
#bind_host = 0.0.0.0

#
# Port number on which the server will listen.
#
# Provide a valid port number to bind the server's socket to. This
# port is then set to identify processes and forward network messages
# that arrive at the server. The default bind_port value for the API
# server is 9292 and for the registry server is 9191.
#
# Possible values:
# * A valid port number (0 to 65535)
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#bind_port =

#
# Number of Glance worker processes to start.
#
# Provide a non-negative integer value to set the number of child
# process workers to service requests. By default, the number of CPUs
# available is set as the value for ``workers``, limited to 8. For
# example, if the processor count is 6, 6 workers will be used; if the
# processor count is 24, only 8 workers will be used. The limit applies
# only to the default value; if 24 workers is explicitly configured,
# 24 will be used.
#
# Each worker process is made to listen on the port set in the
# configuration file and contains a greenthread pool of size 1000.
#
# NOTE: Setting the number of workers to zero triggers the creation
# of a single API process with a greenthread pool of size 1000.
#
# Possible values:
# * 0
# * Positive integer value (typically equal to the number of CPUs)
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#workers =

#
# Maximum line size of message headers.
#
# Provide an integer value representing a length to limit the size of
# message headers. The default value is 16384.
#
# NOTE: ``max_header_line`` may need to be increased when using large
# tokens (typically those generated by the Keystone v3 API with big
# service catalogs). However, it is to be kept in mind that larger
# values for ``max_header_line`` would flood the logs.
#
# Setting ``max_header_line`` to 0 sets no limit for the line size of
# message headers.
#
# Possible values:
# * 0
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#max_header_line = 16384

#
# Set keep alive option for HTTP over TCP.
#
# Provide a boolean value to determine sending of keep alive packets.
# If set to ``False``, the server returns the header
# "Connection: close". If set to ``True``, the server returns a
# "Connection: Keep-Alive" in its responses. This enables retention of
# the same TCP connection for HTTP conversations instead of opening a
# new one with each new request.
#
# This option must be set to ``False`` if the client socket connection
# needs to be closed explicitly after the response is received and
# read successfully by the client.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#http_keepalive = true

#
# Timeout for client connections' socket operations.
#
# Provide a valid integer value representing time in seconds to set
# the period of wait before an incoming connection can be closed. The
# default value is 900 seconds.
#
# The value zero implies wait forever.
#
# Possible values:
# * Zero
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#client_socket_timeout = 900

#
# Set the number of incoming connection requests.
#
# Provide a positive integer value to limit the number of requests in
# the backlog queue. The default queue size is 4096.
#
# An incoming connection to a TCP listener socket is queued before a
# connection can be established with the server. Setting the backlog
# for a TCP socket ensures a limited queue size for incoming traffic.
#
# Possible values:
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#backlog = 4096

#
# Set the wait time before a connection recheck.
#
# Provide a positive integer value representing time in seconds which
# is set as the idle wait time before a TCP keep alive packet can be
# sent to the host. The default value is 600 seconds.
#
# Setting ``tcp_keepidle`` helps verify at regular intervals that a
# connection is intact and prevents frequent TCP connection
# reestablishment.
#
# Possible values:
# * Positive integer value representing time in seconds
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#tcp_keepidle = 600

#
# Absolute path to the CA file.
#
# Provide a string value representing a valid absolute path to
# the Certificate Authority file to use for client authentication.
#
# A CA file typically contains necessary trusted certificates to
# use for the client authentication. This is essential to ensure
# that a secure connection is established to the server via the
# internet.
#
# Possible values:
# * Valid absolute path to the CA file
#
# Related options:
# * None
#
# (string value)
#ca_file = /etc/ssl/cafile

#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file which is required to start the API service
# securely.
#
# A certificate file typically is a public key container and includes
# the server's public key, server name, server information and the
# signature which was a result of the verification process using the
# CA certificate. This is required for a secure connection
# establishment.
#
# Possible values:
# * Valid absolute path to the certificate file
#
# Related options:
# * None
#
# (string value)
#cert_file = /etc/ssl/certs

#
# Absolute path to a private key file.
#
# Provide a string value representing a valid absolute path to a
# private key file which is required to establish the client-server
# connection.
#
# Possible values:
# * Absolute path to the private key file
#
# Related options:
# * None
#
# (string value)
#key_file = /etc/ssl/key/key-file.pem

# DEPRECATED: The HTTP header used to determine the scheme for the original
# request, even if it was removed by an SSL terminating proxy. Typical value is
# "HTTP_X_FORWARDED_PROTO". (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use the http_proxy_to_wsgi middleware instead.
#secure_proxy_ssl_header =

#
# The relative path to sqlite file database that will be used for image cache
# management.
#
# This is a relative path to the sqlite file database that tracks the age and
# usage statistics of image cache. The path is relative to image cache base
# directory, specified by the configuration option ``image_cache_dir``.
#
# This is a lightweight database with just one table.
#
# Possible values:
# * A valid relative path to sqlite file database
#
# Related options:
# * ``image_cache_dir``
#
# (string value)
#image_cache_sqlite_db = cache.db

#
# The driver to use for image cache management.
#
# This configuration option provides the flexibility to choose between the
# different image-cache drivers available. An image-cache driver is responsible
# for providing the essential functions of image-cache like write images to/read
# images from cache, track age and usage of cached images, provide a list of
# cached images, fetch size of the cache, queue images for caching and clean up
# the cache, etc.
#
# The essential functions of a driver are defined in the base class
# ``glance.image_cache.drivers.base.Driver``. All image-cache drivers (existing
# and prospective) must implement this interface. Currently available drivers
# are ``sqlite`` and ``xattr``. These drivers primarily differ in the way they
# store the information about cached images:
# * The ``sqlite`` driver uses a sqlite database (which sits on every glance
# node locally) to track the usage of cached images.
# * The ``xattr`` driver uses the extended attributes of files to store this
# information. It also requires a filesystem that sets ``atime`` on the files
# when accessed.
#
# Possible values:
# * sqlite
# * xattr
#
# Related options:
# * None
#
# (string value)
# Possible values:
# sqlite -
# xattr -
#image_cache_driver = sqlite

#
# The upper limit on cache size, in bytes, after which the cache-pruner cleans
# up the image cache.
#
# NOTE: This is just a threshold for cache-pruner to act upon. It is NOT a
# hard limit beyond which the image cache would never grow. In fact, depending
# on how often the cache-pruner runs and how quickly the cache fills, the image
# cache can far exceed the size specified here very easily. Hence, care must be
# taken to appropriately schedule the cache-pruner and in setting this limit.
#
# Glance caches an image when it is downloaded. Consequently, the size of the
# image cache grows over time as the number of downloads increases. To keep the
# cache size from becoming unmanageable, it is recommended to run the
# cache-pruner as a periodic task. When the cache pruner is kicked off, it
# compares the current size of image cache and triggers a cleanup if the image
# cache grew beyond the size specified here. After the cleanup, the size of
# cache is less than or equal to size specified here.
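#
# For example, the default value of 10737418240 bytes corresponds to
# 10 GiB (10 * 1024 * 1024 * 1024). An operator wanting a 50 GiB
# pruning threshold (a hypothetical value, for illustration only)
# would set:
#
#   image_cache_max_size = 53687091200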
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#image_cache_max_size = 10737418240

#
# The amount of time, in seconds, an incomplete image remains in the cache.
#
# Incomplete images are images for which download is in progress. Please see the
# description of configuration option ``image_cache_dir`` for more detail.
# Sometimes, due to various reasons, it is possible the download may hang and
# the incompletely downloaded image remains in the ``incomplete`` directory.
# This configuration option sets a time limit on how long the incomplete images
# should remain in the ``incomplete`` directory before they are cleaned up.
# Once an incomplete image spends more time than is specified here, it'll be
# removed by cache-cleaner on its next run.
#
# It is recommended to run cache-cleaner as a periodic task on the Glance API
# nodes to keep the incomplete images from occupying disk space.
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#image_cache_stall_time = 86400

#
# Base directory for image cache.
#
# This is the location where image data is cached and served out of. All cached
# images are stored directly under this directory. This directory also contains
# three subdirectories, namely, ``incomplete``, ``invalid`` and ``queue``.
#
# The ``incomplete`` subdirectory is the staging area for downloading images. An
# image is first downloaded to this directory. When the image download is
# successful it is moved to the base directory. However, if the download fails,
# the partially downloaded image file is moved to the ``invalid`` subdirectory.
#
# The ``queue`` subdirectory is used for queuing images for download. This is
# used primarily by the cache-prefetcher, which can be scheduled as a periodic
# task like cache-pruner and cache-cleaner, to cache images ahead of their
# usage.
# Upon receiving the request to cache an image, Glance touches a file in the
# ``queue`` directory with the image id as the file name. The cache-prefetcher,
# when running, polls for the files in ``queue`` directory and starts
# downloading them in the order they were created. When the download is
# successful, the zero-sized file is deleted from the ``queue`` directory.
# If the download fails, the zero-sized file remains and it'll be retried the
# next time cache-prefetcher runs.
#
# Possible values:
# * A valid path
#
# Related options:
# * ``image_cache_sqlite_db``
#
# (string value)
#image_cache_dir =

#
# Default publisher_id for outgoing Glance notifications.
#
# This is the value that the notification driver will use to identify
# messages for events originating from the Glance service. Typically,
# this is the hostname of the instance that generated the message.
#
# Possible values:
# * Any reasonable instance identifier, for example: image.host1
#
# Related options:
# * None
#
# (string value)
#default_publisher_id = image.localhost

#
# List of notifications to be disabled.
#
# Specify a list of notifications that should not be emitted.
# A notification can be given either as a notification type to
# disable a single event notification, or as a notification group
# prefix to disable all event notifications within a group.
#
# Possible values:
# A comma-separated list of individual notification types or
# notification groups to be disabled. Currently supported groups:
# * image
# * image.member
# * task
# * metadef_namespace
# * metadef_object
# * metadef_property
# * metadef_resource_type
# * metadef_tag
# For a complete listing and description of each event refer to:
# http://docs.openstack.org/developer/glance/notifications.html
#
# The values must be specified as: <group_name>.<event_name>
# For example: image.create,task.success,metadef_tag
#
# Related options:
# * None
#
# (list value)
#disabled_notifications =

# DEPRECATED:
# Address the registry server is hosted on.
#
# Possible values:
# * A valid IP or hostname
#
# Related options:
# * None
#
# (unknown value)
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_host = 0.0.0.0

# DEPRECATED:
# Port the registry server is listening on.
#
# Possible values:
# * A valid port number
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_port = 9191

# DEPRECATED: Whether to pass through the user token when making requests to the
# registry. To prevent failures with token expiration during big files upload,
# it is recommended to set this parameter to False. If "use_user_token" is not
# in effect, then admin credentials can be specified. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#use_user_token = true

# DEPRECATED: The administrator's user name. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_user =

# DEPRECATED: The administrator's password. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_password =

# DEPRECATED: The tenant name of the administrative user. If "use_user_token" is
# not in effect, then admin tenant name can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_tenant_name =

# DEPRECATED: The URL to the keystone service. If "use_user_token" is not in
# effect and using keystone auth, then URL of keystone can be specified. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_url =

# DEPRECATED: The strategy to use for authentication. If "use_user_token" is not
# in effect, then auth strategy can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_strategy = noauth

# DEPRECATED: The region for the authentication service. If "use_user_token" is
# not in effect and using keystone auth, then region name can be specified.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_region =

# DEPRECATED:
# Protocol to use for communication with the registry server.
#
# Provide a string value representing the protocol to use for
# communication with the registry server. By default, this option is
# set to ``http`` and the connection is not secure.
#
# This option can be set to ``https`` to establish a secure connection
# to the registry server. In this case, provide a key to use for the
# SSL connection using the ``registry_client_key_file`` option. Also
# include the CA file and cert file using the options
# ``registry_client_ca_file`` and ``registry_client_cert_file``
# respectively.
#
# Possible values:
# * http
# * https
#
# Related options:
# * registry_client_key_file
# * registry_client_cert_file
# * registry_client_ca_file
#
# (string value)
# Possible values:
# http -
# https -
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_client_protocol = http

# DEPRECATED:
# Absolute path to the private key file.
#
# Provide a string value representing a valid absolute path to the
# private key file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
# environment variable may be set to a filepath of the key file.
#
# Possible values:
# * String value representing a valid absolute path to the key
# file.
#
# Related options:
# * registry_client_protocol
#
# (string value)
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_client_key_file = /etc/ssl/key/key-file.pem

# DEPRECATED:
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
# environment variable may be set to a filepath of the certificate
# file.
#
# Possible values:
# * String value representing a valid absolute path to the
# certificate file.
#
# Related options:
# * registry_client_protocol
#
# (string value)
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_client_cert_file = /etc/ssl/certs/file.crt

# DEPRECATED:
# Absolute path to the Certificate Authority file.
#
# Provide a string value representing a valid absolute path to the
# certificate authority file to use for establishing a secure
# connection to the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
# environment variable may be set to a filepath of the CA file.
# This option is ignored if the ``registry_client_insecure`` option
# is set to ``True``.
#
# Possible values:
# * String value representing a valid absolute path to the CA
# file.
#
# Related options:
# * registry_client_protocol
# * registry_client_insecure
#
# (string value)
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_client_ca_file = /etc/ssl/cafile/file.ca

# DEPRECATED:
# Set verification of the registry server certificate.
#
# Provide a boolean value to determine whether or not to validate
# SSL connections to the registry server. By default, this option
# is set to ``False`` and the SSL connections are validated.
#
# If set to ``True``, the connection to the registry server is not
# validated via a certifying authority and the
# ``registry_client_ca_file`` option is ignored. This is the
# registry's equivalent of specifying --insecure on the command line
# using glanceclient for the API.
#
# Possible values:
# * True
# * False
#
# Related options:
# * registry_client_protocol
# * registry_client_ca_file
#
# (boolean value)
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_client_insecure = false

# DEPRECATED:
# Timeout value for registry requests.
#
# Provide an integer value representing the period of time in seconds
# that the API server will wait for a registry request to complete.
# The default value is 600 seconds.
#
# A value of 0 implies that a request will never timeout.
#
# Possible values:
# * Zero
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason:
# Glance registry service is deprecated for removal.
#
# More information can be found from the spec:
# http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance
# /deprecate-registry.html
#registry_client_timeout = 600

#
# Send headers received from identity when making requests to
# registry.
#
# Typically, Glance registry can be deployed in multiple flavors,
# which may or may not include authentication. For example,
# ``trusted-auth`` is a flavor that does not require the registry
# service to authenticate the requests it receives. However, the
# registry service may still need a user context to be populated to
# serve the requests. This can be achieved by the caller
# (the Glance API usually) passing through the headers it received
# from authenticating with identity for the same request. The typical
# headers sent are ``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``,
# ``X-Identity-Status`` and ``X-Service-Catalog``.
#
# Provide a boolean value to determine whether to send the identity
# headers to provide tenant and user information along with the
# requests to registry service. By default, this option is set to
# ``False``, which means that user and tenant information is not
# available readily. It must be obtained by authenticating. Hence, if
# this is set to ``False``, ``flavor`` must be set to a value that
# either includes authentication or provides an authenticated user
# context.
#
# Possible values:
# * True
# * False
#
# Related options:
# * flavor
#
# (boolean value)
#send_identity_headers = false

#
# The amount of time, in seconds, to delay image scrubbing.
#
# When delayed delete is turned on, an image is put into ``pending_delete``
# state upon deletion until the scrubber deletes its image data. Typically, soon
# after the image is put into ``pending_delete`` state, it is available for
# scrubbing. However, scrubbing can be delayed until a later point using this
# configuration option. This option denotes the time period an image spends in
# ``pending_delete`` state before it is available for scrubbing.
#
# It is important to realize that this has storage implications. The larger the
# ``scrub_time``, the longer the time to reclaim backend storage from deleted
# images.
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * ``delayed_delete``
#
# (integer value)
# Minimum value: 0
#scrub_time = 0

#
# The size of thread pool to be used for scrubbing images.
#
# When there are a large number of images to scrub, it is beneficial to scrub
# images in parallel so that the scrub queue stays in control and the backend
# storage is reclaimed in a timely fashion. This configuration option denotes
# the maximum number of images to be scrubbed in parallel. The default value is
# one, which signifies serial scrubbing. Any value above one indicates parallel
# scrubbing.
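#
# For example (hypothetical values, for illustration only), to enable
# delayed delete, hold images in ``pending_delete`` for one hour, and
# scrub up to four images in parallel, an operator would set:
#
#   delayed_delete = true
#   scrub_time = 3600
#   scrub_pool_size = 4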
#
# Possible values:
# * Any non-zero positive integer
#
# Related options:
# * ``delayed_delete``
#
# (integer value)
# Minimum value: 1
#scrub_pool_size = 1

#
# Turn on/off delayed delete.
#
# Typically when an image is deleted, the ``glance-api`` service puts the image
# into ``deleted`` state and deletes its data at the same time. Delayed delete
# is a feature in Glance that delays the actual deletion of image data until a
# later point in time (as determined by the configuration option
# ``scrub_time``).
# When delayed delete is turned on, the ``glance-api`` service puts the image
# into ``pending_delete`` state upon deletion and leaves the image data in the
# storage backend for the image scrubber to delete at a later time. The image
# scrubber will move the image into ``deleted`` state upon successful deletion
# of image data.
#
# NOTE: When delayed delete is turned on, image scrubber MUST be running as a
# periodic task to prevent the backend storage from filling up with undesired
# usage.
#
# Possible values:
# * True
# * False
#
# Related options:
# * ``scrub_time``
# * ``wakeup_time``
# * ``scrub_pool_size``
#
# (boolean value)
#delayed_delete = false

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append =

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file =

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir =

# Uses logging handler designed to watch file system. When log file is moved
# or removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Enable journald for logging. If running in a systemd environment you may
# wish to enable journal support. Doing so will use the journal native
# protocol which includes structured metadata in addition to log messages.
# This option is ignored if log_config_append is set. (boolean value)
#use_journal = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Use JSON formatting for logging. This option is ignored if log_config_append
# is set. (boolean value)
#use_json = false

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = false

# Format string to use for log messages with context.
(string value) #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s # Format string to use for log messages when context is undefined. (string # value) #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s # Additional data to append to log message when logging level for the message is # DEBUG. (string value) #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d # Prefix each line of exception output with this format. (string value) #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s # Defines the format string for %(user_identity)s that is used in # logging_context_format_string. (string value) #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s # List of package logging levels in logger=LEVEL pairs. This option is ignored # if log_config_append is set. (list value) #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO # Enables or disables publication of error events. (boolean value) #publish_errors = false # The format for an instance that is passed with the log message. (string value) #instance_format = "[instance: %(uuid)s] " # The format for an instance UUID that is passed with the log message. (string # value) #instance_uuid_format = "[instance: %(uuid)s] " # Interval, number of seconds, of log rate limiting. (integer value) #rate_limit_interval = 0 # Maximum number of logged messages per rate_limit_interval. 
(integer value) #rate_limit_burst = 0 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or # empty string. Logs with level greater or equal to rate_limit_except_level are # not filtered. An empty string means that all levels are filtered. (string # value) #rate_limit_except_level = CRITICAL # Enables or disables fatal status of deprecations. (boolean value) #fatal_deprecations = false # # From oslo.messaging # # Size of RPC connection pool. (integer value) #rpc_conn_pool_size = 30 # The pool size limit for connections expiration policy (integer value) #conn_pool_min_size = 2 # The time-to-live in sec of idle connections in the pool (integer value) #conn_pool_ttl = 1200 # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. # The "host" option should point or resolve to this address. (string value) #rpc_zmq_bind_address = * # MatchMaker driver. (string value) # Possible values: # redis - # sentinel - # dummy - #rpc_zmq_matchmaker = redis # Number of ZeroMQ contexts, defaults to 1. (integer value) #rpc_zmq_contexts = 1 # Maximum number of ingress messages to locally buffer per topic. Default is # unlimited. (integer value) #rpc_zmq_topic_backlog = # Directory for holding IPC sockets. (string value) #rpc_zmq_ipc_dir = /var/run/openstack # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match # "host" option, if running Nova. (string value) #rpc_zmq_host = localhost # Number of seconds to wait before all pending messages will be sent after # closing a socket. The default value of -1 specifies an infinite linger period. # The value of 0 specifies no linger period. Pending messages shall be discarded # immediately when the socket is closed. Positive values specify an upper bound # for the linger period. (integer value) # Deprecated group/name - [DEFAULT]/rpc_cast_timeout #zmq_linger = -1 # The default number of seconds that poll should wait. Poll raises timeout # exception when timeout expired. 
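The oslo.log rate-limiting options described above work as a trio; for example (interval and burst values are illustrative):

```ini
[DEFAULT]
# Allow at most 30 log messages per 10-second interval; messages at
# CRITICAL level or above are never filtered.
rate_limit_interval = 10
rate_limit_burst = 30
rate_limit_except_level = CRITICAL
```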
(integer value) #rpc_poll_timeout = 1 # Expiration timeout in seconds of a name service record about existing target ( # < 0 means no timeout). (integer value) #zmq_target_expire = 300 # Update period in seconds of a name service record about existing target. # (integer value) #zmq_target_update = 180 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean # value) #use_pub_sub = false # Use ROUTER remote proxy. (boolean value) #use_router_proxy = false # This option makes direct connections dynamic or static. It makes sense only # with use_router_proxy=False which means to use direct connections for direct # message types (ignored otherwise). (boolean value) #use_dynamic_connections = false # How many additional connections to a host will be made for failover reasons. # This option is actual only in dynamic connections mode. (integer value) #zmq_failover_connections = 2 # Minimal port number for random ports range. (port value) # Minimum value: 0 # Maximum value: 65535 #rpc_zmq_min_port = 49153 # Maximal port number for random ports range. (integer value) # Minimum value: 1 # Maximum value: 65536 #rpc_zmq_max_port = 65536 # Number of retries to find free port number before fail with ZMQBindError. # (integer value) #rpc_zmq_bind_port_retries = 100 # Default serialization mechanism for serializing/deserializing # outgoing/incoming messages (string value) # Possible values: # json - # msgpack - #rpc_zmq_serialization = json # This option configures round-robin mode in zmq socket. True means not keeping # a queue when server side disconnects. False means to keep queue and messages # even if server is disconnected, when the server appears we send all # accumulated messages to it. (boolean value) #zmq_immediate = true # Enable/disable TCP keepalive (KA) mechanism. 
The default value of -1 (or any # other negative value) means to skip any overrides and leave it to OS default; # 0 and 1 (or any other positive value) mean to disable and enable the option # respectively. (integer value) #zmq_tcp_keepalive = -1 # The duration between two keepalive transmissions in idle condition. The unit # is platform dependent, for example, seconds in Linux, milliseconds in Windows # etc. The default value of -1 (or any other negative value and 0) means to skip # any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_idle = -1 # The number of retransmissions to be carried out before declaring that remote # end is not available. The default value of -1 (or any other negative value and # 0) means to skip any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_cnt = -1 # The duration between two successive keepalive retransmissions, if # acknowledgement to the previous keepalive transmission is not received. The # unit is platform dependent, for example, seconds in Linux, milliseconds in # Windows etc. The default value of -1 (or any other negative value and 0) means # to skip any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_intvl = -1 # Maximum number of (green) threads to work concurrently. (integer value) #rpc_thread_pool_size = 100 # Expiration timeout in seconds of a sent/received message after which it is not # tracked anymore by a client/server. (integer value) #rpc_message_ttl = 300 # Wait for message acknowledgements from receivers. This mechanism works only # via proxy without PUB/SUB. (boolean value) #rpc_use_acks = false # Number of seconds to wait for an ack from a cast/call. After each retry # attempt this timeout is multiplied by some specified multiplier. (integer # value) #rpc_ack_timeout_base = 15 # Number to multiply base ack timeout by after each retry attempt. 
# (integer value)
#rpc_ack_timeout_multiplier = 2

# Default number of message sending attempts in case any problems occur: a
# positive value N means at most N retries, 0 means no retries, None or -1 (or
# any other negative value) means to retry forever. This option is used only
# if acknowledgments are enabled. (integer value)
#rpc_retry_attempts = 3

# List of publisher hosts SubConsumer can subscribe on. This option has higher
# priority than the default publishers list taken from the matchmaker. (list
# value)
#subscribe_on =

# Size of executor thread pool when executor is threading or eventlet.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# The network address and optional user credentials for connecting to the
# messaging backend, in URL format. The expected format is:
#
# driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
#
# Example: rabbit://rabbitmq:password@127.0.0.1:5672//
#
# For full details on the fields in the URL see the documentation of
# oslo_messaging.TransportURL at
# https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
# (string value)
#transport_url =

# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack


[cors]

#
# From oslo.middleware.cors
#

# Indicate whether this resource may be shared with the domain received in the
# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no
# trailing slash.
Example: https://horizon.example.com (list value) #allowed_origin = # Indicate that the actual request can include user credentials (boolean value) #allow_credentials = true # Indicate which headers are safe to expose to the API. Defaults to HTTP Simple # Headers. (list value) #expose_headers = X-Image-Meta-Checksum,X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID # Maximum cache age of CORS preflight requests. (integer value) #max_age = 3600 # Indicate which methods can be used during the actual request. (list value) #allow_methods = GET,PUT,POST,DELETE,PATCH # Indicate which header field names may be used during the actual request. (list # value) #allow_headers = Content-MD5,X-Image-Meta-Checksum,X-Storage-Token,Accept-Encoding,X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID [database] # # From oslo.db # # If True, SQLite uses synchronous mode. (boolean value) #sqlite_synchronous = true # The back end to use for the database. (string value) # Deprecated group/name - [DEFAULT]/db_backend #backend = sqlalchemy # The SQLAlchemy connection string to use to connect to the database. (string # value) # Deprecated group/name - [DEFAULT]/sql_connection # Deprecated group/name - [DATABASE]/sql_connection # Deprecated group/name - [sql]/connection #connection = # The SQLAlchemy connection string to use to connect to the slave database. # (string value) #slave_connection = # The SQL mode to be used for MySQL sessions. This option, including the # default, overrides any server-set SQL mode. To use whatever SQL mode is set by # the server configuration, set this to no value. Example: mysql_sql_mode= # (string value) #mysql_sql_mode = TRADITIONAL # If True, transparently enables support for handling MySQL Cluster (NDB). 
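Putting the ``[cors]`` options above together, a deployment serving a single Horizon origin might use something like the following (the origin URL is illustrative):

```ini
[cors]
allowed_origin = https://horizon.example.com
allow_credentials = true
allow_methods = GET,PUT,POST,DELETE,PATCH
max_age = 3600
```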
# (boolean value) #mysql_enable_ndb = false # Connections which have been present in the connection pool longer than this # number of seconds will be replaced with a new one the next time they are # checked out from the pool. (integer value) # Deprecated group/name - [DATABASE]/idle_timeout # Deprecated group/name - [database]/idle_timeout # Deprecated group/name - [DEFAULT]/sql_idle_timeout # Deprecated group/name - [DATABASE]/sql_idle_timeout # Deprecated group/name - [sql]/idle_timeout #connection_recycle_time = 3600 # Minimum number of SQL connections to keep open in a pool. (integer value) # Deprecated group/name - [DEFAULT]/sql_min_pool_size # Deprecated group/name - [DATABASE]/sql_min_pool_size #min_pool_size = 1 # Maximum number of SQL connections to keep open in a pool. Setting a value of 0 # indicates no limit. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_pool_size # Deprecated group/name - [DATABASE]/sql_max_pool_size #max_pool_size = 5 # Maximum number of database connection retries during startup. Set to -1 to # specify an infinite retry count. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_retries # Deprecated group/name - [DATABASE]/sql_max_retries #max_retries = 10 # Interval between retries of opening a SQL connection. (integer value) # Deprecated group/name - [DEFAULT]/sql_retry_interval # Deprecated group/name - [DATABASE]/reconnect_interval #retry_interval = 10 # If set, use this value for max_overflow with SQLAlchemy. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_overflow # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow #max_overflow = 50 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer # value) # Minimum value: 0 # Maximum value: 100 # Deprecated group/name - [DEFAULT]/sql_connection_debug #connection_debug = 0 # Add Python stack traces to SQL as comment strings. 
(boolean value) # Deprecated group/name - [DEFAULT]/sql_connection_trace #connection_trace = false # If set, use this value for pool_timeout with SQLAlchemy. (integer value) # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout #pool_timeout = # Enable the experimental use of database reconnect on connection lost. (boolean # value) #use_db_reconnect = false # Seconds between retries of a database transaction. (integer value) #db_retry_interval = 1 # If True, increases the interval between retries of a database operation up to # db_max_retry_interval. (boolean value) #db_inc_retry_interval = true # If db_inc_retry_interval is set, the maximum seconds between retries of a # database operation. (integer value) #db_max_retry_interval = 10 # Maximum retries in case of connection error or deadlock error before error is # raised. Set to -1 to specify an infinite retry count. (integer value) #db_max_retries = 20 # # From oslo.db.concurrency # # Enable the experimental use of thread pooling for all DB API calls (boolean # value) # Deprecated group/name - [DEFAULT]/dbapi_use_tpool #use_tpool = false [glance_store] # # From glance.store # # # List of enabled Glance stores. # # Register the storage backends to use for storing disk images # as a comma separated list. The default stores enabled for # storing disk images with Glance are ``file`` and ``http``. # # Possible values: # * A comma separated list that could include: # * file # * http # * swift # * rbd # * sheepdog # * cinder # * vmware # # Related Options: # * default_store # # (list value) #stores = file,http # # The default scheme to use for storing images. # # Provide a string value representing the default scheme to use for # storing images. If not set, Glance uses ``file`` as the default # scheme to store images with the ``file`` store. # # NOTE: The value given for this configuration option must be a valid # scheme for a store registered with the ``stores`` configuration # option. 
# # Possible values: # * file # * filesystem # * http # * https # * swift # * swift+http # * swift+https # * swift+config # * rbd # * sheepdog # * cinder # * vsphere # # Related Options: # * stores # # (string value) # Possible values: # file - # filesystem - # http - # https - # swift - # swift+http - # swift+https - # swift+config - # rbd - # sheepdog - # cinder - # vsphere - #default_store = file # # Minimum interval in seconds to execute updating dynamic storage # capabilities based on current backend status. # # Provide an integer value representing time in seconds to set the # minimum interval before an update of dynamic storage capabilities # for a storage backend can be attempted. Setting # ``store_capabilities_update_min_interval`` does not mean updates # occur periodically based on the set interval. Rather, the update # is performed at the elapse of this interval set, if an operation # of the store is triggered. # # By default, this option is set to zero and is disabled. Provide an # integer value greater than zero to enable this option. # # NOTE: For more information on store capabilities and their updates, # please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo # /store-capabilities.html # # For more information on setting up a particular store in your # deployment and help with the usage of this feature, please contact # the storage driver maintainers listed here: # http://docs.openstack.org/developer/glance_store/drivers/index.html # # Possible values: # * Zero # * Positive integer # # Related Options: # * None # # (integer value) # Minimum value: 0 #store_capabilities_update_min_interval = 0 # # Information to match when looking for cinder in the service catalog. 
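To illustrate the constraint noted above — ``default_store`` must be a scheme registered via the ``stores`` option — a deployment adding the rbd backend might use:

```ini
[glance_store]
# rbd is registered in ``stores``, so it is a valid ``default_store``.
stores = file,http,rbd
default_store = rbd
```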
#
# When the ``cinder_endpoint_template`` is not set and any of
# ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, ``cinder_store_password`` is not set,
# cinder store uses this information to look up the cinder endpoint from the
# service catalog in the current context. ``cinder_os_region_name``, if set,
# is taken into consideration to fetch the appropriate endpoint.
#
# The service catalog can be listed by the ``openstack catalog list`` command.
#
# Possible values:
# * A string of the following form:
#   ``<service_type>:<service_name>:<interface>``
#   At least ``service_type`` and ``interface`` should be specified.
#   ``service_name`` can be omitted.
#
# Related options:
# * cinder_os_region_name
# * cinder_endpoint_template
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
# * cinder_store_password
#
# (string value)
#cinder_catalog_info = volumev2::publicURL

#
# Override service catalog lookup with template for cinder endpoint.
#
# When this option is set, this value is used to generate the cinder endpoint,
# instead of looking it up from the service catalog.
# This value is ignored if ``cinder_store_auth_address``,
# ``cinder_store_user_name``, ``cinder_store_project_name``, and
# ``cinder_store_password`` are specified.
#
# If this configuration option is set, ``cinder_catalog_info`` will be
# ignored.
#
# Possible values:
# * URL template string for the cinder endpoint, where ``%%(tenant)s`` is
#   replaced with the current tenant (project) name.
#   For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s``
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
# * cinder_store_password
# * cinder_catalog_info
#
# (string value)
#cinder_endpoint_template =

#
# Region name to lookup cinder service from the service catalog.
#
# This is used only when ``cinder_catalog_info`` is used for determining the
# endpoint.
If set, the lookup for the cinder endpoint by this node is filtered to
# the specified region. It is useful when multiple regions are listed in the
# catalog. If this is not set, the endpoint is looked up from every region.
#
# Possible values:
# * A string that is a valid region name.
#
# Related options:
# * cinder_catalog_info
#
# (string value)
# Deprecated group/name - [glance_store]/os_region_name
#cinder_os_region_name =

#
# Location of a CA certificates file used for cinder client requests.
#
# The specified CA certificates file, if set, is used to verify cinder
# connections via HTTPS endpoint. If the endpoint is HTTP, this value is
# ignored.
# ``cinder_api_insecure`` must be set to ``True`` to enable the verification.
#
# Possible values:
# * Path to a ca certificates file
#
# Related options:
# * cinder_api_insecure
#
# (string value)
#cinder_ca_certificates_file =

#
# Number of cinderclient retries on failed http calls.
#
# When a call fails due to any error, cinderclient will retry the call up to
# the specified number of times after sleeping a few seconds.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_http_retries = 3

#
# Time period, in seconds, to wait for a cinder volume transition to
# complete.
#
# When the cinder volume is created, deleted, or attached to the glance node
# to read/write the volume data, the volume's state is changed. For example,
# the newly created volume status changes from ``creating`` to ``available``
# after the creation process is completed. This specifies the maximum time to
# wait for the status change. If a timeout occurs while waiting, or the status
# is changed to an unexpected value (e.g. ``error``), the image creation
# fails.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_state_transition_timeout = 300

#
# Allow performing insecure SSL requests to cinder.
#
# If this option is set to True, the HTTPS endpoint connection is verified
# using the CA certificates file specified by the
# ``cinder_ca_certificates_file`` option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * cinder_ca_certificates_file
#
# (boolean value)
#cinder_api_insecure = false

#
# The address where the cinder authentication service is listening.
#
# When all of the ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, and ``cinder_store_password`` options are
# specified, the specified values are always used for the authentication.
# This is useful to hide the image volumes from users by storing them in a
# project/tenant specific to the image service. It also enables users to share
# the image volume among other projects under the control of glance's ACL.
#
# If any of these options is not set, the cinder endpoint is looked up
# from the service catalog, and the current context's user and project are
# used.
#
# Possible values:
# * A valid authentication service address, for example:
#   ``http://openstack.example.org/identity/v2.0``
#
# Related options:
# * cinder_store_user_name
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_auth_address =

#
# User name to authenticate against cinder.
#
# This must be used with all the following related options. If any of these
# are not specified, the user of the current context is used.
#
# Possible values:
# * A valid user name
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_user_name =

#
# Password for the user authenticating against cinder.
#
# This must be used with all the following related options. If any of these
# are not specified, the user of the current context is used.
# # Possible values: # * A valid password for the user specified by ``cinder_store_user_name`` # # Related options: # * cinder_store_auth_address # * cinder_store_user_name # * cinder_store_project_name # # (string value) #cinder_store_password = # # Project name where the image volume is stored in cinder. # # If this configuration option is not set, the project in current context is # used. # # This must be used with all the following related options. If any of these are # not specified, the project of the current context is used. # # Possible values: # * A valid project name # # Related options: # * ``cinder_store_auth_address`` # * ``cinder_store_user_name`` # * ``cinder_store_password`` # # (string value) #cinder_store_project_name = # # Path to the rootwrap configuration file to use for running commands as root. # # The cinder store requires root privileges to operate the image volumes (for # connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). # The configuration file should allow the required commands by cinder store and # os-brick library. # # Possible values: # * Path to the rootwrap config file # # Related options: # * None # # (string value) #rootwrap_config = /etc/glance/rootwrap.conf # # Volume type that will be used for volume creation in cinder. # # Some cinder backends can have several volume types to optimize storage usage. # Adding this option allows an operator to choose a specific volume type # in cinder that can be optimized for images. # # If this is not set, then the default volume type specified in the cinder # configuration will be used for volume creation. # # Possible values: # * A valid volume type from cinder # # Related options: # * None # # (string value) #cinder_volume_type = # # Directory to which the filesystem backend store writes images. # # Upon start up, Glance creates the directory if it doesn't already # exist and verifies write access to the user under which # ``glance-api`` runs. 
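As described above, hiding image volumes in a service-specific project requires setting all four cinder authentication options together; for example (the address, user, project, and password shown are placeholders):

```ini
[glance_store]
cinder_store_auth_address = http://openstack.example.org/identity/v2.0
cinder_store_user_name = glance
cinder_store_password = GLANCE_PASSWORD
cinder_store_project_name = service
```

If any of the four is omitted, the credentials of the current request context are used instead.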
If the write access isn't available, a # ``BadStoreConfiguration`` exception is raised and the filesystem # store may not be available for adding new images. # # NOTE: This directory is used only when filesystem store is used as a # storage backend. Either ``filesystem_store_datadir`` or # ``filesystem_store_datadirs`` option must be specified in # ``glance-api.conf``. If both options are specified, a # ``BadStoreConfiguration`` will be raised and the filesystem store # may not be available for adding new images. # # Possible values: # * A valid path to a directory # # Related options: # * ``filesystem_store_datadirs`` # * ``filesystem_store_file_perm`` # # (string value) #filesystem_store_datadir = /var/lib/glance/images # # List of directories and their priorities to which the filesystem # backend store writes images. # # The filesystem store can be configured to store images in multiple # directories as opposed to using a single directory specified by the # ``filesystem_store_datadir`` configuration option. When using # multiple directories, each directory can be given an optional # priority to specify the preference order in which they should # be used. Priority is an integer that is concatenated to the # directory path with a colon where a higher value indicates higher # priority. When two directories have the same priority, the directory # with most free space is used. When no priority is specified, it # defaults to zero. # # More information on configuring filesystem store with multiple store # directories can be found at # http://docs.openstack.org/developer/glance/configuring.html # # NOTE: This directory is used only when filesystem store is used as a # storage backend. Either ``filesystem_store_datadir`` or # ``filesystem_store_datadirs`` option must be specified in # ``glance-api.conf``. If both options are specified, a # ``BadStoreConfiguration`` will be raised and the filesystem store # may not be available for adding new images. 
#
# Possible values:
# * List of strings of the following form:
#   * ``<a valid directory path>:<optional integer priority>``
#
# Related options:
# * ``filesystem_store_datadir``
# * ``filesystem_store_file_perm``
#
# (multi valued)
#filesystem_store_datadirs =

#
# Filesystem store metadata file.
#
# The path to a file which contains the metadata to be returned with
# any location associated with the filesystem store. The file must
# contain a valid JSON object. The object should contain the keys
# ``id`` and ``mountpoint``. The value for both keys should be a
# string.
#
# Possible values:
# * A valid path to the store metadata file
#
# Related options:
# * None
#
# (string value)
#filesystem_store_metadata_file =

#
# File access permissions for the image files.
#
# Set the intended file access permissions for image data. This provides
# a way to enable other services, e.g. Nova, to consume images directly
# from the filesystem store. The users running the services that are
# intended to be given access can be made members of the group that
# owns the files created. Assigning a value less than or equal to
# zero for this configuration option signifies that no changes are made
# to the default permissions. This value will be decoded as an octal
# digit.
#
# For more information, please refer to the documentation at
# http://docs.openstack.org/developer/glance/configuring.html
#
# Possible values:
# * A valid file access permission
# * Zero
# * Any negative integer
#
# Related options:
# * None
#
# (integer value)
#filesystem_store_file_perm = 0

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the remote server certificate. If
# this option is set, the ``https_insecure`` option will be ignored and
# the CA file specified will be used to authenticate the server
# certificate and establish a secure connection to the server.
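The multi-directory form described above takes an optional integer priority per directory, joined to the path with a colon; a sketch with illustrative paths:

```ini
[glance_store]
# /fast is preferred (priority 200), then /bulk (priority 100); a
# directory without a suffix defaults to priority zero.
filesystem_store_datadirs = /var/lib/glance/fast:200
filesystem_store_datadirs = /var/lib/glance/bulk:100
filesystem_store_datadirs = /var/lib/glance/spill
```

When two directories share the same priority, the one with the most free space is used.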
# # Possible values: # * A valid path to a CA file # # Related options: # * https_insecure # # (string value) #https_ca_certificates_file = # # Set verification of the remote server certificate. # # This configuration option takes in a boolean value to determine # whether or not to verify the remote server certificate. If set to # True, the remote server certificate is not verified. If the option is # set to False, then the default CA truststore is used for verification. # # This option is ignored if ``https_ca_certificates_file`` is set. # The remote server certificate will then be verified using the file # specified using the ``https_ca_certificates_file`` option. # # Possible values: # * True # * False # # Related options: # * https_ca_certificates_file # # (boolean value) #https_insecure = true # # The http/https proxy information to be used to connect to the remote # server. # # This configuration option specifies the http/https proxy information # that should be used to connect to the remote server. The proxy # information should be a key value pair of the scheme and proxy, for # example, http:10.0.0.1:3128. You can also specify proxies for multiple # schemes by separating the key value pairs with a comma, for example, # http:10.0.0.1:3128, https:10.0.0.1:1080. # # Possible values: # * A comma separated list of scheme:proxy pairs as described above # # Related options: # * None # # (dict value) #http_proxy_information = # # Size, in megabytes, to chunk RADOS images into. # # Provide an integer value representing the size in megabytes to chunk # Glance images into. The default chunk size is 8 megabytes. For optimal # performance, the value should be a power of two. # # When Ceph's RBD object storage system is used as the storage backend # for storing Glance images, the images are chunked into objects of the # size set using this option. These chunked objects are then stored # across the distributed block data store to use for Glance. 
# # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #rbd_store_chunk_size = 8 # # RADOS pool in which images are stored. # # When RBD is used as the storage backend for storing Glance images, the # images are stored by means of logical grouping of the objects (chunks # of images) into a ``pool``. Each pool is defined with the number of # placement groups it can contain. The default pool that is used is # 'images'. # # More information on the RBD storage backend can be found here: # http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ # # Possible Values: # * A valid pool name # # Related options: # * None # # (string value) #rbd_store_pool = images # # RADOS user to authenticate as. # # This configuration option takes in the RADOS user to authenticate as. # This is only needed when RADOS authentication is enabled and is # applicable only if the user is using Cephx authentication. If the # value for this option is not set by the user or is set to None, a # default value will be chosen, which will be based on the client. # section in rbd_store_ceph_conf. # # Possible Values: # * A valid RADOS user # # Related options: # * rbd_store_ceph_conf # # (string value) #rbd_store_user = # # Ceph configuration file path. # # This configuration option takes in the path to the Ceph configuration # file to be used. If the value for this option is not set by the user # or is set to None, librados will locate the default configuration file # which is located at /etc/ceph/ceph.conf. If using Cephx # authentication, this file should include a reference to the right # keyring in a client. section # # Possible Values: # * A valid path to a configuration file # # Related options: # * rbd_store_user # # (string value) #rbd_store_ceph_conf = /etc/ceph/ceph.conf # # Timeout value for connecting to Ceph cluster. 
# # This configuration option takes in the timeout value in seconds used # when connecting to the Ceph cluster i.e. it sets the time to wait for # glance-api before closing the connection. This prevents glance-api # hangups during the connection to RBD. If the value for this option # is set to less than or equal to 0, no timeout is set and the default # librados value is used. # # Possible Values: # * Any integer value # # Related options: # * None # # (integer value) #rados_connect_timeout = 0 # # Chunk size for images to be stored in Sheepdog data store. # # Provide an integer value representing the size in mebibyte # (1048576 bytes) to chunk Glance images into. The default # chunk size is 64 mebibytes. # # When using Sheepdog distributed storage system, the images are # chunked into objects of this size and then stored across the # distributed data store to use for Glance. # # Chunk sizes, if a power of two, help avoid fragmentation and # enable improved performance. # # Possible values: # * Positive integer value representing size in mebibytes. # # Related Options: # * None # # (integer value) # Minimum value: 1 #sheepdog_store_chunk_size = 64 # # Port number on which the sheep daemon will listen. # # Provide an integer value representing a valid port number on # which you want the Sheepdog daemon to listen on. The default # port is 7000. # # The Sheepdog daemon, also called 'sheep', manages the storage # in the distributed cluster by writing objects across the storage # network. It identifies and acts on the messages it receives on # the port number set using ``sheepdog_store_port`` option to store # chunks of Glance images. # # Possible values: # * A valid port number (0 to 65535) # # Related Options: # * sheepdog_store_address # # (port value) # Minimum value: 0 # Maximum value: 65535 #sheepdog_store_port = 7000 # # Address to bind the Sheepdog daemon to. # # Provide a string value representing the address to bind the # Sheepdog daemon to. 
The default address set for the 'sheep' # is 127.0.0.1. # # The Sheepdog daemon, also called 'sheep', manages the storage # in the distributed cluster by writing objects across the storage # network. It identifies and acts on the messages directed to the # address set using ``sheepdog_store_address`` option to store # chunks of Glance images. # # Possible values: # * A valid IPv4 address # * A valid IPv6 address # * A valid hostname # # Related Options: # * sheepdog_store_port # # (unknown value) #sheepdog_store_address = 127.0.0.1 # # Set verification of the server certificate. # # This boolean determines whether or not to verify the server # certificate. If this option is set to True, swiftclient won't check # for a valid SSL certificate when authenticating. If the option is set # to False, then the default CA truststore is used for verification. # # Possible values: # * True # * False # # Related options: # * swift_store_cacert # # (boolean value) #swift_store_auth_insecure = false # # Path to the CA bundle file. # # This configuration option enables the operator to specify the path to # a custom Certificate Authority file for SSL verification when # connecting to Swift. # # Possible values: # * A valid path to a CA file # # Related options: # * swift_store_auth_insecure # # (string value) #swift_store_cacert = /etc/ssl/certs/ca-certificates.crt # # The region of Swift endpoint to use by Glance. # # Provide a string value representing a Swift region where Glance # can connect to for image storage. By default, there is no region # set. # # When Glance uses Swift as the storage backend to store images # for a specific tenant that has multiple endpoints, setting of a # Swift region with ``swift_store_region`` allows Glance to connect # to Swift in the specified region as opposed to a single region # connectivity. # # This option can be configured for both single-tenant and # multi-tenant storage. 
# # NOTE: Setting the region with ``swift_store_region`` is # tenant-specific and is necessary ``only if`` the tenant has # multiple endpoints across different regions. # # Possible values: # * A string value representing a valid Swift region. # # Related Options: # * None # # (string value) #swift_store_region = RegionTwo # # The URL endpoint to use for Swift backend storage. # # Provide a string value representing the URL endpoint to use for # storing Glance images in Swift store. By default, an endpoint # is not set and the storage URL returned by ``auth`` is used. # Setting an endpoint with ``swift_store_endpoint`` overrides the # storage URL and is used for Glance image storage. # # NOTE: The URL should include the path up to, but excluding the # container. The location of an object is obtained by appending # the container and object to the configured URL. # # Possible values: # * String value representing a valid URL path up to a Swift container # # Related Options: # * None # # (string value) #swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name # # Endpoint Type of Swift service. # # This string value indicates the endpoint type to use to fetch the # Swift endpoint. The endpoint type determines the actions the user will # be allowed to perform, for instance, reading and writing to the Store. # This setting is only used if swift_store_auth_version is greater than # 1. # # Possible values: # * publicURL # * adminURL # * internalURL # # Related options: # * swift_store_endpoint # # (string value) # Possible values: # publicURL - # adminURL - # internalURL - #swift_store_endpoint_type = publicURL # # Type of Swift service to use. # # Provide a string value representing the service type to use for # storing images while using Swift backend storage. The default # service type is set to ``object-store``. 
# # NOTE: If ``swift_store_auth_version`` is set to 2, the value for # this configuration option needs to be ``object-store``. If using # a higher version of Keystone or a different auth scheme, this # option may be modified. # # Possible values: # * A string representing a valid service type for Swift storage. # # Related Options: # * None # # (string value) #swift_store_service_type = object-store # # Name of single container to store images/name prefix for multiple containers # # When a single container is being used to store images, this configuration # option indicates the container within the Glance account to be used for # storing all images. When multiple containers are used to store images, this # will be the name prefix for all containers. Usage of single/multiple # containers can be controlled using the configuration option # ``swift_store_multiple_containers_seed``. # # When using multiple containers, the containers will be named after the value # set for this configuration option with the first N chars of the image UUID # as the suffix delimited by an underscore (where N is specified by # ``swift_store_multiple_containers_seed``). # # Example: if the seed is set to 3 and swift_store_container = ``glance``, then # an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in # the container ``glance_fda``. All dashes in the UUID are included when # creating the container name but do not count toward the character limit, so # when N=10 the container name would be ``glance_fdae39a1-ba.`` # # Possible values: # * If using single container, this configuration option can be any string # that is a valid swift container name in Glance's Swift account # * If using multiple containers, this configuration option can be any # string as long as it satisfies the container naming rules enforced by # Swift. The value of ``swift_store_multiple_containers_seed`` should be # taken into account as well. 
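The container-naming rule described above (prefix plus the first N characters of the image UUID, where dashes are copied into the suffix but do not count toward N) can be sketched in a few lines. This is an illustrative helper, not a function from the Glance codebase; the name ``swift_container_name`` is hypothetical.

```python
# Sketch of the multiple-container naming rule described above.
# ``seed`` corresponds to ``swift_store_multiple_containers_seed``: dashes
# in the UUID are included in the suffix but do not count toward the limit.
def swift_container_name(prefix, image_uuid, seed):
    if seed == 0:
        return prefix  # single-container mode: all images in one container
    suffix_chars = []
    counted = 0
    for ch in image_uuid:
        if counted == seed:
            break
        suffix_chars.append(ch)
        if ch != "-":
            counted += 1
    return prefix + "_" + "".join(suffix_chars)

print(swift_container_name("glance", "fdae39a1-bac5-4238-aba4-69bcc726e848", 3))
# glance_fda
```

With a seed of 10 the same UUID yields ``glance_fdae39a1-ba``, matching the example in the option description.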
# # Related options: # * ``swift_store_multiple_containers_seed`` # * ``swift_store_multi_tenant`` # * ``swift_store_create_container_on_put`` # # (string value) #swift_store_container = glance # # The size threshold, in MB, after which Glance will start segmenting image # data. # # Swift has an upper limit on the size of a single uploaded object. By default, # this is 5GB. To upload objects bigger than this limit, objects are segmented # into multiple smaller objects that are tied together with a manifest file. # For more detail, refer to # http://docs.openstack.org/developer/swift/overview_large_objects.html # # This configuration option specifies the size threshold over which the Swift # driver will start segmenting image data into multiple smaller files. # Currently, the Swift driver only supports creating Dynamic Large Objects. # # NOTE: This should be set by taking into account the large object limit # enforced by the Swift cluster under consideration. # # Possible values: # * A positive integer that is less than or equal to the large object limit # enforced by the Swift cluster under consideration. # # Related options: # * ``swift_store_large_object_chunk_size`` # # (integer value) # Minimum value: 1 #swift_store_large_object_size = 5120 # # The maximum size, in MB, of the segments when image data is segmented. # # When image data is segmented to upload images that are larger than the limit # enforced by the Swift cluster, image data is broken into segments that are no # bigger than the size specified by this configuration option. # Refer to ``swift_store_large_object_size`` for more detail. # # For example: if ``swift_store_large_object_size`` is 5GB and # ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be # segmented into 7 segments where the first six segments will be 1GB in size and # the seventh segment will be 0.2GB. 
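The segment arithmetic in the 6.2GB example above can be sketched as follows. This is an illustrative calculation only (the helper name ``segment_count`` is hypothetical), assuming an image above the threshold is split into ceiling(size / chunk size) segments, as the example describes.

```python
import math

# Sketch: number of Dynamic Large Object segments for an upload,
# given the size threshold and per-segment cap (all values in MB).
def segment_count(image_size_mb, large_object_size_mb, chunk_size_mb):
    if image_size_mb <= large_object_size_mb:
        return 1  # below the threshold: uploaded as a single object
    return math.ceil(image_size_mb / chunk_size_mb)

# The example above: a 6.2GB image, 5GB threshold, 1GB segments.
print(segment_count(6.2 * 1024, 5 * 1024, 1 * 1024))
# 7
```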
# # Possible values: # * A positive integer that is less than or equal to the large object limit # enforced by the Swift cluster under consideration. # # Related options: # * ``swift_store_large_object_size`` # # (integer value) # Minimum value: 1 #swift_store_large_object_chunk_size = 200 # # Create container, if it doesn't already exist, when uploading image. # # At the time of uploading an image, if the corresponding container doesn't # exist, it will be created provided this configuration option is set to True. # By default, it won't be created. This behavior is applicable for both single # and multiple containers mode. # # Possible values: # * True # * False # # Related options: # * None # # (boolean value) #swift_store_create_container_on_put = false # # Store images in tenant's Swift account. # # This enables multi-tenant storage mode which causes Glance images to be stored # in tenant-specific Swift accounts. If this is disabled, Glance stores all # images in its own account. More details about the multi-tenant store can be found at # https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage # # NOTE: If using multi-tenant swift store, please make sure # that you do not set a swift configuration file with the # 'swift_store_config_file' option. # # Possible values: # * True # * False # # Related options: # * swift_store_config_file # # (boolean value) #swift_store_multi_tenant = false # # Seed indicating the number of containers to use for storing images. # # When using a single-tenant store, images can be stored in a single container # or in multiple containers. When set to 0, all images will be stored in a single container. # When set to an integer value between 1 and 32, multiple containers will be # used to store images. This configuration option will determine how many # containers are created. The total number of containers that will be used is # equal to 16^N, so if this config option is set to 2, then 16^2=256 containers # will be used to store images. 
# # Please refer to ``swift_store_container`` for more detail on the naming # convention. More detail about using multiple containers can be found at # https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store- # multiple-containers.html # # NOTE: This is used only when swift_store_multi_tenant is disabled. # # Possible values: # * A non-negative integer less than or equal to 32 # # Related options: # * ``swift_store_container`` # * ``swift_store_multi_tenant`` # * ``swift_store_create_container_on_put`` # # (integer value) # Minimum value: 0 # Maximum value: 32 #swift_store_multiple_containers_seed = 0 # # List of tenants that will be granted admin access. # # This is a list of tenants that will be granted read/write access on # all Swift containers created by Glance in multi-tenant mode. The # default value is an empty list. # # Possible values: # * A comma separated list of strings representing UUIDs of Keystone # projects/tenants # # Related options: # * None # # (list value) #swift_store_admin_tenants = # # SSL layer compression for HTTPS Swift requests. # # Provide a boolean value to determine whether or not to compress # HTTPS Swift requests for images at the SSL layer. By default, # compression is enabled. # # When using Swift as the backend store for Glance image storage, # SSL layer compression of HTTPS Swift requests can be set using # this option. If set to False, SSL layer compression of HTTPS # Swift requests is disabled. Disabling this option may improve # performance for images which are already in a compressed format, # for example, qcow2. # # Possible values: # * True # * False # # Related Options: # * None # # (boolean value) #swift_store_ssl_compression = true # # The number of times a Swift download will be retried before the # request fails. # # Provide an integer value representing the number of times an image # download must be retried before erroring out. The default value is # zero (no retry on a failed image download). 
When set to a positive # integer value, ``swift_store_retry_get_count`` ensures that the # download is attempted this many more times upon a download failure # before sending an error message. # # Possible values: # * Zero # * Positive integer value # # Related Options: # * None # # (integer value) # Minimum value: 0 #swift_store_retry_get_count = 0 # # Time in seconds defining the size of the window in which a new # token may be requested before the current token is due to expire. # # Typically, the Swift storage driver fetches a new token upon the # expiration of the current token to ensure continued access to # Swift. However, some Swift transactions (like uploading image # segments) may not recover well if the token expires on the fly. # # Hence, by fetching a new token before the current token expiration, # we make sure that the token is neither expired nor close to expiry # when a transaction is attempted. By default, the Swift storage # driver requests a new token 60 seconds or less before the # current token expiration. # # Possible values: # * Zero # * Positive integer value # # Related Options: # * None # # (integer value) # Minimum value: 0 #swift_store_expire_soon_interval = 60 # # Use trusts for multi-tenant Swift store. # # This option instructs the Swift store to create a trust for each # add/get request when the multi-tenant store is in use. Using trusts # allows the Swift store to avoid problems that can be caused by an # authentication token expiring during the upload or download of data. # # By default, ``swift_store_use_trusts`` is set to ``True`` (use of # trusts is enabled). If set to ``False``, a user token is used for # the Swift connection instead, eliminating the overhead of trust # creation. 
# # NOTE: This option is considered only when # ``swift_store_multi_tenant`` is set to ``True`` # # Possible values: # * True # * False # # Related options: # * swift_store_multi_tenant # # (boolean value) #swift_store_use_trusts = true # # Buffer image segments before upload to Swift. # # Provide a boolean value to indicate whether or not Glance should # buffer image data to disk while uploading to swift. This enables # Glance to resume uploads on error. # # NOTES: # When enabling this option, one should take great care as this # increases disk usage on the API node. Be aware that depending # upon how the file system is configured, the disk space used # for buffering may decrease the actual disk space available for # the glance image cache. Disk utilization will cap according to # the following equation: # (``swift_store_large_object_chunk_size`` * ``workers`` * 1000) # # Possible values: # * True # * False # # Related options: # * swift_upload_buffer_dir # # (boolean value) #swift_buffer_on_upload = false # # Reference to default Swift account/backing store parameters. # # Provide a string value representing a reference to the default set # of parameters required for using swift account/backing store for # image storage. The default reference value for this configuration # option is 'ref1'. This configuration option dereferences the # parameters and facilitates image storage in Swift storage backend # every time a new image is added. # # Possible values: # * A valid string value # # Related options: # * None # # (string value) #default_swift_reference = ref1 # DEPRECATED: Version of the authentication service to use. Valid versions are 2 # and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'auth_version' in the Swift back-end configuration file is # used instead. 
#swift_store_auth_version = 2 # DEPRECATED: The address where the Swift authentication service is listening. # (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'auth_address' in the Swift back-end configuration file is # used instead. #swift_store_auth_address = # DEPRECATED: The user to authenticate against the Swift authentication service. # (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'user' in the Swift back-end configuration file is set instead. #swift_store_user = # DEPRECATED: Auth key for the user authenticating against the Swift # authentication service. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # The option 'key' in the Swift back-end configuration file is used # to set the authentication key instead. #swift_store_key = # # Absolute path to the file containing the swift account(s) # configurations. # # Include a string value representing the path to a configuration # file that has references for each of the configured Swift # account(s)/backing stores. By default, no file path is specified # and customized Swift referencing is disabled. Configuring this # option is highly recommended while using Swift storage backend for # image storage as it avoids storage of credentials in the database. # # NOTE: Please do not configure this option if you have set # ``swift_store_multi_tenant`` to ``True``. # # Possible values: # * String value representing an absolute path on the glance-api # node # # Related options: # * swift_store_multi_tenant # # (string value) #swift_store_config_file = # # Directory to buffer image segments before upload to Swift. # # Provide a string value representing the absolute path to the # directory on the glance node where image segments will be # buffered briefly before they are uploaded to swift. 
# # NOTES: # * This is required only when the configuration option # ``swift_buffer_on_upload`` is set to True. # * This directory should be provisioned keeping in mind the # ``swift_store_large_object_chunk_size`` and the maximum # number of images that could be uploaded simultaneously by # a given glance node. # # Possible values: # * String value representing an absolute directory path # # Related options: # * swift_buffer_on_upload # * swift_store_large_object_chunk_size # # (string value) #swift_upload_buffer_dir = # # Address of the ESX/ESXi or vCenter Server target system. # # This configuration option sets the address of the ESX/ESXi or vCenter # Server target system. This option is required when using the VMware # storage backend. The address can contain an IP address (127.0.0.1) or # a DNS name (www.my-domain.com). # # Possible Values: # * A valid IPv4 or IPv6 address # * A valid DNS name # # Related options: # * vmware_server_username # * vmware_server_password # # (unknown value) #vmware_server_host = 127.0.0.1 # # Server username. # # This configuration option takes the username for authenticating with # the VMware ESX/ESXi or vCenter Server. This option is required when # using the VMware storage backend. # # Possible Values: # * Any string that is the username for a user with appropriate # privileges # # Related options: # * vmware_server_host # * vmware_server_password # # (string value) #vmware_server_username = root # # Server password. # # This configuration option takes the password for authenticating with # the VMware ESX/ESXi or vCenter Server. This option is required when # using the VMware storage backend. # # Possible Values: # * Any string that is a password corresponding to the username # specified using the "vmware_server_username" option # # Related options: # * vmware_server_host # * vmware_server_username # # (string value) #vmware_server_password = vmware # # The number of VMware API retries. 
# # This configuration option specifies the number of times the VMware # ESX/VC server API must be retried upon connection-related issues or # server API call overload. It is not possible to specify 'retry # forever'. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #vmware_api_retry_count = 10 # # Interval in seconds used for polling remote tasks invoked on VMware # ESX/VC server. # # This configuration option takes in the sleep time in seconds for polling an # on-going async task as part of the VMware ESX/VC server API call. # # Possible Values: # * Any positive integer value # # Related options: # * None # # (integer value) # Minimum value: 1 #vmware_task_poll_interval = 5 # # The directory where the glance images will be stored in the datastore. # # This configuration option specifies the path to the directory where the # glance images will be stored in the VMware datastore. If this option # is not set, the default directory where the glance images are stored # is openstack_glance. # # Possible Values: # * Any string that is a valid path to a directory # # Related options: # * None # # (string value) #vmware_store_image_dir = /openstack_glance # # Set verification of the ESX/vCenter server certificate. # # This configuration option takes a boolean value to determine # whether or not to verify the ESX/vCenter server certificate. If this # option is set to True, the ESX/vCenter server certificate is not # verified. If this option is set to False, then the default CA # truststore is used for verification. # # This option is ignored if the "vmware_ca_file" option is set. In that # case, the ESX/vCenter server certificate will then be verified using # the file specified using the "vmware_ca_file" option. 
# # Possible Values: # * True # * False # # Related options: # * vmware_ca_file # # (boolean value) # Deprecated group/name - [glance_store]/vmware_api_insecure #vmware_insecure = false # # Absolute path to the CA bundle file. # # This configuration option enables the operator to use a custom # Certificate Authority file to verify the ESX/vCenter certificate. # # If this option is set, the "vmware_insecure" option will be ignored # and the CA file specified will be used to authenticate the ESX/vCenter # server certificate and establish a secure connection to the server. # # Possible Values: # * Any string that is a valid absolute path to a CA file # # Related options: # * vmware_insecure # # (string value) #vmware_ca_file = /etc/ssl/certs/ca-certificates.crt # # The datastores where the image can be stored. # # This configuration option specifies the datastores where the image can # be stored in the VMware store backend. This option may be specified # multiple times for specifying multiple datastores. The datastore name # should be specified after its datacenter path, separated by ":". An # optional weight may be given after the datastore name, separated again # by ":" to specify the priority. Thus, the required format becomes # ::. # # When adding an image, the datastore with the highest weight will be # selected, unless there is not enough free space available in cases # where the image size is already known. If no weight is given, it is # assumed to be zero and the directory will be considered for selection # last. If multiple datastores have the same weight, then the one with # the most free space available is selected. 
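The selection order described above (highest weight wins, ties broken by free space, missing weight treated as zero) can be sketched as below. This is an illustrative simplification, not Glance code: the helper name ``pick_datastore`` and the ``free_space`` mapping of datastore name to available bytes are hypothetical, and the sketch omits the check that skips datastores too small for a known image size.

```python
# Sketch of the ``vmware_datastores`` selection rule described above.
# Entries have the form "datacenter_path:datastore_name[:weight]".
def pick_datastore(vmware_datastores, free_space):
    candidates = []
    for entry in vmware_datastores:
        parts = entry.split(":")
        datacenter, name = parts[0], parts[1]
        # A missing or empty weight is assumed to be zero.
        weight = int(parts[2]) if len(parts) > 2 and parts[2] else 0
        candidates.append((weight, free_space.get(name, 0), datacenter, name))
    # Tuple comparison: highest weight first, then most free space.
    _weight, _free, datacenter, name = max(candidates)
    return datacenter, name

stores = ["dc1:fast-ds:100", "dc1:slow-ds", "dc2:big-ds:100"]
free = {"fast-ds": 50, "slow-ds": 500, "big-ds": 200}
print(pick_datastore(stores, free))
# ('dc2', 'big-ds')
```

The zero-weight ``slow-ds`` is considered last even though it has the most free space, matching the documented behavior.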
# # Possible Values: # * Any string of the format: # :: # # Related options: # * None # # (multi valued) #vmware_datastores = [image_format] # # From glance.api # # Supported values for the 'container_format' image attribute (list value) # Deprecated group/name - [DEFAULT]/container_formats #container_formats = ami,ari,aki,bare,ovf,ova,docker # Supported values for the 'disk_format' image attribute (list value) # Deprecated group/name - [DEFAULT]/disk_formats #disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso,ploop [keystone_authtoken] # # From keystonemiddleware.auth_token # # Complete "public" Identity API endpoint. This endpoint should not be an # "admin" endpoint, as it should be accessible by all end users. Unauthenticated # clients are redirected to this endpoint to authenticate. Although this # endpoint should ideally be unversioned, client support in the wild varies. If # you're using a versioned v2 endpoint here, then this should *not* be the same # endpoint the service user utilizes for validating tokens, because normal end # users may not be able to reach that endpoint. (string value) # Deprecated group/name - [keystone_authtoken]/auth_uri #www_authenticate_uri = # DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not # be an "admin" endpoint, as it should be accessible by all end users. # Unauthenticated clients are redirected to this endpoint to authenticate. # Although this endpoint should ideally be unversioned, client support in the # wild varies. If you're using a versioned v2 endpoint here, then this should # *not* be the same endpoint the service user utilizes for validating tokens, # because normal end users may not be able to reach that endpoint. This option # is deprecated in favor of www_authenticate_uri and will be removed in the S # release. (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. 
# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and # will be removed in the S release. #auth_uri = # API version of the admin Identity API endpoint. (string value) #auth_version = # Do not handle authorization requests within the middleware, but delegate the # authorization decision to downstream WSGI components. (boolean value) #delay_auth_decision = false # Request timeout value for communicating with Identity API server. (integer # value) #http_connect_timeout = # How many times are we trying to reconnect when communicating with Identity API # Server. (integer value) #http_request_max_retries = 3 # Request environment key where the Swift cache object is stored. When # auth_token middleware is deployed with a Swift cache, use this option to have # the middleware share a caching backend with swift. Otherwise, use the # ``memcached_servers`` option instead. (string value) #cache = # Required if identity server requires client certificate (string value) #certfile = # Required if identity server requires client certificate (string value) #keyfile = # A PEM encoded Certificate Authority to use when verifying HTTPs connections. # Defaults to system CAs. (string value) #cafile = # Verify HTTPS connections. (boolean value) #insecure = false # The region in which the identity server can be found. (string value) #region_name = # DEPRECATED: Directory used to cache files related to PKI tokens. This option # has been deprecated in the Ocata release and will be removed in the P release. # (string value) # This option is deprecated for removal since Ocata. # Its value may be silently ignored in the future. # Reason: PKI token format is no longer supported. #signing_dir = # Optionally specify a list of memcached server(s) to use for caching. If left # undefined, tokens will instead be cached in-process. 
(list value) # Deprecated group/name - [keystone_authtoken]/memcache_servers #memcached_servers = # In order to prevent excessive effort spent validating tokens, the middleware # caches previously-seen tokens for a configurable duration (in seconds). Set to # -1 to disable caching completely. (integer value) #token_cache_time = 300 # DEPRECATED: Determines the frequency at which the list of revoked tokens is # retrieved from the Identity service (in seconds). A high number of revocation # events combined with a low cache duration may significantly reduce # performance. Only valid for PKI tokens. This option has been deprecated in the # Ocata release and will be removed in the P release. (integer value) # This option is deprecated for removal since Ocata. # Its value may be silently ignored in the future. # Reason: PKI token format is no longer supported. #revocation_cache_time = 10 # (Optional) If defined, indicate whether token data should be authenticated or # authenticated and encrypted. If MAC, token data is authenticated (with HMAC) # in the cache. If ENCRYPT, token data is encrypted and authenticated in the # cache. If the value is not one of these options or empty, auth_token will # raise an exception on initialization. (string value) # Possible values: # None - # MAC - # ENCRYPT - #memcache_security_strategy = None # (Optional, mandatory if memcache_security_strategy is defined) This string is # used for key derivation. (string value) #memcache_secret_key = # (Optional) Number of seconds memcached server is considered dead before it is # tried again. (integer value) #memcache_pool_dead_retry = 300 # (Optional) Maximum total number of open connections to every memcached server. # (integer value) #memcache_pool_maxsize = 10 # (Optional) Socket timeout in seconds for communicating with a memcached # server. 
# (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding. "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it
# if not. "strict" like "permissive" but if the bind type is unknown the token
# will be rejected. "required" any form of token binding is needed to be
# allowed. Finally the name of a binding method that must be present in
# tokens. (string value)
#enforce_token_bind = permissive

# DEPRECATED: If true, the revocation list will be checked for cached tokens.
# This requires that PKI tokens are configured on the identity server.
# (boolean value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#check_revocations_for_cached = false

# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
# single algorithm or multiple. The algorithms are those supported by Python
# standard hashlib.new(). The hashes will be tried in the order given, so put
# the preferred one first for performance. The result of the first hash will
# be stored in the cache.
# This will typically be set to multiple values only while migrating from a
# less secure algorithm to a more secure one. Once all the old tokens are
# expired this option should be set to a single value for better performance.
# (list value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#hash_algorithms = md5

# A list of roles that must be present in a service token. Service tokens are
# allowed to request that an expired token can be used, so this check should
# strictly ensure that only actual services send this token. Roles here are
# applied as an ANY check, so any role in this list must be present. For
# backwards compatibility reasons this currently only affects the
# allow_expired check. (list value)
#service_token_roles = service

# For backwards compatibility reasons, valid service tokens that do not pass
# the service_token_roles check are still accepted as valid. Setting this to
# true will become the default in a future release and should be enabled if
# possible. (boolean value)
#service_token_roles_required = false

# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type =

# Config Section from which to load plugin specific options (string value)
#auth_section =


[matchmaker_redis]

#
# From oslo.messaging
#

# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1

# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379

# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =

# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.,
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =

# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq

# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000

# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000

# Timeout in ms on blocking socket operations. (integer value)
#socket_timeout = 10000


[oslo_concurrency]

#
# From oslo.concurrency
#

# Enables or disables inter-process locks. (boolean value)
#disable_process_locking = false

# Directory to use for lock files. For security, the specified directory
# should only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set. (string value)
#lock_path =


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID (string value)
#container_name =

# Timeout for inactive connections (in seconds) (integer value)
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
#trace = false

# Attempt to connect via SSL. If no other ssl-related parameters are given, it
# will use the system's CA-bundle to verify the server's certificate.
# (boolean value)
#ssl = false

# CA certificate PEM file used to verify the server's certificate (string
# value)
#ssl_ca_file =

# Self-identifying certificate PEM file for client authentication (string
# value)
#ssl_cert_file =

# Private key PEM file used to sign ssl_cert_file certificate (optional)
# (string value)
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
#ssl_key_password =

# By default SSL checks that the name in the server's certificate matches the
# hostname in the transport_url. In some configurations it may be preferable
# to use the virtual hostname instead, for example if the server uses the
# Server Name Indication TLS extension (rfc6066) to provide a certificate per
# virtual host. Set ssl_verify_vhost to True if the server's SSL certificate
# uses the virtual host name instead of the DNS name. (boolean value)
#ssl_verify_vhost = false

# DEPRECATED: Accept clients using either SSL or plain TCP (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Not applicable - not a SSL server
#allow_insecure_clients = false

# Space separated list of acceptable SASL mechanisms (string value)
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string value)
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
#sasl_config_name =

# SASL realm to use if no realm present in username (string value)
#sasl_default_realm =

# DEPRECATED: User name for message broker authentication (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use configuration option transport_url to provide the
# username.
#username =

# DEPRECATED: Password for message broker authentication (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use configuration option transport_url to provide the
# password.
#password =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The maximum number of attempts to re-send a reply message which failed due
# to a recoverable error. (integer value)
# Minimum value: -1
#default_reply_retry = 0

# The deadline for an rpc reply message delivery. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# The duration to schedule a purge of idle sender links. Detach link after
# expiry. (integer value)
# Minimum value: 1
#default_sender_link_timeout = 600

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not support
#              routing otherwise use routable addressing (string value)
#addressing_mode = dynamic

# Enable virtual host support for those message buses that do not natively
# support virtual hosting (such as qpidd). When set to true the virtual host
# name will be added to all message bus addresses, effectively creating a
# private 'subnet' per virtual host.
# Set to False if the message bus supports virtual hosting using the
# 'hostname' field in the AMQP 1.0 Open performative as the name of the
# virtual host. (boolean value)
#pseudo_vhost = true

# address prefix used when sending to a specific server (string value)
#server_request_prefix = exclusive

# address prefix used when broadcasting to all servers (string value)
#broadcast_prefix = broadcast

# address prefix when sending to any server in group (string value)
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a
# round-robin fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange =

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange =

# Window size for incoming RPC Reply messages.
# (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100

# Send messages of this type pre-settled.
# Pre-settled messages will not receive acknowledgement
# from the peer. Note well: pre-settled messages may be
# silently discarded if the delivery fails.
# Permitted values:
# 'rpc-call'  - send RPC Calls pre-settled
# 'rpc-reply' - send RPC Replies pre-settled
# 'rpc-cast'  - Send RPC Casts pre-settled
# 'notify'    - Send Notifications pre-settled
# (multi valued)
#pre_settled = rpc-cast
#pre_settled = rpc-reply


[oslo_messaging_kafka]

#
# From oslo.messaging
#

# DEPRECATED: Default Kafka broker Host (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#kafka_default_host = localhost

# DEPRECATED: Default Kafka broker Port (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#kafka_default_port = 9092

# Max fetch bytes of Kafka consumer (integer value)
#kafka_max_fetch_bytes = 1048576

# Default timeout(s) for Kafka consumers (floating point value)
#kafka_consumer_timeout = 1.0

# DEPRECATED: Pool Size for Kafka Consumers (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Driver no longer uses connection pool.
#pool_size = 10

# DEPRECATED: The pool size limit for connections expiration policy (integer
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Driver no longer uses connection pool.
#conn_pool_min_size = 2

# DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Driver no longer uses connection pool.
#conn_pool_ttl = 1200

# Group id for Kafka consumer. Consumers in one group will coordinate message
# consumption (string value)
#consumer_group = oslo_messaging_consumer

# Upper bound on the delay for KafkaProducer batching in seconds (floating
# point value)
#producer_batch_timeout = 0.0

# Size of batch for the producer async send (integer value)
#producer_batch_size = 16384


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If not
# set, we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url =

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications

# The maximum number of attempts to re-send a notification message which
# failed to be delivered due to a recoverable error. 0 - No retry, -1 -
# indefinite (integer value)
#retry = -1


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
#amqp_auto_delete = false

# Enable SSL (boolean value)
#ssl =

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23.
# SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
# (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
#ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
#ssl_key_file =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
#ssl_cert_file =

# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
#ssl_ca_file =

# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression =

# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Possible values:
# round-robin -
# shuffle -
#kombu_failover_strategy = round-robin

# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost

# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
# value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672

# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port

# DEPRECATED: The RabbitMQ userid. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest

# DEPRECATED: The RabbitMQ password. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest

# The RabbitMQ login method. (string value)
# Possible values:
# PLAIN -
# AMQPLAIN -
# RABBIT-CR-DEMO -
#rabbit_login_method = AMQPLAIN

# DEPRECATED: The RabbitMQ virtual host. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to backoff for between retries when connecting to RabbitMQ.
# (integer value)
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue
# mirroring is no longer controlled by the x-ha-policy argument when declaring
# a queue.
# If you just want to make sure that all queues (except those with
# auto-generated names) are mirrored across all nodes, run:
# "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}' "
# (boolean value)
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the Rabbit broker is considered down if
# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60

# How many times during the heartbeat_timeout_threshold we check the
# heartbeat. (integer value)
#heartbeat_rate = 2

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
#fake_rabbit = false

# Maximum number of channels to allow (integer value)
#channel_max =

# The maximum byte size for an AMQP frame (integer value)
#frame_max =

# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3

# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options =

# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25

# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
# value)
#tcp_user_timeout = 0.25

# Set delay for reconnection to some host which has connection error (floating
# point value)
#host_connection_reconnect_delay = 0.25

# Connection factory implementation (string value)
# Possible values:
# new -
# single -
# read_write -
#connection_factory = single

# Maximum number of connections to keep queued.
# (integer value)
#pool_max_size = 30

# Maximum number of connections to create above `pool_max_size`. (integer
# value)
#pool_max_overflow = 0

# Default number of seconds to wait for a connection to become available
# (integer value)
#pool_timeout = 30

# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer value)
#pool_recycle = 600

# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on
# acquire. (integer value)
#pool_stale = 60

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Possible values:
# json -
# msgpack -
#default_serializer_type = json

# Persist notification messages. (boolean value)
#notification_persistence = false

# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

# Max number of not acknowledged message which RabbitMQ can send to
# notification listener. (integer value)
#notification_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25

# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60

# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc

# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply

# Max number of not acknowledged message which RabbitMQ can send to rpc
# listener.
# (integer value)
#rpc_listener_prefetch_count = 100

# Max number of not acknowledged message which RabbitMQ can send to rpc reply
# listener. (integer value)
#rpc_reply_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# reply. -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25

# Reconnecting retry count in case of connectivity problem during sending RPC
# message, -1 means infinite retry. If the actual number of retry attempts is
# not 0 the rpc request could be processed more than one time (integer value)
#default_rpc_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25


[oslo_messaging_zmq]

#
# From oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Possible values:
# redis -
# sentinel -
# dummy -
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
#rpc_zmq_topic_backlog =

# Directory for holding IPC sockets. (string value)
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
#rpc_zmq_host = localhost

# Number of seconds to wait before all pending messages will be sent after
# closing a socket. The default value of -1 specifies an infinite linger
# period. The value of 0 specifies no linger period. Pending messages shall be
# discarded immediately when the socket is closed.
# Positive values specify an upper bound for the linger period. (integer
# value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#zmq_linger = -1

# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about existing target
# ( < 0 means no timeout). (integer value)
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
#use_pub_sub = false

# Use ROUTER remote proxy. (boolean value)
#use_router_proxy = false

# This option makes direct connections dynamic or static. It makes sense only
# with use_router_proxy=False which means to use direct connections for direct
# message types (ignored otherwise). (boolean value)
#use_dynamic_connections = false

# How many additional connections to a host will be made for failover reasons.
# This option applies only in dynamic connections mode. (integer value)
#zmq_failover_connections = 2

# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
#rpc_zmq_min_port = 49153

# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
#rpc_zmq_max_port = 65536

# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Possible values:
# json -
# msgpack -
#rpc_zmq_serialization = json

# This option configures round-robin mode in zmq socket. True means not
# keeping a queue when server side disconnects. False means to keep queue and
# messages even if server is disconnected, when the server appears we send all
# accumulated messages to it.
# (boolean value)
#zmq_immediate = true

# Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any
# other negative value) means to skip any overrides and leave it to OS
# default; 0 and 1 (or any other positive value) mean to disable and enable
# the option respectively. (integer value)
#zmq_tcp_keepalive = -1

# The duration between two keepalive transmissions in idle condition. The unit
# is platform dependent, for example, seconds in Linux, milliseconds in
# Windows etc. The default value of -1 (or any other negative value and 0)
# means to skip any overrides and leave it to OS default. (integer value)
#zmq_tcp_keepalive_idle = -1

# The number of retransmissions to be carried out before declaring that remote
# end is not available. The default value of -1 (or any other negative value
# and 0) means to skip any overrides and leave it to OS default. (integer
# value)
#zmq_tcp_keepalive_cnt = -1

# The duration between two successive keepalive retransmissions, if
# acknowledgement to the previous keepalive transmission is not received. The
# unit is platform dependent, for example, seconds in Linux, milliseconds in
# Windows etc. The default value of -1 (or any other negative value and 0)
# means to skip any overrides and leave it to OS default. (integer value)
#zmq_tcp_keepalive_intvl = -1

# Maximum number of (green) threads to work concurrently. (integer value)
#rpc_thread_pool_size = 100

# Expiration timeout in seconds of a sent/received message after which it is
# not tracked anymore by a client/server. (integer value)
#rpc_message_ttl = 300

# Wait for message acknowledgements from receivers. This mechanism works only
# via proxy without PUB/SUB. (boolean value)
#rpc_use_acks = false

# Number of seconds to wait for an ack from a cast/call. After each retry
# attempt this timeout is multiplied by some specified multiplier. (integer
# value)
#rpc_ack_timeout_base = 15

# Number to multiply base ack timeout by after each retry attempt.
# (integer value)
#rpc_ack_timeout_multiplier = 2

# Default number of message sending attempts in case any problems occur:
# positive value N means at most N retries, 0 means no retries, None or -1 (or
# any other negative values) mean to retry forever. This option is used only
# if acknowledgments are enabled. (integer value)
#rpc_retry_attempts = 3

# List of publisher hosts SubConsumer can subscribe on. This option has higher
# priority than the default publishers list taken from the matchmaker. (list
# value)
#subscribe_on =


[oslo_middleware]

#
# From oslo.middleware.http_proxy_to_wsgi
#

# Whether the application is behind a proxy or not. This determines if the
# middleware should parse the headers or not. (boolean value)
#enable_proxy_headers_parsing = false


[oslo_policy]

#
# From oslo.policy
#

# This option controls whether or not to enforce scope when evaluating
# policies. If ``True``, the scope of the token used in the request is
# compared to the ``scope_types`` of the policy being enforced. If the scopes
# do not match, an ``InvalidScope`` exception will be raised. If ``False``, a
# message will be logged informing operators that policies are being invoked
# with mismatching scope. (boolean value)
#enforce_scope = false

# The file that defines policies. (string value)
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be
# relative to any directory in the search path defined by the config_dir
# option, or absolute paths. The file defined by policy_file must exist for
# these directories to be searched. Missing or empty directories are ignored.
# (multi valued)
#policy_dirs = policy.d

# Content Type to send and receive data for REST based policy check (string
# value)
# Possible values:
# application/x-www-form-urlencoded -
# application/json -
#remote_content_type = application/x-www-form-urlencoded

# server identity verification for REST based policy check (boolean value)
#remote_ssl_verify_server_crt = false

# Absolute path to ca cert file for REST based policy check (string value)
#remote_ssl_ca_crt_file =

# Absolute path to client cert for REST based policy check (string value)
#remote_ssl_client_crt_file =

# Absolute path client key file REST based policy check (string value)
#remote_ssl_client_key_file =


[paste_deploy]

#
# From glance.api
#

#
# Deployment flavor to use in the server application pipeline.
#
# Provide a string value representing the appropriate deployment
# flavor used in the server application pipeline. This is typically
# the partial name of a pipeline in the paste configuration file with
# the service name removed.
#
# For example, if your paste section name in the paste configuration
# file is [pipeline:glance-api-keystone], set ``flavor`` to
# ``keystone``.
#
# Possible values:
# * String value representing a partial pipeline name.
#
# Related Options:
# * config_file
#
# (string value)
#flavor = keystone

#
# Name of the paste configuration file.
#
# Provide a string value representing the name of the paste
# configuration file to use for configuring pipelines for
# server application deployments.
#
# NOTES:
# * Provide the name or the path relative to the glance directory
#   for the paste configuration file and not the absolute path.
# * The sample paste configuration file shipped with Glance need
#   not be edited in most cases as it comes with ready-made
#   pipelines for all common deployment flavors.
# # If no value is specified for this option, the ``paste.ini`` file # with the prefix of the corresponding Glance service's configuration # file name will be searched for in the known configuration # directories. (For example, if this option is missing from or has no # value set in ``glance-api.conf``, the service will look for a file # named ``glance-api-paste.ini``.) If the paste configuration file is # not found, the service will not start. # # Possible values: # * A string value representing the name of the paste configuration # file. # # Related Options: # * flavor # # (string value) #config_file = glance-api-paste.ini [profiler] # # From glance.api # # # Enables profiling for all services on this node. Default value is False # (fully disables the profiling feature). # # Possible values: # # * True: Enables the feature # * False: Disables the feature. Profiling cannot be started via this # project's operations. If profiling is triggered by another project, this # project's part of the trace will be empty. # (boolean value) # Deprecated group/name - [profiler]/profiler_enabled #enabled = false # # Enables SQL requests profiling in services. Default value is False (SQL # requests won't be traced). # # Possible values: # # * True: Enables SQL requests profiling. Each SQL query will be part of the # trace and can then be analyzed by how much time was spent on it. # * False: Disables SQL requests profiling. The spent time is only shown at a # higher level of operations. Single SQL queries cannot be analyzed this # way. # (boolean value) #trace_sqlalchemy = false # # Secret key(s) to use for encrypting context data for performance profiling. # This string value should have the following format: <key1>[,<key2>,...], # where each key is some random string. A user who triggers the profiling via # the REST API has to set one of these keys in the headers of the REST API call # to include profiling results of this node for this particular project. 
# # Both the "enabled" flag and the "hmac_keys" config option should be set to enable # profiling. Also, to generate correct profiling information across all services, # at least one key needs to be consistent between OpenStack projects. This # ensures it can be used from the client side to generate the trace, containing # information from all possible resources. (string value) #hmac_keys = SECRET_KEY # # Connection string for a notifier backend. Default value is messaging://, which # sets the notifier to oslo_messaging. # # Examples of possible values: # # * messaging://: use the oslo_messaging driver for sending notifications. # * mongodb://127.0.0.1:27017 : use the mongodb driver for sending notifications. # * elasticsearch://127.0.0.1:9200 : use the elasticsearch driver for sending # notifications. # (string value) #connection_string = messaging:// # # Document type for notification indexing in elasticsearch. # (string value) #es_doc_type = notification # # This parameter is a time value parameter (for example: es_scroll_time=2m), # indicating for how long the nodes that participate in the search will maintain # relevant resources in order to continue and support it. # (string value) #es_scroll_time = 2m # # Elasticsearch splits large requests in batches. This parameter defines the # maximum size of each batch (for example: es_scroll_size=10000). # (integer value) #es_scroll_size = 10000 # # Redis sentinel provides a timeout option on the connections. # This parameter defines that timeout (for example: socket_timeout=0.1). # (floating point value) #socket_timeout = 0.1 # # Redis sentinel uses a service name to identify a master redis service. # This parameter defines the name (for example: # sentinel_service_name=mymaster). # (string value) #sentinel_service_name = mymaster [store_type_location_strategy] # # From glance.api # # # Preference order of storage backends. # # Provide a comma separated list of store names in the order in # which images should be retrieved from storage backends. 
# These store names must be registered with the ``stores`` # configuration option. # # NOTE: The ``store_type_preference`` configuration option is applied # only if ``store_type`` is chosen as a value for the # ``location_strategy`` configuration option. An empty list will not # change the location order. # # Possible values: # * Empty list # * Comma separated list of registered store names. Legal values are: # * file # * http # * rbd # * swift # * sheepdog # * cinder # * vmware # # Related options: # * location_strategy # * stores # # (list value) #store_type_preference = [task] # # From glance.api # # Time in hours for which a task lives after either succeeding or failing # (integer value) # Deprecated group/name - [DEFAULT]/task_time_to_live #task_time_to_live = 48 # # Task executor to be used to run task scripts. # # Provide a string value representing the executor to use for task # executions. By default, the ``TaskFlow`` executor is used. # # ``TaskFlow`` helps make task executions easy, consistent, scalable # and reliable. It also enables creation of lightweight task objects # and/or functions that are combined together into flows in a # declarative manner. # # Possible values: # * taskflow # # Related Options: # * None # # (string value) #task_executor = taskflow # # Absolute path to the work directory to use for asynchronous # task operations. # # The directory set here will be used to operate over images - # normally before they are imported into the destination store. # # NOTE: When providing a value for ``work_dir``, please make sure # that enough space is provided for concurrent tasks to run # efficiently without running out of space. # # A rough estimate can be made by multiplying the number of # ``max_workers`` by an average image size (e.g. 500MB). The image # size estimate should be based on the average size in your # deployment. 
Note that, depending on the tasks being run, you may need # to multiply this number by some factor based on what the tasks # do. For example, you may want to double the available size if # image conversion is enabled. All this being said, remember these # are just estimates; you should make them based on the worst # case scenario and be prepared to act in case they are wrong. # # Possible values: # * String value representing the absolute path to the working # directory # # Related Options: # * None # # (string value) #work_dir = /work_dir [taskflow_executor] # # From glance.api # # # Set the taskflow engine mode. # # Provide a string type value to set the mode in which the taskflow # engine schedules tasks to the workers on the hosts. Based on # this mode, the engine executes tasks in either a single thread or # multiple threads. The possible values for this configuration option are: # ``serial`` and ``parallel``. When set to ``serial``, the engine runs # all the tasks in a single thread, which results in serial execution # of tasks. Setting this to ``parallel`` makes the engine run tasks in # multiple threads. This results in parallel execution of tasks. # # Possible values: # * serial # * parallel # # Related options: # * max_workers # # (string value) # Possible values: # serial - # parallel - #engine_mode = parallel # # Set the number of engine executable tasks. # # Provide an integer value to limit the number of workers that can be # instantiated on the hosts. In other words, this number defines the # number of parallel tasks that can be executed at the same time by # the taskflow engine. This value can be greater than one when the # engine mode is set to parallel. # # Possible values: # * Integer value greater than or equal to 1 # # Related options: # * engine_mode # # (integer value) # Minimum value: 1 # Deprecated group/name - [task]/eventlet_executor_pool_size #max_workers = 10 # # Set the desired image conversion format. 
# # Provide a valid image format to which you want images to be # converted before they are stored for consumption by Glance. # Appropriate image format conversions are desirable for specific # storage backends in order to facilitate efficient handling of # bandwidth and usage of the storage infrastructure. # # By default, ``conversion_format`` is not set and must be set # explicitly in the configuration file. # # The allowed values for this option are ``raw``, ``qcow2`` and # ``vmdk``. The ``raw`` format is the unstructured disk format and # should be chosen when RBD or Ceph storage backends are used for # image storage. ``qcow2``, which is supported by the QEMU emulator, # expands dynamically and supports Copy on Write. ``vmdk`` is # another common disk format supported by many common virtual machine # monitors like VMware Workstation. # # Possible values: # * qcow2 # * raw # * vmdk # # Related options: # * disk_formats # # (string value) # Possible values: # qcow2 - # raw - # vmdk - #conversion_format = raw glance-16.0.0/etc/oslo-config-generator/0000775000175100017510000000000013245511661020035 0ustar zuulzuul00000000000000glance-16.0.0/etc/oslo-config-generator/glance-image-import.conf0000666000175100017510000000014113245511421024515 0ustar zuulzuul00000000000000[DEFAULT] wrap_width = 80 output_file = etc/glance-image-import.conf.sample namespace = glanceglance-16.0.0/etc/oslo-config-generator/glance-registry.conf0000666000175100017510000000041213245511421023774 0ustar zuulzuul00000000000000[DEFAULT] wrap_width = 80 output_file = etc/glance-registry.conf.sample namespace = glance.registry namespace = oslo.messaging namespace = oslo.db namespace = oslo.db.concurrency namespace = oslo.policy namespace = keystonemiddleware.auth_token namespace = oslo.log glance-16.0.0/etc/oslo-config-generator/glance-cache.conf0000666000175100017510000000024413245511421023172 0ustar zuulzuul00000000000000[DEFAULT] wrap_width = 80 output_file = etc/glance-cache.conf.sample 
namespace = glance.cache namespace = glance.store namespace = oslo.log namespace = oslo.policy glance-16.0.0/etc/oslo-config-generator/glance-manage.conf0000666000175100017510000000025113245511421023355 0ustar zuulzuul00000000000000[DEFAULT] wrap_width = 80 output_file = etc/glance-manage.conf.sample namespace = glance.manage namespace = oslo.db namespace = oslo.db.concurrency namespace = oslo.log glance-16.0.0/etc/oslo-config-generator/glance-scrubber.conf0000666000175100017510000000037313245511421023741 0ustar zuulzuul00000000000000[DEFAULT] wrap_width = 80 output_file = etc/glance-scrubber.conf.sample namespace = glance.scrubber namespace = glance.store namespace = oslo.concurrency namespace = oslo.db namespace = oslo.db.concurrency namespace = oslo.log namespace = oslo.policy glance-16.0.0/etc/oslo-config-generator/glance-api.conf0000666000175100017510000000060613245511421022702 0ustar zuulzuul00000000000000[DEFAULT] wrap_width = 80 output_file = etc/glance-api.conf.sample namespace = glance.api namespace = glance.store namespace = oslo.concurrency namespace = oslo.messaging namespace = oslo.db namespace = oslo.db.concurrency namespace = oslo.policy namespace = keystonemiddleware.auth_token namespace = oslo.log namespace = oslo.middleware.cors namespace = oslo.middleware.http_proxy_to_wsgi glance-16.0.0/pylintrc0000666000175100017510000000145513245511421014647 0ustar zuulzuul00000000000000[Messages Control] # W0511: TODOs in code comments are fine. # W0142: *args and **kwargs are fine. # W0622: Redefining id is fine. 
disable-msg=W0511,W0142,W0622 [Basic] # Variable names can be 1 to 31 characters long, with lowercase and underscores variable-rgx=[a-z_][a-z0-9_]{0,30}$ # Argument names can be 2 to 31 characters long, with lowercase and underscores argument-rgx=[a-z_][a-z0-9_]{1,30}$ # Method names should be at least 3 characters long # and be lowercased with underscores method-rgx=[a-z_][a-z0-9_]{2,50}$ # Module names matching nova-* are ok (files in bin/) module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+)|(nova-[a-z0-9_-]+))$ # Don't require docstrings on tests. no-docstring-rgx=((__.*__)|([tT]est.*)|setUp|tearDown)$ [Design] max-public-methods=100 min-public-methods=0 max-args=6 glance-16.0.0/CONTRIBUTING.rst0000666000175100017510000000203513245511421015514 0ustar zuulzuul00000000000000If you would like to contribute to the development of OpenStack, you must follow the steps documented at: http://docs.openstack.org/infra/manual/developers.html#getting-started Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at: http://docs.openstack.org/infra/manual/developers.html#development-workflow Pull requests submitted through GitHub will be ignored. Bugs should be filed on Launchpad, not GitHub: https://bugs.launchpad.net/glance Additionally, specific guidelines for contributing to Glance may be found in Glance's Documentation: https://docs.openstack.org/glance/latest/contributor/index.html Please read and follow these Glance-specific guidelines, particularly the section on `Disallowed Minor Code Changes `_. You will thereby prevent your friendly review team from pulling out whatever hair they have left. Thank you for your cooperation. 
glance-16.0.0/.stestr.conf0000666000175100017510000000010113245511421015314 0ustar zuulzuul00000000000000[DEFAULT] test_path=${TEST_PATH:-./glance/tests/unit} top_dir=./ glance-16.0.0/httpd/0000775000175100017510000000000013245511661014202 5ustar zuulzuul00000000000000glance-16.0.0/httpd/uwsgi-glance-api.conf0000666000175100017510000000013513245511421020200 0ustar zuulzuul00000000000000KeepAlive Off SetEnv proxy-sendchunked 1 ProxyPass "/image" "http://127.0.0.1:60999" retry=0 glance-16.0.0/httpd/glance-api-uwsgi.ini0000666000175100017510000000060413245511421020033 0ustar zuulzuul00000000000000[uwsgi] socket-timeout = 10 http-auto-chunked = true http-chunked-input = true http-raw-body = true chmod-socket = 666 lazy-apps = true add-header = Connection: close buffer-size = 65535 thunder-lock = true plugins = python enable-threads = true exit-on-reload = true die-on-term = true master = true processes = 4 http-socket = 127.0.0.1:60999 wsgi-file = /usr/local/bin/glance-wsgi-api glance-16.0.0/httpd/README0000666000175100017510000000012513245511421015054 0ustar zuulzuul00000000000000Documentation for running Glance with Apache HTTPD is in doc/source/apache-httpd.rst glance-16.0.0/PKG-INFO0000664000175100017510000000737613245511661014171 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: glance Version: 16.0.0 Summary: OpenStack Image Service Home-page: https://docs.openstack.org/glance/latest/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN Description-Content-Type: UNKNOWN Description: ======================== Team and repository tags ======================== .. 
image:: http://governance.openstack.org/badges/glance.svg :target: http://governance.openstack.org/reference/tags/index.html :alt: The following tags have been asserted for the Glance project: "project:official", "tc:approved-release", "stable:follows-policy", "tc:starter-kit:compute", "vulnerability:managed", "team:diverse-affiliation", "assert:supports-upgrade", "assert:follows-standard-deprecation". Follow the link for an explanation of these tags. .. NOTE(rosmaita): the alt text above will have to be updated when additional tags are asserted for Glance. (The SVG in the governance repo is updated automatically.) .. Change things from this point on ====== Glance ====== Glance is a project that provides services and associated libraries to store, browse, share, distribute and manage bootable disk images, other data closely associated with initializing compute resources, and metadata definitions. Use the following resources to learn more: API --- To learn how to use Glance's API, consult the documentation available online at: * `Image Service APIs `_ Developers ---------- For information on how to contribute to Glance, please see the contents of the CONTRIBUTING.rst in this repository. Any new code must follow the development guidelines detailed in the HACKING.rst file, and pass all unit tests. Further developer focused documentation is available at: * `Official Glance documentation `_ * `Official Client documentation `_ Operators --------- To learn how to deploy and configure OpenStack Glance, consult the documentation available online at: * `Openstack Glance `_ In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. You can raise bugs here: * `Bug Tracker `_ Other Information ----------------- During each design summit, we agree on what the whole community wants to focus on for the upcoming release. 
You can see image service plans: * `Image Service Plans `_ For more information about the Glance project please see: * `Glance Project `_ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 glance-16.0.0/ChangeLog0000664000175100017510000056267313245511656014660 0ustar zuulzuul00000000000000CHANGES ======= 16.0.0 ------ * Regenerate sample config files * Add Queens release note * Revise help text for uri filtering options * Triggers shouldn't be execute in offline migration * Revise database rolling upgrade documentation * api-ref: update interoperable image import info * Fix config group not found error * Migration support for postgresql * Add validation to check if E-M-C is already in sync * Revise interoperable image import documentation * Imported Translations from Zanata * Update Queens info about Glance and uWSGI * Imported Translations from Zanata 16.0.0.0rc2 ----------- * Update admin docs for web-download import method * URI filtering for web-download * Make the Image status transition early * Cleanup basic import tasks * Use bool instead of int for boolean filter value * Limit default workers to 8 * Offline migration support for postgresql * Use configured value for import-methods header * Imported Translations from Zanata * Fix bad usage of extend in list\_image\_import\_opts * Update UPPER\_CONSTRAINTS\_FILE for stable/queens * Update .gitreview for stable/queens 16.0.0.0rc1 ----------- * Update Queens metadefs release note * Update api-ref for v.2.6 * Add release note for API v2.6 * Align Vers Neg Middleware to current API * Implementation of db check command * Decouple Image Import Plugin Opts 
* Revise import property injection plugin releasenote * Correct 1-character typo * Release note for Queens metadefs changes * Regenerate sample configuration files * Exiting with user friendly message and SystemExit() * Modify glance manage db sync to use EMC * Add img\_linked\_clone to compute vmware metadefs * Handle TZ change in iso8601 >=0.1.12 * Replace xml defusedxml * Replace base functional tearDown with addCleanup * Add functional test gates * Skip one functional test * Fix py27 eventlet issue <0.22.0 * Fix pip install failure * Execute py35 functional tests under py35 environment * Enable Image Import per default and make current * Adds 'web-download' import method * Updated from global requirements * Skip one functional test * Use addOnException to capture server logs on failure * Separate out functional tests * Update Signature Documentation * Add doc8 to pep8 check for glance project * Implementation of Inject metadata properties * Resolve unit test failures with going to oslo.serialization 2.3.0 16.0.0.0b3 ---------- * Updated from global requirements * Updated from global requirements * Add documentation for image import plugins * Update scrubber documentation * Scrubber refactor * Add hooks for Image Import plugins * Updated from global requirements * Fix 500 if custom property name is greater than 255 * Fix member create to handle unicode characters * [import-tests] adds tests for image-import/staging * Updated from global requirements * [import-tests] Enhance image import tests * Add fixture to only emit DeprecationWarning once * Move 'upload\_image' policy check to the controller * Fix 500 from duplicate stage call * Updated from global requirements * Prevent image become active without disk and container formats 16.0.0.0b2 ---------- * Fix 500 on ValueError during image-import * Update the documentation links * Update the valid disk bus list for qemu and kvm hypervisors * Add the list of hw\_version supported by vmware driver * Updated from 
global requirements * Utilize LimitingReader for staging data * Fix 500 from image-import on 'active' image * Fix 500 from stage call on non-existing image * Fix unstage after staging store denies write * Updated from global requirements * Delete data if image is deleted after staging call * Fix 500 from image-import on queued images * Use new oslo.context arg names * Use new oslo.db base test case * Fix the wrong URL * Correct related section for enable\_image\_import * Fix SQLAlchemy reference link * Remove setting of version/release from releasenotes * Updated from global requirements * Fix format of configuration/configuring.rst * Removing unreachable line from stage() method * Wrong description in ImageMembersController.update * Updated from global requirements * Updated from global requirements * Correct sphinx syntax of glance doc * Update http deploy docs to be a bit more explicit * Clarify log message * Updated from global requirements * Document new URL format * Update api-ref about 403 for image location changes 16.0.0.0b1 ---------- * Make ImageTarget behave like a dictionary * Document Glance Registry deprecation * Replace body\_file with class to call uwsgi.chunked\_read() * tests: replace .testr.conf with .stestr.conf * Deprecate Registry and it's config opts * Update spec-lite info in contributors' docs * Fix 500 if user passes name with more than 80 characters * Remove use of deprecated optparse module * Replace DbMigrationError with DBMigrationError * Clean up api-ref index page * Updated from global requirements * Fix a typo in swift\_store\_utils.py: replace Vaid with Valid * TrivialFix: Fix wrong test case * Revert "Remove team:diverse-affiliation from tags" * Update image statuses doc for latest change * Update Rally Job related files * Add default configuration files to data\_files * Switch base to latest in link address * Align default policy in code with the one in conf * Fix missing some content of glance database creation * Updated from 
global requirements * Updated from global requirements * Clean up database section of admin docs * Add image import docs to admin guide * Updated from global requirements * Avoid restarting a child when terminating * Open Queens for data migrations * Change variable used by log message * api-ref: add 'protected' query filter * Update invalid links of User doc * Separate module reference from contributor/index page * Updated from global requirements * Optimize the way to serach file 'glance-api-paste.ini' * Fix api\_image\_import tasks stuck in 'pending' * Alembic should use oslo\_db facades * Correct group name in config * api-ref: add interoperable image import docs * Add release note for Glance Pike RC-2 * Fix Image API 'versions' response * Updated from global requirements * Return 404 for import-info call * Add 'tasks\_api\_access' policy * Add 'api\_image\_import' type to task(s) schemas * Update invalid path and link for Image Properties * Fix 500 error from image-stage call * Fix 500 error from image-import call * Imported Translations from Zanata * Update reno for stable/pike 15.0.0.0rc1 ----------- * Refresh config files for Pike RC-1 * Add release note for RC-1 including metadefs changes * Updated from global requirements * Add the missing i18n import * Bump Images API to v2.6 * api-ref: update container\_format, disk\_format * Update the documention for doc migration * Create image fails if 'enable\_image\_import' is set * Updated from global requirements * Add a default rootwrap.conf file 15.0.0.0b3 ---------- * Updated from global requirements * Fix typo in discovery API router * Updated from global requirements * Add release note for wsgi containerization * Remove team:diverse-affiliation from tags * Updated from global requirements * Add Discovery stub for Image Import * Update URL home-page in documents according to document migration * Satisfy API Reference documentation deleting tags * Fix glance image-download error * Handle file delete races in 
image cache * doc: Explicitly set 'builders' option * Add 'protected' filter to image-list call * Remove unused None from dict.get() * Updated from global requirements * Remove unused parameter from 'stop\_server' method * use openstackdocstheme html context * update doc URLs in the readme * only show first heading from the glossary on home page * move links to older install guides to the current install guide * switch to openstackdocstheme * Fix trust auth mechanism * import the cli examples from the admin guide in openstack-manuals * import troubleshooting section of admin guide from openstack-manuals * import the installation guide from openstack-manuals * import the glossary from openstack-manuals * turn on warning-is-error for sphinx build * Remove datastore\_name and datacenter\_path * Clean up the redundant code * Imported Translations from Zanata * Add metadefs release note for Pike * Updated from global requirements * do not declare code blocks as json when they do not parse * use :ref: instead of :doc: for xref * add index page to cli dir * fix image path * fix include directives * fix repeated hyperlink target names * fix the autodoc instructions * rearrange existing documentation to follow the new layout standard * Make i18n log translation functions as no-op * Remove unused variable * Tests: Remove the redundant methods * ignore generated sample config files * Fix broken link to the "Image service property keys" doc * Add docs and sample configs for running glance with apache * Add pbr wsgi script entrypoint to glance * Add external lock to image cache sqlite driver db init * Updated from global requirements * Remove use of config enforce\_type=True 15.0.0.0b2 ---------- * Updated from global requirements * Updated from global requirements * Remove duplicate key from dictionary * Updated from global requirements * Stop enforcing translations on logs * Remove usage of parameter enforce\_type * Add import endpoint to initiate image import * Add 
images//staging endpoint * Addresses the comments from review 391441 * Fixed PY35 Jenkins Gate warnings * Updated from global requirements * Add a local bindep.txt override * Add hide hypervisor id on guest host * Updated from global requirements * Updated from global requirements * Clean up py35 env in tox.ini * Trivial fix * Fix periodic py27 oslo-with-master test * Add OpenStack-image-import-methods header * Updated from global requirements * Fix wrong overridden value of config option client\_socket\_timeout * Remove test\_unsupported\_default\_store * Support new OSProfiler API * WIP:Add api\_image\_import flow * Change keystoneclient to keystoneauth1 * Clean up acceptable values for 'store\_type\_preference' * Fix the mismatch of title and content * Fix vmware option for glance\_store * Add node\_staging\_uri and enable\_image\_import opts * Fix doc generation for Python3 * Fix tests when CONF.set\_override with enforce\_type=True * Updated from global requirements * Document the duties of the Release CPL * Dev Docs for Writing E-M-C Migrations * Updated from global requirements 15.0.0.0b1 ---------- * Fix and enable integration tests on py35 * Update api-ref for Range request support * Do not serve partial img download reqs from cache * Updated from global requirements * Add release note for bug 1670409 * Accept Range requests and set appropriate response * Provide user friendly message for FK failure * Updated from global requirements * Use cryptography instead of pycrypto * Fix incompatibilities with WebOb 1.7 * Fix some reST field lists in docstrings * Fix some reST field lists in docstrings * Fix rendering of list elements * Fix and enable two funcitonal tests on py35 * Replace master/slave usage in replication * Fix and enable remaining v1 tests on py35 * Fix and enable test\_cache\_middleware test on py35 * Invoke Monkey Patching for All Tests * Update vmware metadef with ESX 6.5 supported OSes * Remove the remaining code after glare-ectomy * Invoke 
monkey_patching early enough for eventlet 0.20.1
* correct "url" to "URL"
* Fix Unsupported Language Test
* Updated from global requirements
* Use HostAddressOpt for opts that accept IP and hostnames
* Fix experimental E-M-C migrations
* Update man pages to Pike version and release date
* Fix filter doesn't support non-ascii characters
* Remove glare leftovers from setup.cfg
* Fix api-ref with Sphinx 1.5
* Updated from global requirements
* Restore man pages source files
* Update test requirement
* Glare-ectomy
* Limit workers to 0 or 1 when using db.simple.api
* Updated from global requirements
* Restore Legacy Database Management doc
* Open Pike for data migrations
* Change identifiers in data migration tests
* [docs] Removing docs from dev ref
* Mock CURRENT_RELEASE for migration unit test
* Fix up links to static content in sample-configuration
* Cleanup 'ResourceWarning: unclosed file' in py35
* Fix scrubber test failing py35 gate
* Update developer docs for rolling upgrades
* Updated from global requirements
* Fix brackets to suggest optionality
* Use https instead of http for git.openstack.org
* Updated from global requirements
* Prevent v1_api from making requests to v2_registry
* Prepare for using standard python tests
* Update reno for stable/ocata

14.0.0
------

* Refresh config files for Ocata RC-1
* Alembic migrations/rolling upgrades release note
* Add expand/migrate/contract migrations for CI
* Add expand/migrate/contract commands to glance-manage CLI
* Refactor tests to use Alembic to run migrations
* Port Glance Migrations to Alembic
* Handling scrubber's exit in non-daemon mode
* Correct 2.5 minor version bump release note
* Update api-ref for image visibility changes
* refactor glare plugin loader tests to not mock private methods of stevedore
* Refine migration query added with CI change
* Hack to support old and new stevedore
* do not mock private methods of objects from libraries
* Update deprecated show_multiple_locations helptext
* Add release note for image visibility changes
* Update api-ref for partial download requests
* Updated from global requirements

14.0.0.0b3
----------

* Eliminate reference to metadefs 'namespace_id'
* Updated from global requirements
* Add image update tests for is_public
* Fix regression introduced by Community Images
* Bump minor API version
* DB code refactor, simplify and clean-up
* Properly validate metadef objects
* Implement and Enable Community Images
* Fix NameError in metadef_namespaces.py
* Update to "disallowed minor code changes"
* remove useless EVENTLET_NO_GREENDNS
* Updated from global requirements
* Adjust test suite for new psutil versions
* Update dev docs to include 'vhdx' disk format
* Change SafeConfigParser into ConfigParser
* Image signature documentation modify key manager api class
* Log at error when we intend to reraise the exception
* Remove obsolete swift links
* Updated from global requirements
* Add ploop to supported disk_formats
* Updated from global requirements
* Fix some typos in api-ref
* Update sample config files for Ocata-3
* Enable python3.5 testing
* Update tox configuration file to reduce duplication
* Expand hypervisor_type meta data with Virtuozzo hypervisor
* Remove v3 stub controller
* Updated from global requirements

14.0.0.0b2
----------

* Skipping tests for location 'add', 'replace' on 'queued' images
* Editing release note for location update patch
* Change cfg.set_defaults into cors.set_defaults
* Restrict location updates to active, queued images
* Allow purging of records less than 1 day old
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Python3: fix glance.tests.functional.v2.test_images
* Python 3: fix glance.tests.functional.v1.test_misc
* Python3: fix glance.tests.functional.test_scrubber
* Python3: fix logs/glance.tests.functional.test_healthcheck_middleware
* Python3: Fix glance.tests.functional.test_glance_replicator
* Python3: Fix glance.tests.functional.test_bin_glance_cache_manage
* Python 3: fix glance.tests.functional.db.test_sqlalchemy
* Python3: fix test_client_redirects.py
* Add working functional tests to tox.ini
* Add alt text for badges
* Correct releasenote "Prepare for oslo.log 3.17.0"
* Prepare for oslo.log 3.17.0
* Show team and repo badges on README
* Handling HTTP range requests in Glance
* Remove uneccessary "in" from CONTRIBUTING.rst

14.0.0.0b1
----------

* Updated from global requirements
* IPv6 fix in Glance for malformed URLs
* Updated from global requirements
* Update api-ref with 409 response to image update
* Added overwrite warning for db_export_metadefs
* Allow specifying OS_TEST_PATH (to reduce tests ran)
* Do not use service catalog for cache client
* Added unit tests for disabled notifications in Notifier
* Updated from global requirements
* Updated from global requirements
* ping_server: Always close the socket
* Remove mox3 in test-requirement.txt
* Correct url in doc source
* Updated from global requirements
* Add DeprecationWarning in test environments
* Updated from global requirements
* Update .coveragerc after the removal of openstack directory
* Updated from global requirements
* Drop unused import cfg
* Imported Translations from Zanata
* Image signature documentation modify barbican auth_endpoint
* Add libvirt image metadef for hw_pointer_model
* Drop MANIFEST.in - it's not needed by pbr
* Add more resource url in readme.rst
* Updated from global requirements
* Cleanup newton release Notes
* Imported Translations from Zanata
* Fix Domain Model code example
* Imported Translations from Zanata
* Remove redundant word
* Enable release notes translation
* Updated from global requirements
* Extracted HTTP response codes to constants in tests
* Extracted HTTP response codes to constants
* Updated from global requirements
* Fix typo: remove redundant 'the'
* dev-docs: mark v1 as deprecated
* Updated from global requirements
* Updated from global requirements
* Correct releasenote for Ib900bbc05cb9ccd90c6f56ccb4bf2006e30cdc80
* Updated from global requirements
* [api-ref] configure LogABug feature
* Update CONTRIBUTING.rst
* Adding constraints around qemu-img calls
* Correct the order of parameters in assertEqual()
* Fixing inconsistency in Glance store names
* change the example URLs in api-ref for Glance
* Updated from global requirements
* api-ref: deprecate images v1 api
* Remove unused oslo.service requirement
* Update api-ref to add newly supported 'vhdx' disk format option
* Fix incorrect call for _gen_uuid
* Update description of image_destroy method
* Update reno for stable/newton

13.0.0.0rc1
-----------

* Complete and update Newton release notes
* api-ref: add versions history
* Correctly point to Task Statuses from Tasks doc
* Updated from global requirements
* Fix cursive named arguments
* TrivialFix: Remove unused variable
* Fix nits from commit that introduces cursive
* Dev-docs: command in code block for refresh config
* Bump up Glance API minor version to 2.4
* [api-ref] Remove temporary block
* Add note to docs on release notes prelude section
* Fixed indentation
* Fix a small markup typo
* Remove self.__dict__ for formatting strings
* Keep consistent order for regenerated configs

13.0.0.0b3
----------

* Regenerate config files for Newton
* Improving help text for common-config opts
* Improving help text for data access API option
* Improving help text for Glance common-config opts
* Remove DB downgrade
* Release note for glance config opts
* Improve help text of glance config opts
* Attempt to not set location on non active or queued image
* Improving help text for WSGI server conf opts
* Use cursive for signature verification
* Updated from global requirements
* Improving help text for metadefs config option
* Improve the help text for registry client opts
* Improving help text for send_identity_headers opt
* Remove unused requirements
* Fix using filter() to meet python2,3
* Remove "Services which consume this" section
* Deprecate `show_multiple_locations` option
* Image signature base64 don't wrap lines
* Deprecate the Images (Glance) v1 API
* Improving help text of v1/v2 API & Registry opts
* Improve help text of scrubber daemon option
* Improving help text for RPC opt
* Improving help text for image conversion_format
* Updated from global requirements
* Updated from global requirements
* TrivialFix: Remove cfg import unused
* Improving help text for store_type_preference opt
* Improving help text for Notifier opts
* Removing deprecated variable aliases from oslo_messaging
* Improve help text of scrubber opts
* Correct link to image properties
* Use upper constraints for all jobs in tox.ini
* Fix five typos on doc
* Improve help text of quota opts
* Improve help text of registry server opts
* Get ready for os-api-ref sphinx theme change
* Add registry_client_opts to glance-cache.conf.sample
* Updated from global requirements
* Add CPU thread pinning to metadata defs
* Stop stack tracing on 404s
* Don't use config option sqlite_db
* Index to generate doc page for refreshing-configs
* Add guideline to refresh config files shipped with source
* Add example for diff between assert true and equal
* Updated from global requirements
* Remove references of s3 store driver
* Add test class to versions tests
* change the example URLs in dev-docs for Glance
* Updated from global requirements
* Updated from global requirements
* Updated from global requirements
* Fix use of etc. in metadefs docs
* Improving help text for location_strategy opt
* Use more specific asserts in unit tests
* Add a requirements guidelines to docs
* api-ref: correct versions response example
* Updated from global requirements
* Version negotiation api middleware to include v2.3
* Add release notes for newton-1
* Remove deprecated test utility
* Some migrations tests incorrectly ref S3 for Swift
* Remove extraneous ws in architecture docs
* Refresh some config files based on bug fixes
* Generate and include sample config as part of docs
* Wrap text in sample configuration files at 80
* Improving help text for proprty utils opts
* Updated from global requirements
* Improving help text for swift_store_utils opts
* cache_manage: fix a print bug in exit main
* replicator: dump: Display more info
* replicator: livecopy: Display more info
* Updated from global requirements
* Add ova to container format doc to rally plugin
* Add 'vhdx' disk format
* Add 'ova' as a container_format in dev-docs
* Update sqlalchemy-migrate url
* Improving help text for taskflow executor opts
* Minor tweak to release note documentation
* Replace OpenStack LLC with OpenStack Foundation
* api-ref: Replace image-update response example
* api-ref: Refresh images schemas
* Correcting description of image_update API method
* Making Forbidden Exception action oriented
* Updated from global requirements
* Make docs copyright consistent
* Add LOG.warning to Disallowed minor changes
* WADL to RST migration (part 2 - images)
* Updated from global requirements
* Improving help text for context middleware opts

13.0.0.0b2
----------

* Add __ne__ built-in function
* Replace "LOG.warn(_" with "LOG.(_LW"
* Updated from global requirements
* Cleanup i18n marker functions to match Oslo usage
* Use oslo.context features
* glance-replicator: size: Handle no args better
* WADL to RST migration (part 2 - metadefs)
* Remove unused LOG to keep code clean
* Nitpick spell change
* Correct reraising of exception
* Perform a cleanup of configuring.rst
* Fix duplicated osprofile config for registry
* replicator: size: Display human-readable size
* Return 400 when name is more than 255 characters
* glance-replicator: compare: Show image name in msg
* Use MultiStrOpt instead of ListOpt for args
* Updated from global requirements
* Improving help text for public_endpoint
* Add image signature verification metadefs
* Add signed images documentation
* Glance tasks lost configuration item conversion_format
* Update to Glance Contributor's docs
* WADL to RST migration (part 2 - tasks)
* Updated from global requirements
* Updated from global requirements
* WADL to RST migration (part 1)
* Add documentation about generating release notes
* Change default policy to admin
* Fix bug Swift ACL which disappears on Glance v1 images
* Do not set header if checksum doesn't exist
* Updated from global requirements
* Fixes the use of dates when listing images
* Use olso_log and delay string interpolation while logging
* Add in missing log hints
* Use http-proxy-to-wsgi middleware from oslo.middleware
* Updated from global requirements
* Imported Translations from Zanata
* Add a soft delete functionality for tasks
* Update man pages to current version and dates
* Incorrect title for Outbound Peak
* Updated from global requirements

13.0.0.0b1
----------

* Remove redundant store config from registry sample
* Remove TODOs from deprecated "sign-the-hash"
* Updated from global requirements
* Fix import of profiler options
* Add check to limit maximum value of max_rows
* Updated from global requirements
* Updated from global requirements
* Remove verbose option from glance tests
* Raise exception when import without properties
* Excluded the 'visibility' from protected artifact fields
* Use OSprofiler options consolidated in lib itself
* Remove unnecessary executable permissions
* Updated from global requirements
* Normalize the options use singele quotes
* Updated from global requirements
* Updated from global requirements
* Allow tests to run when http proxy is set
* Correct some misspelt words in glance
* Clarify language used in glanceapi future section
* Images APIs: The Future
* Remove old `run_tests` script
* Updated from global requirements
* Remove unnecessary executable privilge of unit test file
* Updated from global requirements
* Functional test comparing wrong items
* Contribution doc change for spec-lite
* Updated from global requirements
* Improve help text of image cache opts
* Remove deprecated "sign-the-hash" approach
* Imported Translations from Zanata
* Updated from global requirements
* Return BadRequest for 4 byte unicode characters
* Log when task is not configured properly
* Corrected section underline
* Give helpful error in tests if strace is missing
* Adding detailed alt text to images for accessibility
* Changed the spelling of opsrofiler to osprofiler
* Fix doc build if git is absent
* Increase max wait time, avoid racy failure in gate
* Updated from global requirements
* Add store opts to scrubber and cache sample conf
* Add wsgi options to the sample options
* Removed one extra enter key
* use os-testr instead of testr
* Updated from global requirements
* Modified message of exception and log
* Given space in between two words
* Use messaging notifications transport instead of default
* Updated from global requirements
* Update the Administrator guide links with new ones
* Imported Translations from Zanata
* Use roles attribute from oslo context
* Updated from global requirements
* Fix doc-strings warnings and errors
* Add 'Documentation' section to 'Contributing' docs

12.0.0
------

* Imported Translations from Zanata
* Fix typos in Glance files
* Imported Translations from Zanata
* Fix db purge type validation
* Imported Translations from Zanata
* Copy the size of the tag set
* Changes behaviour when an image fails uploading
* Imported Translations from Zanata
* Handle SSL termination proxies for version list
* Imported Translations from Zanata
* Imported Translations from Zanata
* Imported Translations from Zanata
* Fixed typos in two comments
* Update reno for stable/mitaka
* Update .gitreview for stable/mitaka

12.0.0.0rc1
-----------

* Fix possible race conditions during status change
* fix docstring warnings and errors
* revert warnerrors before gate breaks
* Fix link to document
* Imported Translations from Zanata
* Update the configuration doc
* Catch exceptions.HasSnapshot() from delete image in rbd driver
* Imported Translations from Zanata
* register the config generator default hook with the right name
* Reject bodies for metadef commands
* Remove unused enable_v3_api config option
* glance-manage db purge failure for limit
* Imported Translations from Zanata
* Remove state transition from active to queued
* Imported Translations from Zanata
* Updated the wording in the database architecture docs
* Test tag against schema to check length
* Update the config files
* Imported Translations from Zanata
* Adds virtual_size to notifications
* Update configuring of Cinder store
* Add debug testenv in tox
* Fix levels of Swift configuration documentation
* no module docs generated
* Deprecate use_user_token parameter
* Creating or updating a image member in a list causes 500
* Updated from global requirements
* Updating comment in tests/unit/test_migrations.py
* Fix update all props when you delete image

12.0.0.0b3
----------

* Fix location update
* Moved CORS middleware configuration into oslo-config-generator
* Use assertGreater/Less/Equal instead of assertTrue(A * B)
* New metadata definitions from CIM
* Add support for DSA signatures
* Fix message formatting in glance-manage purge
* Updated from global requirements
* Remove unused pngmath sphinx extension
* Do not use constraints for venv
* Fix BaseException.message deprecation warnings
* Remove py33 from tox envlist
* Resolve i18n and Sphinx issues in signature_utils
* Add support for ECC signatures
* Return 204 rather than 403 when no image data
* Move bandit into pep8
* Updated from global requirements
* Support importing OVA/OVF package to Glance
* Always use constraints
* Updated from global requirements
* Include registry_client_* options in glance-scrubber.conf
* Python 3: fix a few simple "str vs bytes" issues
* remove redundant "#!/usr/bin/env python" header
* Encourage usage of identity API v3
* Python 3: fix glance.tests.functional.db.simple
* Reuse encodeutils.to_utf8()
* Fix OpenSSL DeprecationWarning on Python 3
* Added support new v2 API image filters
* Add sign-the-data signature verification
* Stop gridfs driver support
* Updated from global requirements
* Set self and schema to readOnly
* Make sure the generated glance-api.conf.sample is always the same
* Add unit test for default number of workers
* Replace assertRaisesRegexp with assertRaisesRegex
* Reuse jsonutils.dump_as_bytes()
* Do not log sensitive data
* Cache documentation about differences in files
* Tolerate installation of pycryptodome
* grammar correction in basic architecture file
* Promote log message to exception level on artifact load failure
* Allow mutable argument to be passed to BinaryObject artifacts
* Include version number into glare factory path in paste
* Fix 500 status code when we add in "depend_on" yourself
* Unallowed request PATCH when work with blob
* Use keystoneclient functions to receive endpoint
* Drop python 2.6 support
* Move Glance Artifact Repository API to separate endpoint
* Imported Translations from Zanata
* Imported Translations from Zanata
* clean up auto-generated docs for configuration options
* Update the home page
* Updated from global requirements
* Misspelling in message
* v2 - "readOnly" key should be used in schemas
* Prevent user to remove last location of the image
* Fix _wait_on_task_execution()
* Updating message for conversion_format cfg_opt
* Fix setup.cfg
* Replace exit() by sys.exit()
* Change Metadefs OS::Nova::Instance to OS::Nova::Server
* Change exception format checks in artifact tests
* Imported Translations from Zanata
* Remove glance_store specific unit tests
* Encode headers to launch glance v2 on mod_wsgi
* Make the task's API admin only by default
* No need to have async executor fetching be a contextmanager
* Updated from global requirements
* Python 3: fix glance.tests.unit
* Add storage_policy VMware driver option for flavors
* Remove unneeded glance unit test assert calls
* utils: remove PrettyTable custom class in favor of the eponym libary
* Hacking checks for not using dict iteration calls
* Add note in comment where upstream taskflow change is
* Fix for Image members not generating notifications
* Updated from global requirements
* Generate page of all config options in docs
* Use oslo.utils exception encoding util
* Add hacking check to ensure not use xrange()
* Updated from global requirements
* Fix help command in cache manange and replicator
* Add properties_target to Instance SW metadefs
* Simplify taskflow engine loading
* Allow image-list if access to attrs is forbidden
* [docs] Add Domain model implementation sub-section
* Drop dict.iterkeys() for python3
* Fix re-adding deleted members to an image in v1
* Replace xrange() with six.moves.range()

12.0.0.0b2
----------

* Add metadefs for Cinder volume type configuration
* Python3: Replace dict.iteritems with dict.items
* Enhance description of instance-uuid option for image-create
* Make cache config options clearer
* Imported Translations from Zanata
* Update links for CLI Reference
* Python3: fix operations of dict keys()
* Implement trust support for api v2
* Imported Translations from Zanata
* Fix the wrong options in glance-api and glance-registry confs
* Do not use api-paste.ini osprofiler options
* Update the cache documentation
* Updated from global requirements
* Catch UnsupportedAlgorithm exceptions
* Add functionality to define requests without body
* Updated from global requirements
* Use six.moves.reduce instead of builtin reduce
* Fixing the deprecated library function
* Remove Indices and tables section
* Remove unused logging import
* Fix Glance doesn't catches UnicodeDecodeError exception
* Updated from global requirements
* assertIsNone(val) instead of assertEqual(None,val)
* Fix glance doesn't catches exception NotFound from glance_store
* Deprecated tox -downloadcache option removed
* Wait all wsgi server completion for worker exit
* Fix model sync for SQLite
* Update the cache middleware flavor guideline
* Add sign-the-hash deprecation warning
* Add db purge command
* Replace oslo_utils.timeutils
* Add missing CPU features to Glance Metadata Catalog
* Updated from global requirements
* Remove iso8601 dependency
* Assert problems in Glance raised by Bandit
* Import i18n functions directly
* Validate empty location value for v1 api
* Updated from global requirements
* Added CORS support to Glance
* Capitalize 'glance' in db.rst
* Stop using tearDown in v1/test_api.py
* Fix return 200 status code when we operate with nonexistent property
* Fix default value with postgreSQL
* Rename glance-store to glance_store
* Run py34 env first when launching tests
* Move store config opt to glance_store section
* Remove artifact entry point
* Remove version from setup.cfg
* Add the Docker container format
* Change the format of some inconsistent docstring

12.0.0.0b1
----------

* Updated from global requirements
* Automated bandit checks in glance
* Port _validate_time() to Python 3
* Updated from global requirements
* Support Unicode request_id on Python 3
* Unicode fix in BaseClient._do_request() on py3
* Fix incorrect task status with wrong parameter
* Document contribution guidelines
* Updated from global requirements
* Fix glance.tests.unit.v1.test_registry_client
* Fix sample Rally plugin
* force releasenotes warnings to be treated as errors
* V1: Fix bad dates returning 500
* Fix 500 error when filtering with specified invalid operator
* Fix 500 error when filtering by 'created_at' and 'updated_at'
* Update os.path.remove as it does not exist
* Change the default notification exchange to glance
* Add documentation for configuring disk_formats
* V1: Stop id changes after the image creation
* Format log messages correctly
* [docs] Update description of Glance-Swift conf options
* Disallow user modifing ACTIVE_IMMUTABLE of deactivated images
* [docs] Update Glance architecture image
* test: make enforce_type=True in CONF.set_override
* OpenStack typo
* Support new v2 API image filters
* Remove anyjson useless requirement
* Python3: fix glance.tests.unit.v2.test_registry_client
* Location add catch bad Uri
* [docs] delete duplicated image_status_transition.png
* Reactivating admin public image returns 500
* Python3: fix glance.tests.unit.test_migrations
* Python3: fix test_image_data_resource
* Remove todo to remove /versions
* Python3: fix test_registry_api
* Updated from global requirements
* Fix typos in configuring.rst
* Python3: fix glance.tests.unit.v2.test_images_resource
* add "unreleased" release notes page
* Python 3: Fix glance.tests.unit.v2.test_tasks_resource
* Python 3: fix test_image_members_resource
* Remove default=None for config options
* Update style for signature_utils class
* Add -constraints for CI jobs
* Add a deprecation warning to the DB downgrade
* Remove unused exceptions from glance
* Add tasks info to glance documentation
* Add reno for release notes management
* Add subunit.log to gitignore
* Updated from global requirements
* Fix content type for Forbidden exception
* Port v1.test_registry_api to Python 3
* Remove requests to example.com during unit testing
* Port signature_utils to Python 3
* Imported Translations from Zanata
* Rename semantic-version dep to semantic_version
* Port script utils to Python 3
* Use dict comprehension
* Typo fix
* Updated from global requirements
* Port test_cache_manage to Python 3
* Port test_wsgi to Python 3
* Updated from global requirements
* Fix incorrect Glance image metadata description
* Rename glance-store dep to glance_store
* Remove glance_store from exta requirements
* Port async tests to Python 3
* Fixed registry invalid token exception handling
* Updated from global requirements
* Add more tests which pass on Python 3
* Show the file name when there is an error loading an image metadef file
* Remove the duplicate file path created by sphinx build
* [docs] Adds new image status - deactivated
* Cause forbidden when deactivating image(non-admin)
* Updated from global requirements
* Don't allow queries with 'IN' predicate with an empty sequence
* utils: use oslo_utils.uuidutils
* utils: remove unused functions in glance.utils
* Bodies that are not dicts or lists return 400
* Pass CONF to logging setup
* Fix 500 error when filtering by invalid version string
* Fix error when downloading image status is not active

11.0.0
------

* Add 'deactivated' status to image schema
* Allow owner to be set on image create
* Decrease test failure if second changes during run
* config: remove default lockutils set
* Catch InvalidImageStatusTransition error
* Port rpc and wsgi to Python 3
* Refactoring exceptions
* Fix glance ignored a headers when created artifact
* Add ability to specify headers in PUT/PATCH request in functional tests
* Fix 500 error when we specify invalid headers when work with blob/bloblist
* fix a typo in show_multiple_locations help message
* Updated from global requirements
* Add testresources and testscenarios used by oslo.db fixture
* Add testresources and testscenarios used by oslo.db fixture
* Add 'deactivated' status to image schema
* Fix the bug of "Error spelling of a word"
* Imported Translations from Zanata
* Fix 409 response when updating an image by removing read-only property

11.0.0.0rc2
-----------

* Imported Translations from Zanata
* Updated from global requirements
* Port api test_common to Python 3
* An explicit check for the presence of a property
* Cleanup chunks for deleted image if token expired
* Download forbidden when get_image_location is set
* Download forbidden when get_image_location is set
* tell pbr to tell sphinx to treat warnings as errors
* add placeholder to ensure _static directory exists
* add the man pages to the toctree
* escape underline introducing a spurrious link reference
* do not indent include directives
* add missing document to toctree
* fix restructuredtext formatting errors
* Catch NotAuthenticated exception in import task
* Cleanup chunks for deleted image if token expired
* Catch NotAuthenticated exception in import task
* Imported Translations from Zanata
* Return missing authtoken options
* Change string generation mechanism for info logging
* Add Large pages meta definition
* Return missing authtoken options
* Fix mutable defaults in tests
* Imported Translations from Zanata

11.0.0.0rc1
-----------

* Open Mitaka development
* Cleanup of Translations
* Remove redundant requirements.txt from tox
* Add swiftclient to test-requirements
* Updated from global requirements
* Update Glance example configs to reflect Liberty
* Imported Translations from Zanata
* Fix server start ping timeout for functional tests
* Prevent image status being directly modified via v1
* Fixed the output of list artifacts API calls
* Change ignore-errors to ignore_errors
* Prevent extraneous log messages and stdout prints
* [Glance Developer Guide] Grammar edits
* utils: stop building useless closure
* Remove `openstack' directory
* Imported Translations from Zanata
* Fixes the possibility of leaving orphaned data
* Add missing function '_validate_limit'
* Fix wrong parameters order in Task
* Remove WARN log message from version_negotiation
* Fix order of arguments in assertEqual
* Scrub images in parallel
* Make task_time_to_live work
* Incorrect permissions on database migration file
* Add _member_ to property-protections-roles.conf.sample
* Domain model section
* Add unit tests for signature_utils class
* Scrubber to communicate with trustedauth registry
* Corrected hyperlink in metadefs documentation
* Remove pointless tests comparing opts against list
* Remove old traces of the oslo-incubator
* Updated from global requirements
* Use oslo utils to encode exception messages
* clean up requirements

11.0.0.0b3
----------

* Disable v3 API by default
* Glance metadef tables need unique constraints
* Add image signing verification
* Don't return 300 when requesting /versions
* Updated from global requirements
* Use min and max on IntOpt option types
* Fixed non-owner write-access to artifacts
* Remove WritableLogger from wsgi server
* Allow to filter artifacts by range
* Fixed version unequality artifact filtering
* Artifacts are now properly filtered by dict props
* Fixed an HTTP 500 on artifact blob upload
* Port rally scenario plugin to new Rally framework
* Use stevedore directive to document plugins
* Catch update to a non-existent artifact property
* Fix spelling mistake in test_images.py
* Change URL to End User Guide
* Fix URLs to admin-guide-cloud
* reuse the deleted image-member before create a new image-member
* Imported Translations from Transifex
* Add CPU Pinning in metadata definitions
* Fix image owner can't be changed issue in v2
* Port common.utils to Python 3
* Port store image to Python 3
* Port replicator to Python 3
* Port glance.db to Python 3
* Port image cache to Python 3
* Fix Python 3 issues in glance.tests.unit.common
* Don't use slashes for long lines - use parentheses instead
* Updated from global requirements
* Imported Translations from Transifex
* Don't import files with backed files
* Use oslo_config PortOpt support
* Setting default max_request_id_length to 64
* Add mechanism to limit Request ID size
* return request_id in case of 500 error
* Remove no longer used parameter (FEATURE_BLACKLIST)
* Fixed few typos
* Correct the indentation on a few functions
* Use dictionary literal for dictionary creation
* List creation could be rewritten as a list literal
* Remove duplicate name attribute
* Incorrect variable name is declared
* Fix Request ID has a double 'req-' at the start
* Fix few typos in glance
* Updated from global requirements
* Fix 501 error when accessing the server with a non-existent method
* Imported Translations from Transifex
* Fix existing migrations to create utf-8 tables for MySQL DB
* Remove Catalog Index Service
* Fix error message's format in image_member
* Include metadefs files in all packages

11.0.0.0b2
----------

* Move to using futurist library for taskflow executors
* Updated from global requirements
* Glance to handle exceptions from glance_store
* Keeping the configuration file with convention
* Fix Python 3 issues in glance.tests.unit
* Allow ramdisk_id, kernel_id to be null on schema
* Remove duplicate string
* Imported Translations from Transifex
* Update glance_store requirement to 0.7.1
* Fix Rally job failure
* Make utf8 the default charset for mysql
* Use oslo_utils.encodeutils.exception_to_unicode()
* Updated from global requirements
* Remove H302,H402,H904
* add annotation of param
* Adds a rados_connect_timeout description
* Fix the document bug in part of digest_algorithm
* Purge dead file-backed scrubber queue code
* Correct reference to VC as vCenter
* Remove usage of assert_called_once in mocks
* Rationalize test asserts
* Add .eggs/* to .gitignore
* Refactoring of image-members v2 API implementation
* Improve code readability in functional test for the WSGIServer
* Make 'id' a read only property for v2
* Healthcheck Middleware
* Updated from global requirements
* Functional of the HTTPclient was put in own method
* Fix wrong check when create image without data
* Remove unneeded OS_TEST_DBAPI_ADMIN_CONNECTION
* glance metadef resource-type-associate fails in postgresql
* Change default digest_algorithm value to sha256
* Update requirements
* Remove unused oslo incubator files
* Remove unnecessary mixin from artifact domain model
* Adds os_admin_user to common OS image prop metadef
* Validate size of 'min_ram' and 'min_disk'
* Remove unused imported marker functions
* Fix duplicate unique constraint in sqlite migration
* Fix broken URL to docs.openstack.org
* Remove unnecessary executable permission
* Fix the db_sync problem in 039 for db2
* Imported Translations from Transifex
* Fix OSProfiler exception when is enabled
* Add an API call to discover the list of available artifact types

11.0.0.0b1
----------

* Provide extra parameter for FakeDB
* Switch to oslo.service
* tests: don't hardcode strace usage
* Fix tox -e py34
* Imported Translations from Transifex
* Typo fix
* Drop use of 'oslo' namespace package
* Update version for Liberty

11.0.0a0
--------

* Add client_socket_timeout option
* Switch from MySQL-python to PyMySQL
* Fix grammar in installation documentation
* Use ConfigFixture to ensure config settings are reverted
* Change status code from 500 to 400 for image update request
* Added test for "delete image member for public image"
* Pass environment variables of proxy to tox
* Add info how to avoid issues with token expiration
* Fix Python 3 issues
* Cleanup TODO in glance/gateway.py for elasticsearch being unavailable
* Fix DbError when image params are out of range
* REST API layer for Artifact Repository
* Remove duplicate creation of use_user_token
* Correct bad documentation merge
* Sync with latest oslo-incubator
* Fix HTTP 500 on NotAuthenticated in registry (v2)
* Domain layer for Artifact Repository
* Refactoring registry tests for v2
* Return empty str for permissive, none, properties
* Fix typo in the code
* Fixed error message for negative values of min_disk and min_ram
* Changes in rally-jobs/README.rst
* Make create task as non-blocking
* Mark task as failed in case of flow failure
* Add VMDK as a conversion format to convert flow
* Make properties roles check case-insensitive
* Imported Translations from Transifex
* Change generic NotFound to ImageNotFound exception
* Remove is_public from domain layer
* Leverage dict comprehension in PEP-0274
* Fix Server.start() on Python 3.4
* Use six.moves to fix imports on Python 3
* Imported Translations from Transifex
* Bug : tox -egenconfig failure (no glance-search.conf)
* Replace types.NameType with name
* Fix test_opts to not resolve requirements
* Fix logging task id when task fails
* Fix typo in documentation
* rpc: remove wrong default value in allowed exceptions
* rpc: clean JSON serializer, remove strtime() usage
* Set filesystem_store_datadir in tests
* Taskflow engine mode should be parallel in sample conf
* VMware: vmware_ostype should be enum
* VMware: add VirtualVmxnet3 to hw_vif_model
* Fixed glance.tests.unit.test_artifacts_plugin_loader unit-test
* Fix delayed activation without disk and containers formats
* Save image data after setting the data
* Make sure the converted image is imported
* Updated from global requirements
* Imported Translations from Transifex
* Register oslo.log's config options in tests
* Remove string formatting from policy logging
* Remove unneeded setup hook from setup.cfg
* Drop use of 'oslo' namespace package

2015.1.0
--------

* Metadef JSON files need to be updated
* Plugin types are not exposed to the client
* v1 API should be in SUPPORTED status
* Read tag name instead of ID
* v1 API should be in SUPPORTED status
* API calls to Registry now maintain Request IDs
* Updated from global requirements
* Remove ordereddict from requirements
* Release Import of Translations from Transifex
* Glance database architecture section
* update .gitreview for stable/kilo
* Plugin types are not exposed to the client
* Revert "Reduce DB calls when getting an image"
* Read tag name instead of ID
* Metadef JSON files need to be updated
* Fix wrong docstring by copy-paste
* Add logging when policies forbid an action
* Remove non-ascii characters in glance/doc/source/architecture.rst
* Fix typos in glance/doc/source/configuring.rst
* Correct text in error response

2015.1.0rc1
-----------

* Fixes glance-manage exporting meta definitions issue
* Catch UnknownScheme exception
* Refactor API function test class
* Move elasticsearch dep to test-requirements.txt
* Update openstack-common reference in openstack/common/README
* glance-manage output when ran without any arguments
* Reduce DB calls when getting an image
* Open Liberty development
* Zero downtime config reload (glance-control)
* Imported Translations from Transifex
* Glance cache to not prune newly cached images
* glance-manage db load_metadefs does not load all resource_type_associations
* Fix intermittent unit test failures
* Fix intermittent test case failure due to dict order
* Imported Translations from Transifex
* A mixin for jsonpatch requests validation
* Artifact Plugins Loader
* Declarative definitions of Artifact Types
* Creating metadef object without any properties
* Zero downtime config reload (log handling)
* Database layer for Artifact Repository
* Catalog Index Service - Index Update
* Catalog Index Service
* Zero downtime config reload (socket handling)
* Typo in pylintrc file
* Fix metadef tags migrations
* Update documentation for glance-manage
* Fix common misspellings

2015.1.0b3
----------

* Replace assert statements with proper control-flow
* Remove use of contextlib.nested
* Use graduated oslo.policy
* oslo: migrate namespace-less import paths
* Fix typo in rpc controller
* Fixes typo in doc-string
* wsgi: clean JSON serializer
* Remove scrubber cleanup logic
* use is_valid_port from oslo.utils
* Add ability to deactivate an image
* Remove deprecated option db_enforce_mysql_charset
* Raise exception if store location URL not found
* Fix missing translations for error and info
* Basic support for image conversion
* Extend images api v2 with new sorting syntax
* Add the ability to specify the sort dir for each key
* Move to
graduated oslo.log module * Provide a way to upgrade metadata definitions * Pass a real image target to the policy enforcer * Glance basic architecture section * Fix typo in configuration file * Updated from global requirements * Add sync check for models\_metadef * Notifications for metadefinition resources * Update config and docs for multiple datastores support * Avoid usability regression when generating config * Glance Image Introspection * Add capabilities to storage driver * Updated from global requirements * Zero downtime configuration reload * Add operators to provide multivalue support * Remove the eventlet executor * SemVer utility to store object versions in DB * Switch to latest oslo-incubator * Use oslo\_config choices support * Fix the wrong format in the example * Remove en\_US translation * Git ignore covhtml directory * db\_export\_metadefs generates inappropriate json files * Synchronising oslo-incubator service module * Unify using six.moves.range rename everywhere * Updated from global requirements * Glance returns HTTP 500 for image download * Remove boto from requirements.txt * Unbreak python-swiftclient gate * Eventlet green threads not released back to pool * Imported Translations from Transifex * Removes unnecessary assert * Prevents swap files from being found by Git * Add BadStoreConfiguration handling to glance-api * Remove redundant parentheses in conditional statements * Make sure the parameter has the consistent meaning * Image data remains in backend for deleted image * Remove is\_public from reserved attribute in v2 * unify some messages * Typos fixed in the comments * The metadef tags create api does not match blue-print * Clarified doc of public\_endpoint config option * Add detail description of image\_cache\_max\_size * Updated from global requirements 2015.1.0b2 ---------- * Add Support for TaskFlow Executor * Include readonly flag in metadef API * Fix for CooperativeReader to process read length * Software Metadata 
Definitions * Updated from global requirements * Rewrite SSL tests * Replace snet config with endpoint config * Simplify context by using oslo.context * Handle empty request body with chunked encoding * Update vmware\_adaptertype metadef values * Typos fixed in the comments * Updated from global requirements * Redundant \_\_init\_\_ def in api.authorization.MetadefTagProxy * Make digest algorithm configurable * Switch to mox3 * Remove argparse from requirement * Remove optparse from glance-replicator * Eliminate shell param from subprocesses in tests * Remove test dependency on curl * Cleanup chunks for deleted image that was 'saving' * remove need for netaddr * Fix copy-from when user\_storage\_quota is enabled * remove extraneous --concurrency line in tox * SQL scripts should not manage transactions * Fixes line continuations * Upgrade to hacking 0.10 * Removed python-cinderclient from requirements.txt * Move from oslo.db to oslo\_db * Move from oslo.config to oslo\_config * Improve documentation for glance\_stores * Fix reference to "stores" from deprecated name * Move from oslo.utils to oslo\_utils * Updated from global requirements * Updated from global requirements * Prevent file, swift+config and filesystem schemes * Simplify usage of str.startswith * Adding filesystem schema check in async task * Fix spelling typo * Fix rendering of readme document * Imported Translations from Transifex * Add swift\_store\_cacert to config files and docs * Add latest swift options in glance-cache.conf * Fix document issue of image recover status * rename oslo.concurrency to oslo\_concurrency * Provide a quick way to run flake8 * Fix 3 intermittently failing tests * Removed obsolete db\_auto\_create configuration option * Fix client side i18n support for v1 api * Move default\_store option in glance-api.conf * Removes http-requests to glance/example.com in glance test * Remove \_i18n from openstack-common * Adds the ability to sort images with multiple keys * Add sort key 
validation in v2 api * Fixes typo: glance exception additional dot * Allow $OS\_AUTH\_URL environment variable to override config file value * Bump API version to 2.3 * Replace '\_' with '\_LI', '\_LE', '\_LW', '\_LC' 2015.1.0b1 ---------- * Removes unused modules: timeutils and importutils * Generate glance-manage.conf * Imported Translations from Transifex * Adding Metadef Tag support * Removed unnecessary dot(.) from log message * Using oslo.concurrency lib * Update config and docs for Multiple Containers * To prevent client use v2 patch api to handle file and swift location * Updated from global requirements * Use testr directly from tox * Remove reliance on import order of oslo.db mods * Remove openstack.common.gettextutils module * Fix typo in common module * Fix and add a test case for IPv6 * Start server message changed * Fix getaddrinfo if dnspython is installed * Workflow documentation is now in infra-manual * Allow None values to be returned from the API * Expose nullable fields properties * Allow some fields to be None * Update glance.openstack.common.policy and cleanup * A small refactoring of the domain * Updated from global requirements * Disable osprofiler by default * Work toward Python 3.4 support and testing * Correct GlanceStoreException to provide valid message - Glance * Remove Python 2.6 classifier * Add ModelSMigrationSync classes * Alter models and add migration * No 4 byte unicode allowed in image parameters * Update rally-jobs files * Move from using \_ builtin to using glance.i18n \_ * Change Glance to use i18n instead of gettextutils * Raising glance logging levels * Imported Translations from Transifex * Do not use LazyPluggable * metadef modules should only use - from wsme.rest import json * Wrong order of assertEquals args(Glance) * Removal of unnecessary sample file from repository * Upgrade tests' mocks to match glance\_store * Remove exception declarations from replicator.py * Typo correction of the prefix value in 
compute-host-capabilities * Replace custom lazy loading by stevedore * vim ropeproject directories added to gitignore * Initiate deletion of image files if the import was interrupted * Raise an exception when quota config parameter is broken * Fix context storage bug * Ignore Eric IDE files and folders in git * Make RequestContext use auth\_token (not auth\_tok) * Swift Multi-tenant store: Pass context on upload * Use unicode for error message * change default value for s3\_store\_host * remove url-path from the default value of s3\_store\_host * Complete the change of adding public\_endpoint option * Update the vmware\_disktype metadefs values * Add config option to override url for versions * Separate glance and eventlet wsgi logging * Remove openstack.common.test * Remove modules from openstack-common.conf * Improve error log for expired image location url * Handle some exceptions of image\_create v2 api * Remove eventlet\_hub option * Adds openSUSE in the installing documentation * Glance scrubber should page thru images from registry * Add logging to image\_members and image\_tags * Update glance.openstack.common 2014.2 ------ * Fix options and their groups - etc/glance-api.conf * Fix options and their groups - etc/glance-api.conf * Adjust authentication.rst doc to reference "identity\_uri" * Can not delete images if db deadlock occurs * Reduce extraneous test output * Isolate test from environment variables * Fix for adopt glance.store library in Glance * Adjust authentication.rst doc to reference "identity\_uri" 2014.2.rc2 ---------- * Use identity\_uri instead of older fragments * Prevent setting swift+config locations * Metadef schema column name is a reserved word in MySQL * Remove stale chunks when failed to update image to registry * GET property which name includes resource type prefix * g-api raises 500 error while uploading image * Fix for Adopt glance.store library in Glance * Update Metadefs associated with ImagePropertiesFilter * updated 
translations * Use ID for namespace generated by DB * Metadef Property and Object schema columns should use JSONEncodedDict * Add missing metadefs for shutdown behavior * Update driver metadata definitions to Juno * Mark custom properties in image schema as non-base * Specify the MetadefNamespace.namespace column is not nullable * Make compute-trust.json compatible with TrustFilter * Include Metadata Defs Concepts in Dev Docs * Nova instance config drive Metadata Definition * Add missing metadefs for Aggregate Filters * Updated from global requirements 2014.2.rc1 ---------- * Imported Translations from Transifex * Add specific docs build option to tox * Add documentation for a new storage file permissions option * Updated from global requirements * Remove db\_enforce\_mysql\_charset option for db\_sync of glance-manage * Fix assertEqual arguments order * Prevent setting swift+config locations * Remove stale chunks when failed to update image to registry * Use specific exceptions instead of the general MetadefRecordNotFound * Metadef schema column name is a reserved word in MySQL * Fix for Adopt glance.store library in Glance * GET property which name includes resource type prefix * Incorrect parameters passed * g-api raises 500 error while uploading image * Minor style tidy up in metadata code * Metadef Property and Object schema columns should use JSONEncodedDict * Updated from global requirements * Use ID for namespace generated by DB * Switch to oslo.serialization * Switch to oslo.utils * Imported Translations from Transifex * Add missing metadefs for shutdown behavior * hacking: upgrade to 0.9.x serie * Fix bad header bug in glance-replicator * Run tests with default concurrency 0 * Refactor test\_migrations module * Include Metadata Defs Concepts in Dev Docs * Open Kilo development * Mark custom properties in image schema as non-base * Fix missing space in user\_storage\_quota help message * Fix glance V2 incorrectly implements JSON Patch'add' * Make 
compute-trust.json compatible with TrustFilter * replace dict.iteritems() with six.iteritems(dict) * Enforce using six.text\_type() over unicode() * Update driver metadata definitions to Juno * Remove uses of unicode() builtin * Fixes Error Calling GET on V1 Registry * Enabling separated sample config file generation * Update Metadefs associated with ImagePropertiesFilter * Fixes logging in image\_import's main module * Refactor metadef ORM classes to use to\_dict instead of as\_dict * Stop using intersphinx * Just call register\_opts in tests * Replaces assertEqual with assertTrue and assertFalse * Block sqlalchemy-migrate 0.9.2 * Specify the MetadefNamespace.namespace column is not nullable * Add missing metadefs for Aggregate Filters * Nova instance config drive Metadata Definition * Improve OS::Compute::HostCapabilities description * Sync glance docs with metadefs api changes * Change open(file) to with block * Fix CommonImageProperties missing ":" * Fix VMware Namespace capitalization & description * Imported Translations from Transifex * Duplicated image id return 409 instead of 500 in API v2 * Glance API V2 can't recognize parameter 'id' * API support for random access to images * Adopt glance.store library in Glance * Adds missing db registry api tests for Tasks * warn against sorting requirements * Introduces eventlet executor for Glance Tasks 2014.2.b3 --------- * Glance Metadata Definitions Catalog - API * ignore .idea folder in glance * Glance Metadata Definitions Catalog - Seed * Glance Metadata Definitions Catalog - DB * Restrict users from downloading protected image * Syncing changes from oslo-incubator policy engine * Use identity\_uri instead of older fragments * Fix legacy tests using system policy.json file * Improve Glance profiling * Fix collection order issues and unit test failures * Check on schemes not stores * Replacement mox by mock * Imported Translations from Transifex * Log task ID when the task status changes * Changes HTTP response 
code for unsupported methods * Enforce image\_size\_cap on v2 upload * Do not assume order of images * Ensure constant order when setting all image tags * Fix bad indentation in glance * Use @mock.patch.object instead of mock.MagicMock * Adding status field to image location -- scrubber queue switching * Bump osprofiler requirement to 0.3.0 * Fix migration on older postgres * Fix rally performance job in glance * Integrate OSprofiler and Glance * Fix image killed after deletion * VMware store: Use the Content-Length if available * Fix RBD store to use READ\_CHUNKSIZE * Trivial fix typo: Unavilable to Unavailable * Quota column name 'key' in downgrade script * Do not log password in swift URLs in g-registry * Updated from global requirements * Use \`\_LW\` where appropriate in db/sqla/api * Log upload failed exception trace rather than debug * Decouple read chunk size from write chunk size * Enable F821 check: undefined name 'name' 2014.2.b2 --------- * Security hardening: fix possible shell injection vulnerability * Move to oslo.db * Catch exception.InUseByStore at API layer * Fixes the failure of updating or deleting image empty property * Adding status field to image location -- scrubber changes * Also run v2 functional tests with registry * Refactoring Glance logging lowering levels * Set defaults for amqp in glance-registry.conf * Fix typo in swift store message * Add a \`\_retry\_on\_deadlock\` decorator * Use auth\_token from keystonemiddleware * Allow some property operations when quota exceeded * Raising 400 Bad Request when using "changes-since" filter on v2 * Moving eventlet.hubs.use\_hub call up * Adding status field to image location -- domain and APIs changes * Add task functions to v2 registry * Changing replicator to use openstack.common.log * Fix unsaved exception in v1 API controller * Pass Message object to webob exception * Some exceptions raise UnicodeError * Handle session timeout in the VMware store * Some v2 exceptions raise unicodeError * 
Resolving the performance issue for image listing of v2 API on server * Switch over oslo.i18n * Fix typo in comment * Updated from global requirements * Imported Translations from Transifex * Updated from global requirements * Raise NotImplementedError instead of NotImplemented * Fix unsaved exception in store.rbd.Store.add() * Fix docstrings in enforce() and check() policy methods * Added an extra parameter to the df command * Add CONTRIBUTING.rst * Imported Translations from Transifex * Use (# of CPUs) glance workers by default * Sync processutils and lockutils from oslo with deps * Document registry 'workers' option * Removing translation from debug messages * Unifies how BadStoreUri gets raised and logged * Fix lazy translation UnicodeErrors * Changing Sheepdog driver to use correct configuration function * Implemented S3 multi-part upload functionality * Log swift container creation * Synced jsonutils and its dependencies from oslo-incubator * Remove user and key from location in swift * Updated from global requirements * Changed psutil dep. 
to match global requirements * Add pluging sample for glance gate * Fixes v2 return status on unauthorized download * Update documentation surrounding the api and registry servers * Do not call configure several times at startup * Move \`location\`'s domain code out of glance.store * sync oslo incubator code * notifier: remove notifier\_strategy compat support * notifier: simply notifier\_strategy compat support * colorizer: use staticmethod rather than classmethod * Improved coverage for glance.api.\* * Assign local variable in api.v2.image\_data 2014.2.b1 --------- * Use df(1) in a portable way * Add test for no\_translate\_debug\_logs hacking check * Add hacking checks * replace dict.iteritems() with six.iteritems(dict) * make uploading an image as public admin only by default * remove default=None for config options * Bump python-swiftclient version * TaskTest:test\_fail() should use asserIstNone * debug level logs should not be translated * use /usr/bin/env python instead of /usr/bin/python * Remove all mostly untranslated PO files * Remove duplicated is\_uuid\_like() function * fixed typos found by RETF rules in RST files * Use safe way through "with" statement to work with files * Clean up openstack-common.conf * Removing duplicate entry from base\_conf * Use safe way through "with" statement to work with files * Use Chunked transfer encoding in the VMware store * Ensures that task.message is of type unicode * Replace unicode() for six.text\_type * Prevent creation of http images with invalid URIs * Fixed a handful of typos * Fixes installation of test-requirements * Add rally performance gate job for glance * To fixes import error for run\_tests.sh * Replace assert\* with more suitable asserts in unit tests * Get rid of TaskDetails in favor of TaskStub * Fixes "bad format" in replicator for valid hosts * Sync latest network\_utils module from Oslo * Fixes spelling error in test name * Uses None instead of mutables for function param defaults * Fix various 
Pep8 1.5.4 errors * Fixes Glance Registry V2 client * Update Glance configuration sample files for database options * To prevent remote code injection on Sheepdog store * Added undescore function to some log messages * Adds TaskStub class * Updated from global requirements * user\_storage\_quota now accepts units with value * Do not allow HEAD images/detail * Configuration doc for VMware storage backend * Catch loading failures if transport\_url is not set * Fix Jenkins translation jobs * Fixed the pydev error message 2014.1.rc1 ---------- * Open Juno development * Making DB sanity checking be optional for DB migration * Fix deprecation warning in test\_multiprocessing * Do not set Location header on HTTP/OK (200) responses * Fix swift functional test "test\_create\_store" * Sanitize set passed to jsonutils.dumps() * When re-raising exceptions, use save\_and\_reraise * Imported Translations from Transifex * Sync common db code from Oslo * Return 405 when attempting DELETE on /tasks * Remove openstack.common.fixture * Enable H304 check * VMware store.add to return the image size uploaded * registry: log errors on failure * Removes use of timeutils.set\_time\_override * Provide explicit image create value for test\_image\_paginate case * Make the VMware datastore backend more robust * Pass Message object to webob exception * Detect MultiDict when generating json body * Makes possible to enable Registry API v1 and v2 * Do not use \_\_builtin\_\_ in python3 * Updated from global requirements * Fix swift functional test * Provide an upgrade period for enabling stores * API v2: Allow GET on unowned images with show\_image\_direct\_url * Add copyright text to glance/openstack/common/\_\_init\_\_.py * Don't enable all stores by default * Remove unused methods * Fix glance db migration failed on 031 * Document for API message localization 2014.1.b3 --------- * Add support for API message localization * Add the OVA container format * Store URI must start with the expected 
URI scheme * Documentation for Glance tasks * Remove import specific validation from tasks resource * Remove dependency of test\_v1\_api on other tests * Include Location header in POST /tasks response * Catch exception when image cache pruning * VMware storage backend should use oslo.vmware * Sync common db code from Oslo * Refactor UUID test * Replaced calls of get(foo, None) -> get(foo) * Use six.StringIO/BytesIO instead of StringIO.StringIO * Replaced "...\'%s\'..." with "...'%s'..." * Updated from global requirements * Fix logging context to include user\_identity * Log 'image\_id' with all BadStoreURI error messages * Added undescore function to some strings * Use 0-based indices for location entries * Glance all: Replace basestring by six for python3 compatability * Delete image metadata after image is deleted * Modify assert statement when comparing with None * Enable hacking H301 and disable H304, H302 * Replacement mox by mock * Keep py3.X compatibility for urllib * Use uuid instead of uuidutils * Use six.moves.urllib.parse instead of urlparse * Switch over to oslosphinx * Fix parsing of AMQP configuration * Add \`virtual\_size\` to Glance's API v2 * Add a virtual\_size attribute to the Image model * Enable F841 check * Add support for PartialTask list * Rename Openstack to OpenStack * Add a mailmap entry for myself * Sync log.py from oslo * Add unit tests around glance-manage * Remove tox locale overrides * Improve help strings * Provide explicit image create value in Registry v2 API test * VMware Datastore storage backend * Adding status field to image location -- DB migration * Apply image location selection strategy * Switch to testrepository for running tests * Clean up DatabaseMigrationError * Enable H302 check * Fix misspellings in glance * Expose image property 'owner' in v2 API * Removes logging of location uri * Updated from global requirements * Remove duplicate type defination of v2 images schema * Enable H202 check * Modify my mailmap * 
glance-manage wont take version into consideration * Move scrubber outside the store package * Depending on python-swiftclient>=1.6 * Now psutil>=1.1.0 is actually on PyPI * Fix indentation errors found by Pep8 1.4.6+ * Add VMware storage backend to location strategy * Log a warning when a create fails due to quota * glance requires pyOpenSSL>=0.11 * Imported Translations from Transifex * Restore image status to 'queued' if upload failed * Don't override transport\_url with old configs * Provide explicit image create value in Registry v2 Client test * Provide explicit task create and update value in controller tests * Enable hacking H703 check * Sync with global requirements * Sync oslo.messaging version with global-requirements * Don't rewrite the NotFound error message * Update all the glance manpages * Use common db migrations module from Oslo * Check --store parameter validity before \_reserve * Sync gettextutils from Oslo * Enable gating on H501 * Add multifilesystem store to support NFS servers as backend * Check first matching rule for protected properties * Retry failed image download from Swift * Restore image status on duplicate image upload 2014.1.b2 --------- * Tests added for glance/cmd/cache\_pruner.py * Prevent E500 when delayed delete is enabled * Sync unhandled exception logging change from Oslo * Check image id format before executing operations * fix bug:range() is not same in py3.x and py2.x * Fix the incorrect log message when creating images * Adding image location selection strategies * Fix inconsistent doc string and code of db\_sync * fixing typo in rst file * Fix tmp DB path calculation for test\_migrations.py * Change assertTrue(isinstance()) by optimal assert * add log for \_get\_images method * Makes 'expires\_at' not appear if not set on task * Remove vim header * Update the glance-api manpage * Remove 'openstack/common/context.py' * Allow users to customize max header size * Decouple the config dependence on glance domain * Fix typo 
in doc string * Prevent min\_disk and min\_ram from being negative * Set image size to None after removing all locations * Update README to the valid Oslo-incubator doc * Cleans up imports in models.py * Sync Log levels from OSLO * Align glance-api.conf rbd option defaults with config * Bump hacking to 0.8 and get python 3.x compatibility * Add config option to limit image locations * replace type calls with isinstance * Adding logs to tasks * Skip unconfigurable drivers for store initialization * Fix typo in gridfs store * Oslo sync to recover from db2 server disconnects * fix comments and docstrings misspelled words * Fix call to store.safe\_delete\_from\_backend * Switch to Hacking 0.8.x * assertEquals is deprecated, use assertEqual (H234) * Consider @,! in properties protection rule as a configuration error * Remove unused imports in glance * Remove return stmt of add,save and remove method * Migrate json to glance.openstack.common.jsonutils * Use common Oslo database session * Define sheepdog\_port as an integer value * Sync with oslo-incubator (git 6827012) * Enable gating on F811 (duplicate function definition) * Set image size after updating/adding locations * Disallow negative image sizes * Fix and enable gating on H306 * Make code base E125 and E126 compliant * Fix 031 migration failed on DB2 * Remove the redundant code * Correct URL in v1 test\_get\_images\_unauthorized * Refactor tests.unit.utils:FakeDB.reset * Fixed wrong string format in glance.api.v2.image\_data * Empty files shouldn't contain copyright nor license * Use uuid instead of uuidutils * Enable H233/H301/H302 tests that are ignored at the moment * Remove duplicate method implementations in ImageLocationsProxy * Make Glance code base H102 compliant * Make Glance code base H201 compliant * Cleanup: remove unused code from store\_utils * Filter out deleted images from storage usage * Add db2 communication error code when check the db connection * Refine output of glance service managment * 
Adds guard against upload contention * Fixes HTTP 500 when updating image with locations for V2 * Increase test coverage for glance.common.wsgi * Return 204 when image data does not exist * V2: disallow image format update for active status * Enable tasks REST API for async worker * Cleanly fail when location URI is malformed * Rename duplicate test\_add\_copy\_from\_upload\_image\_unauthorized * Adding missing copy\_from policy from policy.json * Fix simple-db image filtering on extra properties * Pin sphinx to <1.2 * assertEquals is deprecated, use assertEqual instead * Fix and enable gating on H702 * Replace startswith by more precise store matching * Remove unused exceptions * Remove duplicate method \_\_getitem\_\_ in quota/\_\_init\_\_.py * Enforce copy\_from policy during image-update * Refactor StorageQuotaFull test cases in test\_quota * remove hardcode of usage * Added error logging for http store * Forbidden update message diffs images/tasks/member * Unittests added for glance/cmd/cache\_manage.py * Makes tasks owner not nullable in models.py * Move is\_image\_sharable to registry api * Remove TestRegistryDB dependency on TestRegistryAPI * Introduce Task Info Table 2014.1.b1 --------- * Migrate to oslo.messaging * Add config option to limit image members * Add config option to limit image tags * Glance image-list failed when image number exceed DEFAULT\_PAGE\_SIZE * DB migration changes to support DB2 as sqlalchemy backend * Add documentation for some API parameters * RBD add() now returns correct size if given zero * Set upload\_image policy to control data upload * Replace deprecated method assertEquals * Clean up duplicate code in v2.image\_data.py * Fix docstring on detail in glance/api/v1/images.py * Use assertEqual instead of assertEquals in unit tests * Remove unused package in requirement.txt * Enable F40X checking * Verify for duplicate location+metadata instances * Adds domain level support for tasks * Add eclipse project files to .gitignore * 
Added unit tests for api/middleware/cache\_manage.py * Fixed quotes in \_assert\_tables() method * Use common db model class from Oslo * Add upload policy for glance v2 api * Adding an image status transition diagram for dev doc * Add config option to limit image properties * Explicit listing of Glance policies in json file * Imported Translations from Transifex * Sync openstack.common.local from oslo * Clean up numeric expressions with oslo constants * Don't use deprecated module commands * Add tests for glance/notifier/notify\_kombu * Fixes image delete and upload contention * Log unhandled exceptions * Add tests for glance/image\_cache/client.py * Remove lxml requirement * Sync common db and db.sqlalchemy code from Oslo * Update glance/opensatck/common from oslo Part 3 * Tests added for glance/cmd/cache\_cleaner.py * glance-manage should work like nova-manage * Adds tasks to db api * Sync lockutils from oslo * sync log from oslo * Add policy style '@'/'!' rules to prop protections * Enable H501: do not use locals() for formatting * Remove use of locals() when creating messages * Remove "image\_cache\_invalid\_entry\_grace\_period" option * Add unit test cases for get func of db member repo * assertEquals is deprecated, use assertEqual * Document default log location in config files * Remove unused method setup\_logging * Start using PyFlakes and Hacking * Sync units module from olso * Fixes error message encoding issue when using qpid * Use mock in test\_policy * Use packaged version of ordereddict * Imported Translations from Transifex * Glance v2: Include image/member id in 404 Response * Replace qpid\_host with qpid\_hostname * Fix Pep8 1.4.6 warnings * Fixes content-type checking for image uploading in API v1 and v2 * Update my mailmap * Addition of third example for Property Protections * Sync iso8601 requirement and fixes test case failures * Fixes wrong Qpid protocol configuration * Use HTTP storage to test copy file functionality * Remove redundant 
dependencies in test-requirements
* Documentation for using policies for protected properties
* checking length of argument list in "glance-cache-image" command
* optimize queries for image-list
* Using policies for protected properties
* Cleanup and make HACKING.rst DRYer
* Enable tasks data model and table for async worker
* Updated from global requirements
* Add call to get specific image member
* Put formatting operation outside localisation call
* Remove unused import
* The V2 Api should delete a non existent image
* Avoid printing URIs which can contain credentials
* Remove whitespace from cfg options
* Use Unix style LF instead of DOS style CRLF
* Adding 'download\_image' policy enforcement to image cache middleware
* Glance manage should parse glance-api.conf
* Fixes rbd \_delete\_image snapshot with missing image
* Correct documentation related to protected properties
* Update functional tests for swift changes
* Removed unsued import, HTTPError in v1/images.py
* Allow tests to run with both provenances of mox
* Glance GET /v2/images fails with 500 due to erroneous policy check
* Do not allow the same member to be added twice

2013.2.rc1
----------

* V2 RpcApi should register when db pool is enabled
* Imported Translations from Transifex
* Open Icehouse development
* Convert Windows to Unix style line endings
* Add documentation for property protections
* Adding checking to prevent conflict image size
* Fixes V2 member-create allows adding an empty tenantId as member
* Fixing glance-api hangs in the qpid notifier
* Change response code for successful delete image member to 204
* Cache cleaner wrongly deletes cache for non invalid images
* Require oslo.config 1.2.0 final
* Use built-in print() instead of print statement
* Swift store add should not use wildcard raise
* Corrected v2 image sharing documentation
* Add swift\_store\_ssl\_compression param
* Log a message when image object not found in swift
* Ensure prop protections are read/enforced in order
* Funtional Tests should call glance.db.get\_api
* Enclose command args in with\_venv.sh
* Fix typo in config string
* Adding encryption support for image multiple locations
* Fixes typos of v1 meta data in glanceapi.rst
* Respond with 410 after upload if image was deleted
* Fix misused assertTrue in unit tests
* Convert location meta data from pickle to string
* Disallow access/modify members of deleted image
* Fix typo in protected property message
* Remove the unused mapper of image member create
* Changed header from LLC to Foundation based on trademark policies
* Implement protected properties for API v1
* Add rbd store support for zero size image
* Remove start index 0 in range()
* Convert non-English exception message when a store loading error
* add missing index for 'owner' column on images table
* Publish recent api changes as v2.2
* Update schema descriptions to indicate readonly
* Enable protected properties in gateway
* Property Protection Layer
* Rule parser for property protections
* Scrubber refactoring
* Fix typo in IMAGE\_META\_HEADERS
* Fix localisation string usage
* Notify error not called on upload errors in V2
* Fixes files with wrong bitmode
* Remove unused local vars
* Clean up data when store receiving image occurs error
* Show traceback info if a functional test fails
* Add a storage quota
* Avoid redefinition of test
* Fix useless assertTrue
* emit warning while running flake8 without virtual env
* Fix up trivial License mismatches
* Introduced DB pooling for non blocking DB calls
* Use latest Oslo's version
* Improve the error msg of v2 image\_data.py
* Fix Sphinx warning
* Remove unused import
* test failure induced by reading system config file
* Prefetcher should perform data integrity check
* Make size/checksum immutable for active images
* Remove unused var DEFAULT\_MAX\_CACHE\_SIZE
* Implement image query by tag
* Remove unused import of oslo.config
* Code dedup in glance/tests/unit/v1/test\_registry\_api.py
* Add unit test for migration 012
* Call \_post\_downgrade\_### after downgrade migration is run
* Use \_pre\_upgrade\_### instead of \_prerun\_###
* Perform database migration snake walk test correctly
* redundant conditions in paginate-query
* Refactor glance/tests/unit/v2/test\_registry\_client.py
* Refactor glance/tests/unit/v1/test\_registry\_client.py
* Improve test/utils.py
* Make sure owner column doesn't get dropped during downgrade
* image-delete fires multiple queries to delete its child entries
* glance-replicator: enable logging exceptions into log file
* Make disk and container formats configurable
* Add space in etc/glance-cache.conf
* Removes duplicate options registration in registry clients
* remove flake8 option in run\_tests.sh
* Allow tests to run without installation
* Remove glance CLI man page
* Fix some logic in get\_caching\_iter
* Adding metadata checking to image location proxy layer
* Update .mailmap
* Migrate to PBR for setup and version code
* Interpolate strings after calling \_()
* BaseException.message is deprecated since Python 2.6
* Raise jsonschema requirement
* Text formatting changes
* Using unicode() convert non-English exception message
* ambiguous column 'checksum' error when querying image-list(v2)
* Handle None value properties in glance-replicator
* Fixes Opt types in glance/notifier/notify\_kombu.py
* Add unit test for migration 010
* Sync models with migrations
* Rename requirements files to standard names
* Include pipeline option for using identity headers
* Adding arguments pre-check for glance-replicator
* Add v1 API x-image-meta- header whitelist
* Stub out dependency on subprocess in unit tests
* Allow insecure=True to be set in swiftclient
* Verify if the RPC result is an instance of dict
* Adds help messages to mongodb\_store\_db and mongodb\_store\_uri
* Remove support for sqlalchemy-migrate < 0.7
* Don't rely on prog.Name for paste app
* Simulate image\_locations table in simple/api.py
* Turn off debug logging in sqlalchemy by default
* Glance api to pass identity headers to registry v1
* add doc/source/api in gitignore
* Use cross-platform 'ps' for test\_multiprocessing
* Fix stubs setup and exception message formatting
* Handle client disconnect during image upload
* improving error handling in chunked upload

2013.2.b2
---------

* Adding Cinder backend storage driver to Glance
* File system store can send metadata back with the location
* index checksum image property
* removed unused variable 'registry\_port'
* DB Driver for the Registry Service
* Unit tests for scrubber
* Remove references to clean arg from cache-manage
* Deleting image that is uploading leaves data
* Adding a policy layer for locations APIs
* Add/remove/replace locations from an image
* Adding multiple locations support to image downloading
* Make db properties functions consistent with the DB API
* Adds missing error msg for HTTPNotFound exception
* Allow storage drivers to add metadata to locations
* Fixes image-download error of v2
* On deleting an image, its image\_tags are not deleted
* Sync gettextutils from oslo
* Adding store location proxy to domain
* Notify does not occur on all image upload fails
* Add location specific information to image locations db
* Add custom RPC(Des|S)erializer to common/rpc.py
* use tenant:\* as swift r/w acl
* Add image id to the logging message for upload
* Fix cache delete-all-queued-images for xattr
* Fix stale process after unit tests complete
* Sync install\_venv\_common from oslo
* Fix list formatting in docs
* Fix doc formatting issue
* Ignore files created by Sphinx build
* Use oslo.sphinx and remove local copy of doc theme
* Refactor unsupported default store testing
* Add Sheepdog store
* Fix 'glance-cache-manage -h' default interpolation
* Fix 'glance-cache-manage list-cached' for xattr
* Dont raise NotFound in simple db image\_tag\_get\_all
* Use python module loading to run glance-manage
* Removed unusued variables to clean the code
* Fixes exposing trace during calling image create API
* Pin kombu and anyjson versions
* Do not raise NEW exceptions
* Port slow, overly assertive v1 functional tests to integration tests
* Add a bit of description
* Updated documentation to include notifications introduced in Grizzly
* Make eventlet hub choice configurable
* Don't run store tests without a store!
* Import sql\_connection option before using it
* Fix for unencrypted uris in scrubber queue files
* Fix incorrect assertion in test\_create\_pool
* Do not send traceback to clients by default
* Use Python 3.x compatible octal literals
* Remove explicit distribute depend
* Add missing Keystone settings to scrubber conf
* Sql query optimization for image detail
* Prevent '500' error when admin uses private marker
* Replace openstack-common with oslo in HACKING.rst
* Patch changes Fedora 16 to 18 on install page
* Pass configure\_via\_auth down to auth plugin
* Move sql\_connection option into sqlalchemy package
* Remove unused dictionary from test\_registry\_api.py
* Remove routes collection mappings
* updated content\_type in the exception where it is missing
* python3: Introduce py33 to tox.ini
* Don't make functional tests inherit from IsolatedUnitTest
* Add a policy layer for membership APIs
* Prevent E500 when listing with null values
* Encode headers and params
* Fix pydevd module import error
* Add documentation on reserving a Glance image
* Import strutils from oslo, and convert to it
* Sync oslo imports to the latest version

2013.2.b1
---------

* Fix undefined variable in cache
* Make passing user token to registry configurable
* Respond with 412 after upload if image was deleted
* Add unittests for image upload functionality in v1
* Remove glance-control from the test suite
* Prevent '500' error when using forbidden marker
* Improve unit tests for glance.common package
* Improve unit tests for glance.api.v1 module
* rbd: remove extra str() conversions and test with unicode
* rbd: return image size when asked
* Add qpid-python to test-requires
* tests: remove unused methods from test\_s3 and test\_swift
* Implement Registry's Client V2
* RBD store uses common utils for reading file chunks
* Redirects requests from /v# to /v#/ with correct Location header
* Add documentation for query parameters
* Small change to 'is\_public' documentation
* Fix test\_mismatched\_X test data deletion check
* Add GLANCE\_LOCALEDIR env variable
* Remove gettext.install() from glance/\_\_init\_\_.py
* Implement registry API v2
* Add RBD support with the location option
* Use flake8/hacking instead of pep8
* Use RBAC policy to determine if context is admin
* Create package for registry's client
* Compress response's content according to client's accepted encoding
* Call os.kill for each child instead of the process group
* Improve unit tests for glance.common.auth module
* Convert scripts to entry points
* Fix functional test 'test\_copy\_from\_swift'
* Remove unused configure\_db function
* Don't raise HTTPForbidden on a multitenant environment
* Expand HACKING with commit message guidelines
* Redirects requests from /v# to /v#/
* Functional tests use a clean cached db that is only created once
* Fixes for mis-use of various exceptions
* scrubber: dont print URI of image to be deleted
* Eliminate the race when selecting a port for tests
* Raise 404 while deleting a deleted image
* Fix test redifinitions
* Sync with oslo-incubator copy of setup.py and version.py
* Gracefully handle qpid errors
* Fix Qpid test cases
* Imported Translations from Transifex
* Fix the deletion of a pending\_delete image
* Imported Translations from Transifex
* Imported Translations from Transifex
* Fix functional test 'test\_scrubber\_with\_metadata\_enc'
* Make "private" functions that shouldn't be exported
* Call monkey\_patch before other modules are loaded
* Adding help text to the options that did not have it
* Improve unit tests for glance.api.middleware.cache module
* Add placeholder migrations to allow backports
* Add GridFS store
* glance-manage should not require glance-registry.conf
* Verify SSL certificates at boot time
* Invalid reference to self in functional test test\_scrubber.py
* Make is\_public an argument rather than a filter
* remove deprecated assert\_unicode sqlalchemy attribute
* Functional tests display the logs of the services they started
* Add 'set\_image\_location' policy option
* Add a policy handler to control copy-from functionality
* Fallback to inferring image\_members unique constraint name
* Standardize on newer except syntax
* Directly verifying that time and socket are monkey patched
* Reformat openstack-common.conf
* Fix domain database initialization
* Add tests for image visibility filter in db
* Add image\_size\_cap documentation
* Return 413 when image\_size\_cap exceeded
* Small change to exception handling in swift store
* Remove internal store references from migration 017
* Check if creds are present and not None

2013.1.rc1
----------

* Delete swift segments when image\_size\_cap exceeded
* bump version to 2013.2
* Don't print sql password in debug messages
* fixes use the fact that empty sequences are false
* Handle Swift 404 in scrubber
* Remove internal store references from migration 015
* Pin SQLAlchemy to 0.7.x
* Add unit tests for glance.api.cached\_images module
* Document the os options config for swift store
* Segmented images not deleted cleanly from swift
* Do not return location in headers
* Fix uniqueness constraint on image\_members table
* Declare index on ImageMember model
* Log when image\_size\_cap has been exceeded
* Publish API version 2.1
* Fix scrubber and other utils to use log.setup()
* Switch to final 1.1.0 oslo.config release
* Mark password options secret
* Fix circular import in glance/db/sqlalchemy
* Fix up publicize\_image unit test
* Fix rabbit\_max\_retry
* Fix visibility on db image\_member\_find
* Fix calls to image\_member\_find in tests
* Characterize image\_member\_find
* Retain migration 12 indexes for table image\_properties with sqlite
* Insure that migration 6 retains deleted image property index
* Fix check\_003 method
* Ensure disk\_ and container\_format during upload
* Honor metadata\_encryption\_key in glance domain
* Fix v2 data upload to swift
* Switch to oslo.config
* Update acls in the domain model
* Refactor leaky abstractions
* Remove unused variable 'image\_member\_factory'
* Generate notification for cached v2 download
* A test for concurrency when glance uses sleep
* Update documentation to reflect API v2 image sharing
* v1 api image-list does not return shared images
* Cannot change locations on immutable images
* Update db layer to expose multiple image locations
* Test date with UTC instead of local timezone
* Added better schemas for image members, revised tests
* Add pre and check phases to test migration 006
* Fix response code for successful image upload
* Remove unused imports
* Add pre and check phases to test migration 005
* Add pre and check phases to test migration 004
* Add PostgreSQL support to test migrations
* Enable support for MySQL with test migrations
* Set status to 'active' after image is uploaded
* Removed controversial common image property 'os\_libosinfo\_shortid'
* Parse JSON Schema Draft 10 in v2 Image update
* Redact location from notifications
* Fix broken JSON schemas in v2 tests
* Add migration 021 set\_engine\_mysql\_innodb
* Refactor data migration tests
* Fix migration 016 for sqlite
* Pin jsonschema version below 1.0.0
* Add check for image\_locations table
* Avoid using logging in signal handlers
* monkey\_patch the time module for eventlet
* Remove compat cfg wrapper
* Remove unnecessary logging from migration 019
* Fix migration 015 downgrade with sqlite
* Document db\_auto\_create in default config files
* Update openstack.common
* Extend the domain model to v2 image data
* Add migration 20 - drop images.location
* Add migration 19 - move image location data
* Filter images by status and add visibility shared
* Update oslo-config version
* Sync latest install\_venv\_common.py
* Adding new common image properties
* Use oslo-config-2013.1b3
* Add migration 18 - create the image\_locations table
* Create connection for each qpid notification
* Add migration to quote encrypted image location urls
* Updates OpenStack LLC with OpenStack Foundation
* Allowing member to set status of image membership
* Add an update option to run\_tests.sh
* Use install\_venv\_common.py from oslo
* Add status column to image\_members
* Adding image members in glance v2 api
* Fix issues with migration 012
* Add migration.py based on the one in nova
* Updated\_at not being passed to db in image create
* Fix moker typo in test\_notifier
* Clean dangling image fragments in filesystem store
* Sample config and doc for the show\_image\_direct\_url option
* Avoid dangling partial image on size/checksum mismatch
* Fix version issue during nosetests run
* Adding database layer for image members domain model
* Image Member Domain Model
* Additional image member information
* Adding finer notifications
* Add LazyPluggable utility from nova
* Update .coveragerc
* Removed unnecessary code
* Use more-specific value for X-Object-Manifest header
* Allow description fields to be translated in schema
* Mark password config options with secret
* Update HACKING.rst per recent changes
* Encrypt scrubber marker files
* Quote action strings before passing to registry
* Fixes 'not in' operator usage
* Add to multi-tenant swift store documentation
* Replace nose plugin with testtools details
* Convert some prints to addDetails calls
* Rearrange db tests in prep for testr
* Stop using detailed-errors plugin for nose
* Add \_FATAL\_EXCEPTION\_FORMAT\_ERRORS global
* Fix kwargs in xattr BadDriverConfiguration exc
* Prints list-cached dates in isoformat
* Fail sensibly if swiftclient absent in test
* Initialize CONF properly in store func tests
* Ensure swift\_store\_admin\_tenants ACLs are set
* Remove Swift location/password from messages
* Removed unnecessary code
* Removed unncessary code
* Pull in tarball version fix from oslo
* Updated image loop to not use an enumerator
* Log exception details
* Update version code from oslo
* Revert "Avoid testtools 0.9.25"
* Avoid testtools 0.9.25
* Update glance config files with log defaults
* Sync latest cfg and log from oslo-incubator
* Make v2 image tags test not load system policy
* Replace custom tearDown with fixtures and cleanup
* Update version code from oslo
* Use testtools for unittest base class
* Stub out find\_file... fix policy.json test issue
* Remove unused declaration in images.py
* Add import for filesystem\_store\_datadir config
* Update v1/images DELETE so it returns empty body
* Relax version constraint on Webob-1.0.8
* Set content-length despite webob
* Update common openstack code from oslo-incubator
* Modify the v2 image tags to use domain model
* Fix broken link in docs to controllingservers
* Adding a means for a glance worker to connect back to a pydevd debugger
* Use imported exception for update\_store\_acls
* Fix import order nits
* Verify size in addition to checksum of uploaded image
* Use one wsgi app, one dbengine worker
* Set Content-MD5 after calling webob.Response.\_app\_iter\_\_set
* Modify the v2 image controller to use domain model
* Log error on failure to load paste deploy app
* Configure endpoint\_type and service\_type for swift
* Refactor multi-tenant swift store
* Add registry\_client\_timeout parameter
* Use io.BufferedIOBase.read() instead of io.BytesIO.getvalue()
* Port to argparse based cfg
* wsgi.Middleware forward-compatibility with webob 1.2b1 or later
* Allow running testsuite as root user
* Allow newer boto library versions
* Fixed image not getting deleted from cache
* Updates keystone middleware classname in docs
* v2 API image upload set image status to active
* Use auth\_token middleware from python-keystoneclient
* Add domain proxies that stop unauthorized actions
* Add domain proxies that do policy.enforce checks
* Use 'notifications' as default notification queue name
* Unused variables removed
* Fixed deleted image being downloadable by admin
* Rewrite S3 functional tests
* Add store test coverage for the get\_size method
* Implement get\_size filesystem store method
* Add an image repo proxy that handles notifications
* Fixed Typo
* Return size as int from store get call
* Wrap log messages with \_()
* Add pep8 ignore options to run\_tests.sh
* Fix typo uudiutils -> uuidutils
* Make cooperative reader always support read()
* Add an image proxy to handle stored image data
* Allow for not running pep8
* Refactor where store drivers are initialized
* Audit error logging
* Stop logging all registry client exceptions
* Remove unused imports
* Add note about urlencoding the sql\_connection config opt
* Add an image repo to encapsulate db api access
* Add an image domain model and related helpers
* Fix simple db image\_get to look like sqlalchemy
* Return 403 on images you can see but can't modify
* Fixes is\_image\_visible to not use deleted key
* Ensure strings passed to librbd are not unicode
* Use generate\_uuid from openstack common
* Update uuidutils from openstack common
* Code cleanup: remove ImageAddResult class
* Lowering certain log lines from error to info
* Prevent infinite respawn of child processes
* Make run\_tests.sh run pep8 checks on bin
* Make tox.ini run pep8 checks on bin
* Pep8 fixes to bin/glance\* scripts
* Ensure authorization before deleting from store
* Port uuidutils to Glance
* Delete from store after registry delete
* Unit test remaining glance-replicator methods
* Use openstack common timeutils in simple db api
* Unit test replication\_dump
* pin sqlalchemy to the 0.7 series
* DRY up image fetch code in v2 API
* Return 403 when admin deletes a deleted image
* Pull in a versioning fix from openstack-common
* Fixes deletion of invalid image member
* Return HTTP 404 for deleted images in v2
* Update common to 18 October 2012
* implements selecting version in db sync
* add command "status" to "glance-control"
* Disallow admin updating deleted images in v2 api
* Clean up is\_public filtering in image\_get\_all
* SSL functional tests always omitted
* Fix scrubber not scrubbing with swift backend
* Add OpenStack trove classifier for PyPI
* Disallow updating deleted images
* Unit test replication\_size
* Add noseopts and replace noseargs where needed to run\_test.sh
* Setup the pep8 config to check bin/glance-control
* Change useexisting to extend\_existing to fix deprecation warnings
* Fix fragile respawn storm test
* Fix glance filesystem store race condition
* Add support for multiple db test classes
* Don't parse commandline in filesystem tests
* Improve test coverage for replicator's REST client
* Correct conversion of properties in headers
* Add test for v2 image visibility
* change the default sql connection timeout to 60s
* Add test for v1 image visibility
* FakeAuth not always admin
* Add GLANCE\_TEST\_TMP\_DIR environment var for tests
* Call setup\_s3 before checking for disabled state
* Add insecure option to registry https client
* Clean up pep8 E128 violations
* Rename non-public method in sqlalchemy db driver
* Add image\_member\_update to simple db api
* Multiprocess respawn functional test fix
* Remove unnecessary set\_acl calls
* Clean up pep8 E127 violations
* Remove notifications on error
* Change type of rabbit\_durable\_queues to boolean
* Pass empty args to test config parser
* Document api deployment configuration
* Clean up pep8 E125 violations
* Clean up pep8 E124 violations
* Ensure workers set to 0 for all functional tests
* image\_member\_\* db functions return dicts
* Alter image\_member\_[update|delete] to use member id
* Add test for db api method image\_member\_create
* Add test for image\_tag\_set\_all
* Add rabbit\_durable\_queues config option
* Remove extraneous db method image\_property\_update
* Update docs with modified workers default value
* Replace README with links to better docs
* Remove unused animation module
* Drop Glance Client
* Enable multi-processing by default
* Ensure glance-api application is "greened"
* Clean up pep8 E122, E123 violations
* Clean up pep8 E121 violations
* Fix scrubber start & not scrubbing when not daemon
* Clean up pep8 E502, E711 violations
* Expand cache middleware unit tests
* Change qpid\_heartbeat default
* Don't WARN if trying to add a scheme which exists
* Add unit tests for size\_checked\_iter
* Add functional tests for the HTTP store
* Generalize remote image functional test
* Add filesystem store driver to new func testing
* Add region configuration for swift
* Update openstack-common log and setup code
* Update v2.0 API version to CURRENT
* Set new version to open Grizzly development
* Add s3\_store\_bucket\_url\_format config option
* Ensure status of 'queued' image updated on delete
* Fallback to a temp pid file in glance-control
* Separate glance cache client from main client
* Rewrite Swift store functional tests
* Raise bad request early if image metadata is invalid
* Return actual unicode instead of escape sequences in v2
* Handle multi-process SIGHUP correctly
* Remove extraneous whitespace in config files
* Remove db auto-creation magic from glance-manage
* Makes deployed APIs configurable
* Asynchronously copy from external image source
* Sort UUID lists in test\_image\_get\_all\_owned
* Call do\_start correctly in glance-control reload
* Sync some misc changes from openstack-common
* Sync latest cfg changes from openstack-common
* Exception Handling for image upload in v2
* Fix cache not handling backend failures
* Instantiate wsgi app for each worker
* Require 'status' in simple db image\_create
* Drop glance client + keystone config docs
* Use PATCH instead of PUT for v2 image modification
* Delete image from backend store on delete
* Document how to deploy cachemanage middleware
* Clean up comments in paste files
* WARN and use defaults when no policy file is found
* Encode headers in v1 API to utf-8
* Fix LP bug #1044462 cfg items need secret=True
* Always call stop\_servers() after having started them in tests
* Adds registry logging
* Filter out deleted image properties in v2 api
* Limit simple db image\_create to known image attrs
* Raise Duplicate on image\_create with duplicate id
* Expand image\_create db test
* Add test for nonexistent image in db layer
* Catch pruner exception when no images are cached
* Remove bad error message in glance-cache-manage
* Add missing columns to migration 14
* Adds notifications for images v2
* Move authtoken config out of paste
* Add kernel/ramdisk\_id, instance\_uuid to v2 schema
* Tweak doc page titles
* Drop architecture doc page
* Add link to notifications docs on index
* Remove repeated image-sharing docs
* Tidy up API docs
* Log level for BaseContextMiddleware should be warn
* Raise Forbidden exception in image\_get
* Activation notification for glance v1 api
* Add glance/versioninfo to MANIFEST.in
* HTTPBadRequest in v2 on malformed JSON request body
* PEP8 fix in conf.py
* Typo fix in glance: existant => existent
* Rename glance api docs to something more concise
* Drop deprecated client docs
* Clean up policies docs page
* Remove autodoc and useless index docs
* Add nosehtmloutput as a test dependency
* Remove partial image data when filesystem is full
* Add 'bytes' to image size rejection message
* Add policy check for downloading image
* Convert limiting\_iter to LimitingReader
* Add back necessary import
* Adds glance registry req id to glance api logging
* Make max image size upload configurable
* Correctly re-raise exception on bad v1 checksum
* Return httplib.HTTPResponse from fake reg conn
* Add DB Management docs
* Fix auth cred opts for glance-cache-manage
* Remove unused imports
* Set proper auth middleware option for anon. access
* multi\_tenant: Fix 'context' is not defined error
* Validate uuid-ness in v2 image entity
* v2 Images API returns 201 on image data upload
* Fixes issue with non string header values in glance client
* Fix build\_sphinx setup.py command
* Updates Image attribute updated\_at
* Add policy enforcment for v2 api
* Raise 400 error on POST/PUTs missing request bodies
* Mark bin/glance as deprecated
* Return 201 on v2 image create
* Ignore duplicate tags in v2 API
* Expose 'protected' image attribute in v2 API
* Move to tag-based versioning
* Update restrictions on allowed v2 image properties
* Reveal v2 API as v2.0 in versions response
* Add min\_ram and min\_disk to v2 images schema
* Filter out None values from v2 API image entity
* Refactor v2 images resource unit tests
* Use container\_format and disk\_format as-is in v2
* Make swift\_store\_admin\_tenants a ListOpt
* Update rbd store to allow copy-on-write clones
* Call stop\_servers() in direct\_url func tests
* Drop unfinshed parts of v2 API
* Fix a couple i18n issues in glance/common/auth.py
* Sync with latest version of openstack.common.notifier
* Sync with latest version of openstack.common.log
* Sync with latest version of openstack.common.timeutils
* Sync with latest version of openstack.common.importutils
* Sync with latest version of openstack.common.cfg
* Allows exposing image location based on config
* Do not cache images that fail checksum verfication
* Omit deleted properties on image-list by property
* Allow server-side validation of client ssl certs
* Handle images which exist but can't be seen
* Adds proper response checking to HTTP Store
* Use function registration for policy checks
* fix the qpid\_heartbeat option so that it's effective
* Add links to image access schema
* ^c shouldn't leave incomplete images in cache
* uuid is a silly name for a var
* Support master and slave having different tokens
* Add a missing header strip opportunity
* URLs to glance need to be absolute
* Use with for file IO
* Add swift\_store\_admin\_tenants option
* Update v1/v2 images APIs to set store ACLs
* Use event.listen() instead of deprecated listeners kwarg
* Store context in local thread store for logging
* Process umask shouldn't allow world-readable files
* Make TCP\_KEEPIDLE configurable
* Reject rather than ignore forbidden updates
* Raise HTTPBadRequest when schema validation fails
* Expose 'status' on v2 image entities
* Simplify image and access\_record responses
* Move optional dependencies from pip-requires to test-requires
* Fix dead link to image access collection schema
* Add in missing image collection schema link
* Drop static API v2 responses
* Include dates in detailed image output
* Update image caching middleware for v2 URIs
* Ensure Content-Type is JSON-like where necessary
* Have non-empty image properties in image.delete payload
* Add Content-MD5 header to V2 API image download
* Adds set\_acls function for swift store
* Store swift images in separate containers
* Include chunk\_name in swift debug message
* Set deleted\_at field when image members and properties are deleted
* Use size\_checked\_iter in v2 API
* Honor '--insecure' commandline flag also for keystone authentication
* Make functional tests listen on 127.0.0.1
* Adds multi tenant support for swift backend
* Provide stores access to the request context
* Increase wait time for test\_unsupported\_default\_store
* Match path\_info in image cache middleware
* Dont show stack trace on command line for service error
* Replace example.com with localhost for some tests
* Fix registry error message and exception contents
* Move checked\_iter from v1 API glance.api.common
* Support zero-size image creation via the v1 API
* Prevent client from overriding important headers
* Updates run\_tests.sh to exclude openstack-common
* Use openstack.common.log to log request id
* Update 'logging' imports to openstack-common
* Make get\_endpoint a generic reusable function
* Adds service\_catalog to the context
* Add openstack-common's local and notifier modules
* Making docs pretty!
* Removing 'Indices and tables' heading from docs
* Remove microseconds before time format conversion
* Add bin/glance-replicator to scripts in setup.py
* Initial implementation of glance replication
* Generate request id and return in header to client
* Reorganize context module
* Add openstack.common.log
* Ignore openstack-common in pep8 check
* Keystone dep is not actually needed
* Report size of image file in v2 API
* Expose owner on v2 image entities
* Add function tests for image members
* Allow admin's to modify image members
* Allow admins to share images regardless of owner
* Improve eventlet concurrency when uploading/downloading
* Simplify v2 API functional tests
* Fix IndexError when adding/updating image members
* Report image checksum in v2 API
* Store properties dict as list in simple db driver
* Use PyPI for swiftclient
* Refactor pagination db functional tests
* Combine same-time tests with main db test case
* Add retry to server launch in respawn test
* Reorder imports by full import path
* Adds /v2/schemas/images
* Implement image filtering in v2
* Include all tests in generated tarballs
* Allow CONF.notifier\_strategy to be a full path
* Add image access records schema for image resources
* Remove image members joinedload
* Clean up image member db api methods
* Retry test server launch on failure to listen
* Make image.upload notification send up2date metadata
* Added schema links logic to image resources
* Simplify sqlalchemy imports in driver
* Reduce 'global' usage in sqlalchemy db driver
* Standardize logger instantiation
* Add link descriptor objects to schemas
* Fix exception if glance fails to load schema
* Move the particulars of v2 schemas under v2
* Remove listing of image tags
* Set up Simple DB driver tests
* Trace glance service on launch failure
* Revert "Funnel debug logging through nose properly."
* Capture logs of failing services in assertion msg
* Remove some more glance-cache PasteDeploy remnants
* Fix typo of conf variable in config.py
* Remove unused imports in db migrations
* Increase timeout to avoid spurious test failures
* adds missing import and removes empty docstring
* Convert db testing to use inheritance
* Clean up .pyc files before running tests
* make roles case-insensitive
* Funnel debug logging through nose properly
* Fix typo of swift\_client/swiftclient in store\_utils
* Stop revealing sensitive store info
* Avoid thread creation prior to service launch
* Don't use PasteDeploy for scrubber and cache daemons
* Remove some unused glance-cache-queue-image code
* Implement pagination and sorting in v2
* Turn off SQL query logging at log level INFO
* Default db\_auto\_create to False
* Use zipballs instead of git urls
* Add metadata\_encryption\_key to glance-cache.conf
* Fix help messages for --debug
* Use python-swiftclient for swift store
* Fix to not use deprecated response.environ any more
* Import db driver through configuration
* Move RequestContext.is\_image\_\* methods to db layer
* Begin replacement of sqlalchemy driver imports
* webob exception incorrectly used in v1 images.py
* Add tests and simplify GlanceExceptions
* Update default values for known\_stores config
* Remove the conf passing PasteDeploy factories
* Port remaining code to global conf object
* Made changes to adhere to HACKING.rst specifications
* Use openstack-common's policy module
* Re-add migrate.cfg to tarball
* Implements cleaner fake\_request
* Create 'simple' db driver
* Glance should use openstack.common.timeutils
* Clean up a few ugly bits from the testing patch
* Fix typo in doc
* Add cfg's new global CONF object
* fix side effects from seekability test on input file
* Just use pure nosetests
* Fix coverage jobs. Also, clean up the tox.ini
* Move glance.registry.db to glance.db
* Glance should use openstack.common.importutils
* Add read-only enforcement to v2 API
* Add a base class for tests
* Expose tags on image entities in v2 API
* Add additional info. to image.delete notification
* Expose timestamps on image entities in v2 API
* Sync with latest version of openstack.common.cfg
* Enable anonymous access through context middleware
* Add allow\_additional\_image\_properties
* Fix integration of image properties in v2 API
* Lock pep8 at v1.1
* Lock pep8 to version 0.6.1 in tox.ini
* Fail gracefully if paste config file is missing
* Add missing files to tarball
* Remove unused imports in setup.py
* Adds sql\_ config settings to glance-api.conf
* Correct format of schema-image.json
* Fix paste to correctly deploy v2 API
* Add connection timeout to glance client
* Leave behind sqlite DB for red functional tests
* Support DB auto-create suppression
* Fix glance-api process leak in respawn storm test
* Stubout httplib to avoid actual http calls
* Backslash continuation removal (Glance folsom-1)
* Implement image visibility in v2 API
* Add min\_ram and min\_disk to bin/glance help
* Implements blueprint import-dynamic-stores
* Add credential quoting to Swift's StoreLocation
* Combine v2 functional image tests
* Simplify JSON Schema validation in v2 API
* Expose deployer-specific properties in v2 API
* Test that v2 deserializers use custom schemas
* Load schema properties when v2 API starts
* Support custom properties in schemas for v2 API
* Fix tiny format string nit in log message
* Fixes bug 997565
* Allow chunked image upload in v2 API
* wsgi: do not respawn on missing eventlet hub
* Implement v2 API access resource
* Disallow image uploads in v2 API when data exists
* Implement v2 API image tags
* Use ConfigOpts.find\_file() for policy and paste
* Implement image data upload/download for v2 API
* Use sdist cmdclass from openstack-common
* glance-api: separate exit status from
message * Update noauth caching pipeline to use unauth-ctx * Return 204 from DELETE /v2/images/ * Add localization catalog and initial po files to Glance. Fix bug 706449 * Add /v2 to sample glance-api-paste.ini * Basic functionality of v2 /images resource * Split noauth context middleware into new class * Add -c|--coverage option to run\_tests.sh * Convert glance to glance/openstack/common/setup.py * Update glance to pass properly tenant\_name * Cleanup authtoken examples * Support for directory source of config files * Support conf from URL's with versions * Auto generate AUTHORS file for glance * Integrate openstack-common using update.py * Fixes LP #992096 - Ensure version in URL * Begin functional testing of v2 API * Fixes LP #978119 - cachemanagement w/o keystone * Omit Content-Length on chunked transfer * Fix content type for qpid notifier * Remove \_\_init\_\_.py from locale dir * Fix i18n in glance.notifier.notify\_kombu * Override OS\_AUTH\_URL when running functional tests * remove superfluous 'pass' * fix bug lp:980892,update glance doc * Add a space to fix minor typo in glance help * Suppress pagination on non-tty glance index * Kill glance-api child workers on SIGINT * Ensure swift auth URL includes trailing slash * add postgresql support to test\_migrations * 012\_id\_to\_uuid: Also convert ramdisk + kernel ids * API v2 controller/serialization separation * search for logger in PATH * Set install\_requires in setup.py * Minor grammar corrections * Bootstrapping v2 Image API implementation * Fix db migration 12 * Remove unused imports * Reorganize pipelines for multiple api versions * Skip test depending on sqlite3 if unavailable * Defaulted amazon disk & container formats * Compile BigInteger to INTEGER for sqlite * Updated RST docs on containers, fewer references to OVF format * rename the right index * Reject excessively long image names * Test coverage for update of image ownership * Add MySQLPingListener() back * Add support for auth version 2 * 
Run version\_control after auto-creating the DB * Allow specifying the current version in 'glance-manage version\_control' * Publish v2 in versions responses * Allow yes-like values to be interpreted as bool * Support owner paramater to glance add * Adding versioned namespaces in test dir * Typo * Ensure functional db connection in configure\_db() * Set content\_type for messages in Qpid notifier * Avoid leaking secrets into config logging * Fixes lp959670 * Send output of stty test cmd to stderr * Use unique per-test S3 bucket name * Specify location when creating s3 bucket * Open Folsom * Update 'bin/glance add' docstring \*\_format options * Ensure all unauthorized reponses return 403 * Avoid leaking s3 credentials into logs * Avoid glance-logcapture displaying empty logs * Add 'publicize\_image' policy * Fixed db conn recovery issue. Fixes bug 954971 * tox tests with run\_tests.sh instead of nosetests * Don't use auth url to determine service protocol * Use tenant/user ids rather than names * Update context middleware with supported headers * Fixes LP #957401 - Remove stray output on stderr * check connection in Listener. refer to Bug #943031 * Avoid tests leaking empty tmp dirs * Remove keystone.middleware.glance\_auth\_token * Updating version of Keystone * Add policy checks for cache manage middleware * nose plugin to capture glance service logs * Add new UnexpectedStatus exception * Do not error when service does not have 'type' * Disambiguates HTTP 401 and HTTP 403 in Glance. 
Fixes bug 956513 * Add admin\_role option * Remove references to admin\_token * Remove glance-cache-queue-image * Remove dependency on apiv1app from cachemanage * Return 403 when policy engine denies action * Add error checking to get\_terminal\_size * Well-formed exception types for 413 & 503 * Ensure copy and original image IDs differ * Include babel.cfg and glance.pot in tarballs * Updating authentication docs * General cleanup * General docs cleanup * Remove todolist from docs * Add note about cache config options * Change CLIAuth arg names * Retry sendfile on EAGAIN or EBUSY * Add module name to ClientException * Update cli docs * Remove 'community' doc page * Removing registry spec from docs * Fixes LP#934492 - Allow Null Name * Refresh SSL cfg after parsing service catalog entry * Fix typo in tox.ini * Glance cache updates to support Keystone Essex * updates man page for glance-scrubber. this time with extra pep8 scrubbing powers. Fixes bug 908803 * Update tox.ini for jenkins * Replaced use of webob.Request.str\_param * Update paste file to use service tenant * Update bin/glance to allow for specifying image id * Fix deprecated warnings * Remove trailing whitespaces in regular file * add git commit date / sha1 to sphinx html docs * Glance skip prompting if stdin isn't a tty * Allow region selection when using V2 keystone * Disallow file:// sources on location or copy-from * Progress bar causes intermittent test failures * Added first step of babel-based translations * Complete fix for modification of unowned image * Fix update of queued image with location set * Support copy-from for queued images * Add checksum to an external image during add * Align to jenkins tox patterns * Fix MANIFEST.in to include missing files * Fix exception name * Correct kernel/ramdisk example in docs * Create sorting/pagination helper function * Support new image copied from external storage * blueprint progressbar-upload-image * Avoid TestClient error on missing '\_\_mro\_\_' 
attr * disk/container\_format required on image activate * Require container & disk formats on image create * Support non-UTC timestamps in changes-since filter * Return 503 if insufficient permission on filestore * Adds README.rst to the tarball * Ensure StorageFull only raised on space starvation * Require auth URL if keystone strategy is enabled * 003\_add\_disk\_format.py: Avoid deadlock in upgrade * Function uses 'msg' not 'message' * Fix paging ties * Ensure sane chunk size when pysendfile unavailable * New -k/--insecure command line option * Add a generic tox build environment * Fix pep8 error * Update Authors file * Implement blueprint add-qpid-support * Include glance/tests/etc * Don't fail response if caching failed * Force auth\_strategy=keystone if --auth\_url or OS\_AUTH\_URL is set * Make Glance work with SQLAlchemy 0.7 * Use sendfile() for zero-copy of uploaded images * Respawn glance services on unexpected death * Blueprint cli-auth: common cli args * Prep tox config for jenkins builds * Get rid of DeprecationWarning during db migration * Add --capture-output option to glance-control * Add filter validation to glance API * Fixes LP 922723 * Typofix is\_publi -> is\_public * Add --await-child option to glance-control * Fix Bug #919255 * Cap boto version at 2.1.1 * Simplify pep8 output to one line per violation * Handle access restriction to public unowned image * Check service catalogue type rather than name * Restore inadvertantly dropped lines * Include the LICENSE file in the tarball * Change xattr usage to be more broadly compatible * Fix mixed usage of 's' and 'self' * Don't force client to supply SSL cert/key * Few small cleanups to align with Nova * Adds documentation for policy files * Client.add\_image() accepts image data as iterable * More flexible specification of auth credentials * glance-api fails fast if default store unsupported * Bug #909574: Glance does not sanity-check given image size on upload * glance-control need not locate a 
server's config file (lp#919520) * Bug#911599 - Location field wiped on update * Return 400 if registry returns 400 * Set url's on AuthBadRequest exceptions * Add policy checking for basic image operations * Swallow exception on unsupported image deletion * Ensure we only send a single content-type header * Multi-process Glance API server support * Set size metadata correctly for remote images * Make paste.ini file location configurable * Avoid the need for users to manually edit PasteDeploy config in order to switch pipelines * Split out paste deployment config from the core glance \*.conf files into corresponding \*-paste.ini files * Fixes LP Bug#913608 - tests should be isolated * Set correct Content-Length on cached remote images * Implement retries in notify\_kombu * Return correct href if bind\_host is 0.0.0.0 * Remove assertDictEqual for python 2.6 compatibility * Add optional revision field to version number * LP Bug#912800 - Delete image remain in cache * Add notifications for sending an image * Bug #909533: Swift uploads through Glance using ridiculously small chunks * Add Fedora clauses to the installing document * Remove doc/Makefile * Fixes incorrect URI scheme for s3 backend * Add comments for swift options in glance-api.conf * Split notification strategies out into modules * fix bug 911681 * Fix help output for inverse of BoolOpt * PEP8 glance cleanup * Add more man pages * Set execute permissions on glance-cache-queue-image * Add a LICENSE file * Add ability to specify syslog facility * Install an actual good version of pip * Bug #909538: Swift upload via Glance logs the password it's using * Add tox.ini file * Synchronize notification queue setup between nova and glance * Fixes keystone auth test failures in python 2.6 * Removed bin/glance's TTY detection * Fixes request with a deleted image as marker * Adds support for protecting images from accidental deletion * Fix for bug 901609, when using v2 auth should use /v2.0/tokens path * Updated 
glance.registry.db for bug 904863 * Removing caching cruft from bin/glance * Fixes LP Bug#901534 - Lost properties in upload * Update glance caching middleware so doesn't try to process calls to subresources. Fixes LP bug #889209 * Ensure functional tests clean up their images * Remove extra swift delete\_object call * Add missing files to tarball * Allow glance keystone unit tests to run with essex keystone * Convert glance to use the new cfg module * Add new cfg module * Lock keystone to specific commit in pip-requires * Add the missing column header to list-cached * Rename 'options' variables to 'conf' * Add generic PasteDeploy app and filter factories * Secondary iteration of fix for bug 891738 * Rename .glance-venv to .venv * Fix for bug 900258 -- add documentation for '--url' glance cli option * Add --url option to glance cli * Fixes LP Bug#850377 * Fixes LP Bug#861650 - Glance client deps * Added some examples for "glance add" * Bug#894027: use correct module when building docs * Adds option to set custom data buffer dir * Fix bug 891738 * Added missing depend on nosexcover * Removed some cruft * Fixes LP Bug#837817 - bin/glance cache disabled * Separating add vs general store configuration * Fixes LP Bug#885341 - Test failure in TestImageCacheManageXattr * Making prefetcher call create\_stores * Fix handle get\_from\_backend returning a tuple * Casting foreign\_keys to a list in order to index into it * Using Keystone's new port number 35357 * Adding admin\_token to image-cache config * Removing assertGreaterEqual * Correcting image cleanup in cache drivers * Adding tests to check 'glance show ' format * Update 'glance show' to print a valid URI. 
Fixes bug #888370 * Gracefully handle image\_cache\_dir being undefined * Remove unused versions pipeline from PasteDeploy config * Allow glance-cache-\* find their config files * Add some test cases for glance.common.config * Fix name error in cache middleware * Check to make sure the incomplete file exists before moving it during rollback. Fixes bug #888241 * Fix global name 'sleep' is not defined in wsgi.py. Fixes bug #888215 * Fixes LP Bug#878411 - No docs for image cache * Fix typo in the cached images controller * load gettext in \_\_init\_\_ to fix '\_ is not defined' * Adds option to encrypt 'location' metadata * Fix LP Bug#885696 two issues with checked\_iter * Fix Keystone API skew issue with Glance client * Fixed test failure in Python 2.6 * Glance redirect support for clients * Fixes LP Bug#882185 - Document Swift HTTPS default * Fixes LP Bug#884297 - Install docs should have git * Add "import errno" to a couple of files * Consolidate glance.utils into glance.common.utils * Correcting exception handling in glance-manage * More cache refactoring - Management Middleware * Fixes LP Bug#882585 - Backend storage disconnect * Convert image id value to a uuid * Remove 'location' from POST/PUT image responses * Removing glance-upload * Adds Driver Layer to Image Cache * Removed 'mox==0.5.0' and replaced with just 'mox' in tools/pip-requires * Removing duplicate mox install in pip-requires * Add .gitreview config file for gerrit * Making TCP\_KEEPIDLE socket option optional * Overhauls the image cache to be truly optional * Fixing functional tests that require keystone * Fixes LP Bug#844618 - SQLAlchemy errors not logged * Additions to .gitignore * Better document using Glance with Keystone * Fixes LP Bug#872276 - small typo in error message * Adds SSL configuration params to the client * Increases test coverage for the common utils * Refactoring/cleanup around our exception handling * Port Authors test to git * Add RBD store backend * Fixes LP Bug#860862 - 
Security creds still shown * Extract image members into new Glance API controller * Refactoring registry api controllers * Returning functionality of s3 backend to stream remote images * Make remote swift image streaming functional * Improving swfit store uri construction * Fixes LP Bug #850685 * Do not allow min\_ram or min\_disk properties to be NULL and if they are None, make sure to default to 0. Fixes bug 857711 * Implementing changes-since param in api & registry * Documenting nova\_to\_os\_env.sh tool * Added min\_disk and min\_ram properties to images Fixes LP Bug#849368 * Fixing bug 794582 - Now able to stream http(s) images * Fixes LP Bug#755916 - Location field shows creds * Fixes LP Bug #804429 * Fixes Bug #851216 * Fixes LP Bug #833285 * Fixes bug 851016 * Fix keystone paste config for functional tests * Updating image status docs * \* Scrubber now uses registry client to communicate with registry \* glance-api writes out to a scrubber "queue" dir on delete \* Scrubber determines images to deleted from "queue" dir not db * Fixes LP Bug#845788 * Open Essex * Remove PWD from possible config\_file\_dirs * Update paste config files with keystone examples. 
see ticket: lp839559 * Adding Keystone support for Glance client * Fix cached-images API endpoint * Bug fix lp:726864 * Fixes Bug: lp825024 * Add functional tests * Switch file based logging to WatchedFileHandler for logrotate * Fixes LP Bug #827660 - Swift driver fail 5G upload * Bug lp:829064 * Bug lp:829654 * Update rfc.sh to use 'true' * Addresses glance/+spec/i18n * Addresses glance/+spec/i18n * Add rfc.sh for git review * Add support for shared images * Add notifications for uploads, updates and deletes * Bug Fix lp:825493 * Bug fix lp:824706 * Adds syslog support * Fixes image cache enabled config * Improves logging by including traceback * Addresses glance/+spec/i18n * casting image\_id to int in db api to prevent false matching in database lookups * Addresses Bug lp:781410 * Removes faked out datastore entirely, allowing the DB API to be unit tested * Consolidates the functional API test cases into /glance/tests/functional/test\_api.py, adds a new Swift functional test case, verified that it works on Cloud Files with a test account * breaking up MAX\_ITEM\_LIMIT and making the new values configurable * Add @skip\_if\_disabled decorator to test.utils and integrate it into the base functional API test case. The S3 functional test case now uses test\_api.TestApi as its base class and the setUp() method sets the disabled and disabled\_message attributes that the @skip\_if\_disabled decorator uses * Adds swift\_enable\_snet config * Fixes bug lp:821296 * Detect python version in install\_venv * Implemented @utils.skip\_test, @utils.skip\_unless and @utils.skip\_if functionality in glance/test/utils.py. Added glance/tests/unit/test\_skip\_examples.py which contains example skip case usages * Changed setup.py to pull version info from git * Removes the call to webob.Request.make\_body\_seekable() in the general images controller to prevent the image from being copied into memory. 
In the S3 controller, which needs a seekable file-like object when calling boto.s3.Key.set\_contents\_from\_file(), we work around this by writing chunks of the request body to a tempfile on the API node, then stream this tempfile to S3 * Make sure we're passing the temporary file in a read-mode file descriptor to S3 * Removes the call to webob.Request.make\_body\_seekable() in the general images controller to prevent the image from being copied into memory. In the S3 controller, which needs a seekable file-like object when calling boto.s3.Key.set\_contents\_from\_file(), we work around this by writing chunks of the request body to a tempfile on the API node, then stream this tempfile to S3 * - removed curl api functional tests - moved httplib2 api functional tests to tests/functional/test\_api.py * merging trunk * Make tests a package under glance * removing curl tests and moving httplib2 tests * Move tests under the glance namespace * Add filter support to bin/glance index and details calls * merging trunk * Update registry db api to properly handle pagination through sorted results * Our code doesn't work with python-xattr 0.5.0, and that's the version installed in RH/Centos :( Andrey has updated the RPM config to specify 0.6.0, and this does the same to pip-requires * Replaced occurances of |str(e)| with |"%s" % e| * First round of refactoring on stores * Remove expected\_size stuff * Make calling delete on a store that doesn't support it raise an exception, clean up stubout of HTTP store and testing of http store * adding sort\_key/sort\_dir to details * merging lp:~rackspace-titan/glance/registry-marker-lp819551 * adding sort\_key/sort\_dir params * adding --fixes * adding complex test cases to recreate bug; updating db api to respect marker * Add configuration check for Filesystem store on configure(), not every call to add() * Refactor S3 store to make configuration one-time at init versus every method call invocation * Refactor Swift store to make 
configuration one-time at init versus every method call invocation * Forgot to add a new file.. * Refactors stores to be stateful: * Make sure xattr>=0.6.0 in pip-requires * updating documentation * making limit option an integer * updating broken tests * adding limit/marker to bin/glance details call * adding limit/marker params to bin/glance index * merging trunk * Use of "%default" in help string does not work, have to use "%(default)s". Per the 4th example http://docs.python.org/dev/library/argparse.html#prog * Added nose-exclude to pip-requires * Installed nose-exclude, ./run\_tests.sh --unittests-only add '--exclude-dir=tests/functional' to NOSEARGS * This one has been bugging me for a while, finally found out how to use the local default variable in the help string * adding --fixes to commit * Replaced occurances of |str(e)| with |"%s" % e| * Completes the S3 storage backend. The original code did not actually fit the API from boto it turned out, and the stubs that were in the unit test were hiding this fact * Fix for boto1.9b issue 540 (http://code.google.com/p/boto/issues/detail?id=540) * Remove unnecessary hashlib entry in pip-requires * Add myself to Authors (again) * hashlib exists all of the way back to python 2.5, there's no need to install an additional copy * Adds image\_cache\_enabled config needed to enable/disable the image-cache in the glance-api * Add more unit tests for URI parsing and get\_backend\_class() (which is going away in refactor-stores branch, but oh well..) * Added unit tests for swift\_auth\_url @property. It was broken. startwith('swift+http') matches swift+https first * Don't tee into the cache if that image is already being written * Re-add else: raise * Final fixes merging Rick's swift\_auth\_url @property with previous URI parsing fixes that were in the S3 bug branch.. 
* merge trunk * This updates the pep8 version in pip-requires and updates run\_tests.sh to provide a '-p' option that allows for just pep8 to be run * Adding back image\_cache\_enabled config option for glance-api * Don't tee same image into cache multiple times * Fixes two things: * adding run\_tests.sh -p * PEP8 whitespace fix * Swift client library needs scheme * Add tests for bad schemes passed to get\_backend\_class() * Add tests for bad URI parsing and get\_backend\_class() * Include missing bin/glance-scrubber in tarball * Include bin/glance-scrubber in tarball binaries * One more auth\_tok-related change, to make it easier for nova to use the client without violating any abstraction boundaries * Add fix for Bug #816386. Wait up to 5 min for the image to be deleted, but at least 15 seconds * remove superfluous if statement * Loop up to 5 min checking for when the scrubber deletes * Typo in error condition for create\_bucket\_on\_put, make body seekable in req object, and remove +glance from docs and configs * Add functional test case for checking delete and get of non-existing image * New local filesystem image cache with REST managment API * PEP8 Fixes * Using DELETE instead of POST reap\_invalid, reap\_stalled * Forgot to put back fix for the get\_backend\_class problem.. * Adding logging if unable to delete image cache file * Add test case for S3 s3\_store\_host variations and fixes for URL bug * Ensure image is active before trying to fetch it * Boy, I'm an idiot...put this in the wrong branch directory.. 
* Handling ZeroDivision Error * Using alternate logging syntax * Missing import of common.config in S3 driver * Tighten up file-mode handling for cache entry * Adding request context handling * Merging trunk * Fixed review stuff from Brian * Allow delaying the actual deletion of an image * have the scrubber init a real context instead of a dict * merge trunk * Adds authentication middleware support in glance (integration to keystone will be performed as a piece of middleware extending this and committed to the keystone repository). Also implements private images. No limited-visibility shared image support is provided yet * Take out extraneous comments; tune up doc string; rename image\_visible() to is\_image\_visible(); log authorization failures * use runs\_sql instead of hackery * Updating setup.py per bin/image\_cache removal * Removing bin/image\_cache directory * Removing cache enabled flag from most confs * Removing imagecache from default WSGI pipeline * Allow plugging in alternate context classes so the owner property and the image\_visible() method can be overridden * Make a context property 'owner' that returns the tenant; this makes it possible to change the concept of ownership by using a different context object * Unit tests for the context's image\_visible() routine * We don't really need elevate().. 
* Merging in adding\_image\_caching * Importing module rather than function * PEP 8 fixes * Adding reap stalled images * Returning number of files deleted by cache-clear * Returning num\_reaped from reap\_invalid * Moving bin to image\_cache/ * Fixing comment * Adding reaper script * Adding percent done to incomplete and invalid image listing * Renaming tmp\_path to incomplete\_path * Renaming tmp\_path to incomplete\_path * Renaming purge\_all clear, less elegant variation * Refactor to use lookup\_command, so command map is used in one place * Refactoring to use same command map between functions * Renaming to cache-prefetching * Renaming to cache-prefetch * Renaming to cache-purge-all * Renaming to cache-purge * Renaming to cache-invalid * Beginning to normalize names * Refactoring out common code * Refactoring prefetch * Refactoring purge * Refactoring purge\_all * Refactoring listing of prefetching images * Using querystring params for invalid images * Link incoming context with image owner for authorization decisions * How in the world did I manage to forget this? \*sigh\* * Make tests work again * merge trunk * pull-up from trunk * This patch: * PEP8 nit * Added fix for Bug #813291: POST to /images setting x-image-meta-id to an already existing image id causes a 500 error * One more try.. * Yet another attempt to fix URIs * Add in security context information * Moving cached image list to middleware * Initial work on moving cached\_images to WSGI middleware * API is now returning a 409 error on duplicate POST. I also modified the testcase to expect a 409 response * Add owner to database schema * Fix URI parsing on MacOSX - Python 2.6.1 urlparse bugs * Namespacing xattr keys * PEP8 fixes * Added 3 tests in tests/functional/test\_httplib2\_api.py to validate is\_public filtering works * left in 2 fixes.. 
removing redundant fix * If meta-data contains an id field, pass it to \_image\_update() * Adding functional test to show bug #813291 * fixed an inline comment * removed pprint import, and added check for other 3 images to make sure is\_public=True * Added 3 tests to validate is\_public filtering works * Completed rewrite of tests/functional/test\_curl\_api.py using httplib2 * Changes the default filtering of images to only show is\_public to actually use a default filter instead of hard coding. This allows us to override the default behavior by passing in a new filter * removing pprint import * completed rewrite of test\_ordered\_images().. this completes rewrite of test\_curl\_api using httplib2 * test\_ordered\_images() missing closing self.stop\_servers() * finished rewrite of test\_filtered\_images() * add tests and make None filters work * Change default is\_public = True to just set a default filter instead of hard coding so it can be overridden * make the tests work with new trunk * merge trunk * Refactoring PrettyTable so it doesn't print the lines itself * Adding pruner and prefetcher to setup.py * Removing extraneous text * PEP 8 fixes * Adding prefetching list to bin/glance * More cleanups * Adding prefetching of images * Overhaul the way that the store URI works. 
We can now support specifying the authurls for Swift and S3 with either an http://, an https:// or no prefix at all * Typo fix * Removing test exception * PEP 8 fixes * Adding Error to invalid cache images * Show invalid images from bin/glance * Improving comments * Cleaning up cache write * Moving xattrs out to utils * Clip and justify columns for display * Including last accessed time in cached list * Adding more comments * Adding hit counter * Pruning invalid cache entries after grace period * Clear invalid images when purging all cached images * Rollback by moving images to invalid\_path * Improving comments * PEP8 fixes * Adding cached image purge to bin/glance * Adding purge all to bin/glance * Adding catch\_error decorator to bin/glance * Adding 'cached' command to bin/glance * Write incomplete files to tmp path * Adding purge\_all, skip if set if xattrs arent supported * Adding purge cache API call * Adding API call to query for cache entries * Create bin/glance-pruner * Adding image\_caching * rewrote test\_traceback\_not\_consumed(), working on test\_filtered\_images() * Only changes is reverting the patch that added migration to configure\_db() and resets the in-memory SQLite database as the one used in functional testing. Yamahata's commits were unmodified.. * Reverts commit that did db migration during configure\_db() and makes functional tests use in-memory database again. The issues we were seeing had to do with the timeout not being long enough when starting servers with disk-based registry databases and migrate taking too long when spinning up the registry server... this was shown in almost random failures of tests saying failure to start servers. 
* Rather than increase the timeout from 3 seconds, I reverted the change that runs migrate on every startup and cut the total test duration down about 15 seconds
* merged glance trunk
* updated Authors
* Resolves bug lp:803260, by adding a check to ensure req.headers['Accept'] exists before it gets assigned to a variable
* run\_tests.py: make test runner accepts plugins
* run\_tests.py: make run\_tests.py work
* Fix the poor error handling uncovered through bug in nova
* Added stop\_servers() to the end of the test cases
* adding testing & error handling for invalid markers
* removed pprint import
* removed extra space on test\_queued\_process\_flow method definition
* removing commented out line
* merged in lp:~jshepher/glance/functional\_tests\_using\_httplib2\_part2
* applied requested fix in merge-prop
* Removing ordering numbers from the test cases, per jay pipes
* cleaning up the 'no accept headers' test cases. this should fail until Bug lp:803260 is resolved
* Cleaning up docstring spacing
* rewrite of test\_size\_greater\_2G\_mysql from test\_curl\_api.py using httplib2. All tests currently pass
* completed rewrite of test\_003\_version\_variations. bug lp:803260 filed about step #0, and noted as a comment in code
* Fix for bug 803188. This branch also proposed for merging into trunk
* miss-numbering of steps
* fixing pep8 violation
* Added a check to ensure req.headers['Accept'] exists before it gets assigned to a variable. All unit/functional tests pass with this patch
* half way done with rewrite of test\_003\_version\_variations.. step #0 causes a 500 error unless we supply an Accept header
* Prevent query params from being set to None instead of a dict
* removing rogue print
* fixing issue where filters are set to None
* Backport for bug 803055
* rewrote test\_002\_queued\_process\_flow from test\_curl\_api.py, all 6 steps pass against trunk revno:146
* Backport for bug 803055
* Prevent clients from adding query parameters set to None
* ignores None param values passed to do\_request
* cleaning up docstrings
* merging trunk
* docstring
* Added sort\_key and sort\_dir query params to apis and clients
* fixing one last docstring
* docstrings\!
* unit/test\_config.py: make it independent on sys.argv
* run\_tests.py: make test runner accepts plugins
* reverting one import change; another docstring fix
* docstring
* Switch image\_data to be a file-like object instead of bare string in image creating and updating Without this Glance loads all image into memory, then copies it one time, then writes it to temp file, and only after all this copies image to target repository
* Add myself to Authors file
* cleaning up None values being passed into images\_get\_all\_public db call
* adding base client module
* restructuring client code
* merging trunk
* Explicitly set headers rather than add them
* fixing httplib2 functional test that was expecting wrong content-type value
* merging trunk
* rewrite of test\_get\_head\_simple\_post from tests/functional/test\_curl\_api.py using httplib2
* adding assert to check content\_type in GET /images/ test
* Explicitly setting Content-Type, Content-Length, ETag, Location headers to prevent duplication
* Bug #801703: No logging is configured for unit tests
* Bug #801703: No logging is configured for unit tests
* Change image\_data to body\_file instead of body
* reset \_MAKER every test and make sure to stop the servers
* Trunk merge, changed returned content-type header from 'application/octet-stream' to 'text/html; charset=UTF-8, application/octet-stream'
* yea python strings
* updated main docstring, as it was directly coppied from test\_curl\_api.py
* merged trunk
* refactoring for Jay
* make image data a constant
* Fixes build failures due to webob upgrade. Updated pop-requires as well
* upgrading webob and fixing tests
* - refactoring wsgi code to divide deserialization, controller, serialization among different objects - Resource object acts as coordinator
* updating client docs
* fixing bad request error messages
* making SUPPORTED\_\* lists into tuples
* slight refactoring
* updating docs
* adding ordering support to glance api
* adding support to registry server and client for sort\_key and sort\_dir params
* re-ordered imports, using alpha-ordering
* removing unnecessary unittest import
* moved httplib2 tests to their own test case file, and uncommented md5 match
* updating docs; adding support for status filter
* adding query filters to bin/glance details
* adding query filters to bin/glance index
* forgot to remove pprint import
* adding hashlib as a dependency to pip-requires (not 100% sure it is not part of the base install though)
* fixed pep8 violation
* rewote the test #7 - #11 for testcase (test\_get\_head\_simple\_post)
* refactoring for Brian
* refactoring from Rick's comments
* Added httplib2 dependency to tools/pip-requires
* rewriting functional tests to utilize httplib2 instead of curl
* make sure it runs as a daemon for the tests
* default to no daemon
* also allow for daemon in the config file so that we can test it easier
* default to non-daemon mode
* change order of paramaters and make event optional
* initial refactoring from Jay's comments
* remove eventlet import and leftover function from previous refactoring
* remove file that got resurrected by accident
* fixed test case
* add functional tests of the scrubber and delayed\_delete
* start the scrubber in addition to the api and registry
* add glance-scrubber to glance-control
* call it a Daemon, cuz it is
* Update Authors
* add the function to the stubs
* cleanup
* adding tests for wsgi module
* removing rogue print
* further refactoring
* adding refactored wsgi code from nova; moving registry api to new wsgi
* delayed scrubbing now works
* add the scrubber startup script
* remove unnecessary option
* add pending\_delete to stub api
* pep8 fixed
* pep8 fixes
* pass in the type we want so it gets converted properly
* self leaked ;(
* only return the results that we need to act on
* allow passing of time to get only results earlier than the time'
* server and scrubber work
* update the docstring to reflect current
* pass in a wakeup\_time for the default time between database hits
* start making the server that will periodicly scrub
* Config file for the scrubber. We make our own connection to the db here and bypass using the registry client so we don't have to expose non-public images over the http connection
* make the commits
* Add webob>=1.0.7 requirement to tools/pip-requires
* all delayed deletes will be going through a new service, if delayed\_delete is False, then delete it right away, otherwise set it to pending\_delete
* add scrub file
* set the image to pending delete prior to scheduling the delete
* refactor a bit so the db gets updated as needed and we only trigger the delay if the config option is set
* add scheduled\_delete\_from\_backend which delays the deletion of images for at least 1 second
* don't delete directly but schedule deletion
* add the api function to get the images that are pending deleteion
* add in delayed delete options
* Add workaround for Webob bug issue #12 and fix DELETE operation in S3 where URL parsing was broken
* Add ability to create missing s3 bucket on first post, similar to Swift driver
* Adding support for marker/limit query params from api, through registry client/api, and implementing at registry db api layer
* Bug #787296: test\_walk\_versions fails with SQLalchemy 0.7
* OK, fixes the issue where older versions of webob.Request did not have the body\_file\_seekable attribute. After investigation, turned out that webob.Request.make\_body\_seekable() method was available in all versions of webob, so we use that instead
* Added new disk\_format type of 'iso'. Nova can use this information to identify images that have to be booted from a CDROM
* adding marker & limit params to glance client
* Auto-migrate if the tables don't exist yet
* Fix up unit tests for S3 after note from Chris. Also fix bug when S3 test was skipped, was returning error by accident
* \* Adds functional test that works with Amazon S3 \* Fixes parsing of "S3 URLs" which urlparse utterly barfs on because Amazon stupidly allows forward slashes in their secret keys \* Update /etc/glance-api.conf for S3 settings
* merging trunk, resolving conflicts
* fixing sql query
* completing marker functionality
* Call stop\_servers() for those 2 test cases missing it
* Correct documentation
* Add missing stop\_servers() calls to two functional test cases
* Remove changes to stub database
* Auto-migrate if tables don't exist
* Fix accidental delete
* Remove additions to FIXTURES in test/stubs.py, which requried changes elsewhere
* Sync with trunk
* Documentation for new results filtering in the API and client
* Fix tiny typo
* Documentation for new results filtering in the API and client
* Adding support for query filtering from the glance client library
* renaming query\_params to params
* abstracting out filters query param serialization into BaseClient.do\_request
* renaming tests to resolve conflict
* adding filters param to get\_images and get\_images\_detailed in glance client
* Bug #787296: test\_walk\_versions fails with SQLalchemy 0.7
* Updated doc with 'iso' disk\_format
* Update documentation
* Adding support for api query filtering - equality testing on select attributes: name, status, container\_format, disk\_format - relative comparison of size attribute with size\_min, size\_max - equality testing on user-defined properties (preface property name with "property-" in query)
* updating stubs with new sorting logic; updating tests
* fixing some copy/paste errors
* fixing some webob exceptions
* slight modification to registry db api to ensure marker works correctly
* slight refactoring per jaypipes' suggestions; sort on get images calls is now created\_at desc
* Add tests for 'iso' image type. Remove hard coding of next available image id in tests. This prevents new test images from being added to the set generated by tests.unit.stubs.FakeDatastore
* pulling from parent branch
* docstring fix
* pushing marker/limit logic down into registry db api
* adding support for marker & limit query params
* removing some unnecessary imports
* making registry db api filters more structured; adding in a bit of sqlalchemy code to filter image properties more efficiently
* consolidating image\_get\_all\_public and image\_get\_filtered in registry db api
* adding test case for multiple parameters from command line
* adding custom property api filtering
* adding size\_min and size\_max api query filters
* implemented api filtering on name, status, disk\_format, and container\_format
* Adds versioning to the Glance API
* Add test and fix for /v1.2/images not properly returning version choices
* Add more tests for version URIs and accept headers and fix up some of Brian's review comments
* Fix merge conflict..
* Changes versioned URIs to be /v1/ instead of /v1.0/
* Improve logging configuration docs..
* Doc and docstring fixes from Dan's review
* Removed some test config files that slipped in..
* Fix up find\_config\_file() to accept an app\_name arg. Update all documentation referencing config files
* Fix pep8 complaint
* Add DISK\_FORMAT for 'iso' type images
* Adds versioning to Glance's API
* Changes glance index to return all public images in any status other than 'killed'. This should allow tools like euca-describe-images to show images while they are in a saving/untarring/decrypting state
* Fix numbering in comment..
* Fixed doh. Updates test case to test for condition that should have failed with status!='active'
* Changes glance index to return all public images in any status other than 'killed'. This should allow tools like euca-describe-images to show images while they are in a saving/untarring/decrypting state
* Adding prefilled Authors, mailmap files Adding test to validate Authors file is properly set up
* Documentation updates to make glance add command clearer, hopefully :)
* adding Authors functionality; fixing one rogue pep8 violation
* Improve logging configuration docs..
* Prevent users from uploading images with a bad or missing store. Allow deletion from registry when backend cannot be used
* bcwaldon review fixups
* adding comment
* Fix for bug #768969: glance index shows non-active images; glance show does not show status
* Completes the S3 storage backend. The original code did not actually fit the API from boto it turned out, and the stubs that were in the unit test were hiding this fact
* catching NotFound to prevent failure on bad location
* Prevent requests with invalid store in location param
* Allow registry deletion to succeed if store deletion fails
* Documentation updates to make glance add command clearer, hopefully :)
* Fix for LP Bug #768969
* Expanding user confirmation default behavior
* removing excessive exception handling
* pep8 fixes
* docstring and exception handling
* Expanding user\_confirm default behavior
* I modified documentation to show more first-time user friendly examples on using glance. With the previous examples, I followed it as a first-time user and had to spend more than necessary time to figure out how to use it. With this modification, other first-time users would make it work on their systems more quickly
* - Require user confirmation for "bin/glance clear" and "bin/glance delete " - Allow for override with -f/--force command-line option
* adding --force option to test\_add\_clear
* Adds a test case for updating an image's Name attribute. glance update was not regarding 'name' as a top-level modifiable attribute..
* Name is an attribute that is modifiable in glance update, too.
* Mark image properties as deleted when deleting images. Added a unit test to verify public images and their properties get deleted when running a 'glance clear' command
* Update tests and .bzrignore to use tests.sqlite instead of glance.sqlite
* Only modify the connection URL in runs\_sql if the original connection string starts with 'sqlite'
* Create a decorator that handles setting the SQL store to a disk-based SQLite database when arbitrary SQL statements need to be run against the registry database during a test case
* Docstring update on the run\_sql\_command function
* Mark image properties as deleted when deleting images. Added a unit test to verify public images and their properties get deleted when running a 'glance clear' command
* Add log\_file to example glance.conf
* fixing spacing in help text
* adding confirmation on image delete/clear; adding user\_confirm functionality
* Add log\_file to example glance.conf
* Make sure we use get\_option() when dealing with boolean values read from configuration files...otherwise "False" is True :(
* Fixing tests. Sorry for late response
* Make sure we use get\_option() when dealing with boolean values read from configuration files...otherwise "False" is True :(
* resolve merge conflicts
* chnaged output
* Open Diablo release
* Diablo versioning
* Fake merge with ancient trunk. This is only so that people who "accidentally" have been following lp:~hudson-openstack/glance/trunk will not have problems updating to this

2011.2
------

* Final versioning for Cactus
* fixing after review
* Removes capture of exception from eventlet in \_upload\_and\_activate(), which catches the exceptions that come from the \_safe\_kill() method properly
* RickH fixups from review
* Add catch-all except: block in \_upload()
* change output from glance-registry
* get latest from lp:glance
* Ensures that configuration values for debug and verbose are used if command-line options are not set
* Removes capture of exception from eventlet in \_upload\_and\_activate(), which catches the exceptions that come from the \_safe\_kill() method properly
* Fix logging in swift
* Fix Thierry's notice about switched debug and verbose
* Change parsing of headers to accept 'True', 'on', 1 for boolean truth values
* Final cactus versioning
* OK, fix docs to make it clear that only the string 'true' is allowed for boolean headers. Add False-hood unit tests as well
* Logging was not being setup with configuration file values for debug/verbose
* Fix up the way the exception is raised from \_safe\_kill()... When I "fixed" bug 729726, I mistakenly used the traceback as the message. doh
* Change parsing of headers to accept 'True', 'on', 1 for boolean truth values
* Add the migration sql scripts to MANIFEST.in. The gets them included in not only the tarball, but also by setup.py install
* Add the migration sql scripts to MANIFEST.in. The gets them included in not only the tarball, but also by setup.py install
* Changed raise of exception to avoid displaying incorrect error message in \_safe\_kill()
* fix logging in swift
* Changes "key" column in image\_properties to "name"
* Updated properties should be marked as deleted=0. This allows previously deleted properties to be reactivated on an update
* Adds --config-file option to common options processing
* Update the docs in bin/glance so that help for the 'update' command states that metadata not specified will be deleted
* Fix config test fixtures and pep8 error in bin/glance-manage
* Provide revised schema and migration scripts for turning 'size' column in 'images' table to BIGINT. This overcomes a 2 gig limit on images sizes that can be downloaded from Glance
* Updated properties should be marked as deleted=0. Add unit tests
* Use logging module, not echo, for logging SQLAlchemy. Fixes bug 746435
* Change order of setting debug/verbose logging. Thanks for spotting this, Elgar
* Use logging module, not echo, for logging SQLAlchemy. Fixes bug 746435
* Ensure we don't ask the backend store to delete an image if the image is in a queued or saving state, since clearly the backend state has yet to completely store the image
* Changes "key" column in image\_properties to "name"
* Use logging module, not echo for logging SQLAlchemy
* Updates glance-manage to use configuration files as well as command line options
* Ensure we don't ask a backend store to delete an image if the image is queued or saving
* Moved migration into Python script, otherwise PostgreSQL was not migrated. Added changes to the functional test base class to reset the data store between tests. GLANCE\_SQL\_CONNECTION env variable is now GLANCE\_TEST\_SQL\_CONNECTION
* changed to more typical examples
* Add migration scripts for revising the datatype of the 'size' column in the images table
* Changes to database schema required to support images larger than 2Gig on MySQL. Does not update the migration scripts
* Updates to the Registry API such that only external requests to update image properties purge existing properties. The update\_image call now contains an extra flag to purge\_props which is set to True for external requests but False internally
* Updates to the Registry API such that only external requests to update image properties purge existing properties. The update\_image call now contains an extra flag to purge\_props which is set to True for external requests but False internally
* Update the glance registry so that it marks properties as deleted if they are no longer exist when images are updated
* Simple one.. just add back the Changelog I removed by accident in r94. Fixes bug #742353
* Adds checksumming to Glance
* Uhhhm, stop\_servers() should stop servers, not start them! Thanks to Cory for uncovering this copy/paste fail
* Fix up test case after merging in bug fixes from trunk... expected results were incorrect in curl test
* Add ChangeLog back to MANIFEST.in
* Add migration testing and migration for disk\_format/container\_format
* tests.unit.test\_misc.execute -> tests.utils.execute after merge
* Allow someone to set the GLANCE\_TEST\_MIGRATIONS\_CONF environment variable to override the config file to run for the migrations unit test:
* Update the glance registry so that it marks properties as deleted if they are no longer in the update list
* Start eventlet WSGI server with a logger to avoid stdout output
* Adds robust functional testing to Glance
* Add migration script for checksum column
* Fixed an oops. Didn't realized Repository.latest returned a 0-based version number, and forgot to reversed() the downgrade test
* OK, migrations are finally under control and properly tested
* Remove non-existing files from MANIFEST.in
* Removed glance-combined. Fixed README
* Removed glance-commit
* Re-raise \_safe\_kill() exception in non-3-arg form to avoid pep8 deprecation error
* Bug #737979: glance-control uses fixed path to Python interpreter, breaking virtualenv
* Bug #737979: glance-control uses fixed path to Python interpreter, breaking virtualenv
* Removes glance-combined and fixes TypeError from bad function calls in glance-manage
* Start eventlet WSGI server with a logger to avoid stdout output
* Pass boolean values to glance.client as strings, not integers
* Small adjustment on wait\_for\_servers()... fixed infinite loop possibility
* Adds robust functional testing to Glance
* Ensure Content-type set to application/octet-stream for GET /images/
* Ensure Content-Length sent for GET /images/
* HTTPBackend.get() needed options in kwargs
* Remove glance-combined (use glance-control all start). Fix glance-manage to call the setup\_logging() and add\_logging\_options() methods according to the way they are called in glance-api and glance-registry
* Support account:user:key in Swift URIs. Adds unit tests for various calls to parse\_swift\_tokens()
* Adds documentation on configuring logging and a unit test for checking simple log output
* Support account:user:key in Swift URIs. Adds unit tests for various calls to parse\_swift\_tokens()
* Cherry pick r86 from bug720816
* Cherry pick r87 from bug720816
* Fixed run\_tests.py addError() method since I noted it was faulty in another branch..
* Tiny pep8'ers
* I stole the colorized code from nova
* Fix typo
* A quick patch to allow running the test suite on an alternate db backend
* Merged trunk -resolved conflicts
* [Add] colorization stolen from nova
* Don't require swift module for unit-tests
* Pep8 fix
* Backing out unit-test workaround
* Changed to have 2 slashes
* Allow unit-tests to run without swift module
* Remove spurios comment in test file
* Add Glance CLI tool
* Silly mistake when resolving merge conflict...fixed
* Fixes passing of None values in metadata by turning them into strings. Also fixes the passing of the deleted column by converting it to and from a bool. The test for passing metadata was updated to include these values
* Adds documentation on configuring logging and a test that log\_file works. It didn't, so this also inludes fixes for setting up log handling :)
* fix data passing
* add failing test for None and deleted
* Uses logger instead of logging in migration.py
* Using logger in migration api instead of logging directly
* Only clean up in the cleanup method. Also, we don't need the separate URI now
* Use unregister\_models instead of os.unlink to clean up after ourselves
* Fixed unregister\_models to actually work
* Fixed migration test to use a second DB URL
* Replaced use of has\_key with get + default value
* Make it clear that the checksum is an MD5 checksum in docs
* Adds checksumming to Glance
* Whoops! Left out a self.db\_path
* Allow tests to run on an alternate dburi given via environment variables
* Adds ability for Swift to be used as a full-fledged backend. Adds POST/PUT capabilities to the SwiftBackend Adds lots of unit tests for both FilesystemBackend and SwiftBackend Removes now-unused tests.unit.fakeswifthttp module
* Remove last vestiges of account in Swift store
* Quick fixup on registry.get\_client()
* Public? => Public: per Cory's comment. Added a little more robust exception handling to some methods in bin/glance
* Fixes for Devin and Rick's reviews
* Adds disk\_format and container\_format to Image, and removes the type column
* Fixes client update\_image to work like create\_image. Also fixes some messed up exceptions that were causing a try, except to reraise
* Final review fixes. Makes disk\_format and container\_format optional. Makes glance-upload --type put the type in properties
* remove test skip
* Put account in glance.conf.sample's swift\_store\_auth\_address, use real swift.common.client.ClientException, ensure tests work with older installed versions of Swift (which do not have, for example, swift.common.client.Connection.get\_auth method)
* Work around Eventlet exception clearing by memorizing exception context and re-raising using 3-arg form
* Adds bin/glance to setup.py
* Fixes from Rick's review #1
* Reverts Image \`type\` back to the old behavior of being nullable
* Work around Eventlet exception clearing
* Add sys.path mangling to glance-upload
* Add sys.path adjustment magic to glance-upload
* Adds ability for Swift to be used as a full-fledged backend. Adds POST/PUT capabilities to the SwiftBackend Adds lots of unit tests for both FilesystemBackend and SwiftBackend Removes now-unused tests.unit.fakeswifthttp module
* Couple tiny cleanups noticed when readin merge diff.
* bin/glance-admin => bin/glance, since it's really just the CLI tool to interact with Glance. Added lots of documentation and more logging statements in some critical areas (like the glance.registry calls..
* Adds lots of unit tests for verifying exceptions are raised properly with invalid or mismatched disk and container formats
* Makes --kernel and --ramdisk required arguments for glance-upload since Nova currently requires them
* Removing image\_type required behavior
* Removing requirement to pass kernel and ramdisk
* Add test cases for missing and invalid disk and container formats
* Requiring kernel and ramdisk args in glance-upload
* Make disk\_format and container\_format required
* Make disk\_format and container\_format required
* Adds an admin tool to Glance (bin/glance-admin) that allows a user to administer the Glance server:
* Make sure validate\_image() doesn't throw exception on missing status when updating image
* Adds disk\_format and container\_format to Image, and removes the type column
* This adds a test case for LP Bug 704854 -- Exception raised by Registry server gets eaten by API server
* Add debugging output to assert in test\_misc. Trying to debug what Hudson fails on..
* Fixups from Rick's review
* Removes now-unnecessary @validates decorator on model
* I should probably rebase this commit considering all the previous commits weren't actually addressing the issue. The fact that I had glance-api and glance-registry installed on my local machine was causing the test runs to improperly return a passing result
* Use Nova's path trick in all bins..
* Add path to glance-control
* Removes image type validation in the Glance registry
* Adding vhd as recognized image type
* Reverting the removal of validation
* Removing image type validation
* Adds --pid-file option to bin/glance-control
* Add %default for image type in glance-upload
* Adds Location: header to return from API server for POST /images, per APP spec
* Cleanups from Soren's review
* Add an ImportError check when importing migrate.exceptions, as the location of that module changed in a recent version of the sqlalchemy-migrate library
* Adds Location: header to return from API server for POST /images, per APP spec
* This adds a test case for LP Bug 704854 -- Exception raised by Registry server gets eaten by API server
* Adds --pid-file option to bin/glance-control
* Add an ImportError check when importing migrate.exceptions, as the location of that module changed in a recent version of the sqlalchemy-migrate library
* Adds sql\_idle\_timeout to reestablish connections to database after given period of time
* Add sql\_idle\_timeout
* Removes lockfile and custom python-daemon server initialization in favour of paste.deploy
* Review 3 fixups
* Remove get\_config\_file\_options() from glance-control
* Fixes for Rick review #2
* Remove no-longer-needed imports..
* Remove extraneous debug import..
* Changes the server daemon programs to be configured only via paste.deploy configuration files. Removed ability to configure server options from CLI options when starting the servers with the exception of --verbose and --debug, which are useful during debugging
* Adds glance-combined and glance-manage to setup.py
* Fix merge conflicts
* Adds glance-combined and glance-manage to setup.py
* Fixes bug 714454
* ReStructure Text files need to end in .rst, not .py ;)
* Update README, remove some vestigial directories, and other small tweaks
* Removing dubious advice
* Adds facilities for configuring Glance's servers via configuration files
* Use fix\_path on find\_config\_file() too
* Fixups from Rick's review
* Including tests/ in pep8
* Typo fixes, clarifying
* Updating README, rmdir some empty dirs
* Adds bin/glance-control program server daemonization wrapper program based on Swift's swift-init script
* Ignore build and deploy-related files
* Adds sqlalchemy migrations
* Fix bug 712575. Make BASE = models.BASE
* Make sure BASE is the models.BASE, not a new declarative\_base() object
* Had to reverse search order of directories for finding config files
* Removes lockfile and custom python-daemon server initialization in favour of paste.deploy
* Adds facilities for configuring Glance's servers via configuration files
* Creating indexes
* Adding migration test
* Fixing migration import errors
* Small cleanups
* glance-manage uses common options
* Merging in glance/cactus
* Pep8 fix
* Pep8 fixes
* Refactoring into option groups

0.1.7
-----

* Hopefully-final versioning (0.1.7), no review needed
* Final versioning, no review needed
* Adding db\_sync to mirror nova
* Adding some basic documentation
* Better logging
* Adding image\_properties migration
* Adding migration for images table
* Adding migration management commands
* Remove debugging output that wasn't supposed to go into this branch (yet) :)
* Adds --debug option for DEBUG-level logging. --verbose now only outputs INFO-level log records
* Typo add\_option -> add\_options
* Fixes from Rick's review. Thanks, Rick
* Adds --sql-connection option
* First round of logging functionality:
* Merged use-optparse
* Removes glance.common.db.sqlalchemy and moves registration of models and create\_engine into glance.registry.db.api
* pep8-er in bin/glance-combined
* Fixes lp710789 - use-optparse breaks daemonized process stop
* Adds bin/glance-combined. Useful in testing..
* Tiny pep8 fixup in setup.py
* Rework what comes back from parse\_options()[0] to not stringify option values. Keep them typed
* Remove use of gflags entirely. Use optparse
* Removing unecessary param to get\_all\_public
* Merging trunk
* Adding back some missing code
* Cleaning up some code
* Makes Glance's versioning non-static. Uses Nova's versioning scheme
* Adds/updates the copyright info on most of the files in glance and copies over the Authors check from Nova
* Removing sqlalchemy dir
* Removed methods from sqlalchemy/api
* Refactor update/create
* Messed up a permission somehow
* Refactoring destroy
* feh
* A few more
* A few more I missed
* version bumped after tarball cut. no review needed..
* Bump version
* Removing authors test for now
* PEP8 cleanup
* PEP8 cleanup
* Should fix the sphinx issue
* Adds architecture docs and enables Graphviz sphinx extension. Also cleans up source code formatting in docs
* Make sphinx conditional
* bumps version after tarball release of 0.1.4
* Bump version
* Added bzr to pip-requires and refixed some pep8 stuff
* Authors check
* A few more copyrights
* Copyright year change
* Pylint cleanup
* Added copyright info
* Adds architecture docs and enables Graphviz sphinx extension. Also cleans up source code formatting in docs
* bumps release version. ready for Bexar final release
* Version bump after release
* added sphinx and argparse into tools/pip-requires so that setup.py works. this bug also prevents nova from creating a virtualenv
* fixes setup install pip dependencies
* Version bump for release
* Fixes bug #706636: Make sure pep8 failures will return failure for run\_tests.sh
* Make run\_tests.sh return failure when pep8 returns fail, and fix the pep8 error in /bin/glance-upload
* This patch: \* Converts dashes to underscores when extracting image-properties from HTTP headers (we already do this for 'regular' image attributes \* Update image\_properties on image PUTs rather than trying to create dups
* This patch replaces some remaining references to req.body (which buffers the entire request body into memory!) with the util.has\_body method which can determine whether a body is present without reading any of it into memory
* Adding Apache license, fixing long line
* Making glance-upload a first-class binary
* Revove useless test\_data.py file, add image uploader
* Fix property create
* Dont buffer entire image stream on PUT
* Adds man pages for glance-registry and glance-api programs. Adds Getting Started guide to the Glance documentation
* Fixes LP Bug #700162: Images greater than 2GB cannot be uploaded using glance.client.Client
* Duh, it helps to import the class you are inheriting from...
* OK, found a solution to our test or functional dilemma. w00t
* Make compat with chunked transfer
* Removes the last vestiges of Twisted from Glance
* Pull in typo fix
* Add in manpage installation hook. Thanks Soren :)
* Fixes LP Bug #700162: Images greater than 2GB cannot be uploaded using glance.client.Client
* Removes Twisted from tools/install\_venv.py and zope.interface from tools/pip-requires. Shaved a full 45 seconds for me off of run\_tests.sh -V -f now we're not downloading a giant Twisted tarball..
* Remove last little vestiges of twisted
* Quick typo fix in docs
* Add run\_tests.py to tarball
* Also include run\_tests.py in tarball
* Adds man pages for glance-registry and glance-api. Adds Getting Started guide to Glance docs
* Fixes bug #696375: x-image-meta-size not optional despite documentation saying so
* PEP8 fixes in /glance/store/\_\_init\_\_.py
* Fix Bug #704038: Unable to start or connect to register server on anything other than 0.0.0.0:9191
* Fix Bug #704038: Unable to start or connect to register server on anything other than 0.0.0.0:9191
* upgrade version..
* Fixes Bug#696375: x-image-meta-size is not optional, contrary to documentation
* Increase version after release
* Cut 0.1.2
* Files missing from the tarball (and you probably need to cut a 0.1.2.)
* Cleanup of RST documentation and addition of docs on an image's status
* Include some files that were left out
* Implements the S3 store to the level of the swift store
* fixes bug698318
* Fixes suggested by JayPipes review. Did not modify docstrings in non-related files
* This merge is in conjunction with lp:~rconradharris/nova/xs-snap-return-image-id-before-snapshot
* Updating docs
* Merging trunk
* Clean up the rest of Glance's PEP8 problems
* PEP-8 Fixes
* Fixing eventlet-raise issue
* Bug #698316: Glance reads the whole image into memory when handling a POST /images request
* Merging trunk
* Fixed pylint/pep8 for glance.store.s3
* Implement S3 to the level of swift
* removing old methods
* refactoring so update can take image\_data
* More PEP8 fixes
* Fix all Glance's pep8 problems
* Remove incorrect doccomments about there being a default for the host parameter, fix misdocumented default port, and remove handling of missing parameters in BaseClient, because the values are always specified by the subclass's \_\_init\_\_
* Bug #696385: Glance is not pep8-clean
* Bug #696382: Glance client parameter defaults misdocumented
* Fixes a number of things that came up during initial coding of the admin tool:
* Made review changes from Rick
* Duh, use\_ssl should not use HTTPConnection..
* Remove final debugging statement * merge trunk * Remove debugging statements * Fixes a number of things that came up during initial coding of the admin tool: * fix bug 694382 * Bug #694382: setup.py refers to parallax-server and teller-server, when these have been renamed * documentation cleanup and matching to other OpenStack projects. Glance is no longer the red-headed documentation stepchild in OpenStack.. * Converts timestamp attributes to datetime objects before persisting * Adding \_\_protected\_attributes\_\_, some PEP8 cleanups * review fixes * Update sphinx conf to match other OpenStack projects * Documentation cleanup. Splits out index.rst into multiple section docs * Converting to datetime before saving image * Enhances POST /images call to, you know, actually make it work.. * Make directory for filesystem backend * doing the merge of this again...somehow the trunk branch never got rev26 :( * Adds POST /images work that saves image data to a store backend * Update docs for adding image.. * Fix Chris minor nit on docstring * Fixes binaries, updates WSGI file to more recent version from Nova, and fixes an issue in SQLAlchemy API that was being hidden by stubs and only showed up when starting up the actual binaries and testing.. * Major refactoring.. 
* Fix testing/debug left in * Fixes from review * Documentation updates and GlanceClient -> Client * Refactor a bunch of stuff around the image files collection * Cleanup around x-image-meta and x-image-meta-property HTTP headers in GET/HEAD * Update /glance/client.py to have GlanceClient do all operations that RegistryClient does * Merges Glance API with the registry API: \* Makes HEAD /images/ return metadata in headers \* Make GET /images/ return image data with metadata in headers Updates docs some (more needed) * Second step in simplifying the Glance API * This is the first part of simplifying the Glance API and consolidating the Teller and Parallax APIs into a single, unified Glance API * Adds DELETE call to Teller API * Fixes Swift URL Parsing in Python 2.6.5 by adding back netloc * Moving imports into main which will only be executed after we daemonize thus avoiding the premature initialization of epoll * Delaying eventlet import until after daemonization * Fix Swift URL parsing for Python 2.6.5 * Don't leak implementation details in Swift backend. Return None on successful delete\_object call * Adds call to Swift's DELETE * Typo fixed and tiny cleanups * Adds DELETE to Teller's API * Just some small cleanups, fixing: \* Swapped port numbers (Parallax Port <=> Teller port) \* Removing extraneous routes in Teller API \* Adding required slashes to do\_request * \* Changes Teller API to use REST with opaque ID sent in API calls instead of a "parallax URI". This hides the URI stuff behind the API layer in communication between Parallax and Teller. \* Adds unit tests for the only complete Teller API call so far: GET images/, which returns a gzip'd string of image data * Fixing swapped port numbers, removing extraneous routes in Teller controller, adding required slash for do\_request calls * \* Changes Teller API to use REST with opaque ID sent in API calls instead of a "parallax URI". 
This hides the URI stuff behind the API layer in communication between Parallax and Teller. \* Adds unit tests for the only complete Teller API call so far: GET images/, which returns a gzip'd string of image data * Add files attribute to Parallax client tests * Adds client classes for Parallax and Teller and fixes some issues where our controller was not returning proper HTTP response codes on errors.. * Cleanup/fixes for Rick review * Adds client classes ParallaxClient and (stubbed) TellerClient to new glance.client module * packaging fixups preparing for release candidate * Remove symlinks in bin/ * Packaging fixups * awesomeness. merging into trunk since my parallax-api is already in trunk I believe. :) * Moving ATTR helpers into db module * PUTing and POSTing using image key * Quick fix...gives base Model an update() method to make it behave like a dict * Make returned mapping have an 'image' key to help in XML serialization * Ignore virtualenv directory in bzr * This patch removes unique index on the 'key' column of image\_metadatum and replaces it with a compound UniqueConstraint on 'image\_id' and 'key'. The 'key' column remains indexed * Fixes lp653358 * Renaming is\_cloudfiles\_available -> is\_swift\_available * Adds compound unique constraint to ImageMetadatum * Using swift.common.client rather than python-cloudfiles in Teller's Swift backend * Adds DELETE to the Parallax REST API * Implements the REST call for updating image metadata in the Parallax API * Implements Parallax API call to register a new image * Adds a /images/detail route to the Parallax controller, adds a unit test for it, and cleans up Michael's suggestions * Works around non-RFC compliance in Python (< 2.6.5) urlparse library * Workaround for bug in Python 2.6.1 urlparse library * Adds tests for bad status set on image * Implements Parallax API call to register a new image * This patch overhauls the testing in Glance: * unittest2 -> unittest. 
For now, since not using unittest2 features yet * Fixes up test\_teller\_api.py to use stubout correctly. Fixes a few bugs that showed up in the process, and remove the now-unnecessary FakeParallaxAdapter * First round of cleaning up the unittests. Adds test suite runner, support for virtualenv setup and library dependencies, resolves issues with ImportErrors on cloudfiles, adds pymox/stubout support and splits the backend testing into distinct unittest cases * With this patch Parallax and teller now work end-to-end with the Swift backend * Adding missing backend files, fixing typos in comments * This patch: \* Decouples Controller for ParallaxAdapter implementation by adding generic RegistryAdapter and providing a lookup function \* Adds base model attributes to Parallax's JSON (created\_at, etc) * Improving symmetry between teller and parallax * Fixing swift authurl * Add RegistryAdapter, include ModelBase attributes * Fixing Teller image tests * Created teller-server.py in bin/ * Cleaning up Teller backend * Rewrote ImageController to inherit from the work Rick Harris did in glance.common. Moved it into teller/api/images.py to make teller match parallax. Fixed tests. 
Renamed them to distinguish if any parallax tests ever get written * Adding Image index call, nesting the Image show dict to facilitate XML serialization * Moving parallax models out of common and into the parallax module * Updated tests * Reimplements server.py as a wsgi api inheriting from glance.common * This patch: \* pulls in a number of useful libraries from Nova under the common/ path (we can factor those out to a shared library in Bexar-release) \* Defines the models in common.db.sqlalchemy.models.py (this should be factored out into the parallax package soon) \* Adds the parallax api-server under /bin (if PyPI was used to pull python-daemon and python-lockfile, you may need to apply a patch I have against it) * Changes the obj['uri'] to obj['location'] to better sync with the representation within Nova. Adds the image\_lookup\_fn = ParallaxAdapter.lookup to teller.server * ImageChunk -> ImageFile, merging APIRouter into API for now * Adding Apache header to test\_data.py * Small cleanups * Parallax will return obj['location'] instead of obj['uri'], also maybe a parallax lookup fn would be nice? * Implements a Parallax adapter for looking up images requested from nova. Adds a size check to SwiftBackend to ensure that the chunks haven't been truncated or anything * Reconciling parallax modifications with modulization of glance * Adding Images controller * Adding API directory and server.py * Modulify the imports * Implements Parallax adapter for lookups from Teller, also adds size expectations to the backend adapters * Adding files from Nova * Makes glance a module, containing teller and parallax sub-modules * libify glance into teller and parallax modules. Make nosetests work by making tests and tests/unit/ into packages * Rearranged the code a little. Added a setup.py. 
Added sphinx doc skeleton * Added setup.py and sphinx docs * Reorg to make Monty's build pedanticness side happier * Implements Swift backend for teller * ignore all .pyc files * Merging ricks changes * Adding basic image controller and mock backends * Adding description of registry data structure * Adding teller\_server * adding filesystem and http backends * Initial check-in glance-16.0.0/test-requirements.txt0000666000175100017510000000223213245511421017313 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. # Hacking already pins down pep8, pyflakes and flake8 hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0 # For translations processing Babel!=2.4.0,>=2.3.4 # BSD # Needed for testing bandit>=1.1.0 # Apache-2.0 coverage!=4.4,>=4.0 # Apache-2.0 ddt>=1.0.1 # MIT fixtures>=3.0.0 # Apache-2.0/BSD mock>=2.0.0 # BSD sphinx!=1.6.6,>=1.6.2 # BSD requests>=2.14.2 # Apache-2.0 testrepository>=0.0.18 # Apache-2.0/BSD testresources>=2.0.0 # Apache-2.0/BSD testscenarios>=0.4 # Apache-2.0/BSD testtools>=2.2.0 # MIT psutil>=3.2.2 # BSD oslotest>=3.2.0 # Apache-2.0 os-testr>=1.0.0 # Apache-2.0 doc8>=0.6.0 # Apache-2.0 # Optional packages that should be installed when testing PyMySQL>=0.7.6 # MIT License psycopg2>=2.6.2 # LGPL/ZPL pysendfile>=2.0.0 # MIT qpid-python>=0.26;python_version=='2.7' # Apache-2.0 xattr>=0.9.2 # MIT python-swiftclient>=3.2.0 # Apache-2.0 # Documentation os-api-ref>=1.4.0 # Apache-2.0 openstackdocstheme>=1.18.1 # Apache-2.0 reno>=2.5.0 # Apache-2.0 glance-16.0.0/glance/0000775000175100017510000000000013245511661014310 5ustar zuulzuul00000000000000glance-16.0.0/glance/location.py0000666000175100017510000004446713245511421016505 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. 
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import copy

from cryptography import exceptions as crypto_exception
from cursive import exception as cursive_exception
from cursive import signature_utils
import glance_store as store
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
from oslo_utils import excutils

from glance.common import exception
from glance.common import utils
import glance.domain.proxy
from glance.i18n import _, _LE, _LI, _LW

CONF = cfg.CONF
LOG = logging.getLogger(__name__)


class ImageRepoProxy(glance.domain.proxy.Repo):

    def __init__(self, image_repo, context, store_api, store_utils):
        self.context = context
        self.store_api = store_api
        proxy_kwargs = {'context': context, 'store_api': store_api,
                        'store_utils': store_utils}
        super(ImageRepoProxy, self).__init__(image_repo,
                                             item_proxy_class=ImageProxy,
                                             item_proxy_kwargs=proxy_kwargs)
        self.db_api = glance.db.get_api()

    def _set_acls(self, image):
        public = image.visibility == 'public'
        member_ids = []
        if image.locations and not public:
            member_repo = _get_member_repo_for_store(image,
                                                     self.context,
                                                     self.db_api,
                                                     self.store_api)
            member_ids = [m.member_id for m in member_repo.list()]
        for location in image.locations:
            self.store_api.set_acls(location['url'], public=public,
                                    read_tenants=member_ids,
                                    context=self.context)

    def add(self, image):
        result = super(ImageRepoProxy, self).add(image)
        self._set_acls(image)
        return result

    def save(self, image, from_state=None):
        result = super(ImageRepoProxy, self).save(image,
                                                  from_state=from_state)
        self._set_acls(image)
        return result


def _get_member_repo_for_store(image, context, db_api, store_api):
    image_member_repo = glance.db.ImageMemberRepo(context, db_api, image)
    store_image_repo = glance.location.ImageMemberRepoProxy(
        image_member_repo, image, context, store_api)
    return store_image_repo


def _check_location_uri(context, store_api, store_utils, uri):
    """Check if an image location is valid.

    :param context: Glance request context
    :param store_api: store API module
    :param store_utils: store utils module
    :param uri: location's uri string
    """
    try:
        # NOTE(zhiyan): Some stores return zero when it catch exception
        is_ok = (store_utils.validate_external_location(uri) and
                 store_api.get_size_from_backend(uri, context=context) > 0)
    except (store.UnknownScheme, store.NotFound, store.BadStoreUri):
        is_ok = False
    if not is_ok:
        reason = _('Invalid location')
        raise exception.BadStoreUri(message=reason)


def _check_image_location(context, store_api, store_utils, location):
    _check_location_uri(context, store_api, store_utils, location['url'])
    store_api.check_location_metadata(location['metadata'])


def _set_image_size(context, image, locations):
    if not image.size:
        for location in locations:
            size_from_backend = store.get_size_from_backend(
                location['url'], context=context)
            if size_from_backend:
                # NOTE(flwang): This assumes all locations have the same size
                image.size = size_from_backend
                break


def _count_duplicated_locations(locations, new):
    """
    To calculate the count of duplicated locations for new one.

    :param locations: The exiting image location set
    :param new: The new image location
    :returns: The count of duplicated locations
    """
    ret = 0
    for loc in locations:
        if loc['url'] == new['url'] and loc['metadata'] == new['metadata']:
            ret += 1
    return ret


class ImageFactoryProxy(glance.domain.proxy.ImageFactory):
    def __init__(self, factory, context, store_api, store_utils):
        self.context = context
        self.store_api = store_api
        self.store_utils = store_utils
        proxy_kwargs = {'context': context, 'store_api': store_api,
                        'store_utils': store_utils}
        super(ImageFactoryProxy, self).__init__(factory,
                                                proxy_class=ImageProxy,
                                                proxy_kwargs=proxy_kwargs)

    def new_image(self, **kwargs):
        locations = kwargs.get('locations', [])
        for loc in locations:
            _check_image_location(self.context,
                                  self.store_api,
                                  self.store_utils,
                                  loc)
            loc['status'] = 'active'
            if _count_duplicated_locations(locations, loc) > 1:
                raise exception.DuplicateLocation(location=loc['url'])
        return super(ImageFactoryProxy, self).new_image(**kwargs)


class StoreLocations(collections.MutableSequence):
    """
    The proxy for store location property. It takes responsibility for::

        1. Location uri correctness checking when adding a new location.
        2. Remove the image data from the store when a location is removed
           from an image.

    """
    def __init__(self, image_proxy, value):
        self.image_proxy = image_proxy
        if isinstance(value, list):
            self.value = value
        else:
            self.value = list(value)

    def append(self, location):
        # NOTE(flaper87): Insert this
        # location at the very end of
        # the value list.
        self.insert(len(self.value), location)

    def extend(self, other):
        if isinstance(other, StoreLocations):
            locations = other.value
        else:
            locations = list(other)
        for location in locations:
            self.append(location)

    def insert(self, i, location):
        _check_image_location(self.image_proxy.context,
                              self.image_proxy.store_api,
                              self.image_proxy.store_utils,
                              location)
        location['status'] = 'active'
        if _count_duplicated_locations(self.value, location) > 0:
            raise exception.DuplicateLocation(location=location['url'])

        self.value.insert(i, location)
        _set_image_size(self.image_proxy.context,
                        self.image_proxy,
                        [location])

    def pop(self, i=-1):
        location = self.value.pop(i)
        try:
            self.image_proxy.store_utils.delete_image_location_from_backend(
                self.image_proxy.context,
                self.image_proxy.image.image_id,
                location)
        except Exception:
            with excutils.save_and_reraise_exception():
                self.value.insert(i, location)
        return location

    def count(self, location):
        return self.value.count(location)

    def index(self, location, *args):
        return self.value.index(location, *args)

    def remove(self, location):
        if self.count(location):
            self.pop(self.index(location))
        else:
            self.value.remove(location)

    def reverse(self):
        self.value.reverse()

    # Mutable sequence, so not hashable
    __hash__ = None

    def __getitem__(self, i):
        return self.value.__getitem__(i)

    def __setitem__(self, i, location):
        _check_image_location(self.image_proxy.context,
                              self.image_proxy.store_api,
                              self.image_proxy.store_utils,
                              location)
        location['status'] = 'active'
        self.value.__setitem__(i, location)
        _set_image_size(self.image_proxy.context,
                        self.image_proxy,
                        [location])

    def __delitem__(self, i):
        if isinstance(i, slice):
            if i.step not in (None, 1):
                raise NotImplementedError("slice with step")
            self.__delslice__(i.start, i.stop)
            return
        location = None
        try:
            location = self.value[i]
        except Exception:
            del self.value[i]
            return
        self.image_proxy.store_utils.delete_image_location_from_backend(
            self.image_proxy.context,
            self.image_proxy.image.image_id,
            location)
        del self.value[i]

    def __delslice__(self, i, j):
        i = 0 if i is None else max(i, 0)
        j = len(self) if j is None else max(j, 0)
        locations = []
        try:
            locations = self.value[i:j]
        except Exception:
            del self.value[i:j]
            return
        for location in locations:
            self.image_proxy.store_utils.delete_image_location_from_backend(
                self.image_proxy.context,
                self.image_proxy.image.image_id,
                location)
            del self.value[i]

    def __iadd__(self, other):
        self.extend(other)
        return self

    def __contains__(self, location):
        return location in self.value

    def __len__(self):
        return len(self.value)

    def __cast(self, other):
        if isinstance(other, StoreLocations):
            return other.value
        else:
            return other

    def __cmp__(self, other):
        return cmp(self.value, self.__cast(other))

    def __eq__(self, other):
        return self.value == self.__cast(other)

    def __ne__(self, other):
        return not self.__eq__(other)

    def __iter__(self):
        return iter(self.value)

    def __copy__(self):
        return type(self)(self.image_proxy, self.value)

    def __deepcopy__(self, memo):
        # NOTE(zhiyan): Only copy location entries, others can be reused.
        value = copy.deepcopy(self.value, memo)
        self.image_proxy.image.locations = value
        return type(self)(self.image_proxy, value)


def _locations_proxy(target, attr):
    """
    Make a location property proxy on the image object.

    :param target: the image object on which to add the proxy
    :param attr: the property proxy we want to hook
    """
    def get_attr(self):
        value = getattr(getattr(self, target), attr)
        return StoreLocations(self, value)

    def set_attr(self, value):
        if not isinstance(value, (list, StoreLocations)):
            reason = _('Invalid locations')
            raise exception.BadStoreUri(message=reason)
        ori_value = getattr(getattr(self, target), attr)
        if ori_value != value:
            # NOTE(flwang): If all the URL of passed-in locations are same as
            # current image locations, that means user would like to only
            # update the metadata, not the URL.
            ordered_value = sorted([loc['url'] for loc in value])
            ordered_ori = sorted([loc['url'] for loc in ori_value])
            if len(ori_value) > 0 and ordered_value != ordered_ori:
                raise exception.Invalid(_('Original locations is not empty: '
                                          '%s') % ori_value)
            # NOTE(zhiyan): Check locations are all valid
            # NOTE(flwang): If all the URL of passed-in locations are same as
            # current image locations, then it's not necessary to verify those
            # locations again. Otherwise, if there is any restricted scheme in
            # existing locations. _check_image_location will fail.
            if ordered_value != ordered_ori:
                for loc in value:
                    _check_image_location(self.context,
                                          self.store_api,
                                          self.store_utils,
                                          loc)
                    loc['status'] = 'active'
                    if _count_duplicated_locations(value, loc) > 1:
                        raise exception.DuplicateLocation(location=loc['url'])
                _set_image_size(self.context, getattr(self, target), value)
            else:
                for loc in value:
                    loc['status'] = 'active'
        return setattr(getattr(self, target), attr, list(value))

    def del_attr(self):
        value = getattr(getattr(self, target), attr)
        while len(value):
            self.store_utils.delete_image_location_from_backend(
                self.context,
                self.image.image_id,
                value[0])
            del value[0]
        setattr(getattr(self, target), attr, value)
        return delattr(getattr(self, target), attr)

    return property(get_attr, set_attr, del_attr)


class ImageProxy(glance.domain.proxy.Image):

    locations = _locations_proxy('image', 'locations')

    def __init__(self, image, context, store_api, store_utils):
        self.image = image
        self.context = context
        self.store_api = store_api
        self.store_utils = store_utils
        proxy_kwargs = {
            'context': context,
            'image': self,
            'store_api': store_api,
        }
        super(ImageProxy, self).__init__(
            image, member_repo_proxy_class=ImageMemberRepoProxy,
            member_repo_proxy_kwargs=proxy_kwargs)

    def delete(self):
        self.image.delete()
        if self.image.locations:
            for location in self.image.locations:
                self.store_utils.delete_image_location_from_backend(
                    self.context,
                    self.image.image_id,
                    location)

    def set_data(self, data, size=None):
        if size is None:
            size = 0  # NOTE(markwash): zero -> unknown size

        # Create the verifier for signature verification (if correct properties
        # are present)
        extra_props = self.image.extra_properties
        if (signature_utils.should_create_verifier(extra_props)):
            # NOTE(bpoulos): if creating verifier fails, exception will be
            # raised
            img_signature = extra_props[signature_utils.SIGNATURE]
            hash_method = extra_props[signature_utils.HASH_METHOD]
            key_type = extra_props[signature_utils.KEY_TYPE]
            cert_uuid = extra_props[signature_utils.CERT_UUID]
            verifier = signature_utils.get_verifier(
                context=self.context,
                img_signature_certificate_uuid=cert_uuid,
                img_signature_hash_method=hash_method,
                img_signature=img_signature,
                img_signature_key_type=key_type
            )
        else:
            verifier = None

        location, size, checksum, loc_meta = self.store_api.add_to_backend(
            CONF,
            self.image.image_id,
            utils.LimitingReader(utils.CooperativeReader(data),
                                 CONF.image_size_cap),
            size,
            context=self.context,
            verifier=verifier)

        # NOTE(bpoulos): if verification fails, exception will be raised
        if verifier:
            try:
                verifier.verify()
                LOG.info(_LI("Successfully verified signature for image %s"),
                         self.image.image_id)
            except crypto_exception.InvalidSignature:
                raise cursive_exception.SignatureVerificationError(
                    _('Signature verification failed')
                )

        self.image.locations = [{'url': location, 'metadata': loc_meta,
                                 'status': 'active'}]
        self.image.size = size
        self.image.checksum = checksum
        self.image.status = 'active'

    def get_data(self, offset=0, chunk_size=None):
        if not self.image.locations:
            # NOTE(mclaren): This is the only set of arguments
            # which work with this exception currently, see:
            # https://bugs.launchpad.net/glance-store/+bug/1501443
            # When the above glance_store bug is fixed we can
            # add a msg as usual.
            raise store.NotFound(image=None)
        err = None
        for loc in self.image.locations:
            try:
                data, size = self.store_api.get_from_backend(
                    loc['url'],
                    offset=offset,
                    chunk_size=chunk_size,
                    context=self.context)

                return data
            except Exception as e:
                LOG.warn(_LW('Get image %(id)s data failed: '
                             '%(err)s.')
                         % {'id': self.image.image_id,
                            'err': encodeutils.exception_to_unicode(e)})
                err = e
        # tried all locations
        LOG.error(_LE('Glance tried all active locations to get data for '
                      'image %s but all have failed.') % self.image.image_id)
        raise err


class ImageMemberRepoProxy(glance.domain.proxy.Repo):
    def __init__(self, repo, image, context, store_api):
        self.repo = repo
        self.image = image
        self.context = context
        self.store_api = store_api
        super(ImageMemberRepoProxy, self).__init__(repo)

    def _set_acls(self):
        public = self.image.visibility == 'public'
        if self.image.locations and not public:
            member_ids = [m.member_id for m in self.repo.list()]
            for location in self.image.locations:
                self.store_api.set_acls(location['url'], public=public,
                                        read_tenants=member_ids,
                                        context=self.context)

    def add(self, member):
        super(ImageMemberRepoProxy, self).add(member)
        self._set_acls()

    def remove(self, member):
        super(ImageMemberRepoProxy, self).remove(member)
        self._set_acls()
glance-16.0.0/glance/api/0000775000175100017510000000000013245511661015061 5ustar zuulzuul00000000000000
glance-16.0.0/glance/api/policy.py0000666000175100017510000006317313245511421016740 0ustar zuulzuul00000000000000
# Copyright (c) 2011 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the
# License for the specific language governing permissions and limitations
# under the License.

"""Policy Engine For Glance"""

import collections
import copy

from oslo_config import cfg
from oslo_log import log as logging
from oslo_policy import policy

from glance.common import exception
import glance.domain.proxy
from glance.i18n import _


LOG = logging.getLogger(__name__)
CONF = cfg.CONF

DEFAULT_RULES = policy.Rules.from_dict({
    'context_is_admin': 'role:admin',
    'default': 'role:admin',
    'manage_image_cache': 'role:admin',
})


class Enforcer(policy.Enforcer):
    """Responsible for loading and enforcing rules"""

    def __init__(self):
        if CONF.find_file(CONF.oslo_policy.policy_file):
            kwargs = dict(rules=None, use_conf=True)
        else:
            kwargs = dict(rules=DEFAULT_RULES, use_conf=False)
        super(Enforcer, self).__init__(CONF, overwrite=False, **kwargs)

    def add_rules(self, rules):
        """Add new rules to the Rules object"""
        self.set_rules(rules, overwrite=False, use_conf=self.use_conf)

    def enforce(self, context, action, target):
        """Verifies that the action is valid on the target in this context.

        :param context: Glance request context
        :param action: String representing the action to be checked
        :param target: Dictionary representing the object of the action.
        :raises: `glance.common.exception.Forbidden`
        :returns: A non-False value if access is allowed.
        """
        return super(Enforcer, self).enforce(action,
                                             target,
                                             context.to_policy_values(),
                                             do_raise=True,
                                             exc=exception.Forbidden,
                                             action=action)

    def check(self, context, action, target):
        """Verifies that the action is valid on the target in this context.

        :param context: Glance request context
        :param action: String representing the action to be checked
        :param target: Dictionary representing the object of the action.
        :returns: A non-False value if access is allowed.
        """
        return super(Enforcer, self).enforce(action,
                                             target,
                                             context.to_policy_values())

    def check_is_admin(self, context):
        """Check if the given context is associated with an admin role,
        as defined via the 'context_is_admin' RBAC rule.

        :param context: Glance request context
        :returns: A non-False value if context role is admin.
        """
        return self.check(context, 'context_is_admin', context.to_dict())


class ImageRepoProxy(glance.domain.proxy.Repo):

    def __init__(self, image_repo, context, policy):
        self.context = context
        self.policy = policy
        self.image_repo = image_repo
        proxy_kwargs = {'context': self.context, 'policy': self.policy}
        super(ImageRepoProxy, self).__init__(image_repo,
                                             item_proxy_class=ImageProxy,
                                             item_proxy_kwargs=proxy_kwargs)

    def get(self, image_id):
        try:
            image = super(ImageRepoProxy, self).get(image_id)
        except exception.NotFound:
            self.policy.enforce(self.context, 'get_image', {})
            raise
        else:
            self.policy.enforce(self.context, 'get_image',
                                dict(ImageTarget(image)))
        return image

    def list(self, *args, **kwargs):
        self.policy.enforce(self.context, 'get_images', {})
        return super(ImageRepoProxy, self).list(*args, **kwargs)

    def save(self, image, from_state=None):
        self.policy.enforce(self.context, 'modify_image', dict(image.target))
        return super(ImageRepoProxy, self).save(image, from_state=from_state)

    def add(self, image):
        self.policy.enforce(self.context, 'add_image', dict(image.target))
        return super(ImageRepoProxy, self).add(image)


def _enforce_image_visibility(policy, context, visibility, target):
    if visibility == 'public':
        policy.enforce(context, 'publicize_image', target)
    elif visibility == 'community':
        policy.enforce(context, 'communitize_image', target)


class ImageProxy(glance.domain.proxy.Image):

    def __init__(self, image, context, policy):
        self.image = image
        self.target = ImageTarget(image)
        self.context = context
        self.policy = policy
        super(ImageProxy, self).__init__(image)

    @property
    def visibility(self):
        return self.image.visibility

    @visibility.setter
    def visibility(self, value):
        _enforce_image_visibility(self.policy, self.context, value,
                                  self.target)
        self.image.visibility = value

    @property
    def locations(self):
        return ImageLocationsProxy(self.image.locations,
                                   self.context, self.policy)

    @locations.setter
    def locations(self, value):
        if not isinstance(value, (list, ImageLocationsProxy)):
            raise exception.Invalid(_('Invalid locations: %s') % value)
        self.policy.enforce(self.context, 'set_image_location', self.target)
        new_locations = list(value)
        if (set([loc['url'] for loc in self.image.locations]) -
                set([loc['url'] for loc in new_locations])):
            self.policy.enforce(self.context, 'delete_image_location',
                                self.target)
        self.image.locations = new_locations

    def delete(self):
        self.policy.enforce(self.context, 'delete_image', dict(self.target))
        return self.image.delete()

    def deactivate(self):
        LOG.debug('Attempting deactivate')
        target = ImageTarget(self.image)
        self.policy.enforce(self.context, 'deactivate', target=target)
        LOG.debug('Deactivate allowed, continue')
        self.image.deactivate()

    def reactivate(self):
        LOG.debug('Attempting reactivate')
        target = ImageTarget(self.image)
        self.policy.enforce(self.context, 'reactivate', target=target)
        LOG.debug('Reactivate allowed, continue')
        self.image.reactivate()

    def get_data(self, *args, **kwargs):
        self.policy.enforce(self.context, 'download_image', self.target)
        return self.image.get_data(*args, **kwargs)

    def set_data(self, *args, **kwargs):
        return self.image.set_data(*args, **kwargs)


class ImageMemberProxy(glance.domain.proxy.ImageMember):

    def __init__(self, image_member, context, policy):
        super(ImageMemberProxy, self).__init__(image_member)
        self.image_member = image_member
        self.context = context
        self.policy = policy


class ImageFactoryProxy(glance.domain.proxy.ImageFactory):

    def __init__(self, image_factory, context, policy):
        self.image_factory = image_factory
        self.context = context
        self.policy = policy
        proxy_kwargs = {'context': self.context, 'policy': self.policy}
        super(ImageFactoryProxy, self).__init__(image_factory,
                                                proxy_class=ImageProxy,
                                                proxy_kwargs=proxy_kwargs)

    def new_image(self, **kwargs):
        _enforce_image_visibility(self.policy, self.context,
                                  kwargs.get('visibility'), {})
        return super(ImageFactoryProxy, self).new_image(**kwargs)


class ImageMemberFactoryProxy(glance.domain.proxy.ImageMembershipFactory):

    def __init__(self, member_factory, context, policy):
        super(ImageMemberFactoryProxy, self).__init__(
            member_factory,
            proxy_class=ImageMemberProxy,
            proxy_kwargs={'context': context, 'policy': policy})


class ImageMemberRepoProxy(glance.domain.proxy.Repo):

    def __init__(self, member_repo, image, context, policy):
        self.member_repo = member_repo
        self.image = image
        self.target = ImageTarget(image)
        self.context = context
        self.policy = policy

    def add(self, member):
        self.policy.enforce(self.context, 'add_member', self.target)
        self.member_repo.add(member)

    def get(self, member_id):
        self.policy.enforce(self.context, 'get_member', self.target)
        return self.member_repo.get(member_id)

    def save(self, member, from_state=None):
        self.policy.enforce(self.context, 'modify_member', self.target)
        self.member_repo.save(member, from_state=from_state)

    def list(self, *args, **kwargs):
        self.policy.enforce(self.context, 'get_members', self.target)
        return self.member_repo.list(*args, **kwargs)

    def remove(self, member):
        self.policy.enforce(self.context, 'delete_member', self.target)
        self.member_repo.remove(member)


class ImageLocationsProxy(object):

    __hash__ = None

    def __init__(self, locations, context, policy):
        self.locations = locations
        self.context = context
        self.policy = policy

    def __copy__(self):
        return type(self)(self.locations, self.context, self.policy)

    def __deepcopy__(self, memo):
        # NOTE(zhiyan): Only copy location entries, others can be reused.
        return type(self)(copy.deepcopy(self.locations, memo),
                          self.context, self.policy)

    def _get_checker(action, func_name):
        def _checker(self, *args, **kwargs):
            self.policy.enforce(self.context, action, {})
            method = getattr(self.locations, func_name)
            return method(*args, **kwargs)
        return _checker

    count = _get_checker('get_image_location', 'count')
    index = _get_checker('get_image_location', 'index')
    __getitem__ = _get_checker('get_image_location', '__getitem__')
    __contains__ = _get_checker('get_image_location', '__contains__')
    __len__ = _get_checker('get_image_location', '__len__')
    __cast = _get_checker('get_image_location', '__cast')
    __cmp__ = _get_checker('get_image_location', '__cmp__')
    __iter__ = _get_checker('get_image_location', '__iter__')

    append = _get_checker('set_image_location', 'append')
    extend = _get_checker('set_image_location', 'extend')
    insert = _get_checker('set_image_location', 'insert')
    reverse = _get_checker('set_image_location', 'reverse')
    __iadd__ = _get_checker('set_image_location', '__iadd__')
    __setitem__ = _get_checker('set_image_location', '__setitem__')

    pop = _get_checker('delete_image_location', 'pop')
    remove = _get_checker('delete_image_location', 'remove')
    __delitem__ = _get_checker('delete_image_location', '__delitem__')
    __delslice__ = _get_checker('delete_image_location', '__delslice__')

    del _get_checker


class TaskProxy(glance.domain.proxy.Task):

    def __init__(self, task, context, policy):
        self.task = task
        self.context = context
        self.policy = policy
        super(TaskProxy, self).__init__(task)


class TaskStubProxy(glance.domain.proxy.TaskStub):

    def __init__(self, task_stub, context, policy):
        self.task_stub = task_stub
        self.context = context
        self.policy = policy
        super(TaskStubProxy, self).__init__(task_stub)


class TaskRepoProxy(glance.domain.proxy.TaskRepo):

    def __init__(self, task_repo, context, task_policy):
        self.context = context
        self.policy = task_policy
        self.task_repo = task_repo
        proxy_kwargs = {'context': self.context, 'policy': self.policy}
        super(TaskRepoProxy, self).__init__(
            task_repo,
            task_proxy_class=TaskProxy,
            task_proxy_kwargs=proxy_kwargs)

    def get(self, task_id):
        self.policy.enforce(self.context, 'get_task', {})
        return super(TaskRepoProxy, self).get(task_id)

    def add(self, task):
        self.policy.enforce(self.context, 'add_task', {})
        super(TaskRepoProxy, self).add(task)

    def save(self, task):
        self.policy.enforce(self.context, 'modify_task', {})
        super(TaskRepoProxy, self).save(task)


class TaskStubRepoProxy(glance.domain.proxy.TaskStubRepo):

    def __init__(self, task_stub_repo, context, task_policy):
        self.context = context
        self.policy = task_policy
        self.task_stub_repo = task_stub_repo
        proxy_kwargs = {'context': self.context, 'policy': self.policy}
        super(TaskStubRepoProxy, self).__init__(
            task_stub_repo,
            task_stub_proxy_class=TaskStubProxy,
            task_stub_proxy_kwargs=proxy_kwargs)

    def list(self, *args, **kwargs):
        self.policy.enforce(self.context, 'get_tasks', {})
        return super(TaskStubRepoProxy, self).list(*args, **kwargs)


class TaskFactoryProxy(glance.domain.proxy.TaskFactory):

    def __init__(self, task_factory, context, policy):
        self.task_factory = task_factory
        self.context = context
        self.policy = policy
        proxy_kwargs = {'context': self.context, 'policy': self.policy}
        super(TaskFactoryProxy, self).__init__(
            task_factory,
            task_proxy_class=TaskProxy,
            task_proxy_kwargs=proxy_kwargs)


class ImageTarget(collections.Mapping):
    SENTINEL = object()

    def __init__(self, target):
        """Initialize the object

        :param target: Object being targeted
        """
        self.target = target
        self._target_keys = [k for k in dir(ImageProxy)
                             if not k.startswith('__')
                             if not callable(getattr(ImageProxy, k))]

    def __getitem__(self, key):
        """Return the value of 'key' from the target.

        If the target has the attribute 'key', return it.
:param key: value to retrieve """ key = self.key_transforms(key) value = getattr(self.target, key, self.SENTINEL) if value is self.SENTINEL: extra_properties = getattr(self.target, 'extra_properties', None) if extra_properties is not None: value = extra_properties[key] else: value = None return value def get(self, key, default=None): try: return self.__getitem__(key) except KeyError: return default def __len__(self): length = len(self._target_keys) length += len(getattr(self.target, 'extra_properties', {})) return length def __iter__(self): for key in self._target_keys: yield key for key in getattr(self.target, 'extra_properties', {}).keys(): yield key def key_transforms(self, key): if key == 'id': key = 'image_id' return key # Metadef Namespace classes class MetadefNamespaceProxy(glance.domain.proxy.MetadefNamespace): def __init__(self, namespace, context, policy): self.namespace_input = namespace self.context = context self.policy = policy super(MetadefNamespaceProxy, self).__init__(namespace) class MetadefNamespaceRepoProxy(glance.domain.proxy.MetadefNamespaceRepo): def __init__(self, namespace_repo, context, namespace_policy): self.context = context self.policy = namespace_policy self.namespace_repo = namespace_repo proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefNamespaceRepoProxy, self).__init__(namespace_repo, namespace_proxy_class=MetadefNamespaceProxy, namespace_proxy_kwargs=proxy_kwargs) def get(self, namespace): self.policy.enforce(self.context, 'get_metadef_namespace', {}) return super(MetadefNamespaceRepoProxy, self).get(namespace) def list(self, *args, **kwargs): self.policy.enforce(self.context, 'get_metadef_namespaces', {}) return super(MetadefNamespaceRepoProxy, self).list(*args, **kwargs) def save(self, namespace): self.policy.enforce(self.context, 'modify_metadef_namespace', {}) return super(MetadefNamespaceRepoProxy, self).save(namespace) def add(self, namespace): self.policy.enforce(self.context, 
'add_metadef_namespace', {}) return super(MetadefNamespaceRepoProxy, self).add(namespace) class MetadefNamespaceFactoryProxy( glance.domain.proxy.MetadefNamespaceFactory): def __init__(self, meta_namespace_factory, context, policy): self.meta_namespace_factory = meta_namespace_factory self.context = context self.policy = policy proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefNamespaceFactoryProxy, self).__init__( meta_namespace_factory, meta_namespace_proxy_class=MetadefNamespaceProxy, meta_namespace_proxy_kwargs=proxy_kwargs) # Metadef Object classes class MetadefObjectProxy(glance.domain.proxy.MetadefObject): def __init__(self, meta_object, context, policy): self.meta_object = meta_object self.context = context self.policy = policy super(MetadefObjectProxy, self).__init__(meta_object) class MetadefObjectRepoProxy(glance.domain.proxy.MetadefObjectRepo): def __init__(self, object_repo, context, object_policy): self.context = context self.policy = object_policy self.object_repo = object_repo proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefObjectRepoProxy, self).__init__(object_repo, object_proxy_class=MetadefObjectProxy, object_proxy_kwargs=proxy_kwargs) def get(self, namespace, object_name): self.policy.enforce(self.context, 'get_metadef_object', {}) return super(MetadefObjectRepoProxy, self).get(namespace, object_name) def list(self, *args, **kwargs): self.policy.enforce(self.context, 'get_metadef_objects', {}) return super(MetadefObjectRepoProxy, self).list(*args, **kwargs) def save(self, meta_object): self.policy.enforce(self.context, 'modify_metadef_object', {}) return super(MetadefObjectRepoProxy, self).save(meta_object) def add(self, meta_object): self.policy.enforce(self.context, 'add_metadef_object', {}) return super(MetadefObjectRepoProxy, self).add(meta_object) class MetadefObjectFactoryProxy(glance.domain.proxy.MetadefObjectFactory): def __init__(self, meta_object_factory, context, policy): 
self.meta_object_factory = meta_object_factory self.context = context self.policy = policy proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefObjectFactoryProxy, self).__init__( meta_object_factory, meta_object_proxy_class=MetadefObjectProxy, meta_object_proxy_kwargs=proxy_kwargs) # Metadef ResourceType classes class MetadefResourceTypeProxy(glance.domain.proxy.MetadefResourceType): def __init__(self, meta_resource_type, context, policy): self.meta_resource_type = meta_resource_type self.context = context self.policy = policy super(MetadefResourceTypeProxy, self).__init__(meta_resource_type) class MetadefResourceTypeRepoProxy( glance.domain.proxy.MetadefResourceTypeRepo): def __init__(self, resource_type_repo, context, resource_type_policy): self.context = context self.policy = resource_type_policy self.resource_type_repo = resource_type_repo proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefResourceTypeRepoProxy, self).__init__( resource_type_repo, resource_type_proxy_class=MetadefResourceTypeProxy, resource_type_proxy_kwargs=proxy_kwargs) def list(self, *args, **kwargs): self.policy.enforce(self.context, 'list_metadef_resource_types', {}) return super(MetadefResourceTypeRepoProxy, self).list(*args, **kwargs) def get(self, *args, **kwargs): self.policy.enforce(self.context, 'get_metadef_resource_type', {}) return super(MetadefResourceTypeRepoProxy, self).get(*args, **kwargs) def add(self, resource_type): self.policy.enforce(self.context, 'add_metadef_resource_type_association', {}) return super(MetadefResourceTypeRepoProxy, self).add(resource_type) class MetadefResourceTypeFactoryProxy( glance.domain.proxy.MetadefResourceTypeFactory): def __init__(self, resource_type_factory, context, policy): self.resource_type_factory = resource_type_factory self.context = context self.policy = policy proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefResourceTypeFactoryProxy, self).__init__( 
resource_type_factory, resource_type_proxy_class=MetadefResourceTypeProxy, resource_type_proxy_kwargs=proxy_kwargs) # Metadef namespace properties classes class MetadefPropertyProxy(glance.domain.proxy.MetadefProperty): def __init__(self, namespace_property, context, policy): self.namespace_property = namespace_property self.context = context self.policy = policy super(MetadefPropertyProxy, self).__init__(namespace_property) class MetadefPropertyRepoProxy(glance.domain.proxy.MetadefPropertyRepo): def __init__(self, property_repo, context, object_policy): self.context = context self.policy = object_policy self.property_repo = property_repo proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefPropertyRepoProxy, self).__init__( property_repo, property_proxy_class=MetadefPropertyProxy, property_proxy_kwargs=proxy_kwargs) def get(self, namespace, property_name): self.policy.enforce(self.context, 'get_metadef_property', {}) return super(MetadefPropertyRepoProxy, self).get(namespace, property_name) def list(self, *args, **kwargs): self.policy.enforce(self.context, 'get_metadef_properties', {}) return super(MetadefPropertyRepoProxy, self).list( *args, **kwargs) def save(self, namespace_property): self.policy.enforce(self.context, 'modify_metadef_property', {}) return super(MetadefPropertyRepoProxy, self).save( namespace_property) def add(self, namespace_property): self.policy.enforce(self.context, 'add_metadef_property', {}) return super(MetadefPropertyRepoProxy, self).add( namespace_property) class MetadefPropertyFactoryProxy(glance.domain.proxy.MetadefPropertyFactory): def __init__(self, namespace_property_factory, context, policy): self.namespace_property_factory = namespace_property_factory self.context = context self.policy = policy proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefPropertyFactoryProxy, self).__init__( namespace_property_factory, property_proxy_class=MetadefPropertyProxy, 
property_proxy_kwargs=proxy_kwargs) # Metadef Tag classes class MetadefTagProxy(glance.domain.proxy.MetadefTag): def __init__(self, meta_tag, context, policy): self.context = context self.policy = policy super(MetadefTagProxy, self).__init__(meta_tag) class MetadefTagRepoProxy(glance.domain.proxy.MetadefTagRepo): def __init__(self, tag_repo, context, tag_policy): self.context = context self.policy = tag_policy self.tag_repo = tag_repo proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefTagRepoProxy, self).__init__(tag_repo, tag_proxy_class=MetadefTagProxy, tag_proxy_kwargs=proxy_kwargs) def get(self, namespace, tag_name): self.policy.enforce(self.context, 'get_metadef_tag', {}) return super(MetadefTagRepoProxy, self).get(namespace, tag_name) def list(self, *args, **kwargs): self.policy.enforce(self.context, 'get_metadef_tags', {}) return super(MetadefTagRepoProxy, self).list(*args, **kwargs) def save(self, meta_tag): self.policy.enforce(self.context, 'modify_metadef_tag', {}) return super(MetadefTagRepoProxy, self).save(meta_tag) def add(self, meta_tag): self.policy.enforce(self.context, 'add_metadef_tag', {}) return super(MetadefTagRepoProxy, self).add(meta_tag) def add_tags(self, meta_tags): self.policy.enforce(self.context, 'add_metadef_tags', {}) return super(MetadefTagRepoProxy, self).add_tags(meta_tags) class MetadefTagFactoryProxy(glance.domain.proxy.MetadefTagFactory): def __init__(self, meta_tag_factory, context, policy): self.meta_tag_factory = meta_tag_factory self.context = context self.policy = policy proxy_kwargs = {'context': self.context, 'policy': self.policy} super(MetadefTagFactoryProxy, self).__init__( meta_tag_factory, meta_tag_proxy_class=MetadefTagProxy, meta_tag_proxy_kwargs=proxy_kwargs) glance-16.0.0/glance/api/v2/0000775000175100017510000000000013245511661015410 5ustar zuulzuul00000000000000glance-16.0.0/glance/api/v2/image_members.py0000666000175100017510000003464213245511421020563 0ustar zuulzuul00000000000000# 
Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import glance_store from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import encodeutils import six from six.moves import http_client as http import webob from glance.api import policy from glance.common import exception from glance.common import timeutils from glance.common import utils from glance.common import wsgi import glance.db import glance.gateway from glance.i18n import _ import glance.notifier import glance.schema LOG = logging.getLogger(__name__) class ImageMembersController(object): def __init__(self, db_api=None, policy_enforcer=None, notifier=None, store_api=None): self.db_api = db_api or glance.db.get_api() self.policy = policy_enforcer or policy.Enforcer() self.notifier = notifier or glance.notifier.Notifier() self.store_api = store_api or glance_store self.gateway = glance.gateway.Gateway(self.db_api, self.store_api, self.notifier, self.policy) def _get_member_repo(self, req, image): try: # For public, private, and community images, a forbidden exception # with message "Only shared images have members." is thrown. 
return self.gateway.get_member_repo(image, req.context) except exception.Forbidden as e: msg = (_("Error fetching members of image %(image_id)s: " "%(inner_msg)s") % {"image_id": image.image_id, "inner_msg": e.msg}) LOG.warning(msg) raise webob.exc.HTTPForbidden(explanation=msg) def _lookup_image(self, req, image_id): image_repo = self.gateway.get_repo(req.context) try: return image_repo.get(image_id) except (exception.NotFound): msg = _("Image %s not found.") % image_id LOG.warning(msg) raise webob.exc.HTTPNotFound(explanation=msg) except exception.Forbidden: msg = _("You are not authorized to lookup image %s.") % image_id LOG.warning(msg) raise webob.exc.HTTPForbidden(explanation=msg) def _lookup_member(self, req, image, member_id): member_repo = self._get_member_repo(req, image) try: return member_repo.get(member_id) except (exception.NotFound): msg = (_("%(m_id)s not found in the member list of the image " "%(i_id)s.") % {"m_id": member_id, "i_id": image.image_id}) LOG.warning(msg) raise webob.exc.HTTPNotFound(explanation=msg) except exception.Forbidden: msg = (_("You are not authorized to lookup the members of the " "image %s.") % image.image_id) LOG.warning(msg) raise webob.exc.HTTPForbidden(explanation=msg) @utils.mutating def create(self, req, image_id, member_id): """ Adds a membership to the image. 
:param req: the Request object coming from the wsgi layer :param image_id: the image identifier :param member_id: the member identifier :returns: The response body is a mapping of the following form :: {'member_id': , 'image_id': , 'status': 'created_at': .., 'updated_at': ..} """ image = self._lookup_image(req, image_id) member_repo = self._get_member_repo(req, image) image_member_factory = self.gateway.get_image_member_factory( req.context) try: new_member = image_member_factory.new_image_member(image, member_id) member_repo.add(new_member) return new_member except exception.Invalid as e: raise webob.exc.HTTPBadRequest(explanation=e.msg) except exception.Forbidden: msg = _("Not allowed to create members for image %s.") % image_id LOG.warning(msg) raise webob.exc.HTTPForbidden(explanation=msg) except exception.Duplicate: msg = _("Member %(member_id)s is duplicated for image " "%(image_id)s") % {"member_id": member_id, "image_id": image_id} LOG.warning(msg) raise webob.exc.HTTPConflict(explanation=msg) except exception.ImageMemberLimitExceeded as e: msg = (_("Image member limit exceeded for image %(id)s: %(e)s:") % {"id": image_id, "e": encodeutils.exception_to_unicode(e)}) LOG.warning(msg) raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg) @utils.mutating def update(self, req, image_id, member_id, status): """ Update the status of a member for a given image. 
:param req: the Request object coming from the wsgi layer :param image_id: the image identifier :param member_id: the member identifier :param status: the status of a member :returns: The response body is a mapping of the following form :: {'member_id': , 'image_id': , 'status': , 'created_at': .., 'updated_at': ..} """ image = self._lookup_image(req, image_id) member_repo = self._get_member_repo(req, image) member = self._lookup_member(req, image, member_id) try: member.status = status member_repo.save(member) return member except exception.Forbidden: msg = _("Not allowed to update members for image %s.") % image_id LOG.warning(msg) raise webob.exc.HTTPForbidden(explanation=msg) except ValueError as e: msg = (_("Incorrect request: %s") % encodeutils.exception_to_unicode(e)) LOG.warning(msg) raise webob.exc.HTTPBadRequest(explanation=msg) def index(self, req, image_id): """ Return a list of dictionaries indicating the members of the image, i.e., those tenants the image is shared with. :param req: the Request object coming from the wsgi layer :param image_id: The image identifier :returns: The response body is a mapping of the following form :: {'members': [ {'member_id': , 'image_id': , 'status': , 'created_at': .., 'updated_at': ..}, .. ]} """ image = self._lookup_image(req, image_id) member_repo = self._get_member_repo(req, image) members = [] try: for member in member_repo.list(): members.append(member) except exception.Forbidden: msg = _("Not allowed to list members for image %s.") % image_id LOG.warning(msg) raise webob.exc.HTTPForbidden(explanation=msg) return dict(members=members) def show(self, req, image_id, member_id): """ Returns the membership of the tenant wrt to the image_id specified. 
:param req: the Request object coming from the wsgi layer :param image_id: The image identifier :returns: The response body is a mapping of the following form :: {'member_id': , 'image_id': , 'status': 'created_at': .., 'updated_at': ..} """ try: image = self._lookup_image(req, image_id) return self._lookup_member(req, image, member_id) except webob.exc.HTTPForbidden as e: # Convert Forbidden to NotFound to prevent information # leakage. raise webob.exc.HTTPNotFound(explanation=e.explanation) @utils.mutating def delete(self, req, image_id, member_id): """ Removes a membership from the image. """ image = self._lookup_image(req, image_id) member_repo = self._get_member_repo(req, image) member = self._lookup_member(req, image, member_id) try: member_repo.remove(member) return webob.Response(body='', status=http.NO_CONTENT) except exception.Forbidden: msg = _("Not allowed to delete members for image %s.") % image_id LOG.warning(msg) raise webob.exc.HTTPForbidden(explanation=msg) class RequestDeserializer(wsgi.JSONRequestDeserializer): def __init__(self): super(RequestDeserializer, self).__init__() def _get_request_body(self, request): output = super(RequestDeserializer, self).default(request) if 'body' not in output: msg = _('Body expected in request.') raise webob.exc.HTTPBadRequest(explanation=msg) return output['body'] def create(self, request): body = self._get_request_body(request) try: member_id = body['member'] if not member_id: raise ValueError() except KeyError: msg = _("Member to be added not specified") raise webob.exc.HTTPBadRequest(explanation=msg) except ValueError: msg = _("Member can't be empty") raise webob.exc.HTTPBadRequest(explanation=msg) except TypeError: msg = _('Expected a member in the form: ' '{"member": "image_id"}') raise webob.exc.HTTPBadRequest(explanation=msg) return dict(member_id=member_id) def update(self, request): body = self._get_request_body(request) try: status = body['status'] except KeyError: msg = _("Status not specified") 
raise webob.exc.HTTPBadRequest(explanation=msg) except TypeError: msg = _('Expected a status in the form: ' '{"status": "status"}') raise webob.exc.HTTPBadRequest(explanation=msg) return dict(status=status) class ResponseSerializer(wsgi.JSONResponseSerializer): def __init__(self, schema=None): super(ResponseSerializer, self).__init__() self.schema = schema or get_schema() def _format_image_member(self, member): member_view = {} attributes = ['member_id', 'image_id', 'status'] for key in attributes: member_view[key] = getattr(member, key) member_view['created_at'] = timeutils.isotime(member.created_at) member_view['updated_at'] = timeutils.isotime(member.updated_at) member_view['schema'] = '/v2/schemas/member' member_view = self.schema.filter(member_view) return member_view def create(self, response, image_member): image_member_view = self._format_image_member(image_member) body = jsonutils.dumps(image_member_view, ensure_ascii=False) response.unicode_body = six.text_type(body) response.content_type = 'application/json' def update(self, response, image_member): image_member_view = self._format_image_member(image_member) body = jsonutils.dumps(image_member_view, ensure_ascii=False) response.unicode_body = six.text_type(body) response.content_type = 'application/json' def index(self, response, image_members): image_members = image_members['members'] image_members_view = [] for image_member in image_members: image_member_view = self._format_image_member(image_member) image_members_view.append(image_member_view) totalview = dict(members=image_members_view) totalview['schema'] = '/v2/schemas/members' body = jsonutils.dumps(totalview, ensure_ascii=False) response.unicode_body = six.text_type(body) response.content_type = 'application/json' def show(self, response, image_member): image_member_view = self._format_image_member(image_member) body = jsonutils.dumps(image_member_view, ensure_ascii=False) response.unicode_body = six.text_type(body) response.content_type = 
'application/json' _MEMBER_SCHEMA = { 'member_id': { 'type': 'string', 'description': _('An identifier for the image member (tenantId)') }, 'image_id': { 'type': 'string', 'description': _('An identifier for the image'), 'pattern': ('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}' '-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$'), }, 'created_at': { 'type': 'string', 'description': _('Date and time of image member creation'), # TODO(brian-rosmaita): our jsonschema library doesn't seem to like the # format attribute, figure out why (and also fix in images.py) # 'format': 'date-time', }, 'updated_at': { 'type': 'string', 'description': _('Date and time of last modification of image member'), # 'format': 'date-time', }, 'status': { 'type': 'string', 'description': _('The status of this image member'), 'enum': [ 'pending', 'accepted', 'rejected' ] }, 'schema': { 'readOnly': True, 'type': 'string' } } def get_schema(): properties = copy.deepcopy(_MEMBER_SCHEMA) schema = glance.schema.Schema('member', properties) return schema def get_collection_schema(): member_schema = get_schema() return glance.schema.CollectionSchema('members', member_schema) def create_resource(): """Image Members resource factory method""" deserializer = RequestDeserializer() serializer = ResponseSerializer() controller = ImageMembersController() return wsgi.Resource(controller, deserializer, serializer) glance-16.0.0/glance/api/v2/image_data.py0000666000175100017510000005174613245511421020046 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from cursive import exception as cursive_exception import glance_store from glance_store import backend from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils import six import webob.exc import glance.api.policy from glance.common import exception from glance.common import trust_auth from glance.common import utils from glance.common import wsgi import glance.db import glance.gateway from glance.i18n import _, _LE, _LI import glance.notifier LOG = logging.getLogger(__name__) CONF = cfg.CONF class ImageDataController(object): def __init__(self, db_api=None, store_api=None, policy_enforcer=None, notifier=None): db_api = db_api or glance.db.get_api() store_api = store_api or glance_store notifier = notifier or glance.notifier.Notifier() self.policy = policy_enforcer or glance.api.policy.Enforcer() self.gateway = glance.gateway.Gateway(db_api, store_api, notifier, self.policy) def _restore(self, image_repo, image): """ Restore the image to queued status. :param image_repo: The instance of ImageRepo :param image: The image will be restored """ try: if image_repo and image: image.status = 'queued' image_repo.save(image) except Exception as e: msg = (_LE("Unable to restore image %(image_id)s: %(e)s") % {'image_id': image.image_id, 'e': encodeutils.exception_to_unicode(e)}) LOG.exception(msg) def _unstage(self, image_repo, image, staging_store): """ Restore the image to queued status and remove data from staging. 
:param image_repo: The instance of ImageRepo :param image: The image will be restored :param staging_store: The store used for staging """ loc = glance_store.location.get_location_from_uri(str( CONF.node_staging_uri + '/' + image.image_id)) try: staging_store.delete(loc) except glance_store.exceptions.NotFound: pass finally: self._restore(image_repo, image) def _delete(self, image_repo, image): """Delete the image. :param image_repo: The instance of ImageRepo :param image: The image that will be deleted """ try: if image_repo and image: image.status = 'killed' image_repo.save(image) except Exception as e: msg = (_LE("Unable to delete image %(image_id)s: %(e)s") % {'image_id': image.image_id, 'e': encodeutils.exception_to_unicode(e)}) LOG.exception(msg) @utils.mutating def upload(self, req, image_id, data, size): image_repo = self.gateway.get_repo(req.context) image = None refresher = None cxt = req.context try: self.policy.enforce(cxt, 'upload_image', {}) image = image_repo.get(image_id) image.status = 'saving' try: if CONF.data_api == 'glance.db.registry.api': # create a trust if backend is registry try: # request user plugin for current token user_plugin = req.environ.get('keystone.token_auth') roles = [] # use roles from request environment because they # are not transformed to lower-case unlike cxt.roles for role_info in req.environ.get( 'keystone.token_info')['token']['roles']: roles.append(role_info['name']) refresher = trust_auth.TokenRefresher(user_plugin, cxt.tenant, roles) except Exception as e: LOG.info(_LI("Unable to create trust: %s " "Use the existing user token."), encodeutils.exception_to_unicode(e)) image_repo.save(image, from_state='queued') image.set_data(data, size) try: image_repo.save(image, from_state='saving') except exception.NotAuthenticated: if refresher is not None: # request a new token to update an image in database cxt.auth_token = refresher.refresh_token() image_repo = self.gateway.get_repo(req.context) image_repo.save(image, 
                                    from_state='saving')
                else:
                    raise

            try:
                # release resources required for re-auth
                if refresher is not None:
                    refresher.release_resources()
            except Exception as e:
                LOG.info(_LI("Unable to delete trust %(trust)s: %(msg)s"),
                         {"trust": refresher.trust_id,
                          "msg": encodeutils.exception_to_unicode(e)})

        except (glance_store.NotFound, exception.ImageNotFound,
                exception.Conflict):
            msg = (_("Image %s could not be found after upload. "
                     "The image may have been deleted during the "
                     "upload, cleaning up the chunks uploaded.") % image_id)
            LOG.warn(msg)
            # NOTE(sridevi): Cleaning up the uploaded chunks.
            try:
                image.delete()
            except exception.ImageNotFound:
                # NOTE(sridevi): Ignore this exception
                pass
            raise webob.exc.HTTPGone(explanation=msg,
                                     request=req,
                                     content_type='text/plain')

        except exception.NotAuthenticated:
            msg = (_("Authentication error - the token may have "
                     "expired during file upload. Deleting image data for "
                     "%s.") % image_id)
            LOG.debug(msg)
            try:
                image.delete()
            except exception.NotAuthenticated:
                # NOTE: Ignore this exception
                pass
            raise webob.exc.HTTPUnauthorized(explanation=msg,
                                             request=req,
                                             content_type='text/plain')

        except ValueError as e:
            LOG.debug("Cannot save data for image %(id)s: %(e)s",
                      {'id': image_id,
                       'e': encodeutils.exception_to_unicode(e)})
            self._restore(image_repo, image)
            raise webob.exc.HTTPBadRequest(
                explanation=encodeutils.exception_to_unicode(e))

        except glance_store.StoreAddDisabled:
            msg = _("Error in store configuration. Adding images to store "
                    "is disabled.")
            LOG.exception(msg)
            self._restore(image_repo, image)
            raise webob.exc.HTTPGone(explanation=msg, request=req,
                                     content_type='text/plain')

        except exception.InvalidImageStatusTransition as e:
            msg = encodeutils.exception_to_unicode(e)
            LOG.exception(msg)
            raise webob.exc.HTTPConflict(explanation=e.msg, request=req)

        except exception.Forbidden as e:
            msg = ("Not allowed to upload image data for image %s" %
                   image_id)
            LOG.debug(msg)
            raise webob.exc.HTTPForbidden(explanation=msg, request=req)

        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)

        except glance_store.StorageFull as e:
            msg = _("Image storage media "
                    "is full: %s") % encodeutils.exception_to_unicode(e)
            LOG.error(msg)
            self._restore(image_repo, image)
            raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
                                                      request=req)

        except exception.StorageQuotaFull as e:
            msg = _("Image exceeds the storage "
                    "quota: %s") % encodeutils.exception_to_unicode(e)
            LOG.error(msg)
            self._restore(image_repo, image)
            raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
                                                      request=req)

        except exception.ImageSizeLimitExceeded as e:
            msg = _("The incoming image is "
                    "too large: %s") % encodeutils.exception_to_unicode(e)
            LOG.error(msg)
            self._restore(image_repo, image)
            raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
                                                      request=req)

        except glance_store.StorageWriteDenied as e:
            msg = _("Insufficient permissions on image "
                    "storage media: %s") % encodeutils.exception_to_unicode(e)
            LOG.error(msg)
            self._restore(image_repo, image)
            raise webob.exc.HTTPServiceUnavailable(explanation=msg,
                                                   request=req)

        except cursive_exception.SignatureVerificationError as e:
            msg = (_LE("Signature verification failed for image %(id)s: "
                       "%(e)s")
                   % {'id': image_id,
                      'e': encodeutils.exception_to_unicode(e)})
            LOG.error(msg)
            self._delete(image_repo, image)
            raise webob.exc.HTTPBadRequest(explanation=msg)

        except webob.exc.HTTPGone as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Failed to upload image data due to "
                              "HTTP error"))

        except webob.exc.HTTPError as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Failed to upload image data due to "
                              "HTTP error"))
                self._restore(image_repo, image)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Failed to upload image data due to "
                              "internal error"))
                self._restore(image_repo, image)

    @utils.mutating
    def stage(self, req, image_id, data, size):
        image_repo = self.gateway.get_repo(req.context)
        image = None

        # NOTE(jokke): this is a horrible way to do it but as long as
        # glance_store is in the shape it is, it's the only way. Don't hold
        # me accountable for it.
        def _build_staging_store():
            conf = cfg.ConfigOpts()
            backend.register_opts(conf)
            conf.set_override('filesystem_store_datadir',
                              CONF.node_staging_uri[7:],
                              group='glance_store')
            staging_store = backend._load_store(conf, 'file')
            try:
                staging_store.configure()
            except AttributeError:
                msg = _("'node_staging_uri' is not set correctly. Could not "
                        "load staging store.")
                raise exception.BadStoreUri(message=msg)
            return staging_store

        staging_store = _build_staging_store()

        try:
            image = image_repo.get(image_id)
            image.status = 'uploading'
            image_repo.save(image, from_state='queued')
            try:
                staging_store.add(
                    image_id, utils.LimitingReader(
                        utils.CooperativeReader(data),
                        CONF.image_size_cap), 0)
            except glance_store.Duplicate as e:
                msg = _("The image %s has data on staging") % image_id
                raise webob.exc.HTTPConflict(explanation=msg)

        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)

        except glance_store.StorageFull as e:
            msg = _("Image storage media "
                    "is full: %s") % encodeutils.exception_to_unicode(e)
            LOG.error(msg)
            self._unstage(image_repo, image, staging_store)
            raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
                                                      request=req)

        except exception.StorageQuotaFull as e:
            msg = _("Image exceeds the storage "
                    "quota: %s") % encodeutils.exception_to_unicode(e)
            LOG.debug(msg)
            self._unstage(image_repo, image, staging_store)
            raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
                                                      request=req)

        except exception.ImageSizeLimitExceeded as e:
            msg = _("The incoming image is "
                    "too large: %s") % encodeutils.exception_to_unicode(e)
            LOG.debug(msg)
            self._unstage(image_repo, image, staging_store)
            raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
                                                      request=req)

        except glance_store.StorageWriteDenied as e:
            msg = _("Insufficient permissions on image "
                    "storage media: %s") % encodeutils.exception_to_unicode(e)
            LOG.error(msg)
            self._unstage(image_repo, image, staging_store)
            raise webob.exc.HTTPServiceUnavailable(explanation=msg,
                                                   request=req)

        except exception.InvalidImageStatusTransition as e:
            msg = encodeutils.exception_to_unicode(e)
            LOG.debug(msg)
            raise webob.exc.HTTPConflict(explanation=e.msg, request=req)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.exception(_LE("Failed to stage image data due to "
                                  "internal error"))
                self._restore(image_repo, image)

    def download(self, req, image_id):
        image_repo = self.gateway.get_repo(req.context)
        try:
            image = image_repo.get(image_id)
            if image.status == 'deactivated' and not req.context.is_admin:
                msg = _('The requested image has been deactivated. '
                        'Image data download is forbidden.')
                raise exception.Forbidden(message=msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to download image '%s'", image_id)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        return image


class RequestDeserializer(wsgi.JSONRequestDeserializer):

    def upload(self, request):
        try:
            request.get_content_type(('application/octet-stream',))
        except exception.InvalidContentType as e:
            raise webob.exc.HTTPUnsupportedMediaType(explanation=e.msg)
        if self.is_valid_encoding(request) and self.is_valid_method(request):
            request.is_body_readable = True
        image_size = request.content_length or None
        return {'size': image_size, 'data': request.body_file}

    def stage(self, request):
        if not CONF.enable_image_import:
            msg = _("Image import is not supported at this site.")
            raise webob.exc.HTTPNotFound(explanation=msg)
        try:
            request.get_content_type(('application/octet-stream',))
        except exception.InvalidContentType as e:
            raise webob.exc.HTTPUnsupportedMediaType(explanation=e.msg)
        if self.is_valid_encoding(request) and self.is_valid_method(request):
            request.is_body_readable = True
        image_size = request.content_length or None
        return {'size': image_size, 'data': request.body_file}


class ResponseSerializer(wsgi.JSONResponseSerializer):

    def download(self, response, image):
        offset, chunk_size = 0, None
        # NOTE(dharinic): In case of a malformed range header,
        # glance/common/wsgi.py will raise HTTPRequestRangeNotSatisfiable
        # (setting status_code to 416)
        range_val = response.request.get_range_from_request(image.size)

        if range_val:
            if isinstance(range_val, webob.byterange.Range):
                response_end = image.size - 1
                # NOTE(dharinic): webob parsing is zero-indexed.
                # i.e., to download the first 5 bytes of a 10 byte image,
                # the request should be "bytes=0-4" and the response would
                # be "bytes 0-4/10".
                # Range, if validated, will never have 'start' as None.
                if range_val.start >= 0:
                    offset = range_val.start
                else:
                    # NOTE(dharinic): Negative start values need to be
                    # processed to allow suffix-length for Range requests
                    # like "bytes=-2" as per rfc7233.
                    if abs(range_val.start) < image.size:
                        offset = image.size + range_val.start

                if range_val.end is not None and range_val.end < image.size:
                    chunk_size = range_val.end - offset
                    response_end = range_val.end - 1
                else:
                    chunk_size = image.size - offset

            # NOTE(dharinic): For backward compatibility reasons, we maintain
            # support for 'Content-Range' in requests even though it's not
            # correct to use it in requests.
            elif isinstance(range_val, webob.byterange.ContentRange):
                response_end = range_val.stop - 1
                # NOTE(flaper87): if not present, both start
                # and stop will be None.
                offset = range_val.start
                chunk_size = range_val.stop - offset

            response.status_int = 206

        response.headers['Content-Type'] = 'application/octet-stream'

        try:
            # NOTE(markwash): filesystem store (and maybe others?) cause a
            # problem with the caching middleware if they are not wrapped
            # in an iterator - very strange
            response.app_iter = iter(image.get_data(offset=offset,
                                                    chunk_size=chunk_size))
            # NOTE(dharinic): In case of a full image download, when
            # chunk_size was None, reset it to image.size to set the
            # response header's Content-Length.
            if chunk_size is not None:
                response.headers['Content-Range'] = 'bytes %s-%s/%s'\
                    % (offset, response_end, image.size)
            else:
                chunk_size = image.size
        except glance_store.NotFound as e:
            raise webob.exc.HTTPNoContent(explanation=e.msg)
        except glance_store.RemoteServiceUnavailable as e:
            raise webob.exc.HTTPServiceUnavailable(explanation=e.msg)
        except (glance_store.StoreGetNotSupported,
                glance_store.StoreRandomGetNotSupported) as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to download image '%s'", image)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        # NOTE(saschpe): "response.app_iter = ..." currently resets
        # Content-MD5 (https://github.com/Pylons/webob/issues/86), so it
        # should be set afterwards for the time being.
        if image.checksum:
            response.headers['Content-MD5'] = image.checksum
        # NOTE(markwash): "response.app_iter = ..." also erroneously resets
        # the content-length
        response.headers['Content-Length'] = six.text_type(chunk_size)

    def upload(self, response, result):
        response.status_int = 204

    def stage(self, response, result):
        response.status_int = 204


def create_resource():
    """Image data resource factory method"""
    deserializer = RequestDeserializer()
    serializer = ResponseSerializer()
    controller = ImageDataController()
    return wsgi.Resource(controller, deserializer, serializer)

glance-16.0.0/glance/api/v2/model/
glance-16.0.0/glance/api/v2/model/metadef_namespace.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import wsme
from wsme.rest import json
from wsme import types

from glance.api.v2.model.metadef_object import MetadefObject
from glance.api.v2.model.metadef_property_type import PropertyType
from glance.api.v2.model.metadef_resource_type import ResourceTypeAssociation
from glance.api.v2.model.metadef_tag import MetadefTag
from glance.common.wsme_utils import WSMEModelTransformer


class Namespace(types.Base, WSMEModelTransformer):

    # Base fields
    namespace = wsme.wsattr(types.text, mandatory=True)
    display_name = wsme.wsattr(types.text, mandatory=False)
    description = wsme.wsattr(types.text, mandatory=False)
    visibility = wsme.wsattr(types.text, mandatory=False)
    protected = wsme.wsattr(bool, mandatory=False)
    owner = wsme.wsattr(types.text, mandatory=False)

    # Not using datetime since time format has to be
    # in oslo_utils.timeutils.isotime() format
    created_at = wsme.wsattr(types.text, mandatory=False)
    updated_at = wsme.wsattr(types.text, mandatory=False)

    # Contained fields
    resource_type_associations = wsme.wsattr([ResourceTypeAssociation],
                                             mandatory=False)
    properties = wsme.wsattr({types.text: PropertyType}, mandatory=False)
    objects = wsme.wsattr([MetadefObject], mandatory=False)
    tags = wsme.wsattr([MetadefTag], mandatory=False)

    # Generated fields
    self = wsme.wsattr(types.text, mandatory=False)
    schema = wsme.wsattr(types.text, mandatory=False)

    def __init__(cls, **kwargs):
        super(Namespace, cls).__init__(**kwargs)

    @staticmethod
    def to_model_properties(db_property_types):
        property_types = {}
        for db_property_type in db_property_types:
            # Convert the persisted json schema to a dict of PropertyTypes
            property_type = json.fromjson(
                PropertyType, db_property_type.schema)
            property_type_name = db_property_type.name
            property_types[property_type_name] = property_type

        return property_types


class Namespaces(types.Base, WSMEModelTransformer):

    namespaces = wsme.wsattr([Namespace], mandatory=False)

    # Pagination
    next = wsme.wsattr(types.text, mandatory=False)
    schema = wsme.wsattr(types.text, mandatory=True)
    first = wsme.wsattr(types.text, mandatory=True)

    def __init__(self, **kwargs):
        super(Namespaces, self).__init__(**kwargs)

glance-16.0.0/glance/api/v2/model/metadef_object.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import wsme
from wsme import types

from glance.api.v2.model.metadef_property_type import PropertyType
from glance.common.wsme_utils import WSMEModelTransformer


class MetadefObject(types.Base, WSMEModelTransformer):

    name = wsme.wsattr(types.text, mandatory=True)
    required = wsme.wsattr([types.text], mandatory=False)
    description = wsme.wsattr(types.text, mandatory=False)
    properties = wsme.wsattr({types.text: PropertyType}, mandatory=False)

    # Not using datetime since time format has to be
    # in oslo_utils.timeutils.isotime() format
    created_at = wsme.wsattr(types.text, mandatory=False)
    updated_at = wsme.wsattr(types.text, mandatory=False)

    # Generated fields
    self = wsme.wsattr(types.text, mandatory=False)
    schema = wsme.wsattr(types.text, mandatory=False)

    def __init__(cls, **kwargs):
        super(MetadefObject, cls).__init__(**kwargs)


class MetadefObjects(types.Base, WSMEModelTransformer):

    objects = wsme.wsattr([MetadefObject], mandatory=False)
    schema = wsme.wsattr(types.text, mandatory=True)

    def __init__(self, **kwargs):
        super(MetadefObjects, self).__init__(**kwargs)
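The metadef models above are deserialized from JSON schema blobs persisted in the database (see Namespace.to_model_properties, which uses wsme's json.fromjson). The behavior can be sketched without a wsme dependency using a stand-in class; PropertyType here is a simplified hypothetical stand-in with only two of the real model's attributes, and the row format is illustrative:

```python
import json

class PropertyType:
    """Minimal stand-in for the real WSME PropertyType model above."""
    def __init__(self, **kwargs):
        self.type = kwargs.get('type')
        self.title = kwargs.get('title')

def to_model_properties(db_property_types):
    """Mimic Namespace.to_model_properties: each DB row carries a name
    and a persisted JSON schema string; deserialize each schema into a
    model object keyed by property name."""
    property_types = {}
    for name, schema in db_property_types:
        property_types[name] = PropertyType(**json.loads(schema))
    return property_types

# Illustrative rows as (name, json_schema) pairs:
rows = [('ram_mb', '{"type": "integer", "title": "RAM (MB)"}')]
props = to_model_properties(rows)
assert props['ram_mb'].type == 'integer'
```

The real implementation differs in that wsme validates and type-coerces each attribute against the wsattr declarations; this sketch only shows the name-keyed dict shape the API returns.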
glance-16.0.0/glance/api/v2/model/metadef_tag.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import wsme
from wsme import types

from glance.common import wsme_utils


class MetadefTag(types.Base, wsme_utils.WSMEModelTransformer):

    name = wsme.wsattr(types.text, mandatory=True)

    # Not using datetime since time format has to be
    # in oslo_utils.timeutils.isotime() format
    created_at = wsme.wsattr(types.text, mandatory=False)
    updated_at = wsme.wsattr(types.text, mandatory=False)


class MetadefTags(types.Base, wsme_utils.WSMEModelTransformer):

    tags = wsme.wsattr([MetadefTag], mandatory=False)

glance-16.0.0/glance/api/v2/model/metadef_property_item_type.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import wsme
from wsme import types


class ItemType(types.Base):
    type = wsme.wsattr(types.text, mandatory=True)
    enum = wsme.wsattr([types.text], mandatory=False)

    _wsme_attr_order = ('type', 'enum')

    def __init__(self, **kwargs):
        super(ItemType, self).__init__(**kwargs)

glance-16.0.0/glance/api/v2/model/metadef_property_type.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import wsme
from wsme import types

from glance.api.v2.model.metadef_property_item_type import ItemType
from glance.common.wsme_utils import WSMEModelTransformer


class PropertyType(types.Base, WSMEModelTransformer):
    # When used in a collection of PropertyTypes, name is a dictionary key
    # and not included as a separate field.
    name = wsme.wsattr(types.text, mandatory=False)

    type = wsme.wsattr(types.text, mandatory=True)
    title = wsme.wsattr(types.text, mandatory=True)
    description = wsme.wsattr(types.text, mandatory=False)
    operators = wsme.wsattr([types.text], mandatory=False)
    default = wsme.wsattr(types.bytes, mandatory=False)
    readonly = wsme.wsattr(bool, mandatory=False)

    # fields for type = integer, number
    minimum = wsme.wsattr(int, mandatory=False)
    maximum = wsme.wsattr(int, mandatory=False)
    enum = wsme.wsattr([types.text], mandatory=False)
    pattern = wsme.wsattr(types.text, mandatory=False)

    # fields for type = string
    minLength = wsme.wsattr(int, mandatory=False)
    maxLength = wsme.wsattr(int, mandatory=False)
    confidential = wsme.wsattr(bool, mandatory=False)

    # fields for type = array
    items = wsme.wsattr(ItemType, mandatory=False)
    uniqueItems = wsme.wsattr(bool, mandatory=False)
    minItems = wsme.wsattr(int, mandatory=False)
    maxItems = wsme.wsattr(int, mandatory=False)
    additionalItems = wsme.wsattr(bool, mandatory=False)

    def __init__(self, **kwargs):
        super(PropertyType, self).__init__(**kwargs)


class PropertyTypes(types.Base, WSMEModelTransformer):
    properties = wsme.wsattr({types.text: PropertyType}, mandatory=False)

    def __init__(self, **kwargs):
        super(PropertyTypes, self).__init__(**kwargs)

glance-16.0.0/glance/api/v2/model/__init__.py
glance-16.0.0/glance/api/v2/model/metadef_resource_type.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import wsme
from wsme import types

from glance.common.wsme_utils import WSMEModelTransformer


class ResourceTypeAssociation(types.Base, WSMEModelTransformer):
    name = wsme.wsattr(types.text, mandatory=True)
    prefix = wsme.wsattr(types.text, mandatory=False)
    properties_target = wsme.wsattr(types.text, mandatory=False)

    # Not using datetime since time format has to be
    # in oslo_utils.timeutils.isotime() format
    created_at = wsme.wsattr(types.text, mandatory=False)
    updated_at = wsme.wsattr(types.text, mandatory=False)

    def __init__(self, **kwargs):
        super(ResourceTypeAssociation, self).__init__(**kwargs)


class ResourceTypeAssociations(types.Base, WSMEModelTransformer):
    resource_type_associations = wsme.wsattr([ResourceTypeAssociation],
                                             mandatory=False)

    def __init__(self, **kwargs):
        super(ResourceTypeAssociations, self).__init__(**kwargs)


class ResourceType(types.Base, WSMEModelTransformer):
    name = wsme.wsattr(types.text, mandatory=True)

    # Not using datetime since time format has to be
    # in oslo_utils.timeutils.isotime() format
    created_at = wsme.wsattr(types.text, mandatory=False)
    updated_at = wsme.wsattr(types.text, mandatory=False)

    def __init__(self, **kwargs):
        super(ResourceType, self).__init__(**kwargs)


class ResourceTypes(types.Base, WSMEModelTransformer):
    resource_types = wsme.wsattr([ResourceType], mandatory=False)

    def __init__(self, **kwargs):
        super(ResourceTypes, self).__init__(**kwargs)

glance-16.0.0/glance/api/v2/image_actions.py

# Copyright 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import glance_store
from oslo_log import log as logging
from six.moves import http_client as http
import webob.exc

from glance.api import policy
from glance.common import exception
from glance.common import utils
from glance.common import wsgi
import glance.db
import glance.gateway
from glance.i18n import _LI
import glance.notifier

LOG = logging.getLogger(__name__)


class ImageActionsController(object):
    def __init__(self, db_api=None, policy_enforcer=None, notifier=None,
                 store_api=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.notifier = notifier or glance.notifier.Notifier()
        self.store_api = store_api or glance_store
        self.gateway = glance.gateway.Gateway(self.db_api, self.store_api,
                                              self.notifier, self.policy)

    @utils.mutating
    def deactivate(self, req, image_id):
        image_repo = self.gateway.get_repo(req.context)
        try:
            image = image_repo.get(image_id)
            status = image.status
            image.deactivate()
            # not necessary to change the status if it's already 'deactivated'
            if status == 'active':
                image_repo.save(image, from_state='active')
            LOG.info(_LI("Image %s is deactivated"), image_id)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to deactivate image '%s'", image_id)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.InvalidImageStatusTransition as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)

    @utils.mutating
    def reactivate(self, req, image_id):
        image_repo = self.gateway.get_repo(req.context)
        try:
            image = image_repo.get(image_id)
            status = image.status
            image.reactivate()
            # not necessary to change the status if it's already 'active'
            if status == 'deactivated':
                image_repo.save(image, from_state='deactivated')
            LOG.info(_LI("Image %s is reactivated"), image_id)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to reactivate image '%s'", image_id)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.InvalidImageStatusTransition as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def deactivate(self, response, result):
        response.status_int = http.NO_CONTENT

    def reactivate(self, response, result):
        response.status_int = http.NO_CONTENT


def create_resource():
    """Image actions resource factory method"""
    deserializer = None
    serializer = ResponseSerializer()
    controller = ImageActionsController()
    return wsgi.Resource(controller, deserializer, serializer)

glance-16.0.0/glance/api/v2/router.py

# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
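The deactivate/reactivate controllers in image_actions.py only issue a guarded repo save (from_state=...) when the image actually was in the opposite state, so repeated calls are no-ops at the database level. A simplified pure-Python sketch of that decision, using a plain dict in place of the domain image object (the real code also raises InvalidImageStatusTransition for states like 'queued', which this sketch omits):

```python
# Sketch of the "save only if the status really changed" logic used by
# ImageActionsController.deactivate/reactivate above. `image` is a plain
# dict standing in for the domain image object; returns True when a
# guarded image_repo.save(image, from_state=...) would be issued.
def transition(image, action):
    targets = {'deactivate': ('active', 'deactivated'),
               'reactivate': ('deactivated', 'active')}
    from_state, to_state = targets[action]
    changed = image['status'] == from_state
    image['status'] = to_state
    return changed

img = {'status': 'active'}
assert transition(img, 'deactivate') is True    # save(from_state='active')
assert transition(img, 'deactivate') is False   # already deactivated: no save
assert transition(img, 'reactivate') is True    # save(from_state='deactivated')
```

The from_state guard on the real save is what makes the transition race-safe: a concurrent status change between get() and save() causes the save to fail rather than silently clobber state.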
from glance.api.v2 import discovery
from glance.api.v2 import image_actions
from glance.api.v2 import image_data
from glance.api.v2 import image_members
from glance.api.v2 import image_tags
from glance.api.v2 import images
from glance.api.v2 import metadef_namespaces
from glance.api.v2 import metadef_objects
from glance.api.v2 import metadef_properties
from glance.api.v2 import metadef_resource_types
from glance.api.v2 import metadef_tags
from glance.api.v2 import schemas
from glance.api.v2 import tasks
from glance.common import wsgi


class API(wsgi.Router):

    """WSGI router for Glance v2 API requests."""

    def __init__(self, mapper):
        custom_image_properties = images.load_custom_properties()
        reject_method_resource = wsgi.Resource(wsgi.RejectMethodController())

        schemas_resource = schemas.create_resource(custom_image_properties)
        mapper.connect('/schemas/image',
                       controller=schemas_resource,
                       action='image',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/image',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/images',
                       controller=schemas_resource,
                       action='images',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/images',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/member',
                       controller=schemas_resource,
                       action='member',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/member',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/members',
                       controller=schemas_resource,
                       action='members',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/members',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/task',
                       controller=schemas_resource,
                       action='task',
                       conditions={'method': ['GET']})
        mapper.connect('/schemas/task',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/tasks',
                       controller=schemas_resource,
                       action='tasks',
                       conditions={'method': ['GET']})
        mapper.connect('/schemas/tasks',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/metadefs/namespace',
                       controller=schemas_resource,
                       action='metadef_namespace',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/metadefs/namespace',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/metadefs/namespaces',
                       controller=schemas_resource,
                       action='metadef_namespaces',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/metadefs/namespaces',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/metadefs/resource_type',
                       controller=schemas_resource,
                       action='metadef_resource_type',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/metadefs/resource_type',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/metadefs/resource_types',
                       controller=schemas_resource,
                       action='metadef_resource_types',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/metadefs/resource_types',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/metadefs/property',
                       controller=schemas_resource,
                       action='metadef_property',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/metadefs/property',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/metadefs/properties',
                       controller=schemas_resource,
                       action='metadef_properties',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/schemas/metadefs/properties',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')
        mapper.connect('/schemas/metadefs/object',
                       controller=schemas_resource,
action='metadef_object', conditions={'method': ['GET']}, body_reject=True) mapper.connect('/schemas/metadefs/object', controller=reject_method_resource, action='reject', allowed_methods='GET') mapper.connect('/schemas/metadefs/objects', controller=schemas_resource, action='metadef_objects', conditions={'method': ['GET']}, body_reject=True) mapper.connect('/schemas/metadefs/objects', controller=reject_method_resource, action='reject', allowed_methods='GET') mapper.connect('/schemas/metadefs/tag', controller=schemas_resource, action='metadef_tag', conditions={'method': ['GET']}, body_reject=True) mapper.connect('/schemas/metadefs/tag', controller=reject_method_resource, action='reject', allowed_methods='GET') mapper.connect('/schemas/metadefs/tags', controller=schemas_resource, action='metadef_tags', conditions={'method': ['GET']}, body_reject=True) mapper.connect('/schemas/metadefs/tags', controller=reject_method_resource, action='reject', allowed_methods='GET') # Metadef resource types metadef_resource_types_resource = ( metadef_resource_types.create_resource()) mapper.connect('/metadefs/resource_types', controller=metadef_resource_types_resource, action='index', conditions={'method': ['GET']}, body_reject=True) mapper.connect('/metadefs/resource_types', controller=reject_method_resource, action='reject', allowed_methods='GET') mapper.connect('/metadefs/namespaces/{namespace}/resource_types', controller=metadef_resource_types_resource, action='show', conditions={'method': ['GET']}, body_reject=True) mapper.connect('/metadefs/namespaces/{namespace}/resource_types', controller=metadef_resource_types_resource, action='create', conditions={'method': ['POST']}) mapper.connect('/metadefs/namespaces/{namespace}/resource_types', controller=reject_method_resource, action='reject', allowed_methods='GET, POST') mapper.connect('/metadefs/namespaces/{namespace}/resource_types/' '{resource_type}', controller=metadef_resource_types_resource, action='delete', conditions={'method': 
['DELETE']}, body_reject=True) mapper.connect('/metadefs/namespaces/{namespace}/resource_types/' '{resource_type}', controller=reject_method_resource, action='reject', allowed_methods='DELETE') # Metadef Namespaces metadef_namespace_resource = metadef_namespaces.create_resource() mapper.connect('/metadefs/namespaces', controller=metadef_namespace_resource, action='index', conditions={'method': ['GET']}) mapper.connect('/metadefs/namespaces', controller=metadef_namespace_resource, action='create', conditions={'method': ['POST']}) mapper.connect('/metadefs/namespaces', controller=reject_method_resource, action='reject', allowed_methods='GET, POST') mapper.connect('/metadefs/namespaces/{namespace}', controller=metadef_namespace_resource, action='show', conditions={'method': ['GET']}, body_reject=True) mapper.connect('/metadefs/namespaces/{namespace}', controller=metadef_namespace_resource, action='update', conditions={'method': ['PUT']}) mapper.connect('/metadefs/namespaces/{namespace}', controller=metadef_namespace_resource, action='delete', conditions={'method': ['DELETE']}, body_reject=True) mapper.connect('/metadefs/namespaces/{namespace}', controller=reject_method_resource, action='reject', allowed_methods='GET, PUT, DELETE') # Metadef namespace properties metadef_properties_resource = metadef_properties.create_resource() mapper.connect('/metadefs/namespaces/{namespace}/properties', controller=metadef_properties_resource, action='index', conditions={'method': ['GET']}, body_reject=True) mapper.connect('/metadefs/namespaces/{namespace}/properties', controller=metadef_properties_resource, action='create', conditions={'method': ['POST']}) mapper.connect('/metadefs/namespaces/{namespace}/properties', controller=metadef_namespace_resource, action='delete_properties', conditions={'method': ['DELETE']}) mapper.connect('/metadefs/namespaces/{namespace}/properties', controller=reject_method_resource, action='reject', allowed_methods='GET, POST, DELETE') 
mapper.connect('/metadefs/namespaces/{namespace}/properties/{' 'property_name}', controller=metadef_properties_resource, action='show', conditions={'method': ['GET']}) mapper.connect('/metadefs/namespaces/{namespace}/properties/{' 'property_name}', controller=metadef_properties_resource, action='update', conditions={'method': ['PUT']}) mapper.connect('/metadefs/namespaces/{namespace}/properties/{' 'property_name}', controller=metadef_properties_resource, action='delete', conditions={'method': ['DELETE']}) mapper.connect('/metadefs/namespaces/{namespace}/properties/{' 'property_name}', controller=reject_method_resource, action='reject', allowed_methods='GET, PUT, DELETE') # Metadef objects metadef_objects_resource = metadef_objects.create_resource() mapper.connect('/metadefs/namespaces/{namespace}/objects', controller=metadef_objects_resource, action='index', conditions={'method': ['GET']}) mapper.connect('/metadefs/namespaces/{namespace}/objects', controller=metadef_objects_resource, action='create', conditions={'method': ['POST']}) mapper.connect('/metadefs/namespaces/{namespace}/objects', controller=metadef_namespace_resource, action='delete_objects', conditions={'method': ['DELETE']}) mapper.connect('/metadefs/namespaces/{namespace}/objects', controller=reject_method_resource, action='reject', allowed_methods='GET, POST, DELETE') mapper.connect('/metadefs/namespaces/{namespace}/objects/{' 'object_name}', controller=metadef_objects_resource, action='show', conditions={'method': ['GET']}, body_reject=True) mapper.connect('/metadefs/namespaces/{namespace}/objects/{' 'object_name}', controller=metadef_objects_resource, action='update', conditions={'method': ['PUT']}) mapper.connect('/metadefs/namespaces/{namespace}/objects/{' 'object_name}', controller=metadef_objects_resource, action='delete', conditions={'method': ['DELETE']}, body_reject=True) mapper.connect('/metadefs/namespaces/{namespace}/objects/{' 'object_name}', controller=reject_method_resource, 
                       action='reject',
                       allowed_methods='GET, PUT, DELETE')

        # Metadef tags
        metadef_tags_resource = metadef_tags.create_resource()
        mapper.connect('/metadefs/namespaces/{namespace}/tags',
                       controller=metadef_tags_resource,
                       action='index',
                       conditions={'method': ['GET']})
        mapper.connect('/metadefs/namespaces/{namespace}/tags',
                       controller=metadef_tags_resource,
                       action='create_tags',
                       conditions={'method': ['POST']})
        mapper.connect('/metadefs/namespaces/{namespace}/tags',
                       controller=metadef_namespace_resource,
                       action='delete_tags',
                       conditions={'method': ['DELETE']})
        mapper.connect('/metadefs/namespaces/{namespace}/tags',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, POST, DELETE')
        mapper.connect('/metadefs/namespaces/{namespace}/tags/{tag_name}',
                       controller=metadef_tags_resource,
                       action='show',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/metadefs/namespaces/{namespace}/tags/{tag_name}',
                       controller=metadef_tags_resource,
                       action='create',
                       conditions={'method': ['POST']},
                       body_reject=True)
        mapper.connect('/metadefs/namespaces/{namespace}/tags/{tag_name}',
                       controller=metadef_tags_resource,
                       action='update',
                       conditions={'method': ['PUT']})
        mapper.connect('/metadefs/namespaces/{namespace}/tags/{tag_name}',
                       controller=metadef_tags_resource,
                       action='delete',
                       conditions={'method': ['DELETE']},
                       body_reject=True)
        mapper.connect('/metadefs/namespaces/{namespace}/tags/{tag_name}',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, POST, PUT, DELETE')

        images_resource = images.create_resource(custom_image_properties)
        mapper.connect('/images',
                       controller=images_resource,
                       action='index',
                       conditions={'method': ['GET']})
        mapper.connect('/images',
                       controller=images_resource,
                       action='create',
                       conditions={'method': ['POST']})
        mapper.connect('/images',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, POST')
        mapper.connect('/images/{image_id}',
                       controller=images_resource,
                       action='update',
                       conditions={'method': ['PATCH']})
        mapper.connect('/images/{image_id}',
                       controller=images_resource,
                       action='show',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/images/{image_id}',
                       controller=images_resource,
                       action='delete',
                       conditions={'method': ['DELETE']},
                       body_reject=True)
        mapper.connect('/images/{image_id}',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, PATCH, DELETE')
        mapper.connect('/images/{image_id}/import',
                       controller=images_resource,
                       action='import_image',
                       conditions={'method': ['POST']})
        mapper.connect('/images/{image_id}/import',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='POST')

        image_actions_resource = image_actions.create_resource()
        mapper.connect('/images/{image_id}/actions/deactivate',
                       controller=image_actions_resource,
                       action='deactivate',
                       conditions={'method': ['POST']},
                       body_reject=True)
        mapper.connect('/images/{image_id}/actions/reactivate',
                       controller=image_actions_resource,
                       action='reactivate',
                       conditions={'method': ['POST']},
                       body_reject=True)
        mapper.connect('/images/{image_id}/actions/deactivate',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='POST')
        mapper.connect('/images/{image_id}/actions/reactivate',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='POST')

        image_data_resource = image_data.create_resource()
        mapper.connect('/images/{image_id}/file',
                       controller=image_data_resource,
                       action='download',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/images/{image_id}/file',
                       controller=image_data_resource,
                       action='upload',
                       conditions={'method': ['PUT']})
        mapper.connect('/images/{image_id}/file',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, PUT')
        mapper.connect('/images/{image_id}/stage',
                       controller=image_data_resource,
                       action='stage',
                       conditions={'method': ['PUT']})
        mapper.connect('/images/{image_id}/stage',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='PUT')

        image_tags_resource = image_tags.create_resource()
        mapper.connect('/images/{image_id}/tags/{tag_value}',
                       controller=image_tags_resource,
                       action='update',
                       conditions={'method': ['PUT']},
                       body_reject=True)
        mapper.connect('/images/{image_id}/tags/{tag_value}',
                       controller=image_tags_resource,
                       action='delete',
                       conditions={'method': ['DELETE']},
                       body_reject=True)
        mapper.connect('/images/{image_id}/tags/{tag_value}',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='PUT, DELETE')

        image_members_resource = image_members.create_resource()
        mapper.connect('/images/{image_id}/members',
                       controller=image_members_resource,
                       action='index',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/images/{image_id}/members',
                       controller=image_members_resource,
                       action='create',
                       conditions={'method': ['POST']})
        mapper.connect('/images/{image_id}/members',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, POST')
        mapper.connect('/images/{image_id}/members/{member_id}',
                       controller=image_members_resource,
                       action='show',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/images/{image_id}/members/{member_id}',
                       controller=image_members_resource,
                       action='update',
                       conditions={'method': ['PUT']})
        mapper.connect('/images/{image_id}/members/{member_id}',
                       controller=image_members_resource,
                       action='delete',
                       conditions={'method': ['DELETE']},
                       body_reject=True)
        mapper.connect('/images/{image_id}/members/{member_id}',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, PUT, DELETE')

        tasks_resource = tasks.create_resource()
        mapper.connect('/tasks',
                       controller=tasks_resource,
                       action='create',
                       conditions={'method': ['POST']})
        mapper.connect('/tasks',
                       controller=tasks_resource,
                       action='index',
                       conditions={'method': ['GET']})
        mapper.connect('/tasks',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, POST')
        mapper.connect('/tasks/{task_id}',
                       controller=tasks_resource,
                       action='get',
                       conditions={'method': ['GET']})
        mapper.connect('/tasks/{task_id}',
                       controller=tasks_resource,
                       action='delete',
                       conditions={'method': ['DELETE']})
        mapper.connect('/tasks/{task_id}',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET, DELETE')

        # Discovery API
        info_resource = discovery.create_resource()
        mapper.connect('/info/import',
                       controller=info_resource,
                       action='get_image_import',
                       conditions={'method': ['GET']},
                       body_reject=True)
        mapper.connect('/info/import',
                       controller=reject_method_resource,
                       action='reject',
                       allowed_methods='GET')

        super(API, self).__init__(mapper)

glance-16.0.0/glance/api/v2/metadef_objects.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
import six
from six.moves import http_client as http
import webob.exc
from wsme.rest import json

from glance.api import policy
from glance.api.v2 import metadef_namespaces as namespaces
import glance.api.v2.metadef_properties as properties
from glance.api.v2.model.metadef_object import MetadefObject
from glance.api.v2.model.metadef_object import MetadefObjects
from glance.common import exception
from glance.common import wsgi
from glance.common import wsme_utils
import glance.db
import glance.gateway
from glance.i18n import _
import glance.notifier
import glance.schema

LOG = logging.getLogger(__name__)


class MetadefObjectsController(object):
    def __init__(self, db_api=None, policy_enforcer=None, notifier=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.notifier = notifier or glance.notifier.Notifier()
        self.gateway = glance.gateway.Gateway(db_api=self.db_api,
                                              notifier=self.notifier,
                                              policy_enforcer=self.policy)
        self.obj_schema_link = '/v2/schemas/metadefs/object'

    def create(self, req, metadata_object, namespace):
        object_factory = self.gateway.get_metadef_object_factory(req.context)
        object_repo = self.gateway.get_metadef_object_repo(req.context)
        try:
            new_meta_object = object_factory.new_object(
                namespace=namespace,
                **metadata_object.to_dict())
            object_repo.add(new_meta_object)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to create metadata object within "
                      "'%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.Invalid as e:
            msg = (_("Couldn't create metadata object: %s")
                   % encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPBadRequest(explanation=msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

        return MetadefObject.to_wsme_model(
            new_meta_object,
            get_object_href(namespace, new_meta_object),
            self.obj_schema_link)

    def index(self, req, namespace, marker=None, limit=None,
              sort_key='created_at', sort_dir='desc', filters=None):
        try:
            filters = filters or dict()
            filters['namespace'] = namespace
            object_repo = self.gateway.get_metadef_object_repo(req.context)
            db_metaobject_list = object_repo.list(
                marker=marker, limit=limit, sort_key=sort_key,
                sort_dir=sort_dir, filters=filters)
            object_list = [MetadefObject.to_wsme_model(
                db_metaobject,
                get_object_href(namespace, db_metaobject),
                self.obj_schema_link) for db_metaobject in db_metaobject_list]
            metadef_objects = MetadefObjects()
            metadef_objects.objects = object_list
        except exception.Forbidden as e:
            LOG.debug("User not permitted to retrieve metadata objects within "
                      "'%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()
        return metadef_objects

    def show(self, req, namespace, object_name):
        meta_object_repo = self.gateway.get_metadef_object_repo(
            req.context)
        try:
            metadef_object = meta_object_repo.get(namespace, object_name)
            return MetadefObject.to_wsme_model(
                metadef_object,
                get_object_href(namespace, metadef_object),
                self.obj_schema_link)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to show metadata object '%s' "
                      "within '%s' namespace", object_name, namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

    def update(self, req, metadata_object, namespace, object_name):
        meta_repo = self.gateway.get_metadef_object_repo(req.context)
        try:
            metadef_object = meta_repo.get(namespace, object_name)
            metadef_object._old_name = metadef_object.name
            metadef_object.name = wsme_utils._get_value(
                metadata_object.name)
            metadef_object.description = wsme_utils._get_value(
                metadata_object.description)
            metadef_object.required = wsme_utils._get_value(
                metadata_object.required)
            metadef_object.properties = wsme_utils._get_value(
                metadata_object.properties)
            updated_metadata_obj = meta_repo.save(metadef_object)
        except exception.Invalid as e:
            msg = (_("Couldn't update metadata object: %s")
                   % encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPBadRequest(explanation=msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to update metadata object '%s' "
                      "within '%s' namespace", object_name, namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()
        return MetadefObject.to_wsme_model(
            updated_metadata_obj,
            get_object_href(namespace, updated_metadata_obj),
            self.obj_schema_link)

    def delete(self, req, namespace, object_name):
        meta_repo = self.gateway.get_metadef_object_repo(req.context)
        try:
            metadef_object = meta_repo.get(namespace, object_name)
            metadef_object.delete()
            meta_repo.remove(metadef_object)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to delete metadata object '%s' "
                      "within '%s' namespace", object_name, namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()


def _get_base_definitions():
    return namespaces.get_schema_definitions()


def _get_base_properties():
    return {
"name": { "type": "string", "maxLength": 80 }, "description": { "type": "string" }, "required": { "$ref": "#/definitions/stringArray" }, "properties": { "$ref": "#/definitions/property" }, "schema": { 'readOnly': True, "type": "string" }, "self": { 'readOnly': True, "type": "string" }, "created_at": { "type": "string", "readOnly": True, "description": _("Date and time of object creation"), "format": "date-time" }, "updated_at": { "type": "string", "readOnly": True, "description": _("Date and time of the last object modification"), "format": "date-time" } } def get_schema(): definitions = _get_base_definitions() properties = _get_base_properties() mandatory_attrs = MetadefObject.get_mandatory_attrs() schema = glance.schema.Schema( 'object', properties, required=mandatory_attrs, definitions=definitions, ) return schema def get_collection_schema(): object_schema = get_schema() return glance.schema.CollectionSchema('objects', object_schema) class RequestDeserializer(wsgi.JSONRequestDeserializer): _disallowed_properties = ['self', 'schema', 'created_at', 'updated_at'] def __init__(self, schema=None): super(RequestDeserializer, self).__init__() self.schema = schema or get_schema() def _get_request_body(self, request): output = super(RequestDeserializer, self).default(request) if 'body' not in output: msg = _('Body expected in request.') raise webob.exc.HTTPBadRequest(explanation=msg) return output['body'] def create(self, request): body = self._get_request_body(request) self._check_allowed(body) try: self.schema.validate(body) if 'properties' in body: for propertyname in body['properties']: schema = properties.get_schema(require_name=False) schema.validate(body['properties'][propertyname]) except exception.InvalidObject as e: raise webob.exc.HTTPBadRequest(explanation=e.msg) metadata_object = json.fromjson(MetadefObject, body) return dict(metadata_object=metadata_object) def update(self, request): body = self._get_request_body(request) self._check_allowed(body) try: 
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        metadata_object = json.fromjson(MetadefObject, body)
        return dict(metadata_object=metadata_object)

    def index(self, request):
        params = request.params.copy()
        limit = params.pop('limit', None)
        marker = params.pop('marker', None)
        sort_dir = params.pop('sort_dir', 'desc')
        query_params = {
            'sort_key': params.pop('sort_key', 'created_at'),
            'sort_dir': self._validate_sort_dir(sort_dir),
            'filters': self._get_filters(params)
        }
        if marker is not None:
            query_params['marker'] = marker
        if limit is not None:
            query_params['limit'] = self._validate_limit(limit)
        return query_params

    def _validate_sort_dir(self, sort_dir):
        if sort_dir not in ['asc', 'desc']:
            msg = _('Invalid sort direction: %s') % sort_dir
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return sort_dir

    def _get_filters(self, filters):
        visibility = filters.get('visibility')
        if visibility:
            if visibility not in ['public', 'private', 'shared']:
                msg = _('Invalid visibility value: %s') % visibility
                raise webob.exc.HTTPBadRequest(explanation=msg)
        return filters

    def _validate_limit(self, limit):
        try:
            limit = int(limit)
        except ValueError:
            msg = _("limit param must be an integer")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if limit <= 0:
            msg = _("limit param must be positive")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return limit

    @classmethod
    def _check_allowed(cls, image):
        for key in cls._disallowed_properties:
            if key in image:
                msg = _("Attribute '%s' is read-only.") % key
                raise webob.exc.HTTPForbidden(explanation=msg)


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def __init__(self, schema=None):
        super(ResponseSerializer, self).__init__()
        self.schema = schema or get_schema()

    def create(self, response, metadata_object):
        response.status_int = http.CREATED
        self.show(response, metadata_object)

    def show(self, response, metadata_object):
        metadata_object_json = json.tojson(MetadefObject,
                                           metadata_object)
        body = jsonutils.dumps(metadata_object_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def update(self, response, metadata_object):
        response.status_int = http.OK
        self.show(response, metadata_object)

    def index(self, response, result):
        result.schema = "v2/schemas/metadefs/objects"
        metadata_objects_json = json.tojson(MetadefObjects, result)
        body = jsonutils.dumps(metadata_objects_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def delete(self, response, result):
        response.status_int = http.NO_CONTENT


def get_object_href(namespace_name, metadef_object):
    base_href = ('/v2/metadefs/namespaces/%s/objects/%s' %
                 (namespace_name, metadef_object.name))
    return base_href


def create_resource():
    """Metadef objects resource factory method"""
    schema = get_schema()
    deserializer = RequestDeserializer(schema)
    serializer = ResponseSerializer(schema)
    controller = MetadefObjectsController()
    return wsgi.Resource(controller, deserializer, serializer)

glance-16.0.0/glance/api/v2/metadef_properties.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
import six
from six.moves import http_client as http
import webob.exc
from wsme.rest import json

from glance.api import policy
from glance.api.v2 import metadef_namespaces as namespaces
from glance.api.v2.model.metadef_namespace import Namespace
from glance.api.v2.model.metadef_property_type import PropertyType
from glance.api.v2.model.metadef_property_type import PropertyTypes
from glance.common import exception
from glance.common import wsgi
import glance.db
import glance.gateway
from glance.i18n import _
import glance.notifier
import glance.schema

LOG = logging.getLogger(__name__)


class NamespacePropertiesController(object):
    def __init__(self, db_api=None, policy_enforcer=None, notifier=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.notifier = notifier or glance.notifier.Notifier()
        self.gateway = glance.gateway.Gateway(db_api=self.db_api,
                                              notifier=self.notifier,
                                              policy_enforcer=self.policy)

    def _to_dict(self, model_property_type):
        # Convert the model PropertyTypes dict to a JSON encoding
        db_property_type_dict = dict()
        db_property_type_dict['schema'] = json.tojson(
            PropertyType, model_property_type)
        db_property_type_dict['name'] = model_property_type.name
        return db_property_type_dict

    def _to_model(self, db_property_type):
        # Convert the persisted json schema to a dict of PropertyTypes
        property_type = json.fromjson(
            PropertyType, db_property_type.schema)
        property_type.name = db_property_type.name
        return property_type

    def index(self, req, namespace):
        try:
            filters = dict()
            filters['namespace'] = namespace
            prop_repo = self.gateway.get_metadef_property_repo(req.context)
            db_properties = prop_repo.list(filters=filters)
            property_list = Namespace.to_model_properties(db_properties)
            namespace_properties = PropertyTypes()
            namespace_properties.properties = property_list
        except exception.Forbidden as e:
LOG.debug("User not permitted to retrieve metadata properties " "within '%s' namespace", namespace) raise webob.exc.HTTPForbidden(explanation=e.msg) except exception.NotFound as e: raise webob.exc.HTTPNotFound(explanation=e.msg) except Exception as e: LOG.error(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPInternalServerError() return namespace_properties def show(self, req, namespace, property_name, filters=None): try: if filters and filters['resource_type']: rs_repo = self.gateway.get_metadef_resource_type_repo( req.context) db_resource_type = rs_repo.get(filters['resource_type'], namespace) prefix = db_resource_type.prefix if prefix and property_name.startswith(prefix): property_name = property_name[len(prefix):] else: msg = (_("Property %(property_name)s does not start " "with the expected resource type association " "prefix of '%(prefix)s'.") % {'property_name': property_name, 'prefix': prefix}) raise exception.NotFound(msg) prop_repo = self.gateway.get_metadef_property_repo(req.context) db_property = prop_repo.get(namespace, property_name) property = self._to_model(db_property) except exception.Forbidden as e: LOG.debug("User not permitted to show metadata property '%s' " "within '%s' namespace", property_name, namespace) raise webob.exc.HTTPForbidden(explanation=e.msg) except exception.NotFound as e: raise webob.exc.HTTPNotFound(explanation=e.msg) except Exception as e: LOG.error(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPInternalServerError() return property def create(self, req, namespace, property_type): prop_factory = self.gateway.get_metadef_property_factory(req.context) prop_repo = self.gateway.get_metadef_property_repo(req.context) try: new_property_type = prop_factory.new_namespace_property( namespace=namespace, **self._to_dict(property_type)) prop_repo.add(new_property_type) except exception.Forbidden as e: LOG.debug("User not permitted to create metadata property within " "'%s' namespace", namespace) raise 
webob.exc.HTTPForbidden(explanation=e.msg) except exception.Invalid as e: msg = (_("Couldn't create metadata property: %s") % encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPBadRequest(explanation=msg) except exception.NotFound as e: raise webob.exc.HTTPNotFound(explanation=e.msg) except exception.Duplicate as e: raise webob.exc.HTTPConflict(explanation=e.msg) except Exception as e: LOG.error(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPInternalServerError() return self._to_model(new_property_type) def update(self, req, namespace, property_name, property_type): prop_repo = self.gateway.get_metadef_property_repo(req.context) try: db_property_type = prop_repo.get(namespace, property_name) db_property_type._old_name = db_property_type.name db_property_type.name = property_type.name db_property_type.schema = (self._to_dict(property_type))['schema'] updated_property_type = prop_repo.save(db_property_type) except exception.Invalid as e: msg = (_("Couldn't update metadata property: %s") % encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPBadRequest(explanation=msg) except exception.Forbidden as e: LOG.debug("User not permitted to update metadata property '%s' " "within '%s' namespace", property_name, namespace) raise webob.exc.HTTPForbidden(explanation=e.msg) except exception.NotFound as e: raise webob.exc.HTTPNotFound(explanation=e.msg) except exception.Duplicate as e: raise webob.exc.HTTPConflict(explanation=e.msg) except Exception as e: LOG.error(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPInternalServerError() return self._to_model(updated_property_type) def delete(self, req, namespace, property_name): prop_repo = self.gateway.get_metadef_property_repo(req.context) try: property_type = prop_repo.get(namespace, property_name) property_type.delete() prop_repo.remove(property_type) except exception.Forbidden as e: LOG.debug("User not permitted to delete metadata property '%s' " "within '%s' namespace", property_name, namespace) 
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()


class RequestDeserializer(wsgi.JSONRequestDeserializer):
    _disallowed_properties = ['created_at', 'updated_at']

    def __init__(self, schema=None):
        super(RequestDeserializer, self).__init__()
        self.schema = schema or get_schema()

    def _get_request_body(self, request):
        output = super(RequestDeserializer, self).default(request)
        if 'body' not in output:
            msg = _('Body expected in request.')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return output['body']

    @classmethod
    def _check_allowed(cls, image):
        for key in cls._disallowed_properties:
            if key in image:
                msg = _("Attribute '%s' is read-only.") % key
                raise webob.exc.HTTPForbidden(explanation=msg)

    def create(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        try:
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        property_type = json.fromjson(PropertyType, body)
        return dict(property_type=property_type)

    def update(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        try:
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        property_type = json.fromjson(PropertyType, body)
        return dict(property_type=property_type)

    def show(self, request):
        params = request.params.copy()
        query_params = {
            'filters': params
        }
        return query_params


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def __init__(self, schema=None):
        super(ResponseSerializer, self).__init__()
        self.schema = schema

    def show(self, response, result):
        property_type_json = json.tojson(PropertyType, result)
        body = jsonutils.dumps(property_type_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'
    def index(self, response, result):
        property_type_json = json.tojson(PropertyTypes, result)
        body = jsonutils.dumps(property_type_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def create(self, response, result):
        response.status_int = http.CREATED
        self.show(response, result)

    def update(self, response, result):
        response.status_int = http.OK
        self.show(response, result)

    def delete(self, response, result):
        response.status_int = http.NO_CONTENT


def _get_base_definitions():
    return {
        "positiveInteger": {
            "type": "integer",
            "minimum": 0
        },
        "positiveIntegerDefault0": {
            "allOf": [
                {"$ref": "#/definitions/positiveInteger"},
                {"default": 0}
            ]
        },
        "stringArray": {
            "type": "array",
            "items": {"type": "string"},
            "minItems": 1,
            "uniqueItems": True
        }
    }


def _get_base_properties():
    base_def = namespaces.get_schema_definitions()
    return base_def['property']['additionalProperties']['properties']


def get_schema(require_name=True):
    definitions = _get_base_definitions()
    properties = _get_base_properties()
    mandatory_attrs = PropertyType.get_mandatory_attrs()
    if require_name:
        # name is required attribute when use as single property type
        mandatory_attrs.append('name')
    schema = glance.schema.Schema(
        'property',
        properties,
        required=mandatory_attrs,
        definitions=definitions
    )
    return schema


def get_collection_schema():
    namespace_properties_schema = get_schema()
    # Property name is a dict key and not a required attribute in
    # individual property schema inside property collections
    namespace_properties_schema.required.remove('name')
    return glance.schema.DictCollectionSchema('properties',
                                              namespace_properties_schema)


def create_resource():
    """NamespaceProperties resource factory method"""
    schema = get_schema()
    deserializer = RequestDeserializer(schema)
    serializer = ResponseSerializer(schema)
    controller = NamespacePropertiesController()
    return wsgi.Resource(controller, deserializer, serializer)
glance-16.0.0/glance/api/v2/image_tags.py

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import glance_store
from oslo_log import log as logging
from oslo_utils import encodeutils
from six.moves import http_client as http
import webob.exc

from glance.api import policy
from glance.api.v2 import images as v2_api
from glance.common import exception
from glance.common import utils
from glance.common import wsgi
import glance.db
import glance.gateway
from glance.i18n import _
import glance.notifier

LOG = logging.getLogger(__name__)


class Controller(object):
    def __init__(self, db_api=None, policy_enforcer=None, notifier=None,
                 store_api=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.notifier = notifier or glance.notifier.Notifier()
        self.store_api = store_api or glance_store
        self.gateway = glance.gateway.Gateway(self.db_api, self.store_api,
                                              self.notifier, self.policy)

    @utils.mutating
    def update(self, req, image_id, tag_value):
        image_repo = self.gateway.get_repo(req.context)
        try:
            image = image_repo.get(image_id)
            image.tags.add(tag_value)
            image_repo.save(image)
        except exception.NotFound:
            msg = _("Image %s not found.") % image_id
            LOG.warning(msg)
            raise webob.exc.HTTPNotFound(explanation=msg)
        except exception.Forbidden:
            msg = _("Not allowed to update tags for image %s.") % image_id
            LOG.warning(msg)
            raise webob.exc.HTTPForbidden(explanation=msg)
        except exception.Invalid as e:
            msg = (_("Could not update image: %s")
                   % encodeutils.exception_to_unicode(e))
            LOG.warning(msg)
            raise webob.exc.HTTPBadRequest(explanation=msg)
        except exception.ImageTagLimitExceeded as e:
            msg = (_("Image tag limit exceeded for image %(id)s: %(e)s:")
                   % {"id": image_id,
                      "e": encodeutils.exception_to_unicode(e)})
            LOG.warning(msg)
            raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg)

    @utils.mutating
    def delete(self, req, image_id, tag_value):
        image_repo = self.gateway.get_repo(req.context)
        try:
            image = image_repo.get(image_id)
            if tag_value not in image.tags:
                raise webob.exc.HTTPNotFound()
            image.tags.remove(tag_value)
            image_repo.save(image)
        except exception.NotFound:
            msg = _("Image %s not found.") % image_id
            LOG.warning(msg)
            raise webob.exc.HTTPNotFound(explanation=msg)
        except exception.Forbidden:
            msg = _("Not allowed to delete tags for image %s.") % image_id
            LOG.warning(msg)
            raise webob.exc.HTTPForbidden(explanation=msg)


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def update(self, response, result):
        response.status_int = http.NO_CONTENT

    def delete(self, response, result):
        response.status_int = http.NO_CONTENT


class RequestDeserializer(wsgi.JSONRequestDeserializer):
    def update(self, request):
        try:
            schema = v2_api.get_schema()
            schema_format = {"tags": [request.urlvars.get('tag_value')]}
            schema.validate(schema_format)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        return super(RequestDeserializer, self).default(request)


def create_resource():
    """Images resource factory method"""
    serializer = ResponseSerializer()
    deserializer = RequestDeserializer()
    controller = Controller()
    return wsgi.Resource(controller, deserializer, serializer)

glance-16.0.0/glance/api/v2/metadef_namespaces.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
import six
from six.moves import http_client as http
import six.moves.urllib.parse as urlparse
import webob.exc
from wsme.rest import json

from glance.api import policy
from glance.api.v2.model.metadef_namespace import Namespace
from glance.api.v2.model.metadef_namespace import Namespaces
from glance.api.v2.model.metadef_object import MetadefObject
from glance.api.v2.model.metadef_property_type import PropertyType
from glance.api.v2.model.metadef_resource_type import ResourceTypeAssociation
from glance.api.v2.model.metadef_tag import MetadefTag
from glance.common import exception
from glance.common import utils
from glance.common import wsgi
from glance.common import wsme_utils
import glance.db
import glance.gateway
from glance.i18n import _, _LE
import glance.notifier
import glance.schema

LOG = logging.getLogger(__name__)
CONF = cfg.CONF


class NamespaceController(object):
    def __init__(self, db_api=None, policy_enforcer=None, notifier=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.notifier = notifier or glance.notifier.Notifier()
        self.gateway = glance.gateway.Gateway(db_api=self.db_api,
                                              notifier=self.notifier,
                                              policy_enforcer=self.policy)
        self.ns_schema_link = '/v2/schemas/metadefs/namespace'
        self.obj_schema_link = '/v2/schemas/metadefs/object'
        self.tag_schema_link = '/v2/schemas/metadefs/tag'

    def index(self, req, marker=None, limit=None, sort_key='created_at',
              sort_dir='desc', filters=None):
        try:
            ns_repo = self.gateway.get_metadef_namespace_repo(req.context)

            # Get namespace id
            if marker:
                namespace_obj = ns_repo.get(marker)
                marker = namespace_obj.namespace_id

            database_ns_list = ns_repo.list(
                marker=marker, limit=limit, sort_key=sort_key,
                sort_dir=sort_dir, filters=filters)

            for db_namespace in database_ns_list:
                # Get resource type associations
                filters = dict()
                filters['namespace'] = db_namespace.namespace
                rs_repo = (
                    self.gateway.get_metadef_resource_type_repo(req.context))
                repo_rs_type_list = rs_repo.list(filters=filters)
                resource_type_list = [ResourceTypeAssociation.to_wsme_model(
                    resource_type) for resource_type in repo_rs_type_list]
                if resource_type_list:
                    db_namespace.resource_type_associations = (
                        resource_type_list)

            namespace_list = [Namespace.to_wsme_model(
                db_namespace,
                get_namespace_href(db_namespace),
                self.ns_schema_link) for db_namespace in database_ns_list]
            namespaces = Namespaces()
            namespaces.namespaces = namespace_list
            if len(namespace_list) != 0 and len(namespace_list) == limit:
                namespaces.next = namespace_list[-1].namespace

        except exception.Forbidden as e:
            LOG.debug("User not permitted to retrieve metadata namespaces "
                      "index")
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

        return namespaces

    @utils.mutating
    def create(self, req, namespace):
        try:
            namespace_created = False
            # Create Namespace
            ns_factory = self.gateway.get_metadef_namespace_factory(
                req.context)
            ns_repo = self.gateway.get_metadef_namespace_repo(req.context)
            new_namespace = ns_factory.new_namespace(**namespace.to_dict())
            ns_repo.add(new_namespace)
            namespace_created = True

            # Create Resource Types
            if namespace.resource_type_associations:
                rs_factory = (self.gateway.get_metadef_resource_type_factory(
                    req.context))
                rs_repo = self.gateway.get_metadef_resource_type_repo(
                    req.context)
                for resource_type in namespace.resource_type_associations:
                    new_resource = rs_factory.new_resource_type(
                        namespace=namespace.namespace,
                        **resource_type.to_dict())
                    rs_repo.add(new_resource)

            # Create Objects
            if namespace.objects:
                object_factory = self.gateway.get_metadef_object_factory(
                    req.context)
                object_repo = self.gateway.get_metadef_object_repo(
                    req.context)
                for metadata_object in namespace.objects:
                    new_meta_object = object_factory.new_object(
                        namespace=namespace.namespace,
                        **metadata_object.to_dict())
                    object_repo.add(new_meta_object)

            # Create Tags
            if namespace.tags:
                tag_factory = self.gateway.get_metadef_tag_factory(
                    req.context)
                tag_repo = self.gateway.get_metadef_tag_repo(req.context)
                for metadata_tag in namespace.tags:
                    new_meta_tag = tag_factory.new_tag(
                        namespace=namespace.namespace,
                        **metadata_tag.to_dict())
                    tag_repo.add(new_meta_tag)

            # Create Namespace Properties
            if namespace.properties:
                prop_factory = (self.gateway.get_metadef_property_factory(
                    req.context))
                prop_repo = self.gateway.get_metadef_property_repo(
                    req.context)
                for (name, value) in namespace.properties.items():
                    new_property_type = (
                        prop_factory.new_namespace_property(
                            namespace=namespace.namespace,
                            **self._to_property_dict(name, value)
                        ))
                    prop_repo.add(new_property_type)
        except exception.Invalid as e:
            msg = (_("Couldn't create metadata namespace: %s")
                   % encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPBadRequest(explanation=msg)
        except exception.Forbidden as e:
            self._cleanup_namespace(ns_repo, namespace, namespace_created)
            LOG.debug("User not permitted to create metadata namespace")
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            self._cleanup_namespace(ns_repo, namespace, namespace_created)
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            self._cleanup_namespace(ns_repo, namespace, namespace_created)
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

        # Return the user namespace as we don't expose the id to user
        new_namespace.properties = namespace.properties
        new_namespace.objects = namespace.objects
        new_namespace.resource_type_associations = (
            namespace.resource_type_associations)
        new_namespace.tags = namespace.tags
        return Namespace.to_wsme_model(new_namespace,
                                       get_namespace_href(new_namespace),
                                       self.ns_schema_link)

    def _to_property_dict(self, name, value):
        # Convert the model PropertyTypes dict to a JSON string
        db_property_type_dict = dict()
        db_property_type_dict['schema'] = json.tojson(PropertyType, value)
        db_property_type_dict['name'] = name
        return db_property_type_dict

    def _cleanup_namespace(self, namespace_repo, namespace, namespace_created):
        if namespace_created:
            try:
                namespace_obj = namespace_repo.get(namespace.namespace)
                namespace_obj.delete()
                namespace_repo.remove(namespace_obj)
                LOG.debug("Cleaned up namespace %(namespace)s ",
                          {'namespace': namespace.namespace})
            except Exception as e:
                # NOTE: interpolate with % so a string (not a tuple) is logged
                msg = (_LE("Failed to delete namespace %(namespace)s. "
                           "Exception: %(exception)s")
                       % {'namespace': namespace.namespace,
                          'exception': encodeutils.exception_to_unicode(e)})
                LOG.error(msg)

    def show(self, req, namespace, filters=None):
        try:
            # Get namespace
            ns_repo = self.gateway.get_metadef_namespace_repo(req.context)
            namespace_obj = ns_repo.get(namespace)
            namespace_detail = Namespace.to_wsme_model(
                namespace_obj,
                get_namespace_href(namespace_obj),
                self.ns_schema_link)
            ns_filters = dict()
            ns_filters['namespace'] = namespace

            # Get objects
            object_repo = self.gateway.get_metadef_object_repo(req.context)
            db_metaobject_list = object_repo.list(filters=ns_filters)
            object_list = [MetadefObject.to_wsme_model(
                db_metaobject,
                get_object_href(namespace, db_metaobject),
                self.obj_schema_link) for db_metaobject in db_metaobject_list]
            if object_list:
                namespace_detail.objects = object_list

            # Get resource type associations
            rs_repo = self.gateway.get_metadef_resource_type_repo(req.context)
            db_resource_type_list = rs_repo.list(filters=ns_filters)
            resource_type_list = [ResourceTypeAssociation.to_wsme_model(
                resource_type) for resource_type in db_resource_type_list]
            if resource_type_list:
                namespace_detail.resource_type_associations = (
                    resource_type_list)

            # Get properties
            prop_repo = self.gateway.get_metadef_property_repo(req.context)
            db_properties = prop_repo.list(filters=ns_filters)
            property_list = Namespace.to_model_properties(db_properties)
            if property_list:
                namespace_detail.properties = property_list

            if filters and filters['resource_type']:
                namespace_detail = self._prefix_property_name(
                    namespace_detail, filters['resource_type'])

            # Get tags
            tag_repo = self.gateway.get_metadef_tag_repo(req.context)
            db_metatag_list = tag_repo.list(filters=ns_filters)
            tag_list = [MetadefTag(**{'name': db_metatag.name})
                        for db_metatag in db_metatag_list]
            if tag_list:
                namespace_detail.tags = tag_list

        except exception.Forbidden as e:
            LOG.debug("User not permitted to show metadata namespace "
                      "'%s'", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

        return namespace_detail

    def update(self, req, user_ns, namespace):
        namespace_repo = self.gateway.get_metadef_namespace_repo(req.context)
        try:
            ns_obj = namespace_repo.get(namespace)
            ns_obj._old_namespace = ns_obj.namespace
            ns_obj.namespace = wsme_utils._get_value(user_ns.namespace)
            ns_obj.display_name = wsme_utils._get_value(user_ns.display_name)
            ns_obj.description = wsme_utils._get_value(user_ns.description)
            # Following optional fields will default to same values as in
            # create namespace if not specified
            ns_obj.visibility = (
                wsme_utils._get_value(user_ns.visibility) or 'private')
            ns_obj.protected = (
                wsme_utils._get_value(user_ns.protected) or False)
            ns_obj.owner = (
                wsme_utils._get_value(user_ns.owner) or req.context.owner)
            updated_namespace = namespace_repo.save(ns_obj)
        except exception.Invalid as e:
            msg = (_("Couldn't update metadata namespace: %s")
                   % encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPBadRequest(explanation=msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to update metadata namespace "
                      "'%s'", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

        return Namespace.to_wsme_model(updated_namespace,
                                       get_namespace_href(updated_namespace),
                                       self.ns_schema_link)

    def delete(self, req, namespace):
        namespace_repo = self.gateway.get_metadef_namespace_repo(req.context)
        try:
            namespace_obj = namespace_repo.get(namespace)
            namespace_obj.delete()
            namespace_repo.remove(namespace_obj)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to delete metadata namespace "
                      "'%s'", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

    def delete_objects(self, req, namespace):
        ns_repo = self.gateway.get_metadef_namespace_repo(req.context)
        try:
            namespace_obj = ns_repo.get(namespace)
            namespace_obj.delete()
            ns_repo.remove_objects(namespace_obj)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to delete metadata objects "
                      "within '%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

    def delete_tags(self, req, namespace):
        ns_repo = self.gateway.get_metadef_namespace_repo(req.context)
        try:
            namespace_obj = ns_repo.get(namespace)
            namespace_obj.delete()
            ns_repo.remove_tags(namespace_obj)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to delete metadata tags "
                      "within '%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

    def delete_properties(self, req, namespace):
        ns_repo = self.gateway.get_metadef_namespace_repo(req.context)
        try:
            namespace_obj = ns_repo.get(namespace)
            namespace_obj.delete()
            ns_repo.remove_properties(namespace_obj)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to delete metadata properties "
                      "within '%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

    def _prefix_property_name(self, namespace_detail, user_resource_type):
        prefix = None
        if user_resource_type and namespace_detail.resource_type_associations:
            for resource_type in namespace_detail.resource_type_associations:
                if resource_type.name == user_resource_type:
                    prefix = resource_type.prefix
                    break

        if prefix:
            if namespace_detail.properties:
                new_property_dict = dict()
                for (key, value) in namespace_detail.properties.items():
                    new_property_dict[prefix + key] = value
                namespace_detail.properties = new_property_dict

            if namespace_detail.objects:
                for object in namespace_detail.objects:
                    new_object_property_dict = dict()
                    for (key, value) in object.properties.items():
                        new_object_property_dict[prefix + key] = value
                    object.properties = new_object_property_dict

                    if object.required and len(object.required) > 0:
                        required = [prefix + name
                                    for name in object.required]
                        object.required = required

        return namespace_detail


class RequestDeserializer(wsgi.JSONRequestDeserializer):
    _disallowed_properties = ['self', 'schema', 'created_at', 'updated_at']

    def __init__(self, schema=None):
        super(RequestDeserializer, self).__init__()
        self.schema = schema or get_schema()

    def _get_request_body(self, request):
        output = super(RequestDeserializer, self).default(request)
        if 'body' not in output:
            msg = _('Body expected in request.')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return output['body']

    @classmethod
    def _check_allowed(cls, image):
        for key in cls._disallowed_properties:
            if key in image:
                msg = _("Attribute '%s' is read-only.") % key
                raise webob.exc.HTTPForbidden(explanation=msg)

    def index(self, request):
        params = request.params.copy()
        limit = params.pop('limit', None)
        marker = params.pop('marker', None)
        sort_dir = params.pop('sort_dir', 'desc')

        if limit is None:
            limit = CONF.limit_param_default
        limit = min(CONF.api_limit_max, int(limit))

        query_params = {
            'sort_key': params.pop('sort_key', 'created_at'),
            'sort_dir': self._validate_sort_dir(sort_dir),
            'filters': self._get_filters(params)
        }

        if marker is not None:
            query_params['marker'] = marker

        if limit is not None:
            query_params['limit'] = self._validate_limit(limit)
        return query_params

    def _validate_sort_dir(self, sort_dir):
        if sort_dir not in ['asc', 'desc']:
            msg = _('Invalid sort direction: %s') % sort_dir
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return sort_dir

    def _get_filters(self, filters):
        visibility = filters.get('visibility')
        if visibility:
            if visibility not in ['public', 'private']:
                msg = _('Invalid visibility value: %s') % visibility
                raise webob.exc.HTTPBadRequest(explanation=msg)
        return filters

    def _validate_limit(self, limit):
        try:
            limit = int(limit)
        except ValueError:
            msg = _("limit param must be an integer")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if limit < 0:
            msg = _("limit param must be positive")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return limit

    def show(self, request):
        params = request.params.copy()
        query_params = {
            'filters': self._get_filters(params)
        }
        return query_params

    def create(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        try:
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        namespace = json.fromjson(Namespace, body)
        return dict(namespace=namespace)

    def update(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        try:
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        namespace = json.fromjson(Namespace, body)
        return dict(user_ns=namespace)


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def __init__(self, schema=None):
        super(ResponseSerializer, self).__init__()
        self.schema = schema

    def create(self, response, namespace):
        ns_json = json.tojson(Namespace, namespace)
        response = self.__render(ns_json, response, http.CREATED)
        response.location = get_namespace_href(namespace)

    def show(self, response, namespace):
        ns_json = json.tojson(Namespace, namespace)
        response = self.__render(ns_json, response)

    def index(self, response, result):
        params = dict(response.request.params)
        params.pop('marker', None)
        query = urlparse.urlencode(params)
        result.first = "/v2/metadefs/namespaces"
        result.schema = "/v2/schemas/metadefs/namespaces"
        if query:
            result.first = '%s?%s' % (result.first, query)
        if result.next:
            params['marker'] = result.next
            next_query = urlparse.urlencode(params)
            result.next = '/v2/metadefs/namespaces?%s' % next_query
        ns_json = json.tojson(Namespaces, result)
        response = self.__render(ns_json, response)

    def update(self, response, namespace):
        ns_json = json.tojson(Namespace, namespace)
        response = self.__render(ns_json, response, http.OK)

    def delete(self, response, result):
        response.status_int = http.NO_CONTENT

    def delete_objects(self, response, result):
        response.status_int = http.NO_CONTENT

    def delete_properties(self, response, result):
        response.status_int = http.NO_CONTENT

    def delete_tags(self, response, result):
        response.status_int = http.NO_CONTENT

    def __render(self, json_data, response, response_status=None):
        body = jsonutils.dumps(json_data, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'
        if response_status:
            response.status_int = response_status
        return response


def _get_base_definitions():
    return get_schema_definitions()


def get_schema_definitions():
    return {
        "positiveInteger": {
            "type": "integer",
            "minimum": 0
        },
        "positiveIntegerDefault0": {
            "allOf": [
                {"$ref": "#/definitions/positiveInteger"},
                {"default": 0}
            ]
        },
        "stringArray": {
            "type": "array",
            "items": {"type": "string"},
            # "minItems": 1,
            "uniqueItems": True
        },
        "property": {
            "type": "object",
            "additionalProperties": {
                "type": "object",
                "required": ["title", "type"],
                "properties": {
                    "name": {
                        "type": "string",
                        "maxLength": 80
                    },
                    "title": {
                        "type": "string"
                    },
                    "description": {
                        "type": "string"
                    },
                    "operators": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        }
                    },
                    "type": {
                        "type": "string",
                        "enum": [
                            "array",
                            "boolean",
                            "integer",
                            "number",
                            "object",
                            "string",
                            None
                        ]
                    },
                    "required": {
                        "$ref": "#/definitions/stringArray"
                    },
                    "minimum": {
                        "type": "number"
                    },
                    "maximum": {
                        "type": "number"
                    },
                    "maxLength": {
                        "$ref": "#/definitions/positiveInteger"
                    },
                    "minLength": {
                        "$ref": "#/definitions/positiveIntegerDefault0"
                    },
                    "pattern": {
                        "type": "string",
                        "format": "regex"
                    },
                    "enum": {
                        "type": "array"
                    },
                    "readonly": {
                        "type": "boolean"
                    },
                    "default": {},
                    "items": {
                        "type": "object",
                        "properties": {
                            "type": {
                                "type": "string",
                                "enum": [
                                    "array",
                                    "boolean",
                                    "integer",
                                    "number",
                                    "object",
                                    "string",
                                    None
                                ]
                            },
                            "enum": {
                                "type": "array"
                            }
                        }
                    },
                    "maxItems": {
                        "$ref": "#/definitions/positiveInteger"
                    },
                    "minItems": {
                        "$ref": "#/definitions/positiveIntegerDefault0"
                    },
                    "uniqueItems": {
                        "type": "boolean",
                        "default": False
                    },
                    "additionalItems": {
                        "type": "boolean"
                    },
                }
            }
        }
    }


def _get_base_properties():
    return {
        "namespace": {
            "type": "string",
            "description": _("The unique namespace text."),
            "maxLength": 80,
        },
        "display_name": {
            "type": "string",
            "description": _("The user friendly name for the namespace. Used "
                             "by UI if available."),
            "maxLength": 80,
        },
        "description": {
            "type": "string",
            "description": _("Provides a user friendly description of the "
                             "namespace."),
            "maxLength": 500,
        },
        "visibility": {
            "type": "string",
            "description": _("Scope of namespace accessibility."),
            "enum": ["public", "private"],
        },
        "protected": {
            "type": "boolean",
            "description": _("If true, namespace will not be deletable."),
        },
        "owner": {
            "type": "string",
            "description": _("Owner of the namespace."),
            "maxLength": 255,
        },
        "created_at": {
            "type": "string",
            "readOnly": True,
            "description": _("Date and time of namespace creation"),
            "format": "date-time"
        },
        "updated_at": {
            "type": "string",
            "readOnly": True,
            "description": _("Date and time of the last namespace"
                             " modification"),
            "format": "date-time"
        },
        "schema": {
            'readOnly': True,
            "type": "string"
        },
        "self": {
            'readOnly': True,
            "type": "string"
        },
        "resource_type_associations": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string"
                    },
                    "prefix": {
                        "type": "string"
                    },
                    "properties_target": {
                        "type": "string"
                    }
                }
            }
        },
        "properties": {
            "$ref": "#/definitions/property"
        },
        "objects": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string"
                    },
                    "description": {
                        "type": "string"
                    },
                    "required": {
                        "$ref": "#/definitions/stringArray"
                    },
                    "properties": {
                        "$ref": "#/definitions/property"
                    },
                }
            }
        },
        "tags": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string"
                    }
                }
            }
        },
    }


def get_schema():
    properties = _get_base_properties()
    definitions = _get_base_definitions()
    mandatory_attrs = Namespace.get_mandatory_attrs()
    schema = glance.schema.Schema(
        'namespace',
        properties,
        required=mandatory_attrs,
        definitions=definitions
    )
    return schema


def get_collection_schema():
    namespace_schema = get_schema()
    return glance.schema.CollectionSchema('namespaces', namespace_schema)


def get_namespace_href(namespace):
    base_href = '/v2/metadefs/namespaces/%s' % namespace.namespace
    return base_href


def get_object_href(namespace_name, metadef_object):
    base_href = ('/v2/metadefs/namespaces/%s/objects/%s' %
                 (namespace_name, metadef_object.name))
    return base_href


def get_tag_href(namespace_name, metadef_tag):
    base_href = ('/v2/metadefs/namespaces/%s/tags/%s' %
                 (namespace_name, metadef_tag.name))
    return base_href


def create_resource():
    """Namespaces resource factory method"""
    schema = get_schema()
    deserializer = RequestDeserializer(schema)
    serializer = ResponseSerializer(schema)
    controller = NamespaceController()
    return wsgi.Resource(controller, deserializer, serializer)

glance-16.0.0/glance/api/v2/tasks.py

# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy

import debtcollector
import glance_store
from oslo_config import cfg
from oslo_log import log as logging
import oslo_serialization.jsonutils as json
from oslo_utils import encodeutils
from oslo_utils import uuidutils
import six
from six.moves import http_client as http
import six.moves.urllib.parse as urlparse
import webob.exc

from glance.api import common
from glance.api import policy
from glance.common import exception
from glance.common import timeutils
from glance.common import wsgi
import glance.db
import glance.gateway
from glance.i18n import _, _LW
import glance.notifier
import glance.schema

LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('task_time_to_live', 'glance.common.config', group='task')

_DEPRECATION_MESSAGE = ("The task API is being deprecated and "
                        "it will be superseded by the new image import "
                        "API. Please refer to this link for more "
                        "information about the aforementioned process: "
                        "https://specs.openstack.org/openstack/glance-specs/"
                        "specs/mitaka/approved/image-import/"
                        "image-import-refactor.html")


class TasksController(object):
    """Manages operations on tasks."""

    def __init__(self, db_api=None, policy_enforcer=None, notifier=None,
                 store_api=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.notifier = notifier or glance.notifier.Notifier()
        self.store_api = store_api or glance_store
        self.gateway = glance.gateway.Gateway(self.db_api, self.store_api,
                                              self.notifier, self.policy)

    @debtcollector.removals.remove(message=_DEPRECATION_MESSAGE)
    def create(self, req, task):
        # NOTE(rosmaita): access to this call is enforced in the deserializer
        task_factory = self.gateway.get_task_factory(req.context)
        executor_factory = self.gateway.get_task_executor_factory(req.context)
        task_repo = self.gateway.get_task_repo(req.context)
        try:
            new_task = task_factory.new_task(task_type=task['type'],
                                             owner=req.context.owner,
                                             task_input=task['input'])
            task_repo.add(new_task)
            task_executor = executor_factory.new_task_executor(req.context)
            pool = common.get_thread_pool("tasks_eventlet_pool")
            pool.spawn_n(new_task.run, task_executor)
        except exception.Forbidden as e:
            msg = (_LW("Forbidden to create task. Reason: %(reason)s")
                   % {'reason': encodeutils.exception_to_unicode(e)})
            LOG.warn(msg)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        return new_task

    @debtcollector.removals.remove(message=_DEPRECATION_MESSAGE)
    def index(self, req, marker=None, limit=None, sort_key='created_at',
              sort_dir='desc', filters=None):
        # NOTE(rosmaita): access to this call is enforced in the deserializer
        result = {}
        if filters is None:
            filters = {}
        filters['deleted'] = False

        if limit is None:
            limit = CONF.limit_param_default
        limit = min(CONF.api_limit_max, limit)

        task_repo = self.gateway.get_task_stub_repo(req.context)
        try:
            tasks = task_repo.list(marker, limit, sort_key,
                                   sort_dir, filters)
            if len(tasks) != 0 and len(tasks) == limit:
                result['next_marker'] = tasks[-1].task_id
        except (exception.NotFound, exception.InvalidSortKey,
                exception.InvalidFilterRangeValue) as e:
            LOG.warn(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        except exception.Forbidden as e:
            LOG.warn(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        result['tasks'] = tasks
        return result

    @debtcollector.removals.remove(message=_DEPRECATION_MESSAGE)
    def get(self, req, task_id):
        _enforce_access_policy(self.policy, req)
        try:
            task_repo = self.gateway.get_task_repo(req.context)
            task = task_repo.get(task_id)
        except exception.NotFound as e:
            msg = (_LW("Failed to find task %(task_id)s. Reason: %(reason)s")
                   % {'task_id': task_id,
                      'reason': encodeutils.exception_to_unicode(e)})
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Forbidden as e:
            msg = (_LW("Forbidden to get task %(task_id)s. Reason:"
                       " %(reason)s")
                   % {'task_id': task_id,
                      'reason': encodeutils.exception_to_unicode(e)})
            LOG.warn(msg)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        return task

    @debtcollector.removals.remove(message=_DEPRECATION_MESSAGE)
    def delete(self, req, task_id):
        _enforce_access_policy(self.policy, req)
        msg = (_("This operation is currently not permitted on Glance Tasks. "
                 "They are auto deleted after reaching the time based on "
                 "their expires_at property."))
        raise webob.exc.HTTPMethodNotAllowed(explanation=msg,
                                             headers={'Allow': 'GET'},
                                             body_template='${explanation}')


class RequestDeserializer(wsgi.JSONRequestDeserializer):
    _required_properties = ['type', 'input']

    def __init__(self, schema=None, policy_engine=None):
        super(RequestDeserializer, self).__init__()
        self.schema = schema or get_task_schema()
        # want to enforce the access policy as early as possible
        self.policy_engine = policy_engine or policy.Enforcer()

    def _get_request_body(self, request):
        output = super(RequestDeserializer, self).default(request)
        if 'body' not in output:
            msg = _('Body expected in request.')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return output['body']

    def _validate_sort_dir(self, sort_dir):
        if sort_dir not in ['asc', 'desc']:
            msg = _('Invalid sort direction: %s') % sort_dir
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return sort_dir

    def _get_filters(self, filters):
        status = filters.get('status')
        if status:
            if status not in ['pending', 'processing', 'success', 'failure']:
                msg = _('Invalid status value: %s') % status
                raise webob.exc.HTTPBadRequest(explanation=msg)

        type = filters.get('type')
        if type:
            if type not in ['import']:
                msg = _('Invalid type value: %s') % type
                raise webob.exc.HTTPBadRequest(explanation=msg)
        return filters

    def _validate_marker(self, marker):
        if marker and not uuidutils.is_uuid_like(marker):
            msg = _('Invalid marker format')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return marker

    def _validate_limit(self, limit):
        try:
            limit = int(limit)
        except ValueError:
            msg = _("limit param must be an integer")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if limit < 0:
            msg = _("limit param must be positive")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return limit

    def _validate_create_body(self, body):
        """Validate the body of task creating request"""
        for param in self._required_properties:
            if param not in body:
                msg = _("Task '%s' is required") % param
                raise webob.exc.HTTPBadRequest(explanation=msg)

    def create(self, request):
        _enforce_access_policy(self.policy_engine, request)
        body = self._get_request_body(request)
        self._validate_create_body(body)
        try:
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        task = {}
        properties = body
        for key in self._required_properties:
            try:
                task[key] = properties.pop(key)
            except KeyError:
                pass
        return dict(task=task)

    def index(self, request):
        _enforce_access_policy(self.policy_engine, request)
        params = request.params.copy()
        limit = params.pop('limit', None)
        marker = params.pop('marker', None)
        sort_dir = params.pop('sort_dir', 'desc')
        query_params = {
            'sort_key': params.pop('sort_key', 'created_at'),
            'sort_dir': self._validate_sort_dir(sort_dir),
            'filters': self._get_filters(params)
        }

        if marker is not None:
            query_params['marker'] = self._validate_marker(marker)

        if limit is not None:
            query_params['limit'] = self._validate_limit(limit)
        return query_params


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def __init__(self, task_schema=None, partial_task_schema=None):
        super(ResponseSerializer, self).__init__()
        self.task_schema = task_schema or get_task_schema()
        self.partial_task_schema = (partial_task_schema
                                    or _get_partial_task_schema())

    def _inject_location_header(self, response, task):
        location = self._get_task_location(task)
        if six.PY2:
            location = location.encode('utf-8')
        response.headers['Location'] = location

    def _get_task_location(self, task):
        return '/v2/tasks/%s' % task.task_id

    def _format_task(self, schema, task):
        task_view = {
            'id': task.task_id,
            'input': task.task_input,
            'type': task.type,
            'status': task.status,
            'owner': task.owner,
            'message': task.message,
            'result': task.result,
            'created_at': timeutils.isotime(task.created_at),
            'updated_at': timeutils.isotime(task.updated_at),
            'self': self._get_task_location(task),
            'schema': '/v2/schemas/task'
        }
        if task.expires_at:
            task_view['expires_at'] = timeutils.isotime(task.expires_at)
        task_view = schema.filter(task_view)  # domain
        return task_view

    def _format_task_stub(self, schema, task):
        task_view = {
            'id': task.task_id,
            'type': task.type,
            'status': task.status,
            'owner': task.owner,
            'created_at': timeutils.isotime(task.created_at),
            'updated_at': timeutils.isotime(task.updated_at),
            'self': self._get_task_location(task),
            'schema': '/v2/schemas/task'
        }
        if task.expires_at:
            task_view['expires_at'] = timeutils.isotime(task.expires_at)
        task_view = schema.filter(task_view)  # domain
        return task_view

    def create(self, response, task):
        response.status_int = http.CREATED
        self._inject_location_header(response, task)
        self.get(response, task)

    def get(self, response, task):
        task_view = self._format_task(self.task_schema, task)
        body = json.dumps(task_view, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def index(self, response, result):
        params = dict(response.request.params)
        params.pop('marker', None)
        query = urlparse.urlencode(params)
        body = {
            'tasks': [self._format_task_stub(self.partial_task_schema, task)
                      for task in result['tasks']],
            'first': '/v2/tasks',
            'schema': '/v2/schemas/tasks',
        }
        if query:
            body['first'] = '%s?%s' % (body['first'], query)
        if 'next_marker' in result:
            params['marker'] = result['next_marker']
            next_query = urlparse.urlencode(params)
            body['next'] = '/v2/tasks?%s' % next_query
        response.unicode_body = six.text_type(json.dumps(body,
                                                         ensure_ascii=False))
        response.content_type = 'application/json'


_TASK_SCHEMA = {
    "id": {
        "description": _("An identifier for the task"),
        "pattern": _('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}'
                     '-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$'),
        "type": "string"
    },
    "type": {
        "description": _("The type of task represented by this content"),
        "enum": [
            "import",
            "api_image_import"
        ],
        "type": "string"
    },
    "status": {
        "description": _("The current status of this task"),
        "enum": [
            "pending",
            "processing",
            "success",
            "failure"
        ],
        "type": "string"
    },
    "input": {
        "description": _("The parameters required by task, JSON blob"),
        "type": ["null", "object"],
    },
    "result": {
        "description": _("The result of current task, JSON blob"),
        "type": ["null", "object"],
    },
    "owner": {
        "description": _("An identifier for the owner of this task"),
        "type": "string"
    },
    "message": {
        "description": _("Human-readable informative message only included"
                         " when appropriate (usually on failure)"),
        "type": "string",
    },
    "expires_at": {
        "description": _("Datetime when this resource would be"
                         " subject to removal"),
        "type": ["null", "string"]
    },
    "created_at": {
        "description": _("Datetime when this resource was created"),
        "type": "string"
    },
    "updated_at": {
        "description": _("Datetime when this resource was updated"),
        "type": "string"
    },
    'self': {
        'readOnly': True,
        'type': 'string'
    },
    'schema': {
        'readOnly': True,
        'type': 'string'
    }
}


def _enforce_access_policy(policy_engine, request):
    try:
        policy_engine.enforce(request.context,
                              'tasks_api_access',
                              {})
    except exception.Forbidden:
        LOG.debug("User does not have permission to access the Tasks API")
        raise webob.exc.HTTPForbidden()


def get_task_schema():
    properties = copy.deepcopy(_TASK_SCHEMA)
    schema = glance.schema.Schema('task', properties)
    return schema


def _get_partial_task_schema():
    properties = copy.deepcopy(_TASK_SCHEMA)
    hide_properties = ['input', 'result', 'message']
    for key in hide_properties:
        del properties[key]
    schema = glance.schema.Schema('task', properties)
    return schema


def get_collection_schema():
    task_schema = _get_partial_task_schema()
    return glance.schema.CollectionSchema('tasks', task_schema)


def create_resource():
    """Task resource factory method"""
    task_schema = get_task_schema()
    partial_task_schema = _get_partial_task_schema()
    deserializer = RequestDeserializer(task_schema)
    serializer = ResponseSerializer(task_schema, partial_task_schema)
    controller = TasksController()
    return wsgi.Resource(controller, deserializer, serializer)

glance-16.0.0/glance/api/v2/images.py

# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re

import glance_store
from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils as json
from oslo_utils import encodeutils
import six
from six.moves import http_client as http
import six.moves.urllib.parse as urlparse
import webob.exc

from glance.api import common
from glance.api import policy
from glance.common import exception
from glance.common import location_strategy
from glance.common import timeutils
from glance.common import utils
from glance.common import wsgi
import glance.db
import glance.gateway
from glance.i18n import _, _LW
import glance.notifier
import glance.schema

LOG = logging.getLogger(__name__)

CONF = cfg.CONF
CONF.import_opt('disk_formats', 'glance.common.config', group='image_format')
CONF.import_opt('container_formats', 'glance.common.config',
                group='image_format')
CONF.import_opt('show_multiple_locations', 'glance.common.config')


class ImagesController(object):
    def __init__(self, db_api=None, policy_enforcer=None, notifier=None,
                 store_api=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.notifier = notifier or glance.notifier.Notifier()
        self.store_api = store_api or glance_store
        self.gateway = glance.gateway.Gateway(self.db_api, self.store_api,
                                              self.notifier, self.policy)

    @utils.mutating
    def create(self, req, image, extra_properties, tags):
        image_factory = self.gateway.get_image_factory(req.context)
        image_repo = self.gateway.get_repo(req.context)
        try:
            image = image_factory.new_image(extra_properties=extra_properties,
                                            tags=tags, **image)
            image_repo.add(image)
        except (exception.DuplicateLocation, exception.Invalid) as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        except (exception.ReservedProperty,
                exception.ReadonlyProperty) as e:
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to create image")
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.LimitExceeded as e:
            LOG.warn(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPRequestEntityTooLarge(
                explanation=e.msg, request=req, content_type='text/plain')
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except exception.NotAuthenticated as e:
            raise webob.exc.HTTPUnauthorized(explanation=e.msg)
        except TypeError as e:
            LOG.debug(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPBadRequest(explanation=e)

        return image

    @utils.mutating
    def import_image(self, req, image_id, body):
        task_factory = self.gateway.get_task_factory(req.context)
        executor_factory = self.gateway.get_task_executor_factory(req.context)
        task_repo = self.gateway.get_task_repo(req.context)

        task_input = {'image_id': image_id,
                      'import_req': body}

        import_method = body.get('method').get('name')
        uri = body.get('method').get('uri')
        if (import_method == 'web-download' and
                not utils.validate_import_uri(uri)):
            LOG.debug("URI for web-download does not pass filtering: %s", uri)
            msg = (_("URI for web-download does not pass filtering: %s")
                   % uri)
            raise webob.exc.HTTPBadRequest(explanation=msg)

        try:
            import_task = task_factory.new_task(task_type='api_image_import',
                                                owner=req.context.owner,
                                                task_input=task_input)
            task_repo.add(import_task)
            task_executor = executor_factory.new_task_executor(req.context)
            pool = common.get_thread_pool("tasks_eventlet_pool")
            pool.spawn_n(import_task.run, task_executor)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to create image import task.")
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.Conflict as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except exception.InvalidImageStatusTransition as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except ValueError as e:
            LOG.debug("Cannot import data for image %(id)s: %(e)s",
                      {'id': image_id,
                       'e': encodeutils.exception_to_unicode(e)})
            raise webob.exc.HTTPBadRequest(
                explanation=encodeutils.exception_to_unicode(e))

        return image_id

    def index(self, req, marker=None, limit=None, sort_key=None,
              sort_dir=None, filters=None, member_status='accepted'):
        sort_key = ['created_at'] if not sort_key else sort_key
        sort_dir = ['desc'] if not sort_dir else sort_dir
        result = {}
        if filters is None:
            filters = {}
        filters['deleted'] = False

        protected = filters.get('protected')
        if protected is not None:
            if protected not in ['true', 'false']:
                message = _("Invalid value '%s' for 'protected' filter."
                            " Valid values are 'true' or 'false'.") % protected
                raise webob.exc.HTTPBadRequest(explanation=message)
            # ensure the type of protected is boolean
            filters['protected'] = protected == 'true'

        if limit is None:
            limit = CONF.limit_param_default
        limit = min(CONF.api_limit_max, limit)

        image_repo = self.gateway.get_repo(req.context)
        try:
            images = image_repo.list(marker=marker, limit=limit,
                                     sort_key=sort_key,
                                     sort_dir=sort_dir,
                                     filters=filters,
                                     member_status=member_status)
            if len(images) != 0 and len(images) == limit:
                result['next_marker'] = images[-1].image_id
        except (exception.NotFound, exception.InvalidSortKey,
                exception.InvalidFilterRangeValue,
                exception.InvalidParameterValue,
                exception.InvalidFilterOperatorValue) as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to retrieve images index")
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotAuthenticated as e:
            raise webob.exc.HTTPUnauthorized(explanation=e.msg)
        result['images'] = images
        return result

    def show(self, req, image_id):
        image_repo = self.gateway.get_repo(req.context)
        try:
            return image_repo.get(image_id)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to show image '%s'", image_id)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.NotAuthenticated as e:
            raise webob.exc.HTTPUnauthorized(explanation=e.msg)

    @utils.mutating
    def update(self, req, image_id, changes):
        image_repo = self.gateway.get_repo(req.context)
        try:
            image = image_repo.get(image_id)

            for change in changes:
                change_method_name = '_do_%s' % change['op']
                change_method = getattr(self, change_method_name)
                change_method(req, image, change)

            if changes:
                image_repo.save(image)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except (exception.Invalid, exception.BadStoreUri) as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to update image '%s'", image_id)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.StorageQuotaFull as e:
            msg = (_("Denying attempt to upload image because it exceeds the"
                     " quota: %s") % encodeutils.exception_to_unicode(e))
            LOG.warn(msg)
            raise webob.exc.HTTPRequestEntityTooLarge(
                explanation=msg, request=req, content_type='text/plain')
        except exception.LimitExceeded as e:
            LOG.exception(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPRequestEntityTooLarge(
                explanation=e.msg, request=req, content_type='text/plain')
        except exception.NotAuthenticated as e:
            raise webob.exc.HTTPUnauthorized(explanation=e.msg)

        return image

    def _do_replace(self, req, image, change):
        path = change['path']
        path_root = path[0]
        value = change['value']
        if path_root == 'locations' and value == []:
            msg = _("Cannot set locations to empty list.")
            raise webob.exc.HTTPForbidden(msg)
        elif path_root == 'locations' and value != []:
            self._do_replace_locations(image, value)
        elif path_root == 'owner' and not req.context.is_admin:
            msg = _("Owner can't be updated by non admin.")
            raise webob.exc.HTTPForbidden(msg)
        else:
            if hasattr(image, path_root):
                setattr(image, path_root, value)
            elif path_root in image.extra_properties:
                image.extra_properties[path_root] = value
            else:
                msg = _("Property %s does not exist.")
                raise webob.exc.HTTPConflict(msg % path_root)

    def _do_add(self, req, image, change):
        path = change['path']
        path_root = path[0]
        value = change['value']
        json_schema_version = change.get('json_schema_version', 10)
        if path_root == 'locations':
            self._do_add_locations(image, path[1], value)
        else:
            if ((hasattr(image, path_root) or
                    path_root in image.extra_properties)
                    and json_schema_version == 4):
                msg = _("Property %s already present.")
                raise webob.exc.HTTPConflict(msg % path_root)
            if hasattr(image, path_root):
                setattr(image, path_root, value)
            else:
                image.extra_properties[path_root] = value

    def _do_remove(self, req, image, change):
        path = change['path']
        path_root = path[0]
        if path_root == 'locations':
            try:
                self._do_remove_locations(image, path[1])
            except exception.Forbidden as e:
                raise webob.exc.HTTPForbidden(e.msg)
        else:
            if hasattr(image, path_root):
                msg = _("Property %s may not be removed.")
                raise webob.exc.HTTPForbidden(msg % path_root)
            elif path_root in image.extra_properties:
                del image.extra_properties[path_root]
            else:
                msg = _("Property %s does not exist.")
                raise webob.exc.HTTPConflict(msg % path_root)

    @utils.mutating
    def delete(self, req, image_id):
        image_repo = self.gateway.get_repo(req.context)
        try:
            image = image_repo.get(image_id)

            # NOTE(abhishekk): If 'image-import' is supported and image status
            # is uploading then delete image data from the staging area.
            if CONF.enable_image_import and image.status == 'uploading':
                file_path = str(CONF.node_staging_uri + '/' + image.image_id)
                self.store_api.delete_from_backend(file_path)

            image.delete()
            image_repo.remove(image)
        except (glance_store.Forbidden, exception.Forbidden) as e:
            LOG.debug("User not permitted to delete image '%s'", image_id)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except (glance_store.NotFound, exception.NotFound) as e:
            msg = (_("Failed to find image %(image_id)s to delete")
                   % {'image_id': image_id})
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound(explanation=msg)
        except glance_store.exceptions.InUseByStore as e:
            msg = (_("Image %(id)s could not be deleted "
                     "because it is in use: %(exc)s")
                   % {"id": image_id, "exc": e.msg})
            LOG.warn(msg)
            raise webob.exc.HTTPConflict(explanation=msg)
        except glance_store.exceptions.HasSnapshot as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except exception.InvalidImageStatusTransition as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        except exception.NotAuthenticated as e:
            raise webob.exc.HTTPUnauthorized(explanation=e.msg)

    def _get_locations_op_pos(self, path_pos, max_pos, allow_max):
        if path_pos is None or max_pos is None:
            return None
        pos = max_pos if allow_max else max_pos - 1
        if path_pos.isdigit():
            pos = int(path_pos)
        elif path_pos != '-':
            return None
        if not (allow_max or 0 <= pos < max_pos):
            return None
        return pos

    def _do_replace_locations(self, image, value):
        if not CONF.show_multiple_locations:
            msg = _("It's not allowed to update locations if locations are "
                    "invisible.")
            raise webob.exc.HTTPForbidden(explanation=msg)

        if image.status not in ('active', 'queued'):
            msg = _("It's not allowed to replace locations if image status is "
                    "%s.") % image.status
            raise webob.exc.HTTPConflict(explanation=msg)

        try:
            # NOTE(flwang): _locations_proxy's setattr method will check if
            # the update is acceptable.
            image.locations = value
        except (exception.BadStoreUri, exception.DuplicateLocation) as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        except ValueError as ve:
            # update image status failed.
            raise webob.exc.HTTPBadRequest(
                explanation=encodeutils.exception_to_unicode(ve))

    def _do_add_locations(self, image, path_pos, value):
        if not CONF.show_multiple_locations:
            msg = _("It's not allowed to add locations if locations are "
                    "invisible.")
            raise webob.exc.HTTPForbidden(explanation=msg)

        if image.status not in ('active', 'queued'):
            msg = _("It's not allowed to add locations if image status is "
                    "%s.") % image.status
            raise webob.exc.HTTPConflict(explanation=msg)

        pos = self._get_locations_op_pos(path_pos,
                                         len(image.locations), True)
        if pos is None:
            msg = _("Invalid position for adding a location.")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        try:
            image.locations.insert(pos, value)
            if image.status == 'queued':
                image.status = 'active'
        except (exception.BadStoreUri, exception.DuplicateLocation) as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        except ValueError as e:
            # update image status failed.
            raise webob.exc.HTTPBadRequest(
                explanation=encodeutils.exception_to_unicode(e))

    def _do_remove_locations(self, image, path_pos):
        if not CONF.show_multiple_locations:
            msg = _("It's not allowed to remove locations if locations are "
                    "invisible.")
            raise webob.exc.HTTPForbidden(explanation=msg)

        if image.status not in ('active',):
            msg = _("It's not allowed to remove locations if image status is "
                    "%s.") % image.status
            raise webob.exc.HTTPConflict(explanation=msg)

        if len(image.locations) == 1:
            LOG.debug("User forbidden to remove last location of image %s",
                      image.image_id)
            msg = _("Cannot remove last location in the image.")
            raise exception.Forbidden(msg)
        pos = self._get_locations_op_pos(path_pos,
                                         len(image.locations), False)
        if pos is None:
            msg = _("Invalid position for removing a location.")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        try:
            # NOTE(zhiyan): this actually deletes the location
            # from the backend store.
            image.locations.pop(pos)
            # TODO(jokke): Fix this, we should catch what store throws and
            # provide definitely something else than InternalServerError
            # to user.
        except Exception as e:
            raise webob.exc.HTTPInternalServerError(
                explanation=encodeutils.exception_to_unicode(e))


class RequestDeserializer(wsgi.JSONRequestDeserializer):

    _disallowed_properties = ('direct_url', 'self', 'file', 'schema')
    _readonly_properties = ('created_at', 'updated_at', 'status', 'checksum',
                            'size', 'virtual_size', 'direct_url', 'self',
                            'file', 'schema', 'id')
    _reserved_properties = ('location', 'deleted', 'deleted_at')
    _base_properties = ('checksum', 'created_at', 'container_format',
                        'disk_format', 'id', 'min_disk', 'min_ram', 'name',
                        'size', 'virtual_size', 'status', 'tags', 'owner',
                        'updated_at', 'visibility', 'protected')
    _available_sort_keys = ('name', 'status', 'container_format',
                            'disk_format', 'size', 'id', 'created_at',
                            'updated_at')
    _default_sort_key = 'created_at'
    _default_sort_dir = 'desc'
    _path_depth_limits = {'locations': {'add': 2, 'remove': 2, 'replace': 1}}
    _supported_operations = ('add', 'remove', 'replace')

    def __init__(self, schema=None):
        super(RequestDeserializer, self).__init__()
        self.schema = schema or get_schema()

    def _get_request_body(self, request):
        output = super(RequestDeserializer, self).default(request)
        if 'body' not in output:
            msg = _('Body expected in request.')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return output['body']

    @classmethod
    def _check_allowed(cls, image):
        for key in cls._disallowed_properties:
            if key in image:
                msg = _("Attribute '%s' is read-only.") % key
                raise webob.exc.HTTPForbidden(
                    explanation=six.text_type(msg))

    def create(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        try:
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        image = {}
        properties = body
        tags = properties.pop('tags', [])
        for key in self._base_properties:
            try:
                # NOTE(flwang): Instead of changing the _check_unexpected
                # of ImageFactory. It would be better to do the mapping
                # at here.
                if key == 'id':
                    image['image_id'] = properties.pop(key)
                else:
                    image[key] = properties.pop(key)
            except KeyError:
                pass

        # NOTE(abhishekk): Check if custom property key name is less than 255
        # characters. Reference LP #1737952
        for key in properties:
            if len(key) > 255:
                msg = (_("Custom property should not be greater than 255 "
                         "characters."))
                raise webob.exc.HTTPBadRequest(explanation=msg)

        return dict(image=image, extra_properties=properties, tags=tags)

    def _get_change_operation_d10(self, raw_change):
        op = raw_change.get('op')
        if op is None:
            msg = (_('Unable to find `op` in JSON Schema change. '
                     'It must be one of the following: %(available)s.') %
                   {'available': ', '.join(self._supported_operations)})
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if op not in self._supported_operations:
            msg = (_('Invalid operation: `%(op)s`. '
                     'It must be one of the following: %(available)s.') %
                   {'op': op,
                    'available': ', '.join(self._supported_operations)})
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return op

    def _get_change_operation_d4(self, raw_change):
        op = None
        for key in self._supported_operations:
            if key in raw_change:
                if op is not None:
                    msg = _('Operation objects must contain only one member'
                            ' named "add", "remove", or "replace".')
                    raise webob.exc.HTTPBadRequest(explanation=msg)
                op = key
        if op is None:
            msg = _('Operation objects must contain exactly one member'
                    ' named "add", "remove", or "replace".')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return op

    def _get_change_path_d10(self, raw_change):
        try:
            return raw_change['path']
        except KeyError:
            msg = _("Unable to find '%s' in JSON Schema change") % 'path'
            raise webob.exc.HTTPBadRequest(explanation=msg)

    def _get_change_path_d4(self, raw_change, op):
        return raw_change[op]

    def _decode_json_pointer(self, pointer):
        """Parse a json pointer.

        Json Pointers are defined in
        http://tools.ietf.org/html/draft-pbryan-zyp-json-pointer .
        The pointers use '/' for separation between object attributes, such
        that '/A/B' would evaluate to C in {"A": {"B": "C"}}. A '/' character
        in an attribute name is encoded as "~1" and a '~' character is
        encoded as "~0".
        """
        self._validate_json_pointer(pointer)
        ret = []
        for part in pointer.lstrip('/').split('/'):
            ret.append(part.replace('~1', '/').replace('~0', '~').strip())
        return ret

    def _validate_json_pointer(self, pointer):
        """Validate a json pointer.

        We only accept a limited form of json pointers.
        """
        if not pointer.startswith('/'):
            msg = _('Pointer `%s` does not start with "/".') % pointer
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if re.search(r'/\s*?/', pointer[1:]):
            msg = _('Pointer `%s` contains adjacent "/".') % pointer
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if len(pointer) > 1 and pointer.endswith('/'):
            msg = _('Pointer `%s` ends with "/".') % pointer
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if pointer[1:].strip() == '/':
            msg = _('Pointer `%s` does not contain a valid token.') % pointer
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if re.search('~[^01]', pointer) or pointer.endswith('~'):
            msg = _('Pointer `%s` contains "~" not part of'
                    ' a recognized escape sequence.') % pointer
            raise webob.exc.HTTPBadRequest(explanation=msg)

    def _get_change_value(self, raw_change, op):
        if 'value' not in raw_change:
            msg = _('Operation "%s" requires a member named "value".')
            raise webob.exc.HTTPBadRequest(explanation=msg % op)
        return raw_change['value']

    def _validate_change(self, change):
        path_root = change['path'][0]
        if path_root in self._readonly_properties:
            msg = _("Attribute '%s' is read-only.") % path_root
            raise webob.exc.HTTPForbidden(explanation=six.text_type(msg))
        if path_root in self._reserved_properties:
            msg = _("Attribute '%s' is reserved.") % path_root
            raise webob.exc.HTTPForbidden(explanation=six.text_type(msg))

        if change['op'] == 'remove':
            return

        partial_image = None
        if len(change['path']) == 1:
            partial_image = {path_root: change['value']}
        elif ((path_root in get_base_properties().keys()) and
              (get_base_properties()[path_root].get('type', '') == 'array')):
            # NOTE(zhiyan): client can use the PATCH API to add an element
            # directly to an existing property
            # Such as: 1. using '/locations/N' path to add a location
            #             to the image's 'locations' list at position N.
            #             (implemented)
            #          2. using '/tags/-' path to append a tag to the
            #             image's 'tags' list at the end. (Not implemented)
            partial_image = {path_root: [change['value']]}

        if partial_image:
            try:
                self.schema.validate(partial_image)
            except exception.InvalidObject as e:
                raise webob.exc.HTTPBadRequest(explanation=e.msg)

    def _validate_path(self, op, path):
        path_root = path[0]
        limits = self._path_depth_limits.get(path_root, {})
        if len(path) != limits.get(op, 1):
            msg = _("Invalid JSON pointer for this resource: "
                    "'/%s'") % '/'.join(path)
            raise webob.exc.HTTPBadRequest(explanation=six.text_type(msg))

    def _parse_json_schema_change(self, raw_change, draft_version):
        if draft_version == 10:
            op = self._get_change_operation_d10(raw_change)
            path = self._get_change_path_d10(raw_change)
        elif draft_version == 4:
            op = self._get_change_operation_d4(raw_change)
            path = self._get_change_path_d4(raw_change, op)
        else:
            msg = _('Unrecognized JSON Schema draft version')
            raise webob.exc.HTTPBadRequest(explanation=msg)

        path_list = self._decode_json_pointer(path)
        return op, path_list

    def update(self, request):
        changes = []
        content_types = {
            'application/openstack-images-v2.0-json-patch': 4,
            'application/openstack-images-v2.1-json-patch': 10,
        }
        if request.content_type not in content_types:
            headers = {'Accept-Patch':
                       ', '.join(sorted(content_types.keys()))}
            raise webob.exc.HTTPUnsupportedMediaType(headers=headers)

        json_schema_version = content_types[request.content_type]

        body = self._get_request_body(request)

        if not isinstance(body, list):
            msg = _('Request body must be a JSON array of operation objects.')
            raise webob.exc.HTTPBadRequest(explanation=msg)

        for raw_change in body:
            if not isinstance(raw_change, dict):
                msg = _('Operations must be JSON objects.')
                raise webob.exc.HTTPBadRequest(explanation=msg)

            (op, path) = self._parse_json_schema_change(raw_change,
                                                        json_schema_version)

            # NOTE(zhiyan): the 'path' is a list.
            self._validate_path(op, path)
            change = {'op': op, 'path': path,
                      'json_schema_version': json_schema_version}

            if not op == 'remove':
                change['value'] = self._get_change_value(raw_change, op)

            self._validate_change(change)

            changes.append(change)

        return {'changes': changes}

    def _validate_limit(self, limit):
        try:
            limit = int(limit)
        except ValueError:
            msg = _("limit param must be an integer")
            raise webob.exc.HTTPBadRequest(explanation=msg)

        if limit < 0:
            msg = _("limit param must be positive")
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return limit

    def _validate_sort_key(self, sort_key):
        if sort_key not in self._available_sort_keys:
            msg = _('Invalid sort key: %(sort_key)s. '
                    'It must be one of the following: %(available)s.') % (
                {'sort_key': sort_key,
                 'available': ', '.join(self._available_sort_keys)})
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return sort_key

    def _validate_sort_dir(self, sort_dir):
        if sort_dir not in ['asc', 'desc']:
            msg = _('Invalid sort direction: %s') % sort_dir
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return sort_dir

    def _validate_member_status(self, member_status):
        if member_status not in ['pending', 'accepted', 'rejected', 'all']:
            msg = _('Invalid status: %s') % member_status
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return member_status

    def _get_filters(self, filters):
        visibility = filters.get('visibility')
        if visibility:
            if visibility not in ['community', 'public', 'private', 'shared']:
                msg = _('Invalid visibility value: %s') % visibility
                raise webob.exc.HTTPBadRequest(explanation=msg)
        changes_since = filters.get('changes-since')
        if changes_since:
            msg = _('The "changes-since" filter is no longer available on '
                    'v2.')
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return filters

    def _get_sorting_params(self, params):
        """
        Process sorting params.

        Currently glance supports two sorting syntaxes: the classic one, and
        a new one that is uniform for all OpenStack projects.
        Classic syntax: sort_key=name&sort_dir=asc&sort_key=size&sort_dir=desc
        New syntax: sort=name:asc,size:desc
        """
        sort_keys = []
        sort_dirs = []

        if 'sort' in params:
            # use new sorting syntax here
            if 'sort_key' in params or 'sort_dir' in params:
                msg = _('Old and new sorting syntax cannot be combined')
                raise webob.exc.HTTPBadRequest(explanation=msg)
            for sort_param in params.pop('sort').strip().split(','):
                key, _sep, dir = sort_param.partition(':')
                if not dir:
                    dir = self._default_sort_dir
                sort_keys.append(self._validate_sort_key(key.strip()))
                sort_dirs.append(self._validate_sort_dir(dir.strip()))
        else:
            # continue with classic syntax
            # NOTE(mfedosin): we have 3 options here:
            # 1. sort_dir wasn't passed: we use default one - 'desc'.
            # 2. Only one sort_dir was passed: use it for every sort_key
            #    in the list.
            # 3. Multiple sort_dirs were passed: consistently apply each one
            #    to the corresponding sort_key.
            # If number of sort_dirs and sort_keys doesn't match then raise
            # an exception.
            while 'sort_key' in params:
                sort_keys.append(self._validate_sort_key(
                    params.pop('sort_key').strip()))

            while 'sort_dir' in params:
                sort_dirs.append(self._validate_sort_dir(
                    params.pop('sort_dir').strip()))

        if sort_dirs:
            dir_len = len(sort_dirs)
            key_len = len(sort_keys)

            if dir_len > 1 and dir_len != key_len:
                msg = _('Number of sort dirs does not match the number '
                        'of sort keys')
                raise webob.exc.HTTPBadRequest(explanation=msg)

        if not sort_keys:
            sort_keys = [self._default_sort_key]

        if not sort_dirs:
            sort_dirs = [self._default_sort_dir]

        return sort_keys, sort_dirs

    def index(self, request):
        params = request.params.copy()
        limit = params.pop('limit', None)
        marker = params.pop('marker', None)
        member_status = params.pop('member_status', 'accepted')

        # NOTE (flwang) To avoid using comma or any predefined chars to split
        # multiple tags, now we allow user specify multiple 'tag' parameters
        # in URL, such as v2/images?tag=x86&tag=64bit.
        tags = []
        while 'tag' in params:
            tags.append(params.pop('tag').strip())

        query_params = {
            'filters': self._get_filters(params),
            'member_status': self._validate_member_status(member_status),
        }

        if marker is not None:
            query_params['marker'] = marker
        if limit is not None:
            query_params['limit'] = self._validate_limit(limit)
        if tags:
            query_params['filters']['tags'] = tags

        # NOTE(mfedosin): param is still called sort_key and sort_dir,
        # instead of sort_keys and sort_dirs respectively.
        # It's done because in v1 it's still a single value.
        query_params['sort_key'], query_params['sort_dir'] = (
            self._get_sorting_params(params))

        return query_params

    def _validate_import_body(self, body):
        # TODO(rosmaita): do schema validation of body instead
        # of this ad-hoc stuff
        try:
            method = body['method']
        except KeyError:
            msg = _("Import request requires a 'method' field.")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        try:
            method_name = method['name']
        except KeyError:
            msg = _("Import request requires a 'name' field.")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if method_name not in ['glance-direct', 'web-download']:
            msg = _("Unknown import method name '%s'.") % method_name
            raise webob.exc.HTTPBadRequest(explanation=msg)

    def import_image(self, request):
        if not CONF.enable_image_import:
            msg = _("Image import is not supported at this site.")
            raise webob.exc.HTTPNotFound(explanation=msg)

        body = self._get_request_body(request)
        self._validate_import_body(body)
        return {'body': body}


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def __init__(self, schema=None):
        super(ResponseSerializer, self).__init__()
        self.schema = schema or get_schema()

    def _get_image_href(self, image, subcollection=''):
        base_href = '/v2/images/%s' % image.image_id
        if subcollection:
            base_href = '%s/%s' % (base_href, subcollection)
        return base_href

    def _format_image(self, image):

        def _get_image_locations(image):
            try:
                return list(image.locations)
            except exception.Forbidden:
                return []

        try:
            image_view = dict(image.extra_properties)
            attributes = ['name', 'disk_format', 'container_format',
                          'visibility', 'size', 'virtual_size', 'status',
                          'checksum', 'protected', 'min_ram', 'min_disk',
                          'owner']
            for key in attributes:
                image_view[key] = getattr(image, key)
            image_view['id'] = image.image_id
            image_view['created_at'] = timeutils.isotime(image.created_at)
            image_view['updated_at'] = timeutils.isotime(image.updated_at)

            if CONF.show_multiple_locations:
                locations = _get_image_locations(image)
                if locations:
                    image_view['locations'] = []
                    for loc in locations:
                        tmp = dict(loc)
                        tmp.pop('id', None)
                        tmp.pop('status', None)
                        image_view['locations'].append(tmp)
                else:
                    # NOTE (flwang): We will still show "locations": [] if
                    # image.locations is None to indicate it's allowed to
                    # show locations but it's just non-existent.
                    image_view['locations'] = []
                    LOG.debug("The 'locations' list of image %s is empty",
                              image.image_id)

            if CONF.show_image_direct_url:
                locations = _get_image_locations(image)
                if locations:
                    # Choose best location configured strategy
                    l = location_strategy.choose_best_location(locations)
                    image_view['direct_url'] = l['url']
                else:
                    LOG.debug("The 'locations' list of image %s is empty, "
                              "not including 'direct_url' in response",
                              image.image_id)

            image_view['tags'] = list(image.tags)
            image_view['self'] = self._get_image_href(image)
            image_view['file'] = self._get_image_href(image, 'file')
            image_view['schema'] = '/v2/schemas/image'
            image_view = self.schema.filter(image_view)  # domain
            return image_view
        except exception.Forbidden as e:
            raise webob.exc.HTTPForbidden(explanation=e.msg)

    def create(self, response, image):
        response.status_int = http.CREATED
        self.show(response, image)
        response.location = self._get_image_href(image)
        # TODO(rosmaita): remove the outer 'if' statement when the
        # enable_image_import config option is removed
        if CONF.enable_image_import:
            # according to RFC7230, headers should not have empty fields
            # see http://httpwg.org/specs/rfc7230.html#field.components
            if CONF.enabled_import_methods:
                import_methods = ("OpenStack-image-import-methods",
                                  ','.join(CONF.enabled_import_methods))
                response.headerlist.append(import_methods)

    def show(self, response, image):
        image_view = self._format_image(image)
        body = json.dumps(image_view, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def update(self, response, image):
        image_view = self._format_image(image)
        body = json.dumps(image_view, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
response.content_type = 'application/json' def index(self, response, result): params = dict(response.request.params) params.pop('marker', None) query = urlparse.urlencode(params) body = { 'images': [self._format_image(i) for i in result['images']], 'first': '/v2/images', 'schema': '/v2/schemas/images', } if query: body['first'] = '%s?%s' % (body['first'], query) if 'next_marker' in result: params['marker'] = result['next_marker'] next_query = urlparse.urlencode(params) body['next'] = '/v2/images?%s' % next_query response.unicode_body = six.text_type(json.dumps(body, ensure_ascii=False)) response.content_type = 'application/json' def delete(self, response, result): response.status_int = http.NO_CONTENT def import_image(self, response, result): response.status_int = http.ACCEPTED def get_base_properties(): return { 'id': { 'type': 'string', 'description': _('An identifier for the image'), 'pattern': ('^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}' '-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$'), }, 'name': { 'type': ['null', 'string'], 'description': _('Descriptive name for the image'), 'maxLength': 255, }, 'status': { 'type': 'string', 'readOnly': True, 'description': _('Status of the image'), 'enum': ['queued', 'saving', 'active', 'killed', 'deleted', 'pending_delete', 'deactivated'], }, 'visibility': { 'type': 'string', 'description': _('Scope of image accessibility'), 'enum': ['community', 'public', 'private', 'shared'], }, 'protected': { 'type': 'boolean', 'description': _('If true, image will not be deletable.'), }, 'checksum': { 'type': ['null', 'string'], 'readOnly': True, 'description': _('md5 hash of image contents.'), 'maxLength': 32, }, 'owner': { 'type': ['null', 'string'], 'description': _('Owner of the image'), 'maxLength': 255, }, 'size': { 'type': ['null', 'integer'], 'readOnly': True, 'description': _('Size of image file in bytes'), }, 'virtual_size': { 'type': ['null', 'integer'], 'readOnly': True, 'description': _('Virtual size of image in bytes'), 
}, 'container_format': { 'type': ['null', 'string'], 'description': _('Format of the container'), 'enum': [None] + CONF.image_format.container_formats, }, 'disk_format': { 'type': ['null', 'string'], 'description': _('Format of the disk'), 'enum': [None] + CONF.image_format.disk_formats, }, 'created_at': { 'type': 'string', 'readOnly': True, 'description': _('Date and time of image registration' ), # TODO(bcwaldon): our jsonschema library doesn't seem to like the # format attribute, figure out why! # 'format': 'date-time', }, 'updated_at': { 'type': 'string', 'readOnly': True, 'description': _('Date and time of the last image modification' ), # 'format': 'date-time', }, 'tags': { 'type': 'array', 'description': _('List of strings related to the image'), 'items': { 'type': 'string', 'maxLength': 255, }, }, 'direct_url': { 'type': 'string', 'readOnly': True, 'description': _('URL to access the image file kept in external ' 'store'), }, 'min_ram': { 'type': 'integer', 'description': _('Amount of ram (in MB) required to boot image.'), }, 'min_disk': { 'type': 'integer', 'description': _('Amount of disk space (in GB) required to boot ' 'image.'), }, 'self': { 'type': 'string', 'readOnly': True, 'description': _('An image self url'), }, 'file': { 'type': 'string', 'readOnly': True, 'description': _('An image file url'), }, 'schema': { 'type': 'string', 'readOnly': True, 'description': _('An image schema url'), }, 'locations': { 'type': 'array', 'items': { 'type': 'object', 'properties': { 'url': { 'type': 'string', 'maxLength': 255, }, 'metadata': { 'type': 'object', }, }, 'required': ['url', 'metadata'], }, 'description': _('A set of URLs to access the image file kept in ' 'external store'), }, } def _get_base_links(): return [ {'rel': 'self', 'href': '{self}'}, {'rel': 'enclosure', 'href': '{file}'}, {'rel': 'describedby', 'href': '{schema}'}, ] def get_schema(custom_properties=None): properties = get_base_properties() links = _get_base_links() if 
CONF.allow_additional_image_properties: schema = glance.schema.PermissiveSchema('image', properties, links) else: schema = glance.schema.Schema('image', properties) if custom_properties: for property_value in custom_properties.values(): property_value['is_base'] = False schema.merge_properties(custom_properties) return schema def get_collection_schema(custom_properties=None): image_schema = get_schema(custom_properties) return glance.schema.CollectionSchema('images', image_schema) def load_custom_properties(): """Find the schema properties files and load them into a dict.""" filename = 'schema-image.json' match = CONF.find_file(filename) if match: with open(match, 'r') as schema_file: schema_data = schema_file.read() return json.loads(schema_data) else: msg = (_LW('Could not find schema properties file %s. Continuing ' 'without custom properties') % filename) LOG.warn(msg) return {} def create_resource(custom_properties=None): """Images resource factory method""" schema = get_schema(custom_properties) deserializer = RequestDeserializer(schema) serializer = ResponseSerializer(schema) controller = ImagesController() return wsgi.Resource(controller, deserializer, serializer) glance-16.0.0/glance/api/v2/__init__.py0000666000175100017510000000000013245511421017503 0ustar zuulzuul00000000000000glance-16.0.0/glance/api/v2/metadef_resource_types.py0000666000175100017510000002616213245511421022525 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. 
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
import six
from six.moves import http_client as http
import webob.exc
from wsme.rest import json

from glance.api import policy
from glance.api.v2.model.metadef_resource_type import ResourceType
from glance.api.v2.model.metadef_resource_type import ResourceTypeAssociation
from glance.api.v2.model.metadef_resource_type import ResourceTypeAssociations
from glance.api.v2.model.metadef_resource_type import ResourceTypes
from glance.common import exception
from glance.common import wsgi
import glance.db
import glance.gateway
from glance.i18n import _
import glance.notifier
import glance.schema

LOG = logging.getLogger(__name__)


class ResourceTypeController(object):
    def __init__(self, db_api=None, policy_enforcer=None, notifier=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.notifier = notifier or glance.notifier.Notifier()
        self.gateway = glance.gateway.Gateway(db_api=self.db_api,
                                              notifier=self.notifier,
                                              policy_enforcer=self.policy)

    def index(self, req):
        try:
            filters = {'namespace': None}
            rs_type_repo = self.gateway.get_metadef_resource_type_repo(
                req.context)
            db_resource_type_list = rs_type_repo.list(filters=filters)
            resource_type_list = [ResourceType.to_wsme_model(
                resource_type) for resource_type in db_resource_type_list]
            resource_types = ResourceTypes()
            resource_types.resource_types = resource_type_list
        except exception.Forbidden as e:
            LOG.debug("User not permitted to retrieve metadata resource types "
                      "index")
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError(e)
        return resource_types

    def show(self, req, namespace):
        try:
            filters = {'namespace': namespace}
            rs_type_repo = self.gateway.get_metadef_resource_type_repo(
                req.context)
            db_resource_type_list = rs_type_repo.list(filters=filters)
            resource_type_list = [ResourceTypeAssociation.to_wsme_model(
                resource_type) for resource_type in db_resource_type_list]
            resource_types = ResourceTypeAssociations()
            resource_types.resource_type_associations = resource_type_list
        except exception.Forbidden as e:
            LOG.debug("User not permitted to retrieve metadata resource types "
                      "within '%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError(e)
        return resource_types

    def create(self, req, resource_type, namespace):
        rs_type_factory = self.gateway.get_metadef_resource_type_factory(
            req.context)
        rs_type_repo = self.gateway.get_metadef_resource_type_repo(req.context)
        try:
            new_resource_type = rs_type_factory.new_resource_type(
                namespace=namespace, **resource_type.to_dict())
            rs_type_repo.add(new_resource_type)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to create metadata resource type "
                      "within '%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()
        return ResourceTypeAssociation.to_wsme_model(new_resource_type)

    def delete(self, req, namespace, resource_type):
        rs_type_repo = self.gateway.get_metadef_resource_type_repo(req.context)
        try:
            filters = {}
            found = False
            filters['namespace'] = namespace
            db_resource_type_list = rs_type_repo.list(filters=filters)
            for db_resource_type in db_resource_type_list:
                if db_resource_type.name == resource_type:
                    db_resource_type.delete()
                    rs_type_repo.remove(db_resource_type)
                    found = True
            if not found:
                raise exception.NotFound()
        except exception.Forbidden as e:
            LOG.debug("User not permitted to delete metadata resource type "
                      "'%s' within '%s' namespace", resource_type, namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            msg = (_("Failed to find resource type %(resourcetype)s to "
                     "delete") % {'resourcetype': resource_type})
            LOG.error(msg)
            raise webob.exc.HTTPNotFound(explanation=msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()


class RequestDeserializer(wsgi.JSONRequestDeserializer):
    _disallowed_properties = ['created_at', 'updated_at']

    def __init__(self, schema=None):
        super(RequestDeserializer, self).__init__()
        self.schema = schema or get_schema()

    def _get_request_body(self, request):
        output = super(RequestDeserializer, self).default(request)
        if 'body' not in output:
            msg = _('Body expected in request.')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return output['body']

    @classmethod
    def _check_allowed(cls, image):
        for key in cls._disallowed_properties:
            if key in image:
                msg = _("Attribute '%s' is read-only.") % key
                raise webob.exc.HTTPForbidden(explanation=msg)

    def create(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        try:
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        resource_type = json.fromjson(ResourceTypeAssociation, body)
        return dict(resource_type=resource_type)


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def __init__(self, schema=None):
        super(ResponseSerializer, self).__init__()
        self.schema = schema

    def show(self, response, result):
        resource_type_json = json.tojson(ResourceTypeAssociations, result)
        body = jsonutils.dumps(resource_type_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def index(self, response, result):
        resource_type_json = json.tojson(ResourceTypes, result)
        body = jsonutils.dumps(resource_type_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def create(self, response, result):
        resource_type_json = json.tojson(ResourceTypeAssociation, result)
        response.status_int = http.CREATED
        body = jsonutils.dumps(resource_type_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def delete(self, response, result):
        response.status_int = http.NO_CONTENT


def _get_base_properties():
    return {
        'name': {
            'type': 'string',
            'description': _('Resource type names should be aligned with Heat '
                             'resource types whenever possible: '
                             'http://docs.openstack.org/developer/heat/'
                             'template_guide/openstack.html'),
            'maxLength': 80,
        },
        'prefix': {
            'type': 'string',
            'description': _('Specifies the prefix to use for the given '
                             'resource type. Any properties in the namespace '
                             'should be prefixed with this prefix when being '
                             'applied to the specified resource type. Must '
                             'include prefix separator (e.g. a colon :).'),
            'maxLength': 80,
        },
        'properties_target': {
            'type': 'string',
            'description': _('Some resource types allow more than one key / '
                             'value pair per instance. For example, Cinder '
                             'allows user and image metadata on volumes. Only '
                             'the image properties metadata is evaluated by '
                             'Nova (scheduling or drivers). This property '
                             'allows a namespace target to remove the '
                             'ambiguity.'),
            'maxLength': 80,
        },
        "created_at": {
            "type": "string",
            "readOnly": True,
            "description": _("Date and time of resource type association"),
            "format": "date-time"
        },
        "updated_at": {
            "type": "string",
            "readOnly": True,
            "description": _("Date and time of the last resource type "
                             "association modification"),
            "format": "date-time"
        }
    }


def get_schema():
    properties = _get_base_properties()
    mandatory_attrs = ResourceTypeAssociation.get_mandatory_attrs()
    schema = glance.schema.Schema(
        'resource_type_association',
        properties,
        required=mandatory_attrs,
    )
    return schema


def get_collection_schema():
    resource_type_schema = get_schema()
    return glance.schema.CollectionSchema('resource_type_associations',
                                          resource_type_schema)


def create_resource():
    """ResourceTypeAssociation resource factory method"""
    schema = get_schema()
    deserializer = RequestDeserializer(schema)
    serializer = ResponseSerializer(schema)
    controller = ResourceTypeController()
    return wsgi.Resource(controller, deserializer, serializer)


# ===== glance-16.0.0/glance/api/v2/discovery.py =====

# Copyright (c) 2017 RedHat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
import webob.exc

from glance.common import wsgi
from glance.i18n import _

CONF = cfg.CONF


class InfoController(object):
    def get_image_import(self, req):
        # TODO(jokke): Will be removed after the config option
        # is removed. (deprecated)
        if not CONF.enable_image_import:
            msg = _("Image import is not supported at this site.")
            raise webob.exc.HTTPNotFound(explanation=msg)
        # TODO(jokke): All the rest of the boundaries should be implemented.
        import_methods = {
            'description': 'Import methods available.',
            'type': 'array',
            'value': CONF.get('enabled_import_methods')
        }

        return {
            'import-methods': import_methods
        }


def create_resource():
    return wsgi.Resource(InfoController())


# ===== glance-16.0.0/glance/api/v2/metadef_tags.py =====

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
import six
from six.moves import http_client as http
import webob.exc
from wsme.rest import json

from glance.api import policy
from glance.api.v2.model.metadef_tag import MetadefTag
from glance.api.v2.model.metadef_tag import MetadefTags
from glance.common import exception
from glance.common import wsgi
from glance.common import wsme_utils
import glance.db
from glance.i18n import _
import glance.notifier
import glance.schema

LOG = logging.getLogger(__name__)


class TagsController(object):
    def __init__(self, db_api=None, policy_enforcer=None, notifier=None,
                 schema=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.notifier = notifier or glance.notifier.Notifier()
        self.gateway = glance.gateway.Gateway(db_api=self.db_api,
                                              notifier=self.notifier,
                                              policy_enforcer=self.policy)
        self.schema = schema or get_schema()
        self.tag_schema_link = '/v2/schemas/metadefs/tag'

    def create(self, req, namespace, tag_name):
        tag_factory = self.gateway.get_metadef_tag_factory(req.context)
        tag_repo = self.gateway.get_metadef_tag_repo(req.context)
        tag_name_as_dict = {'name': tag_name}
        try:
            self.schema.validate(tag_name_as_dict)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        try:
            new_meta_tag = tag_factory.new_tag(
                namespace=namespace,
                **tag_name_as_dict)
            tag_repo.add(new_meta_tag)
        except exception.Invalid as e:
            msg = (_("Couldn't create metadata tag: %s")
                   % encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPBadRequest(explanation=msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to create metadata tag within "
                      "'%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()
        return MetadefTag.to_wsme_model(new_meta_tag)

    def create_tags(self, req, metadata_tags, namespace):
        tag_factory = self.gateway.get_metadef_tag_factory(req.context)
        tag_repo = self.gateway.get_metadef_tag_repo(req.context)
        try:
            tag_list = []
            for metadata_tag in metadata_tags.tags:
                tag_list.append(tag_factory.new_tag(
                    namespace=namespace, **metadata_tag.to_dict()))
            tag_repo.add_tags(tag_list)
            tag_list_out = [MetadefTag(**{'name': db_metatag.name})
                            for db_metatag in tag_list]
            metadef_tags = MetadefTags()
            metadef_tags.tags = tag_list_out
        except exception.Forbidden as e:
            LOG.debug("User not permitted to create metadata tags within "
                      "'%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()
        return metadef_tags

    def index(self, req, namespace, marker=None, limit=None,
              sort_key='created_at', sort_dir='desc', filters=None):
        try:
            filters = filters or dict()
            filters['namespace'] = namespace

            tag_repo = self.gateway.get_metadef_tag_repo(req.context)
            if marker:
                metadef_tag = tag_repo.get(namespace, marker)
                marker = metadef_tag.tag_id

            db_metatag_list = tag_repo.list(
                marker=marker, limit=limit, sort_key=sort_key,
                sort_dir=sort_dir, filters=filters)

            tag_list = [MetadefTag(**{'name': db_metatag.name})
                        for db_metatag in db_metatag_list]

            metadef_tags = MetadefTags()
            metadef_tags.tags = tag_list
        except exception.Forbidden as e:
            LOG.debug("User not permitted to retrieve metadata tags "
                      "within '%s' namespace", namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()
        return metadef_tags

    def show(self, req, namespace, tag_name):
        meta_tag_repo = self.gateway.get_metadef_tag_repo(req.context)
        try:
            metadef_tag = meta_tag_repo.get(namespace, tag_name)
            return MetadefTag.to_wsme_model(metadef_tag)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to show metadata tag '%s' "
                      "within '%s' namespace", tag_name, namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()

    def update(self, req, metadata_tag, namespace, tag_name):
        meta_repo = self.gateway.get_metadef_tag_repo(req.context)
        try:
            metadef_tag = meta_repo.get(namespace, tag_name)
            metadef_tag._old_name = metadef_tag.name
            metadef_tag.name = wsme_utils._get_value(
                metadata_tag.name)
            updated_metadata_tag = meta_repo.save(metadef_tag)
        except exception.Invalid as e:
            msg = (_("Couldn't update metadata tag: %s")
                   % encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPBadRequest(explanation=msg)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to update metadata tag '%s' "
                      "within '%s' namespace", tag_name, namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()
        return MetadefTag.to_wsme_model(updated_metadata_tag)

    def delete(self, req, namespace, tag_name):
        meta_repo = self.gateway.get_metadef_tag_repo(req.context)
        try:
            metadef_tag = meta_repo.get(namespace, tag_name)
            metadef_tag.delete()
            meta_repo.remove(metadef_tag)
        except exception.Forbidden as e:
            LOG.debug("User not permitted to delete metadata tag '%s' "
                      "within '%s' namespace", tag_name, namespace)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPInternalServerError()


def _get_base_definitions():
    return None


def _get_base_properties():
    return {
        "name": {
            "type": "string",
            "maxLength": 80
        },
        "created_at": {
            "type": "string",
            "readOnly": True,
            "description": _("Date and time of tag creation"),
            "format": "date-time"
        },
        "updated_at": {
            "type": "string",
            "readOnly": True,
            "description": _("Date and time of the last tag modification"),
            "format": "date-time"
        }
    }


def _get_base_properties_for_list():
    return {
        "tags": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string"
                    }
                },
                'required': ['name'],
                "additionalProperties": False
            }
        },
    }


def get_schema():
    definitions = _get_base_definitions()
    properties = _get_base_properties()
    mandatory_attrs = MetadefTag.get_mandatory_attrs()
    schema = glance.schema.Schema(
        'tag',
        properties,
        required=mandatory_attrs,
        definitions=definitions,
    )
    return schema


def get_schema_for_list():
    definitions = _get_base_definitions()
    properties = _get_base_properties_for_list()
    schema = glance.schema.Schema(
        'tags',
        properties,
        required=None,
        definitions=definitions,
    )
    return schema


def get_collection_schema():
    tag_schema = get_schema()
    return glance.schema.CollectionSchema('tags', tag_schema)


class RequestDeserializer(wsgi.JSONRequestDeserializer):
    _disallowed_properties = ['created_at', 'updated_at']

    def __init__(self, schema=None):
        super(RequestDeserializer, self).__init__()
        self.schema = schema or get_schema()
        self.schema_for_list = get_schema_for_list()

    def _get_request_body(self, request):
        output = super(RequestDeserializer, self).default(request)
        if 'body' not in output:
            msg = _('Body expected in request.')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return output['body']

    def _validate_sort_dir(self, sort_dir):
        if sort_dir not in ['asc', 'desc']:
            msg = _('Invalid sort direction: %s') % sort_dir
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return sort_dir

    def _get_filters(self, filters):
        visibility = filters.get('visibility')
        if visibility:
            if visibility not in ['public', 'private', 'shared']:
                msg = _('Invalid visibility value: %s') % visibility
                raise webob.exc.HTTPBadRequest(explanation=msg)
        return filters

    def _validate_limit(self, limit):
        try:
            limit = int(limit)
        except ValueError:
            msg = _("limit param must be an integer")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        if limit < 0:
            msg = _("limit param must be positive")
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return limit

    def update(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        try:
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        metadata_tag = json.fromjson(MetadefTag, body)
        return dict(metadata_tag=metadata_tag)

    def index(self, request):
        params = request.params.copy()
        limit = params.pop('limit', None)
        marker = params.pop('marker', None)
        sort_dir = params.pop('sort_dir', 'desc')

        query_params = {
            'sort_key': params.pop('sort_key', 'created_at'),
            'sort_dir': self._validate_sort_dir(sort_dir),
            'filters': self._get_filters(params)
        }

        if marker:
            query_params['marker'] = marker

        if limit:
            query_params['limit'] = self._validate_limit(limit)

        return query_params

    def create_tags(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        try:
            self.schema_for_list.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        metadata_tags = json.fromjson(MetadefTags, body)
        return dict(metadata_tags=metadata_tags)

    @classmethod
    def _check_allowed(cls, image):
        for key in cls._disallowed_properties:
            if key in image:
                msg = _("Attribute '%s' is read-only.") % key
                raise webob.exc.HTTPForbidden(explanation=msg)


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def __init__(self, schema=None):
        super(ResponseSerializer, self).__init__()
        self.schema = schema or get_schema()

    def create(self, response, metadata_tag):
        response.status_int = http.CREATED
        self.show(response, metadata_tag)

    def create_tags(self, response, result):
        response.status_int = http.CREATED
        metadata_tags_json = json.tojson(MetadefTags, result)
        body = jsonutils.dumps(metadata_tags_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def show(self, response, metadata_tag):
        metadata_tag_json = json.tojson(MetadefTag, metadata_tag)
        body = jsonutils.dumps(metadata_tag_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def update(self, response, metadata_tag):
        response.status_int = http.OK
        self.show(response, metadata_tag)

    def index(self, response, result):
        metadata_tags_json = json.tojson(MetadefTags, result)
        body = jsonutils.dumps(metadata_tags_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def delete(self, response, result):
        response.status_int = http.NO_CONTENT


def get_tag_href(namespace_name, metadef_tag):
    base_href = ('/v2/metadefs/namespaces/%s/tags/%s' %
                 (namespace_name, metadef_tag.name))
    return base_href


def create_resource():
    """Metadef tags resource factory method"""
    schema = get_schema()
    deserializer = RequestDeserializer(schema)
    serializer = ResponseSerializer(schema)
    controller = TagsController()
    return wsgi.Resource(controller, deserializer, serializer)


# ===== glance-16.0.0/glance/api/v2/schemas.py =====

# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance.api.v2 import image_members
from glance.api.v2 import images
from glance.api.v2 import metadef_namespaces
from glance.api.v2 import metadef_objects
from glance.api.v2 import metadef_properties
from glance.api.v2 import metadef_resource_types
from glance.api.v2 import metadef_tags
from glance.api.v2 import tasks
from glance.common import wsgi


class Controller(object):
    def __init__(self, custom_image_properties=None):
        self.image_schema = images.get_schema(custom_image_properties)
        self.image_collection_schema = images.get_collection_schema(
            custom_image_properties)

        self.member_schema = image_members.get_schema()
        self.member_collection_schema = image_members.get_collection_schema()

        self.task_schema = tasks.get_task_schema()
        self.task_collection_schema = tasks.get_collection_schema()

        # Metadef schemas
        self.metadef_namespace_schema = metadef_namespaces.get_schema()
        self.metadef_namespace_collection_schema = (
            metadef_namespaces.get_collection_schema())

        self.metadef_resource_type_schema = metadef_resource_types.get_schema()
        self.metadef_resource_type_collection_schema = (
            metadef_resource_types.get_collection_schema())

        self.metadef_property_schema = metadef_properties.get_schema()
        self.metadef_property_collection_schema = (
            metadef_properties.get_collection_schema())

        self.metadef_object_schema = metadef_objects.get_schema()
        self.metadef_object_collection_schema = (
            metadef_objects.get_collection_schema())

        self.metadef_tag_schema = metadef_tags.get_schema()
        self.metadef_tag_collection_schema = (
            metadef_tags.get_collection_schema())

    def image(self, req):
        return self.image_schema.raw()

    def images(self, req):
        return self.image_collection_schema.raw()

    def member(self, req):
        return self.member_schema.minimal()

    def members(self, req):
        return self.member_collection_schema.minimal()

    def task(self, req):
        return self.task_schema.minimal()

    def tasks(self, req):
        return self.task_collection_schema.minimal()

    def metadef_namespace(self, req):
        return self.metadef_namespace_schema.raw()

    def metadef_namespaces(self, req):
        return self.metadef_namespace_collection_schema.raw()

    def metadef_resource_type(self, req):
        return self.metadef_resource_type_schema.raw()

    def metadef_resource_types(self, req):
        return self.metadef_resource_type_collection_schema.raw()

    def metadef_property(self, req):
        return self.metadef_property_schema.raw()

    def metadef_properties(self, req):
        return self.metadef_property_collection_schema.raw()

    def metadef_object(self, req):
        return self.metadef_object_schema.raw()

    def metadef_objects(self, req):
        return self.metadef_object_collection_schema.raw()

    def metadef_tag(self, req):
        return self.metadef_tag_schema.raw()

    def metadef_tags(self, req):
        return self.metadef_tag_collection_schema.raw()


def create_resource(custom_image_properties=None):
    controller = Controller(custom_image_properties)
    return wsgi.Resource(controller)


# ===== glance-16.0.0/glance/api/property_protections.py =====

# Copyright 2013 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from glance.common import exception import glance.domain.proxy class ProtectedImageFactoryProxy(glance.domain.proxy.ImageFactory): def __init__(self, image_factory, context, property_rules): self.image_factory = image_factory self.context = context self.property_rules = property_rules kwargs = {'context': self.context, 'property_rules': self.property_rules} super(ProtectedImageFactoryProxy, self).__init__( image_factory, proxy_class=ProtectedImageProxy, proxy_kwargs=kwargs) def new_image(self, **kwargs): extra_props = kwargs.pop('extra_properties', {}) extra_properties = {} for key in extra_props.keys(): if self.property_rules.check_property_rules(key, 'create', self.context): extra_properties[key] = extra_props[key] else: raise exception.ReservedProperty(property=key) return super(ProtectedImageFactoryProxy, self).new_image( extra_properties=extra_properties, **kwargs) class ProtectedImageRepoProxy(glance.domain.proxy.Repo): def __init__(self, image_repo, context, property_rules): self.context = context self.image_repo = image_repo self.property_rules = property_rules proxy_kwargs = {'context': self.context} super(ProtectedImageRepoProxy, self).__init__( image_repo, item_proxy_class=ProtectedImageProxy, item_proxy_kwargs=proxy_kwargs) def get(self, image_id): return ProtectedImageProxy(self.image_repo.get(image_id), self.context, self.property_rules) def list(self, *args, **kwargs): images = self.image_repo.list(*args, **kwargs) return [ProtectedImageProxy(image, self.context, self.property_rules) for image in images] class ProtectedImageProxy(glance.domain.proxy.Image): def __init__(self, image, context, property_rules): self.image = image self.context = context self.property_rules = property_rules self.image.extra_properties = ExtraPropertiesProxy( self.context, self.image.extra_properties, self.property_rules) super(ProtectedImageProxy, self).__init__(self.image) class ExtraPropertiesProxy(glance.domain.ExtraProperties): def __init__(self, context, extra_props, 
property_rules): self.context = context self.property_rules = property_rules extra_properties = {} for key in extra_props.keys(): if self.property_rules.check_property_rules(key, 'read', self.context): extra_properties[key] = extra_props[key] super(ExtraPropertiesProxy, self).__init__(extra_properties) def __getitem__(self, key): if self.property_rules.check_property_rules(key, 'read', self.context): return dict.__getitem__(self, key) else: raise KeyError def __setitem__(self, key, value): # NOTE(isethi): Exceptions are raised only for actions update, delete # and create, where the user proactively interacts with the properties. # A user cannot request to read a specific property, hence reads do # raise an exception try: if self.__getitem__(key) is not None: if self.property_rules.check_property_rules(key, 'update', self.context): return dict.__setitem__(self, key, value) else: raise exception.ReservedProperty(property=key) except KeyError: if self.property_rules.check_property_rules(key, 'create', self.context): return dict.__setitem__(self, key, value) else: raise exception.ReservedProperty(property=key) def __delitem__(self, key): if key not in super(ExtraPropertiesProxy, self).keys(): raise KeyError if self.property_rules.check_property_rules(key, 'delete', self.context): return dict.__delitem__(self, key) else: raise exception.ReservedProperty(property=key) glance-16.0.0/glance/api/common.py0000666000175100017510000001702613245511421016725 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the
# License for the specific language governing permissions and limitations
# under the License.

import re

from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import excutils
from oslo_utils import units

from glance.common import exception
from glance.common import wsgi
from glance.i18n import _, _LE, _LW

LOG = logging.getLogger(__name__)
CONF = cfg.CONF

_CACHED_THREAD_POOL = {}


def size_checked_iter(response, image_meta, expected_size, image_iter,
                      notifier):
    image_id = image_meta['id']
    bytes_written = 0

    def notify_image_sent_hook(env):
        image_send_notification(bytes_written, expected_size,
                                image_meta, response.request, notifier)

    # Add hook to process after response is fully sent
    if 'eventlet.posthooks' in response.request.environ:
        response.request.environ['eventlet.posthooks'].append(
            (notify_image_sent_hook, (), {}))

    try:
        for chunk in image_iter:
            yield chunk
            bytes_written += len(chunk)
    except Exception as err:
        with excutils.save_and_reraise_exception():
            msg = (_LE("An error occurred reading from backend storage for "
                       "image %(image_id)s: %(err)s") %
                   {'image_id': image_id, 'err': err})
            LOG.error(msg)

    if expected_size != bytes_written:
        msg = (_LE("Backend storage for image %(image_id)s "
                   "disconnected after writing only %(bytes_written)d "
                   "bytes") %
               {'image_id': image_id, 'bytes_written': bytes_written})
        LOG.error(msg)
        raise exception.GlanceException(_("Corrupt image download for "
                                          "image %(image_id)s") %
                                        {'image_id': image_id})


def image_send_notification(bytes_written, expected_size, image_meta, request,
                            notifier):
    """Send an image.send message to the notifier."""
    try:
        context = request.context
        payload = {
            'bytes_sent': bytes_written,
            'image_id': image_meta['id'],
            'owner_id': image_meta['owner'],
            'receiver_tenant_id': context.tenant,
            'receiver_user_id': context.user,
            'destination_ip': request.remote_addr,
        }
        if bytes_written != expected_size:
            notify = notifier.error
        else:
            notify = notifier.info

        notify('image.send', payload)

    except Exception as err:
        msg = (_LE("An error occurred during image.send"
                   " notification: %(err)s") % {'err': err})
        LOG.error(msg)


def get_remaining_quota(context, db_api, image_id=None):
    """Method called to see if the user is allowed to store an image.

    Checks if it is allowed based on the given size in glance based on
    their quota and current usage.

    :param context:
    :param db_api: The db_api in use for this configuration
    :param image_id: The image that will be replaced with this new data size
    :returns: The number of bytes the user has remaining under their quota.
              None means infinity
    """

    # NOTE(jbresnah) in the future this value will come from a call to
    # keystone.
    users_quota = CONF.user_storage_quota

    # set quota must have a number optionally followed by B, KB, MB,
    # GB or TB without any spaces in between
    pattern = re.compile(r'^(\d+)((K|M|G|T)?B)?$')
    match = pattern.match(users_quota)

    if not match:
        LOG.error(_LE("Invalid value for option user_storage_quota: "
                      "%(users_quota)s")
                  % {'users_quota': users_quota})
        raise exception.InvalidOptionValue(option='user_storage_quota',
                                           value=users_quota)

    quota_value, quota_unit = (match.groups())[0:2]
    # fall back to Bytes if user specified anything other than
    # permitted values
    quota_unit = quota_unit or "B"
    factor = getattr(units, quota_unit.replace('B', 'i'), 1)
    users_quota = int(quota_value) * factor

    if users_quota <= 0:
        return

    usage = db_api.user_get_storage_usage(context,
                                          context.owner,
                                          image_id=image_id)
    return users_quota - usage


def check_quota(context, image_size, db_api, image_id=None):
    """Method called to see if the user is allowed to store an image.

    Checks if it is allowed based on the given size in glance based on
    their quota and current usage.

    :param context:
    :param image_size: The size of the image we hope to store
    :param db_api: The db_api in use for this configuration
    :param image_id: The image that will be replaced with this new data size
    :returns:
    """

    remaining = get_remaining_quota(context, db_api, image_id=image_id)

    if remaining is None:
        return

    user = getattr(context, 'user', '<unknown>')

    if image_size is None:
        # NOTE(jbresnah) When the image size is None it means that it is
        # not known.  In this case the only time we will raise an
        # exception is when there is no room left at all, thus we know
        # it will not fit
        if remaining <= 0:
            LOG.warn(_LW("User %(user)s attempted to upload an image of"
                         " unknown size that will exceed the quota."
                         " %(remaining)d bytes remaining.")
                     % {'user': user, 'remaining': remaining})
            raise exception.StorageQuotaFull(image_size=image_size,
                                             remaining=remaining)
        return

    if image_size > remaining:
        LOG.warn(_LW("User %(user)s attempted to upload an image of size"
                     " %(size)d that will exceed the quota. %(remaining)d"
                     " bytes remaining.")
                 % {'user': user, 'size': image_size,
                    'remaining': remaining})
        raise exception.StorageQuotaFull(image_size=image_size,
                                         remaining=remaining)

    return remaining


def memoize(lock_name):
    def memoizer_wrapper(func):
        @lockutils.synchronized(lock_name)
        def memoizer(lock_name):
            if lock_name not in _CACHED_THREAD_POOL:
                _CACHED_THREAD_POOL[lock_name] = func()

            return _CACHED_THREAD_POOL[lock_name]

        return memoizer(lock_name)

    return memoizer_wrapper


def get_thread_pool(lock_name, size=1024):
    """Initializes eventlet thread pool.

    If thread pool is present in cache, then returns it from cache
    else create new pool, stores it in cache and return newly created
    pool.

    @param lock_name: Name of the lock.
    @param size: Size of eventlet pool.
    @return: eventlet pool
    """
    @memoize(lock_name)
    def _get_thread_pool():
        return wsgi.get_asynchronous_eventlet_pool(size=size)

    return _get_thread_pool

glance-16.0.0/glance/api/authorization.py
# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

from glance.common import exception
import glance.domain.proxy
from glance.i18n import _


def is_image_mutable(context, image):
    """Return True if the image is mutable in this context."""
    if context.is_admin:
        return True

    if image.owner is None or context.owner is None:
        return False

    return image.owner == context.owner


def proxy_image(context, image):
    if is_image_mutable(context, image):
        return ImageProxy(image, context)
    else:
        return ImmutableImageProxy(image, context)


def is_member_mutable(context, member):
    """Return True if the image member is mutable in this context."""
    if context.is_admin:
        return True

    if context.owner is None:
        return False

    return member.member_id == context.owner


def proxy_member(context, member):
    if is_member_mutable(context, member):
        return member
    else:
        return ImmutableMemberProxy(member)


def is_task_mutable(context, task):
    """Return True if the task is mutable in this context."""
    if context.is_admin:
        return True

    if context.owner is None:
        return False

    return task.owner == context.owner


def is_task_stub_mutable(context, task_stub):
    """Return True if the task
    stub is mutable in this context."""
    if context.is_admin:
        return True

    if context.owner is None:
        return False

    return task_stub.owner == context.owner


def proxy_task(context, task):
    if is_task_mutable(context, task):
        return task
    else:
        return ImmutableTaskProxy(task)


def proxy_task_stub(context, task_stub):
    if is_task_stub_mutable(context, task_stub):
        return task_stub
    else:
        return ImmutableTaskStubProxy(task_stub)


class ImageRepoProxy(glance.domain.proxy.Repo):

    def __init__(self, image_repo, context):
        self.context = context
        self.image_repo = image_repo
        proxy_kwargs = {'context': self.context}
        super(ImageRepoProxy, self).__init__(image_repo,
                                             item_proxy_class=ImageProxy,
                                             item_proxy_kwargs=proxy_kwargs)

    def get(self, image_id):
        image = self.image_repo.get(image_id)
        return proxy_image(self.context, image)

    def list(self, *args, **kwargs):
        images = self.image_repo.list(*args, **kwargs)
        return [proxy_image(self.context, i) for i in images]


def _validate_image_accepts_members(visibility):
    if visibility != 'shared':
        message = _("Only shared images have members.")
        raise exception.Forbidden(message)


class ImageMemberRepoProxy(glance.domain.proxy.MemberRepo):

    def __init__(self, member_repo, image, context):
        self.member_repo = member_repo
        self.image = image
        self.context = context
        proxy_kwargs = {'context': self.context}
        super(ImageMemberRepoProxy, self).__init__(
            image,
            member_repo,
            member_proxy_class=ImageMemberProxy,
            member_proxy_kwargs=proxy_kwargs)
        _validate_image_accepts_members(self.image.visibility)

    def get(self, member_id):
        if (self.context.is_admin or
                self.context.owner in (self.image.owner, member_id)):
            member = self.member_repo.get(member_id)
            return proxy_member(self.context, member)
        else:
            message = _("You cannot get image member for %s")
            raise exception.Forbidden(message % member_id)

    def list(self, *args, **kwargs):
        members = self.member_repo.list(*args, **kwargs)
        if (self.context.is_admin or
                self.context.owner == self.image.owner):
            return [proxy_member(self.context, m) for m in members]
        for member in members:
            if member.member_id == self.context.owner:
                return [proxy_member(self.context, member)]
        message = _("You cannot get image member for %s")
        raise exception.Forbidden(message % self.image.image_id)

    def remove(self, image_member):
        if (self.image.owner == self.context.owner or
                self.context.is_admin):
            self.member_repo.remove(image_member)
        else:
            message = _("You cannot delete image member for %s")
            raise exception.Forbidden(message % self.image.image_id)

    def add(self, image_member):
        if (self.image.owner == self.context.owner or
                self.context.is_admin):
            self.member_repo.add(image_member)
        else:
            message = _("You cannot add image member for %s")
            raise exception.Forbidden(message % self.image.image_id)

    def save(self, image_member, from_state=None):
        if (self.context.is_admin or
                self.context.owner == image_member.member_id):
            self.member_repo.save(image_member, from_state=from_state)
        else:
            message = _("You cannot update image member %s")
            raise exception.Forbidden(message % image_member.member_id)


class ImageFactoryProxy(glance.domain.proxy.ImageFactory):

    def __init__(self, image_factory, context):
        self.image_factory = image_factory
        self.context = context
        kwargs = {'context': self.context}
        super(ImageFactoryProxy, self).__init__(image_factory,
                                                proxy_class=ImageProxy,
                                                proxy_kwargs=kwargs)

    def new_image(self, **kwargs):
        owner = kwargs.pop('owner', self.context.owner)

        if not self.context.is_admin:
            if owner is None or owner != self.context.owner:
                message = _("You are not permitted to create images "
                            "owned by '%s'.")
                raise exception.Forbidden(message % owner)

        return super(ImageFactoryProxy, self).new_image(owner=owner, **kwargs)


class ImageMemberFactoryProxy(glance.domain.proxy.ImageMembershipFactory):

    def __init__(self, image_member_factory, context):
        self.image_member_factory = image_member_factory
        self.context = context
        kwargs = {'context': self.context}
        super(ImageMemberFactoryProxy, self).__init__(
            image_member_factory,
            proxy_class=ImageMemberProxy,
            proxy_kwargs=kwargs)

    def new_image_member(self, image, member_id):
        owner = image.owner

        if not self.context.is_admin:
            if owner is None or owner != self.context.owner:
                message = _("You are not permitted to create image members "
                            "for the image.")
                raise exception.Forbidden(message)

        _validate_image_accepts_members(image.visibility)

        return self.image_member_factory.new_image_member(image, member_id)


def _immutable_attr(target, attr, proxy=None):

    def get_attr(self):
        value = getattr(getattr(self, target), attr)
        if proxy is not None:
            value = proxy(value)
        return value

    def forbidden(self, *args, **kwargs):
        resource = getattr(self, 'resource_name', 'resource')
        message = _("You are not permitted to modify '%(attr)s' on this "
                    "%(resource)s.")
        raise exception.Forbidden(message % {'attr': attr,
                                             'resource': resource})

    return property(get_attr, forbidden, forbidden)


class ImmutableLocations(list):
    def forbidden(self, *args, **kwargs):
        message = _("You are not permitted to modify locations "
                    "for this image.")
        raise exception.Forbidden(message)

    def __deepcopy__(self, memo):
        return ImmutableLocations(copy.deepcopy(list(self), memo))

    append = forbidden
    extend = forbidden
    insert = forbidden
    pop = forbidden
    remove = forbidden
    reverse = forbidden
    sort = forbidden
    __delitem__ = forbidden
    __delslice__ = forbidden
    __iadd__ = forbidden
    __imul__ = forbidden
    __setitem__ = forbidden
    __setslice__ = forbidden


class ImmutableProperties(dict):
    def forbidden_key(self, key, *args, **kwargs):
        message = _("You are not permitted to modify '%s' on this image.")
        raise exception.Forbidden(message % key)

    def forbidden(self, *args, **kwargs):
        message = _("You are not permitted to modify this image.")
        raise exception.Forbidden(message)

    __delitem__ = forbidden_key
    __setitem__ = forbidden_key
    pop = forbidden
    popitem = forbidden
    setdefault = forbidden
    update = forbidden


class ImmutableTags(set):
    def forbidden(self, *args, **kwargs):
        message = _("You are not permitted to modify tags on this image.")
        raise exception.Forbidden(message)

    add = forbidden
    clear = forbidden
    difference_update = forbidden
    intersection_update = forbidden
    pop = forbidden
    remove = forbidden
    symmetric_difference_update = forbidden
    update = forbidden


class ImmutableImageProxy(object):
    def __init__(self, base, context):
        self.base = base
        self.context = context
        self.resource_name = 'image'

    name = _immutable_attr('base', 'name')
    image_id = _immutable_attr('base', 'image_id')
    status = _immutable_attr('base', 'status')
    created_at = _immutable_attr('base', 'created_at')
    updated_at = _immutable_attr('base', 'updated_at')
    visibility = _immutable_attr('base', 'visibility')
    min_disk = _immutable_attr('base', 'min_disk')
    min_ram = _immutable_attr('base', 'min_ram')
    protected = _immutable_attr('base', 'protected')
    locations = _immutable_attr('base', 'locations', proxy=ImmutableLocations)
    checksum = _immutable_attr('base', 'checksum')
    owner = _immutable_attr('base', 'owner')
    disk_format = _immutable_attr('base', 'disk_format')
    container_format = _immutable_attr('base', 'container_format')
    size = _immutable_attr('base', 'size')
    virtual_size = _immutable_attr('base', 'virtual_size')
    extra_properties = _immutable_attr('base', 'extra_properties',
                                       proxy=ImmutableProperties)
    tags = _immutable_attr('base', 'tags', proxy=ImmutableTags)

    def delete(self):
        message = _("You are not permitted to delete this image.")
        raise exception.Forbidden(message)

    def get_data(self, *args, **kwargs):
        return self.base.get_data(*args, **kwargs)

    def set_data(self, *args, **kwargs):
        message = _("You are not permitted to upload data for this image.")
        raise exception.Forbidden(message)

    def deactivate(self, *args, **kwargs):
        message = _("You are not permitted to deactivate this image.")
        raise exception.Forbidden(message)

    def reactivate(self, *args, **kwargs):
        message = _("You are not permitted to reactivate this image.")
        raise exception.Forbidden(message)


class ImmutableMemberProxy(object):
    def __init__(self, base):
        self.base = base
        self.resource_name = 'image member'

    id = _immutable_attr('base', 'id')
    image_id = _immutable_attr('base', 'image_id')
    member_id = _immutable_attr('base', 'member_id')
    status = _immutable_attr('base', 'status')
    created_at = _immutable_attr('base', 'created_at')
    updated_at = _immutable_attr('base', 'updated_at')


class ImmutableTaskProxy(object):
    def __init__(self, base):
        self.base = base
        self.resource_name = 'task'

    task_id = _immutable_attr('base', 'task_id')
    type = _immutable_attr('base', 'type')
    status = _immutable_attr('base', 'status')
    owner = _immutable_attr('base', 'owner')
    expires_at = _immutable_attr('base', 'expires_at')
    created_at = _immutable_attr('base', 'created_at')
    updated_at = _immutable_attr('base', 'updated_at')
    input = _immutable_attr('base', 'input')
    message = _immutable_attr('base', 'message')
    result = _immutable_attr('base', 'result')

    def run(self, executor):
        self.base.run(executor)

    def begin_processing(self):
        message = _("You are not permitted to set status on this task.")
        raise exception.Forbidden(message)

    def succeed(self, result):
        message = _("You are not permitted to set status on this task.")
        raise exception.Forbidden(message)

    def fail(self, message):
        message = _("You are not permitted to set status on this task.")
        raise exception.Forbidden(message)


class ImmutableTaskStubProxy(object):
    def __init__(self, base):
        self.base = base
        self.resource_name = 'task stub'

    task_id = _immutable_attr('base', 'task_id')
    type = _immutable_attr('base', 'type')
    status = _immutable_attr('base', 'status')
    owner = _immutable_attr('base', 'owner')
    expires_at = _immutable_attr('base', 'expires_at')
    created_at = _immutable_attr('base', 'created_at')
    updated_at = _immutable_attr('base', 'updated_at')


class ImageProxy(glance.domain.proxy.Image):
    def __init__(self, image, context):
        self.image = image
        self.context = context
        super(ImageProxy, self).__init__(image)


class ImageMemberProxy(glance.domain.proxy.ImageMember):
    def __init__(self, image_member, context):
        self.image_member = image_member
        self.context = context
        super(ImageMemberProxy, self).__init__(image_member)


class TaskProxy(glance.domain.proxy.Task):
    def __init__(self, task):
        self.task = task
        super(TaskProxy, self).__init__(task)


class TaskFactoryProxy(glance.domain.proxy.TaskFactory):
    def __init__(self, task_factory, context):
        self.task_factory = task_factory
        self.context = context
        super(TaskFactoryProxy, self).__init__(
            task_factory,
            task_proxy_class=TaskProxy)

    def new_task(self, **kwargs):
        owner = kwargs.get('owner', self.context.owner)

        # NOTE(nikhil): Unlike Images, Tasks are expected to have owner.
        # We currently do not allow even admins to set the owner to None.
        if owner is not None and (owner == self.context.owner
                                  or self.context.is_admin):
            return super(TaskFactoryProxy, self).new_task(**kwargs)
        else:
            message = _("You are not permitted to create this task with "
                        "owner as: %s")
            raise exception.Forbidden(message % owner)


class TaskRepoProxy(glance.domain.proxy.TaskRepo):
    def __init__(self, task_repo, context):
        self.task_repo = task_repo
        self.context = context
        super(TaskRepoProxy, self).__init__(task_repo)

    def get(self, task_id):
        task = self.task_repo.get(task_id)
        return proxy_task(self.context, task)


class TaskStubRepoProxy(glance.domain.proxy.TaskStubRepo):
    def __init__(self, task_stub_repo, context):
        self.task_stub_repo = task_stub_repo
        self.context = context
        super(TaskStubRepoProxy, self).__init__(task_stub_repo)

    def list(self, *args, **kwargs):
        task_stubs = self.task_stub_repo.list(*args, **kwargs)
        return [proxy_task_stub(self.context, t) for t in task_stubs]


# Metadef Namespace classes
def is_namespace_mutable(context, namespace):
    """Return True if the namespace is mutable in this context."""
    if context.is_admin:
        return True

    if context.owner is None:
        return False

    return namespace.owner == context.owner


def proxy_namespace(context, namespace):
    if is_namespace_mutable(context, namespace):
        return namespace
    else:
        return ImmutableMetadefNamespaceProxy(namespace)


class ImmutableMetadefNamespaceProxy(object):

    def __init__(self, base):
        self.base = base
        self.resource_name = 'namespace'

    namespace_id = _immutable_attr('base', 'namespace_id')
    namespace = _immutable_attr('base', 'namespace')
    display_name = _immutable_attr('base', 'display_name')
    description = _immutable_attr('base', 'description')
    owner = _immutable_attr('base', 'owner')
    visibility = _immutable_attr('base', 'visibility')
    protected = _immutable_attr('base', 'protected')
    created_at = _immutable_attr('base', 'created_at')
    updated_at = _immutable_attr('base', 'updated_at')

    def delete(self):
        message = _("You are not permitted to delete this namespace.")
        raise exception.Forbidden(message)

    def save(self):
        message = _("You are not permitted to update this namespace.")
        raise exception.Forbidden(message)


class MetadefNamespaceProxy(glance.domain.proxy.MetadefNamespace):

    def __init__(self, namespace):
        self.namespace_input = namespace
        super(MetadefNamespaceProxy, self).__init__(namespace)


class MetadefNamespaceFactoryProxy(
        glance.domain.proxy.MetadefNamespaceFactory):

    def __init__(self, meta_namespace_factory, context):
        self.meta_namespace_factory = meta_namespace_factory
        self.context = context
        super(MetadefNamespaceFactoryProxy, self).__init__(
            meta_namespace_factory,
            meta_namespace_proxy_class=MetadefNamespaceProxy)

    def new_namespace(self, **kwargs):
        owner = kwargs.pop('owner', self.context.owner)

        if not self.context.is_admin:
            if owner is None or owner != self.context.owner:
                message = _("You are not permitted to create namespace "
                            "owned by '%s'")
                raise exception.Forbidden(message % (owner))

        return super(MetadefNamespaceFactoryProxy, self).new_namespace(
            owner=owner, **kwargs)


class MetadefNamespaceRepoProxy(glance.domain.proxy.MetadefNamespaceRepo):

    def __init__(self, namespace_repo, context):
        self.namespace_repo = namespace_repo
        self.context = context
        super(MetadefNamespaceRepoProxy, self).__init__(namespace_repo)

    def get(self, namespace):
        namespace_obj = self.namespace_repo.get(namespace)
        return proxy_namespace(self.context, namespace_obj)

    def list(self, *args, **kwargs):
        namespaces = self.namespace_repo.list(*args, **kwargs)
        return [proxy_namespace(self.context, namespace) for
                namespace in namespaces]


# Metadef Object classes
def is_object_mutable(context, object):
    """Return True if the object is mutable in this context."""
    if context.is_admin:
        return True

    if context.owner is None:
        return False

    return object.namespace.owner == context.owner


def proxy_object(context, object):
    if is_object_mutable(context, object):
        return object
    else:
        return ImmutableMetadefObjectProxy(object)


class ImmutableMetadefObjectProxy(object):

    def __init__(self, base):
        self.base = base
        self.resource_name = 'object'

    object_id = _immutable_attr('base', 'object_id')
    name = _immutable_attr('base', 'name')
    required = _immutable_attr('base', 'required')
    description = _immutable_attr('base', 'description')
    properties = _immutable_attr('base', 'properties')
    created_at = _immutable_attr('base', 'created_at')
    updated_at = _immutable_attr('base', 'updated_at')

    def delete(self):
        message = _("You are not permitted to delete this object.")
        raise exception.Forbidden(message)

    def save(self):
        message = _("You are not permitted to update this object.")
        raise exception.Forbidden(message)


class MetadefObjectProxy(glance.domain.proxy.MetadefObject):

    def __init__(self, meta_object):
        self.meta_object = meta_object
        super(MetadefObjectProxy, self).__init__(meta_object)


class MetadefObjectFactoryProxy(glance.domain.proxy.MetadefObjectFactory):

    def __init__(self, meta_object_factory, context):
        self.meta_object_factory = meta_object_factory
        self.context = context
        super(MetadefObjectFactoryProxy, self).__init__(
            meta_object_factory,
            meta_object_proxy_class=MetadefObjectProxy)

    def new_object(self, **kwargs):
        owner = kwargs.pop('owner', self.context.owner)

        if not self.context.is_admin:
            if owner is None or owner != self.context.owner:
                message = _("You are not permitted to create object "
                            "owned by '%s'")
                raise exception.Forbidden(message % (owner))

        return super(MetadefObjectFactoryProxy, self).new_object(**kwargs)


class MetadefObjectRepoProxy(glance.domain.proxy.MetadefObjectRepo):

    def __init__(self, object_repo, context):
        self.object_repo = object_repo
        self.context = context
        super(MetadefObjectRepoProxy, self).__init__(object_repo)

    def get(self, namespace, object_name):
        meta_object = self.object_repo.get(namespace, object_name)
        return proxy_object(self.context, meta_object)

    def list(self, *args, **kwargs):
        objects = self.object_repo.list(*args, **kwargs)
        return [proxy_object(self.context, meta_object) for
                meta_object in objects]


# Metadef ResourceType classes
def is_meta_resource_type_mutable(context, meta_resource_type):
    """Return True if the meta_resource_type is mutable in this context."""
    if context.is_admin:
        return True

    if context.owner is None:
        return False

    # (lakshmiS): resource type can exist without an association with
    # namespace and resource type cannot be created/update/deleted directly(
    # they have to be associated/de-associated from namespace)
    if meta_resource_type.namespace:
        return meta_resource_type.namespace.owner == context.owner
    else:
        return False


def proxy_meta_resource_type(context, meta_resource_type):
    if is_meta_resource_type_mutable(context, meta_resource_type):
        return meta_resource_type
    else:
        return ImmutableMetadefResourceTypeProxy(meta_resource_type)


class ImmutableMetadefResourceTypeProxy(object):

    def __init__(self, base):
        self.base = base
        self.resource_name = 'meta_resource_type'

    namespace = _immutable_attr('base', 'namespace')
    name = _immutable_attr('base', 'name')
    prefix = _immutable_attr('base', 'prefix')
    properties_target = _immutable_attr('base', 'properties_target')
    created_at = _immutable_attr('base', 'created_at')
    updated_at = _immutable_attr('base', 'updated_at')

    def delete(self):
        message = _("You are not permitted to delete this meta_resource_type.")
        raise exception.Forbidden(message)


class MetadefResourceTypeProxy(glance.domain.proxy.MetadefResourceType):

    def __init__(self, meta_resource_type):
        self.meta_resource_type = meta_resource_type
        super(MetadefResourceTypeProxy, self).__init__(meta_resource_type)


class MetadefResourceTypeFactoryProxy(
        glance.domain.proxy.MetadefResourceTypeFactory):

    def __init__(self, resource_type_factory, context):
        self.meta_resource_type_factory = resource_type_factory
        self.context = context
        super(MetadefResourceTypeFactoryProxy, self).__init__(
            resource_type_factory,
            resource_type_proxy_class=MetadefResourceTypeProxy)

    def new_resource_type(self, **kwargs):
        owner = kwargs.pop('owner', self.context.owner)

        if not self.context.is_admin:
            if owner is None or owner != self.context.owner:
                message = _("You are not permitted to create resource_type "
                            "owned by '%s'")
                raise exception.Forbidden(message % (owner))

        return super(MetadefResourceTypeFactoryProxy, self).new_resource_type(
            **kwargs)


class MetadefResourceTypeRepoProxy(
        glance.domain.proxy.MetadefResourceTypeRepo):

    def __init__(self, meta_resource_type_repo, context):
        self.meta_resource_type_repo = meta_resource_type_repo
        self.context = context
        super(MetadefResourceTypeRepoProxy, self).__init__(
            meta_resource_type_repo)

    def list(self, *args, **kwargs):
        meta_resource_types = self.meta_resource_type_repo.list(
            *args, **kwargs)
        return [proxy_meta_resource_type(self.context, meta_resource_type) for
                meta_resource_type in meta_resource_types]

    def get(self, *args, **kwargs):
        meta_resource_type = self.meta_resource_type_repo.get(*args, **kwargs)
        return proxy_meta_resource_type(self.context, meta_resource_type)


# Metadef namespace properties classes
def is_namespace_property_mutable(context, namespace_property):
    """Return True if the namespace property is mutable in this context."""
    if context.is_admin:
        return True

    if context.owner is None:
        return False

    return namespace_property.namespace.owner == context.owner


def proxy_namespace_property(context, namespace_property):
    if is_namespace_property_mutable(context, namespace_property):
        return namespace_property
    else:
        return ImmutableMetadefPropertyProxy(namespace_property)


class ImmutableMetadefPropertyProxy(object):

    def __init__(self, base):
        self.base = base
        self.resource_name = 'namespace_property'

    property_id = _immutable_attr('base', 'property_id')
    name = _immutable_attr('base', 'name')
    schema = _immutable_attr('base', 'schema')

    def delete(self):
        message = _("You are not permitted to delete this property.")
        raise exception.Forbidden(message)

    def save(self):
        message = _("You are not permitted to update this property.")
        raise exception.Forbidden(message)


class MetadefPropertyProxy(glance.domain.proxy.MetadefProperty):

    def __init__(self, namespace_property):
        self.meta_object = namespace_property
        super(MetadefPropertyProxy, self).__init__(namespace_property)


class MetadefPropertyFactoryProxy(glance.domain.proxy.MetadefPropertyFactory):

    def __init__(self, namespace_property_factory, context):
        self.meta_object_factory = namespace_property_factory
        self.context = context
        super(MetadefPropertyFactoryProxy, self).__init__(
            namespace_property_factory,
            property_proxy_class=MetadefPropertyProxy)

    def new_namespace_property(self, **kwargs):
        owner = kwargs.pop('owner', self.context.owner)

        if not self.context.is_admin:
            if owner is None or owner != self.context.owner:
                message = _("You are not permitted to create property "
                            "owned by '%s'")
                raise exception.Forbidden(message % (owner))

        return super(MetadefPropertyFactoryProxy, self).new_namespace_property(
            **kwargs)


class MetadefPropertyRepoProxy(glance.domain.proxy.MetadefPropertyRepo):

    def __init__(self, namespace_property_repo, context):
        self.namespace_property_repo = namespace_property_repo
        self.context = context
        super(MetadefPropertyRepoProxy, self).__init__(namespace_property_repo)

    def get(self, namespace, object_name):
        namespace_property = self.namespace_property_repo.get(namespace,
                                                              object_name)
        return proxy_namespace_property(self.context, namespace_property)

    def list(self, *args, **kwargs):
        namespace_properties = self.namespace_property_repo.list(
            *args, **kwargs)
        return [proxy_namespace_property(self.context, namespace_property)
                for namespace_property in namespace_properties]


# Metadef Tag classes
def is_tag_mutable(context, tag):
    """Return True if the tag is mutable in this context."""
    if context.is_admin:
        return True

    if context.owner is None:
        return False

    return tag.namespace.owner == context.owner


def proxy_tag(context, tag):
    if is_tag_mutable(context, tag):
        return tag
    else:
        return ImmutableMetadefTagProxy(tag)


class ImmutableMetadefTagProxy(object):

    def __init__(self, base):
        self.base = base
        self.resource_name = 'tag'

    tag_id = _immutable_attr('base', 'tag_id')
    name = _immutable_attr('base', 'name')
    created_at = _immutable_attr('base', 'created_at')
    updated_at = _immutable_attr('base', 'updated_at')

    def delete(self):
        message = _("You are not permitted to delete this tag.")
        raise exception.Forbidden(message)

    def save(self):
        message = _("You are not permitted to update this tag.")
        raise exception.Forbidden(message)


class MetadefTagProxy(glance.domain.proxy.MetadefTag):
    pass


class MetadefTagFactoryProxy(glance.domain.proxy.MetadefTagFactory):

    def __init__(self, meta_tag_factory, context):
        self.meta_tag_factory = meta_tag_factory
        self.context = context
        super(MetadefTagFactoryProxy, self).__init__(
            meta_tag_factory,
            meta_tag_proxy_class=MetadefTagProxy)

    def new_tag(self, **kwargs):
        owner = kwargs.pop('owner', self.context.owner)
        if not self.context.is_admin:
            if owner is None:
                message = _("Owner must be specified to create a tag.")
                raise exception.Forbidden(message)
            elif owner != self.context.owner:
                message = _("You are not permitted to create a tag"
                            " in the namespace owned by '%s'")
                raise exception.Forbidden(message % (owner))

        return super(MetadefTagFactoryProxy, self).new_tag(**kwargs)


class MetadefTagRepoProxy(glance.domain.proxy.MetadefTagRepo):

    def __init__(self, tag_repo, context):
        self.tag_repo = tag_repo
        self.context = context
        super(MetadefTagRepoProxy, self).__init__(tag_repo)

    def get(self, namespace, tag_name):
        meta_tag = self.tag_repo.get(namespace, tag_name)
        return proxy_tag(self.context, meta_tag)

    def list(self, *args, **kwargs):
        tags = self.tag_repo.list(*args, **kwargs)
        return [proxy_tag(self.context, meta_tag) for
                meta_tag in tags]

glance-16.0.0/glance/api/__init__.py
# Copyright 2011-2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
import paste.urlmap

CONF = cfg.CONF


def root_app_factory(loader, global_conf, **local_conf):
    if not CONF.enable_v1_api and '/v1' in local_conf:
        del local_conf['/v1']
    if not CONF.enable_v2_api and '/v2' in local_conf:
        del local_conf['/v2']
    return paste.urlmap.urlmap_factory(loader, global_conf, **local_conf)

glance-16.0.0/glance/api/versions.py
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils
from six.moves import http_client
import webob.dec

from glance.common import wsgi
from glance.i18n import _, _LW

versions_opts = [
    cfg.StrOpt('public_endpoint',
               help=_("""
Public url endpoint to use for Glance versions response.

This is the public url endpoint that will appear in the Glance
"versions" response. If no value is specified, the endpoint that is
displayed in the versions response is that of the host running the
API service. Change the endpoint to represent the proxy URL if the
API service is running behind a proxy. If the service is running
behind a load balancer, add the load balancer's URL for this value.

Possible values:
    * None
    * Proxy URL
    * Load balancer URL

Related options:
    * None

""")),
]

CONF = cfg.CONF
CONF.register_opts(versions_opts)

LOG = logging.getLogger(__name__)


class Controller(object):

    """A wsgi controller that reports which API versions are supported."""

    def index(self, req, explicit=False):
        """Respond to a request for all OpenStack API versions."""
        def build_version_object(version, path, status):
            url = CONF.public_endpoint or req.host_url
            return {
                'id': 'v%s' % version,
                'status': status,
                'links': [
                    {
                        'rel': 'self',
                        'href': '%s/%s/' % (url, path),
                    },
                ],
            }

        version_objs = []
        if CONF.enable_v2_api:
            version_objs.extend([
                build_version_object(2.6, 'v2', 'CURRENT'),
                build_version_object(2.5, 'v2', 'SUPPORTED'),
                build_version_object(2.4, 'v2', 'SUPPORTED'),
                build_version_object(2.3, 'v2', 'SUPPORTED'),
                build_version_object(2.2, 'v2', 'SUPPORTED'),
                build_version_object(2.1, 'v2', 'SUPPORTED'),
                build_version_object(2.0, 'v2', 'SUPPORTED'),
            ])
        if CONF.enable_v1_api:
            LOG.warn(_LW('The Images (Glance) v1 API is deprecated and will '
                         'be removed on or after the Pike release, following '
                         'the standard OpenStack deprecation policy. '
                         'Currently, the solution is to set '
                         'enable_v1_api=False and enable_v2_api=True in your '
                         'glance-api.conf file. Once those options are '
                         'removed from the code, Images (Glance) v2 API will '
                         'be switched on by default and will be the only '
                         'option to deploy and use.'))
            version_objs.extend([
                build_version_object(1.1, 'v1', 'DEPRECATED'),
                build_version_object(1.0, 'v1', 'DEPRECATED'),
            ])

        status = explicit and http_client.OK or http_client.MULTIPLE_CHOICES
        response = webob.Response(request=req,
                                  status=status,
                                  content_type='application/json')
        response.body = jsonutils.dump_as_bytes(dict(versions=version_objs))
        return response

    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def __call__(self, req):
        return self.index(req)


def create_resource(conf):
    return wsgi.Resource(Controller())

# glance-16.0.0/glance/api/middleware/version_negotiation.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" A filter middleware that inspects the requested URI for a version string and/or Accept headers and attempts to negotiate an API controller to return """ from oslo_config import cfg from oslo_log import log as logging from glance.api import versions from glance.common import wsgi CONF = cfg.CONF LOG = logging.getLogger(__name__) class VersionNegotiationFilter(wsgi.Middleware): def __init__(self, app): self.versions_app = versions.Controller() self.allowed_versions = None self.vnd_mime_type = 'application/vnd.openstack.images-' super(VersionNegotiationFilter, self).__init__(app) def process_request(self, req): """Try to find a version first in the accept header, then the URL""" args = {'method': req.method, 'path': req.path, 'accept': req.accept} LOG.debug("Determining version of request: %(method)s %(path)s " "Accept: %(accept)s", args) # If the request is for /versions, just return the versions container if req.path_info_peek() == "versions": return self.versions_app.index(req, explicit=True) accept = str(req.accept) if accept.startswith(self.vnd_mime_type): LOG.debug("Using media-type versioning") token_loc = len(self.vnd_mime_type) req_version = accept[token_loc:] else: LOG.debug("Using url versioning") # Remove version in url so it doesn't conflict later req_version = self._pop_path_info(req) try: version = self._match_version_string(req_version) except ValueError: LOG.debug("Unknown version. 
Returning version choices.") return self.versions_app req.environ['api.version'] = version req.path_info = ''.join(('/v', str(version), req.path_info)) LOG.debug("Matched version: v%d", version) LOG.debug('new path %s', req.path_info) return None def _get_allowed_versions(self): allowed_versions = {} if CONF.enable_v1_api: allowed_versions['v1'] = 1 allowed_versions['v1.0'] = 1 allowed_versions['v1.1'] = 1 if CONF.enable_v2_api: allowed_versions['v2'] = 2 allowed_versions['v2.0'] = 2 allowed_versions['v2.1'] = 2 allowed_versions['v2.2'] = 2 allowed_versions['v2.3'] = 2 allowed_versions['v2.4'] = 2 allowed_versions['v2.5'] = 2 allowed_versions['v2.6'] = 2 return allowed_versions def _match_version_string(self, subject): """ Given a string, tries to match a major and/or minor version number. :param subject: The string to check :returns: version found in the subject :raises ValueError: if no acceptable version could be found """ if self.allowed_versions is None: self.allowed_versions = self._get_allowed_versions() if subject in self.allowed_versions: return self.allowed_versions[subject] else: raise ValueError() def _pop_path_info(self, req): """ 'Pops' off the next segment of PATH_INFO, returns the popped segment. Do NOT push it onto SCRIPT_NAME. """ path = req.path_info if not path: return None while path.startswith('/'): path = path[1:] idx = path.find('/') if idx == -1: idx = len(path) r = path[:idx] req.path_info = path[idx:] return r glance-16.0.0/glance/api/middleware/cache_manage.py0000666000175100017510000000572713245511421022132 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Image Cache Management API
"""

from oslo_log import log as logging
import routes

from glance.api import cached_images
from glance.common import wsgi
from glance.i18n import _LI

LOG = logging.getLogger(__name__)


class CacheManageFilter(wsgi.Middleware):
    def __init__(self, app):
        mapper = routes.Mapper()
        resource = cached_images.create_resource()

        mapper.connect("/v1/cached_images",
                       controller=resource,
                       action="get_cached_images",
                       conditions=dict(method=["GET"]))

        mapper.connect("/v1/cached_images/{image_id}",
                       controller=resource,
                       action="delete_cached_image",
                       conditions=dict(method=["DELETE"]))

        mapper.connect("/v1/cached_images",
                       controller=resource,
                       action="delete_cached_images",
                       conditions=dict(method=["DELETE"]))

        mapper.connect("/v1/queued_images/{image_id}",
                       controller=resource,
                       action="queue_image",
                       conditions=dict(method=["PUT"]))

        mapper.connect("/v1/queued_images",
                       controller=resource,
                       action="get_queued_images",
                       conditions=dict(method=["GET"]))

        mapper.connect("/v1/queued_images/{image_id}",
                       controller=resource,
                       action="delete_queued_image",
                       conditions=dict(method=["DELETE"]))

        mapper.connect("/v1/queued_images",
                       controller=resource,
                       action="delete_queued_images",
                       conditions=dict(method=["DELETE"]))

        self._mapper = mapper
        self._resource = resource

        LOG.info(_LI("Initialized image cache management middleware"))
        super(CacheManageFilter, self).__init__(app)

    def process_request(self, request):
        # Map request to our resource object if we can handle it
        match = self._mapper.match(request.path_info, request.environ)
        if match:
            request.environ['wsgiorg.routing_args'] = (None, match)
            return self._resource(request)
        # Pass off downstream if we don't match the request path
        else:
            return None

# glance-16.0.0/glance/api/middleware/__init__.py  (empty)

# glance-16.0.0/glance/api/middleware/gzip.py

# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Use gzip compression if the client accepts it.
"""

import re

from oslo_log import log as logging

from glance.common import wsgi
from glance.i18n import _LI

LOG = logging.getLogger(__name__)


class GzipMiddleware(wsgi.Middleware):

    re_zip = re.compile(r'\bgzip\b')

    def __init__(self, app):
        LOG.info(_LI("Initialized gzip middleware"))
        super(GzipMiddleware, self).__init__(app)

    def process_response(self, response):
        request = response.request
        accept_encoding = request.headers.get('Accept-Encoding', '')

        if self.re_zip.search(accept_encoding):
            # NOTE(flaper87): Webob removes the content-md5 when
            # app_iter is called. We'll keep it and reset it later
            checksum = response.headers.get("Content-MD5")

            # NOTE(flaper87): We'll use lazy for images so
            # that they can be compressed without reading
            # the whole content in memory. Notice that using
            # lazy will set response's content-length to 0.
content_type = response.headers.get("Content-Type", "") lazy = content_type == "application/octet-stream" # NOTE(flaper87): Webob takes care of the compression # process, it will replace the body either with a # compressed body or a generator - used for lazy com # pression - depending on the lazy value. # # Webob itself will set the Content-Encoding header. response.encode_content(lazy=lazy) if checksum: response.headers['Content-MD5'] = checksum return response glance-16.0.0/glance/api/middleware/cache.py0000666000175100017510000003304213245511421020611 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Transparent image file caching middleware, designed to live on Glance API nodes. When images are requested from the API node, this middleware caches the returned image file to local filesystem. When subsequent requests for the same image file are received, the local cached copy of the image file is returned. 
""" import re import six from oslo_log import log as logging from six.moves import http_client as http import webob from glance.api.common import size_checked_iter from glance.api import policy from glance.api.v1 import images from glance.common import exception from glance.common import utils from glance.common import wsgi import glance.db from glance.i18n import _LE, _LI from glance import image_cache from glance import notifier import glance.registry.client.v1.api as registry LOG = logging.getLogger(__name__) PATTERNS = { ('v1', 'GET'): re.compile(r'^/v1/images/([^\/]+)$'), ('v1', 'DELETE'): re.compile(r'^/v1/images/([^\/]+)$'), ('v2', 'GET'): re.compile(r'^/v2/images/([^\/]+)/file$'), ('v2', 'DELETE'): re.compile(r'^/v2/images/([^\/]+)$') } class CacheFilter(wsgi.Middleware): def __init__(self, app): self.cache = image_cache.ImageCache() self.serializer = images.ImageSerializer() self.policy = policy.Enforcer() LOG.info(_LI("Initialized image cache middleware")) super(CacheFilter, self).__init__(app) def _verify_metadata(self, image_meta): """ Sanity check the 'deleted' and 'size' metadata values. """ # NOTE: admins can see image metadata in the v1 API, but shouldn't # be able to download the actual image data. 
if image_meta['status'] == 'deleted' and image_meta['deleted']: raise exception.NotFound() if not image_meta['size']: # override image size metadata with the actual cached # file size, see LP Bug #900959 if not isinstance(image_meta, policy.ImageTarget): image_meta['size'] = self.cache.get_image_size( image_meta['id']) else: image_meta.target.size = self.cache.get_image_size( image_meta['id']) @staticmethod def _match_request(request): """Determine the version of the url and extract the image id :returns: tuple of version and image id if the url is a cacheable, otherwise None """ for ((version, method), pattern) in PATTERNS.items(): if request.method != method: continue match = pattern.match(request.path_info) if match is None: continue image_id = match.group(1) # Ensure the image id we got looks like an image id to filter # out a URI like /images/detail. See LP Bug #879136 if image_id != 'detail': return (version, method, image_id) def _enforce(self, req, action, target=None): """Authorize an action against our policies""" if target is None: target = {} try: self.policy.enforce(req.context, action, target) except exception.Forbidden as e: LOG.debug("User not permitted to perform '%s' action", action) raise webob.exc.HTTPForbidden(explanation=e.msg, request=req) def _get_v1_image_metadata(self, request, image_id): """ Retrieves image metadata using registry for v1 api and creates dictionary-like mash-up of image core and custom properties. """ try: image_metadata = registry.get_image_metadata(request.context, image_id) return utils.create_mashup_dict(image_metadata) except exception.NotFound as e: LOG.debug("No metadata found for image '%s'", image_id) raise webob.exc.HTTPNotFound(explanation=e.msg, request=request) def _get_v2_image_metadata(self, request, image_id): """ Retrieves image and for v2 api and creates adapter like object to access image core or custom properties on request. 
""" db_api = glance.db.get_api() image_repo = glance.db.ImageRepo(request.context, db_api) try: image = image_repo.get(image_id) # Storing image object in request as it is required in # _process_v2_request call. request.environ['api.cache.image'] = image return policy.ImageTarget(image) except exception.NotFound as e: raise webob.exc.HTTPNotFound(explanation=e.msg, request=request) def process_request(self, request): """ For requests for an image file, we check the local image cache. If present, we return the image file, appending the image metadata in headers. If not present, we pass the request on to the next application in the pipeline. """ match = self._match_request(request) try: (version, method, image_id) = match except TypeError: # Trying to unpack None raises this exception return None self._stash_request_info(request, image_id, method, version) # Partial image download requests shall not be served from cache # Bug: 1664709 # TODO(dharinic): If an image is already cached, add support to serve # only the requested bytes (partial image download) from the cache. 
if (request.headers.get('Content-Range') or request.headers.get('Range')): return None if request.method != 'GET' or not self.cache.is_cached(image_id): return None method = getattr(self, '_get_%s_image_metadata' % version) image_metadata = method(request, image_id) # Deactivated images shall not be served from cache if image_metadata['status'] == 'deactivated': return None try: self._enforce(request, 'download_image', target=image_metadata) except exception.Forbidden: return None LOG.debug("Cache hit for image '%s'", image_id) image_iterator = self.get_from_cache(image_id) method = getattr(self, '_process_%s_request' % version) try: return method(request, image_id, image_iterator, image_metadata) except exception.ImageNotFound: msg = _LE("Image cache contained image file for image '%s', " "however the registry did not contain metadata for " "that image!") % image_id LOG.error(msg) self.cache.delete_cached_image(image_id) @staticmethod def _stash_request_info(request, image_id, method, version): """ Preserve the image id, version and request method for later retrieval """ request.environ['api.cache.image_id'] = image_id request.environ['api.cache.method'] = method request.environ['api.cache.version'] = version @staticmethod def _fetch_request_info(request): """ Preserve the cached image id, version for consumption by the process_response method of this middleware """ try: image_id = request.environ['api.cache.image_id'] method = request.environ['api.cache.method'] version = request.environ['api.cache.version'] except KeyError: return None else: return (image_id, method, version) def _process_v1_request(self, request, image_id, image_iterator, image_meta): # Don't display location if 'location' in image_meta: del image_meta['location'] image_meta.pop('location_data', None) self._verify_metadata(image_meta) response = webob.Response(request=request) raw_response = { 'image_iterator': image_iterator, 'image_meta': image_meta, } return self.serializer.show(response, 
raw_response) def _process_v2_request(self, request, image_id, image_iterator, image_meta): # We do some contortions to get the image_metadata so # that we can provide it to 'size_checked_iter' which # will generate a notification. # TODO(mclaren): Make notification happen more # naturally once caching is part of the domain model. image = request.environ['api.cache.image'] self._verify_metadata(image_meta) response = webob.Response(request=request) response.app_iter = size_checked_iter(response, image_meta, image_meta['size'], image_iterator, notifier.Notifier()) # NOTE (flwang): Set the content-type, content-md5 and content-length # explicitly to be consistent with the non-cache scenario. # Besides, it's not worth the candle to invoke the "download" method # of ResponseSerializer under image_data. Because method "download" # will reset the app_iter. Then we have to call method # "size_checked_iter" to avoid missing any notification. But after # call "size_checked_iter", we will lose the content-md5 and # content-length got by the method "download" because of this issue: # https://github.com/Pylons/webob/issues/86 response.headers['Content-Type'] = 'application/octet-stream' if image.checksum: response.headers['Content-MD5'] = (image.checksum.encode('utf-8') if six.PY2 else image.checksum) response.headers['Content-Length'] = str(image.size) return response def process_response(self, resp): """ We intercept the response coming back from the main images Resource, removing image file from the cache if necessary """ status_code = self.get_status_code(resp) if not 200 <= status_code < 300: return resp # Note(dharinic): Bug: 1664709: Do not cache partial images. if status_code == http.PARTIAL_CONTENT: return resp try: (image_id, method, version) = self._fetch_request_info( resp.request) except TypeError: return resp if method == 'GET' and status_code == http.NO_CONTENT: # Bugfix:1251055 - Don't cache non-existent image files. 
# NOTE: Both GET for an image without locations and DELETE return # 204 but DELETE should be processed. return resp method_str = '_process_%s_response' % method try: process_response_method = getattr(self, method_str) except AttributeError: LOG.error(_LE('could not find %s') % method_str) # Nothing to do here, move along return resp else: return process_response_method(resp, image_id, version=version) def _process_DELETE_response(self, resp, image_id, version=None): if self.cache.is_cached(image_id): LOG.debug("Removing image %s from cache", image_id) self.cache.delete_cached_image(image_id) return resp def _process_GET_response(self, resp, image_id, version=None): image_checksum = resp.headers.get('Content-MD5') if not image_checksum: # API V1 stores the checksum in a different header: image_checksum = resp.headers.get('x-image-meta-checksum') if not image_checksum: LOG.error(_LE("Checksum header is missing.")) # fetch image_meta on the basis of version image_metadata = None if version: method = getattr(self, '_get_%s_image_metadata' % version) image_metadata = method(resp.request, image_id) # NOTE(zhiyan): image_cache return a generator object and set to # response.app_iter, it will be called by eventlet.wsgi later. # So we need enforce policy firstly but do it by application # since eventlet.wsgi could not catch webob.exc.HTTPForbidden and # return 403 error to client then. 
self._enforce(resp.request, 'download_image', target=image_metadata) resp.app_iter = self.cache.get_caching_iter(image_id, image_checksum, resp.app_iter) return resp def get_status_code(self, response): """ Returns the integer status code from the response, which can be either a Webob.Response (used in testing) or httplib.Response """ if hasattr(response, 'status_int'): return response.status_int return response.status def get_from_cache(self, image_id): """Called if cache hit""" with self.cache.open_for_read(image_id) as cache_file: chunks = utils.chunkiter(cache_file) for chunk in chunks: yield chunk glance-16.0.0/glance/api/middleware/context.py0000666000175100017510000001543613245511421021241 0ustar zuulzuul00000000000000# Copyright 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils import webob.exc from glance.api import policy from glance.common import wsgi import glance.context from glance.i18n import _, _LW context_opts = [ cfg.BoolOpt('owner_is_tenant', default=True, help=_(""" Set the image owner to tenant or the authenticated user. Assign a boolean value to determine the owner of an image. When set to True, the owner of the image is the tenant. When set to False, the owner of the image will be the authenticated user issuing the request. 
Setting it to False makes the image private to the associated user and
sharing with other users within the same tenant (or "project") requires
explicit image sharing via image membership.

Possible values:
    * True
    * False

Related options:
    * None

""")),
    cfg.StrOpt('admin_role', default='admin',
               help=_("""
Role used to identify an authenticated user as administrator.

Provide a string value representing a Keystone role to identify an
administrative user. Users with this role will be granted
administrative privileges. The default value for this option is
'admin'.

Possible values:
    * A string value which is a valid Keystone role

Related options:
    * None

""")),
    cfg.BoolOpt('allow_anonymous_access', default=False,
                help=_("""
Allow limited access to unauthenticated users.

Assign a boolean to determine API access for unauthenticated users.
When set to False, the API cannot be accessed by unauthenticated
users. When set to True, unauthenticated users can access the API
with read-only privileges. This however only applies when using
ContextMiddleware.

Possible values:
    * True
    * False

Related options:
    * None

""")),
    cfg.IntOpt('max_request_id_length', default=64,
               min=0,
               help=_("""
Limit the request ID length.

Provide an integer value to limit the length of the request ID to
the specified length. The default value is 64. Users can change this
to any integer value between 0 and 16384, bearing in mind that a
larger value may flood the logs.
Possible values: * Integer value between 0 and 16384 Related options: * None """)), ] CONF = cfg.CONF CONF.register_opts(context_opts) LOG = logging.getLogger(__name__) class BaseContextMiddleware(wsgi.Middleware): def process_response(self, resp): try: request_id = resp.request.context.request_id except AttributeError: LOG.warn(_LW('Unable to retrieve request id from context')) else: # For python 3 compatibility need to use bytes type prefix = b'req-' if isinstance(request_id, bytes) else 'req-' if not request_id.startswith(prefix): request_id = prefix + request_id resp.headers['x-openstack-request-id'] = request_id return resp class ContextMiddleware(BaseContextMiddleware): def __init__(self, app): self.policy_enforcer = policy.Enforcer() super(ContextMiddleware, self).__init__(app) def process_request(self, req): """Convert authentication information into a request context Generate a glance.context.RequestContext object from the available authentication headers and store on the 'context' attribute of the req object. 
:param req: wsgi request object that will be given the context object :raises webob.exc.HTTPUnauthorized: when value of the X-Identity-Status header is not 'Confirmed' and anonymous access is disallowed """ if req.headers.get('X-Identity-Status') == 'Confirmed': req.context = self._get_authenticated_context(req) elif CONF.allow_anonymous_access: req.context = self._get_anonymous_context() else: raise webob.exc.HTTPUnauthorized() def _get_anonymous_context(self): kwargs = { 'user': None, 'tenant': None, 'roles': [], 'is_admin': False, 'read_only': True, 'policy_enforcer': self.policy_enforcer, } return glance.context.RequestContext(**kwargs) def _get_authenticated_context(self, req): service_catalog = None if req.headers.get('X-Service-Catalog') is not None: try: catalog_header = req.headers.get('X-Service-Catalog') service_catalog = jsonutils.loads(catalog_header) except ValueError: raise webob.exc.HTTPInternalServerError( _('Invalid service catalog json.')) request_id = req.headers.get('X-Openstack-Request-ID') if request_id and (0 < CONF.max_request_id_length < len(request_id)): msg = (_('x-openstack-request-id is too long, max size %s') % CONF.max_request_id_length) return webob.exc.HTTPRequestHeaderFieldsTooLarge(comment=msg) kwargs = { 'owner_is_tenant': CONF.owner_is_tenant, 'service_catalog': service_catalog, 'policy_enforcer': self.policy_enforcer, 'request_id': request_id, } ctxt = glance.context.RequestContext.from_environ(req.environ, **kwargs) # FIXME(jamielennox): glance has traditionally lowercased its roles. # This was related to bug #1010519 where at least the admin role was # case insensitive. This seems to no longer be the case and should be # fixed. 
ctxt.roles = [r.lower() for r in ctxt.roles] if CONF.admin_role.strip().lower() in ctxt.roles: ctxt.is_admin = True return ctxt class UnauthenticatedContextMiddleware(BaseContextMiddleware): def process_request(self, req): """Create a context without an authorized user.""" kwargs = { 'user': None, 'tenant': None, 'roles': [], 'is_admin': True, } req.context = glance.context.RequestContext(**kwargs) glance-16.0.0/glance/api/v1/0000775000175100017510000000000013245511661015407 5ustar zuulzuul00000000000000glance-16.0.0/glance/api/v1/router.py0000666000175100017510000001106013245511421017273 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from glance.api.v1 import images from glance.api.v1 import members from glance.common import wsgi class API(wsgi.Router): """WSGI router for Glance v1 API requests.""" def __init__(self, mapper): reject_method_resource = wsgi.Resource(wsgi.RejectMethodController()) images_resource = images.create_resource() mapper.connect("/", controller=images_resource, action="index") mapper.connect("/images", controller=images_resource, action='index', conditions={'method': ['GET']}) mapper.connect("/images", controller=images_resource, action='create', conditions={'method': ['POST']}) mapper.connect("/images", controller=reject_method_resource, action='reject', allowed_methods='GET, POST') mapper.connect("/images/detail", controller=images_resource, action='detail', conditions={'method': ['GET', 'HEAD']}) mapper.connect("/images/detail", controller=reject_method_resource, action='reject', allowed_methods='GET, HEAD') mapper.connect("/images/{id}", controller=images_resource, action="meta", conditions=dict(method=["HEAD"])) mapper.connect("/images/{id}", controller=images_resource, action="show", conditions=dict(method=["GET"])) mapper.connect("/images/{id}", controller=images_resource, action="update", conditions=dict(method=["PUT"])) mapper.connect("/images/{id}", controller=images_resource, action="delete", conditions=dict(method=["DELETE"])) mapper.connect("/images/{id}", controller=reject_method_resource, action='reject', allowed_methods='GET, HEAD, PUT, DELETE') members_resource = members.create_resource() mapper.connect("/images/{image_id}/members", controller=members_resource, action="index", conditions={'method': ['GET']}) mapper.connect("/images/{image_id}/members", controller=members_resource, action="update_all", conditions=dict(method=["PUT"])) mapper.connect("/images/{image_id}/members", controller=reject_method_resource, action='reject', allowed_methods='GET, PUT') mapper.connect("/images/{image_id}/members/{id}", controller=members_resource, action="show", 
conditions={'method': ['GET']}) mapper.connect("/images/{image_id}/members/{id}", controller=members_resource, action="update", conditions={'method': ['PUT']}) mapper.connect("/images/{image_id}/members/{id}", controller=members_resource, action="delete", conditions={'method': ['DELETE']}) mapper.connect("/images/{image_id}/members/{id}", controller=reject_method_resource, action='reject', allowed_methods='GET, PUT, DELETE') mapper.connect("/shared-images/{id}", controller=members_resource, action="index_shared_images") super(API, self).__init__(mapper) glance-16.0.0/glance/api/v1/members.py0000666000175100017510000002227513245511421017417 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation. # Copyright 2013 NTT corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
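The v1 `API` router above wires each `(path, HTTP method)` pair to a controller action via `mapper.connect()`, with a catch-all `reject` action for paths that exist but do not support the requested method. The dispatch pattern can be sketched without the `routes` library; `MiniMapper` is a hypothetical stand-in, not the real WSGI router.

```python
# Illustrative sketch (assumption: simplified, not Glance's wsgi.Router) of
# the routing pattern built above: method-conditioned routes per path, with
# a 'reject' fallback when the path is known but the method is not allowed.

class MiniMapper:
    def __init__(self):
        self.routes = {}  # (path, method) -> action name

    def connect(self, path, action, methods):
        for method in methods:
            self.routes[(path, method)] = action

    def match(self, path, method):
        if (path, method) in self.routes:
            return self.routes[(path, method)]
        # Path exists under some other method: mirror the reject_method
        # resource, which answers 405 with an Allow header.
        if any(p == path for (p, _m) in self.routes):
            return "reject"
        return None  # unknown path -> 404 territory
```

For example, registering `/images` for `GET` (index) and `POST` (create) makes a `DELETE /images` fall through to `reject`, matching the `allowed_methods='GET, POST'` route above.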
from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils import webob.exc from glance.api import policy from glance.api.v1 import controller from glance.common import exception from glance.common import utils from glance.common import wsgi from glance.i18n import _ import glance.registry.client.v1.api as registry LOG = logging.getLogger(__name__) CONF = cfg.CONF CONF.import_opt('image_member_quota', 'glance.common.config') class Controller(controller.BaseController): def __init__(self): self.policy = policy.Enforcer() def _check_can_access_image_members(self, context): if context.owner is None and not context.is_admin: raise webob.exc.HTTPUnauthorized(_("No authenticated user")) def _enforce(self, req, action): """Authorize an action against our policies""" try: self.policy.enforce(req.context, action, {}) except exception.Forbidden: LOG.debug("User not permitted to perform '%s' action", action) raise webob.exc.HTTPForbidden() def _raise_404_if_image_deleted(self, req, image_id): image = self.get_image_meta_or_404(req, image_id) if image['status'] == 'deleted': msg = _("Image with identifier %s has been deleted.") % image_id raise webob.exc.HTTPNotFound(msg) def index(self, req, image_id): """ Return a list of dictionaries indicating the members of the image, i.e., those tenants the image is shared with. :param req: the Request object coming from the wsgi layer :param image_id: The opaque image identifier :returns: The response body is a mapping of the following form :: {'members': [ {'member_id': , 'can_share': , ...}, ... 
]} """ self._enforce(req, 'get_members') self._raise_404_if_image_deleted(req, image_id) try: members = registry.get_image_members(req.context, image_id) except exception.NotFound: msg = _("Image with identifier %s not found") % image_id LOG.warn(msg) raise webob.exc.HTTPNotFound(msg) except exception.Forbidden: msg = _("Unauthorized image access") LOG.warn(msg) raise webob.exc.HTTPForbidden(msg) return dict(members=members) @utils.mutating def delete(self, req, image_id, id): """ Removes a membership from the image. """ self._check_can_access_image_members(req.context) self._enforce(req, 'delete_member') self._raise_404_if_image_deleted(req, image_id) try: registry.delete_member(req.context, image_id, id) self._update_store_acls(req, image_id) except exception.NotFound as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPNotFound(explanation=e.msg) except exception.Forbidden as e: LOG.debug("User not permitted to remove membership from image " "'%s'", image_id) raise webob.exc.HTTPNotFound(explanation=e.msg) return webob.exc.HTTPNoContent() def default(self, req, image_id, id, body=None): """This will cover the missing 'show' and 'create' actions""" raise webob.exc.HTTPMethodNotAllowed() def _enforce_image_member_quota(self, req, attempted): if CONF.image_member_quota < 0: # If value is negative, allow unlimited number of members return maximum = CONF.image_member_quota if attempted > maximum: msg = _("The limit has been exceeded on the number of allowed " "image members for this image. Attempted: %(attempted)s, " "Maximum: %(maximum)s") % {'attempted': attempted, 'maximum': maximum} raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg, request=req) @utils.mutating def update(self, req, image_id, id, body=None): """ Adds a membership to the image, or updates an existing one. 
If a body is present, it is a dict with the following format :: {'member': { 'can_share': [True|False] }} If `can_share` is provided, the member's ability to share is set accordingly. If it is not provided, existing memberships remain unchanged and new memberships default to False. """ self._check_can_access_image_members(req.context) self._enforce(req, 'modify_member') self._raise_404_if_image_deleted(req, image_id) new_number_of_members = len(registry.get_image_members(req.context, image_id)) + 1 self._enforce_image_member_quota(req, new_number_of_members) # Figure out can_share can_share = None if body and 'member' in body and 'can_share' in body['member']: can_share = bool(body['member']['can_share']) try: registry.add_member(req.context, image_id, id, can_share) self._update_store_acls(req, image_id) except exception.Invalid as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPBadRequest(explanation=e.msg) except exception.NotFound as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPNotFound(explanation=e.msg) except exception.Forbidden as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPNotFound(explanation=e.msg) return webob.exc.HTTPNoContent() @utils.mutating def update_all(self, req, image_id, body): """ Replaces the members of the image with those specified in the body. The body is a dict with the following format :: {'memberships': [ {'member_id': , ['can_share': [True|False]]}, ... 
]} """ self._check_can_access_image_members(req.context) self._enforce(req, 'modify_member') self._raise_404_if_image_deleted(req, image_id) memberships = body.get('memberships') if memberships: new_number_of_members = len(body['memberships']) self._enforce_image_member_quota(req, new_number_of_members) try: registry.replace_members(req.context, image_id, body) self._update_store_acls(req, image_id) except exception.Invalid as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPBadRequest(explanation=e.msg) except exception.NotFound as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPNotFound(explanation=e.msg) except exception.Forbidden as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPNotFound(explanation=e.msg) return webob.exc.HTTPNoContent() def index_shared_images(self, req, id): """ Retrieves list of image memberships for the given member. :param req: the Request object coming from the wsgi layer :param id: the opaque member identifier :returns: The response body is a mapping of the following form :: {'shared_images': [ {'image_id': , 'can_share': , ...}, ... 
]} """ try: members = registry.get_member_images(req.context, id) except exception.NotFound as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPNotFound(explanation=e.msg) except exception.Forbidden as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise webob.exc.HTTPForbidden(explanation=e.msg) return dict(shared_images=members) def _update_store_acls(self, req, image_id): image_meta = self.get_image_meta_or_404(req, image_id) location_uri = image_meta.get('location') public = image_meta.get('is_public') self.update_store_acls(req, image_id, location_uri, public) def create_resource(): """Image members resource factory method""" deserializer = wsgi.JSONRequestDeserializer() serializer = wsgi.JSONResponseSerializer() return wsgi.Resource(Controller(), deserializer, serializer) glance-16.0.0/glance/api/v1/images.py0000666000175100017510000016050613245511421017232 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
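The member-quota rule enforced by `Controller._enforce_image_member_quota` in the members controller above reduces to a small check: a negative `image_member_quota` disables the limit entirely, and exceeding a non-negative limit is an error. The sketch below is a standalone approximation; `QuotaError` is a hypothetical stand-in for `webob.exc.HTTPRequestEntityTooLarge`.

```python
# Standalone sketch (assumption: simplified from the controller above) of
# the image-member quota check. A negative quota means "unlimited"; a
# non-negative quota caps the attempted member count.

class QuotaError(Exception):
    """Stand-in for the HTTP 413 raised by the real controller."""


def enforce_member_quota(attempted, quota):
    if quota < 0:
        return  # negative quota disables the limit entirely
    if attempted > quota:
        raise QuotaError(
            "image member limit exceeded: attempted %d, maximum %d"
            % (attempted, quota))
```

Note that the real controller computes `attempted` as the current member count plus one before calling the check, so the quota bounds the total membership, not the size of a single request.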
""" /images endpoint for Glance v1 API """ import copy import glance_store as store import glance_store.location from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import strutils import six from webob.exc import HTTPBadRequest from webob.exc import HTTPConflict from webob.exc import HTTPForbidden from webob.exc import HTTPMethodNotAllowed from webob.exc import HTTPNotFound from webob.exc import HTTPRequestEntityTooLarge from webob.exc import HTTPServiceUnavailable from webob.exc import HTTPUnauthorized from webob import Response from glance.api import common from glance.api import policy import glance.api.v1 from glance.api.v1 import controller from glance.api.v1 import filters from glance.api.v1 import upload_utils from glance.common import exception from glance.common import property_utils from glance.common import store_utils from glance.common import timeutils from glance.common import utils from glance.common import wsgi from glance.i18n import _, _LE, _LI, _LW from glance import notifier import glance.registry.client.v1.api as registry LOG = logging.getLogger(__name__) SUPPORTED_PARAMS = glance.api.v1.SUPPORTED_PARAMS SUPPORTED_FILTERS = glance.api.v1.SUPPORTED_FILTERS ACTIVE_IMMUTABLE = glance.api.v1.ACTIVE_IMMUTABLE IMMUTABLE = glance.api.v1.IMMUTABLE CONF = cfg.CONF CONF.import_opt('disk_formats', 'glance.common.config', group='image_format') CONF.import_opt('container_formats', 'glance.common.config', group='image_format') CONF.import_opt('image_property_quota', 'glance.common.config') def _validate_time(req, values): """Validates time formats for updated_at, created_at and deleted_at. 'strftime' only allows values after 1900 in glance v1 so this is enforced here. This was introduced to keep modularity. 
""" for time_field in ['created_at', 'updated_at', 'deleted_at']: if time_field in values and values[time_field]: try: time = timeutils.parse_isotime(values[time_field]) # On Python 2, datetime.datetime.strftime() raises a ValueError # for years older than 1900. On Python 3, years older than 1900 # are accepted. But we explicitly want to reject timestamps # older than January 1st, 1900 for Glance API v1. if time.year < 1900: raise ValueError values[time_field] = time.strftime( timeutils.PERFECT_TIME_FORMAT) except ValueError: msg = (_("Invalid time format for %s.") % time_field) raise HTTPBadRequest(explanation=msg, request=req) def _validate_format(req, values): """Validates disk_format and container_format fields Introduced to split too complex validate_image_meta method. """ amazon_formats = ('aki', 'ari', 'ami') disk_format = values.get('disk_format') container_format = values.get('container_format') if 'disk_format' in values: if disk_format not in CONF.image_format.disk_formats: msg = _("Invalid disk format '%s' for image.") % disk_format raise HTTPBadRequest(explanation=msg, request=req) if 'container_format' in values: if container_format not in CONF.image_format.container_formats: msg = _("Invalid container format '%s' " "for image.") % container_format raise HTTPBadRequest(explanation=msg, request=req) if any(f in amazon_formats for f in [disk_format, container_format]): if disk_format is None: values['disk_format'] = container_format elif container_format is None: values['container_format'] = disk_format elif container_format != disk_format: msg = (_("Invalid mix of disk and container formats. 
" "When setting a disk or container format to " "one of 'aki', 'ari', or 'ami', the container " "and disk formats must match.")) raise HTTPBadRequest(explanation=msg, request=req) def validate_image_meta(req, values): _validate_format(req, values) _validate_time(req, values) name = values.get('name') checksum = values.get('checksum') if name and len(name) > 255: msg = _('Image name too long: %d') % len(name) raise HTTPBadRequest(explanation=msg, request=req) # check that checksum retrieved is exactly 32 characters # as long as we expect md5 checksum # https://bugs.launchpad.net/glance/+bug/1454730 if checksum and len(checksum) > 32: msg = (_("Invalid checksum '%s': can't exceed 32 characters") % checksum) raise HTTPBadRequest(explanation=msg, request=req) return values def redact_loc(image_meta, copy_dict=True): """ Create a shallow copy of image meta with 'location' removed for security (as it can contain credentials). """ if copy_dict: new_image_meta = copy.copy(image_meta) else: new_image_meta = image_meta new_image_meta.pop('location', None) new_image_meta.pop('location_data', None) return new_image_meta class Controller(controller.BaseController): """ WSGI controller for images resource in Glance v1 API The images resource API is a RESTful web service for image data. 
The API is as follows:: GET /images -- Returns a set of brief metadata about images GET /images/detail -- Returns a set of detailed metadata about images HEAD /images/ -- Return metadata about an image with id GET /images/ -- Return image data for image with id POST /images -- Store image data and return metadata about the newly-stored image PUT /images/ -- Update image metadata and/or upload image data for a previously-reserved image DELETE /images/ -- Delete the image with id """ def __init__(self): self.notifier = notifier.Notifier() registry.configure_registry_client() self.policy = policy.Enforcer() if property_utils.is_property_protection_enabled(): self.prop_enforcer = property_utils.PropertyRules(self.policy) else: self.prop_enforcer = None def _enforce(self, req, action, target=None): """Authorize an action against our policies""" if target is None: target = {} try: self.policy.enforce(req.context, action, target) except exception.Forbidden: LOG.debug("User not permitted to perform '%s' action", action) raise HTTPForbidden() def _enforce_image_property_quota(self, image_meta, orig_image_meta=None, purge_props=False, req=None): if CONF.image_property_quota < 0: # If value is negative, allow unlimited number of properties return props = list(image_meta['properties'].keys()) # NOTE(ameade): If we are not removing existing properties, # take them in to account if (not purge_props) and orig_image_meta: original_props = orig_image_meta['properties'].keys() props.extend(original_props) props = set(props) if len(props) > CONF.image_property_quota: msg = (_("The limit has been exceeded on the number of allowed " "image properties. 
Attempted: %(num)s, Maximum: " "%(quota)s") % {'num': len(props), 'quota': CONF.image_property_quota}) LOG.warn(msg) raise HTTPRequestEntityTooLarge(explanation=msg, request=req, content_type="text/plain") def _enforce_create_protected_props(self, create_props, req): """ Check request is permitted to create certain properties :param create_props: List of properties to check :param req: The WSGI/Webob Request object :raises HTTPForbidden: if request forbidden to create a property """ if property_utils.is_property_protection_enabled(): for key in create_props: if (self.prop_enforcer.check_property_rules( key, 'create', req.context) is False): msg = _("Property '%s' is protected") % key LOG.warn(msg) raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") def _enforce_read_protected_props(self, image_meta, req): """ Remove entries from metadata properties if they are read protected :param image_meta: Mapping of metadata about image :param req: The WSGI/Webob Request object """ if property_utils.is_property_protection_enabled(): for key in list(image_meta['properties'].keys()): if (self.prop_enforcer.check_property_rules( key, 'read', req.context) is False): image_meta['properties'].pop(key) def _enforce_update_protected_props(self, update_props, image_meta, orig_meta, req): """ Check request is permitted to update certain properties. Read permission is required to delete a property. If the property value is unchanged, i.e. a noop, it is permitted, however, it is important to ensure read access first. Otherwise the value could be discovered using brute force. 
:param update_props: List of properties to check :param image_meta: Mapping of proposed new metadata about image :param orig_meta: Mapping of existing metadata about image :param req: The WSGI/Webob Request object :raises HTTPForbidden: if request forbidden to create a property """ if property_utils.is_property_protection_enabled(): for key in update_props: has_read = self.prop_enforcer.check_property_rules( key, 'read', req.context) if ((self.prop_enforcer.check_property_rules( key, 'update', req.context) is False and image_meta['properties'][key] != orig_meta['properties'][key]) or not has_read): msg = _("Property '%s' is protected") % key LOG.warn(msg) raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") def _enforce_delete_protected_props(self, delete_props, image_meta, orig_meta, req): """ Check request is permitted to delete certain properties. Read permission is required to delete a property. Note, the absence of a property in a request does not necessarily indicate a delete. The requester may not have read access, and so can not know the property exists. Hence, read access is a requirement for delete, otherwise the delete is ignored transparently. 
:param delete_props: List of properties to check :param image_meta: Mapping of proposed new metadata about image :param orig_meta: Mapping of existing metadata about image :param req: The WSGI/Webob Request object :raises HTTPForbidden: if request forbidden to create a property """ if property_utils.is_property_protection_enabled(): for key in delete_props: if (self.prop_enforcer.check_property_rules( key, 'read', req.context) is False): # NOTE(bourke): if read protected, re-add to image_meta to # prevent deletion image_meta['properties'][key] = orig_meta[ 'properties'][key] elif (self.prop_enforcer.check_property_rules( key, 'delete', req.context) is False): msg = _("Property '%s' is protected") % key LOG.warn(msg) raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") def index(self, req): """ Returns the following information for all public, available images: * id -- The opaque image identifier * name -- The name of the image * disk_format -- The disk image format * container_format -- The "container" format of the image * checksum -- MD5 checksum of the image data * size -- Size of image data in bytes :param req: The WSGI/Webob Request object :returns: The response body is a mapping of the following form :: {'images': [ {'id': , 'name': , 'disk_format': , 'container_format': , 'checksum': , 'size': }, {...}] } """ self._enforce(req, 'get_images') params = self._get_query_params(req) try: images = registry.get_images_list(req.context, **params) except exception.Invalid as e: raise HTTPBadRequest(explanation=e.msg, request=req) return dict(images=images) def detail(self, req): """ Returns detailed information for all available images :param req: The WSGI/Webob Request object :returns: The response body is a mapping of the following form :: {'images': [{ 'id': , 'name': , 'size': , 'disk_format': , 'container_format': , 'checksum': , 'min_disk': , 'min_ram': , 'store': , 'status': , 'created_at': , 'updated_at': , 'deleted_at': |, 
'properties': {'distro': 'Ubuntu 10.04 LTS', {...}} }, {...}] } """ if req.method == 'HEAD': msg = (_("This operation is currently not permitted on " "Glance images details.")) raise HTTPMethodNotAllowed(explanation=msg, headers={'Allow': 'GET'}, body_template='${explanation}') self._enforce(req, 'get_images') params = self._get_query_params(req) try: images = registry.get_images_detail(req.context, **params) # Strip out the Location attribute. Temporary fix for # LP Bug #755916. This information is still coming back # from the registry, since the API server still needs access # to it, however we do not return this potential security # information to the API end user... for image in images: redact_loc(image, copy_dict=False) self._enforce_read_protected_props(image, req) except exception.Invalid as e: raise HTTPBadRequest(explanation=e.msg, request=req) except exception.NotAuthenticated as e: raise HTTPUnauthorized(explanation=e.msg, request=req) return dict(images=images) def _get_query_params(self, req): """ Extracts necessary query params from request. 
:param req: the WSGI Request object :returns: dict of parameters that can be used by registry client """ params = {'filters': self._get_filters(req)} for PARAM in SUPPORTED_PARAMS: if PARAM in req.params: params[PARAM] = req.params.get(PARAM) # Fix for LP Bug #1132294 # Ensure all shared images are returned in v1 params['member_status'] = 'all' return params def _get_filters(self, req): """ Return a dictionary of query param filters from the request :param req: the Request object coming from the wsgi layer :returns: a dict of key/value filters """ query_filters = {} for param in req.params: if param in SUPPORTED_FILTERS or param.startswith('property-'): query_filters[param] = req.params.get(param) if not filters.validate(param, query_filters[param]): raise HTTPBadRequest(_('Bad value passed to filter ' '%(filter)s got %(val)s') % {'filter': param, 'val': query_filters[param]}) return query_filters def meta(self, req, id): """ Returns metadata about an image in the HTTP headers of the response object :param req: The WSGI/Webob Request object :param id: The opaque image identifier :returns: similar to 'show' method but without image_data :raises HTTPNotFound: if image metadata is not available to user """ self._enforce(req, 'get_image') image_meta = self.get_image_meta_or_404(req, id) image_meta = redact_loc(image_meta) self._enforce_read_protected_props(image_meta, req) return { 'image_meta': image_meta } @staticmethod def _validate_source(source, req): """ Validate if external sources (as specified via the location or copy-from headers) are supported. Otherwise we reject with 400 "Bad Request". 
""" if store_utils.validate_external_location(source): return source else: if source: msg = _("External sources are not supported: '%s'") % source else: msg = _("External source should not be empty") LOG.warn(msg) raise HTTPBadRequest(explanation=msg, request=req, content_type="text/plain") @staticmethod def _copy_from(req): return req.headers.get('x-glance-api-copy-from') def _external_source(self, image_meta, req): if 'location' in image_meta: self._enforce(req, 'set_image_location') source = image_meta['location'] elif 'x-glance-api-copy-from' in req.headers: source = Controller._copy_from(req) else: # we have an empty external source value # so we are creating "draft" of the image and no need validation return None return Controller._validate_source(source, req) @staticmethod def _get_from_store(context, where, dest=None): try: loc = glance_store.location.get_location_from_uri(where) src_store = store.get_store_from_uri(where) if dest is not None: src_store.READ_CHUNKSIZE = dest.WRITE_CHUNKSIZE image_data, image_size = src_store.get(loc, context=context) except store.RemoteServiceUnavailable as e: raise HTTPServiceUnavailable(explanation=e.msg) except store.NotFound as e: raise HTTPNotFound(explanation=e.msg) except (store.StoreGetNotSupported, store.StoreRandomGetNotSupported, store.UnknownScheme) as e: raise HTTPBadRequest(explanation=e.msg) image_size = int(image_size) if image_size else None return image_data, image_size def show(self, req, id): """ Returns an iterator that can be used to retrieve an image's data along with the image metadata. 
:param req: The WSGI/Webob Request object :param id: The opaque image identifier :raises HTTPNotFound: if image is not available to user """ self._enforce(req, 'get_image') try: image_meta = self.get_active_image_meta_or_error(req, id) except HTTPNotFound: # provision for backward-compatibility breaking issue # catch the 404 exception and raise it after enforcing # the policy with excutils.save_and_reraise_exception(): self._enforce(req, 'download_image') else: target = utils.create_mashup_dict(image_meta) self._enforce(req, 'download_image', target=target) self._enforce_read_protected_props(image_meta, req) if image_meta.get('size') == 0: image_iterator = iter([]) else: image_iterator, size = self._get_from_store(req.context, image_meta['location']) image_iterator = utils.cooperative_iter(image_iterator) image_meta['size'] = size or image_meta['size'] image_meta = redact_loc(image_meta) return { 'image_iterator': image_iterator, 'image_meta': image_meta, } def _reserve(self, req, image_meta): """ Adds the image metadata to the registry and assigns an image identifier if one is not supplied in the request headers. Sets the image's status to `queued`. 
:param req: The WSGI/Webob Request object :param id: The opaque image identifier :param image_meta: The image metadata :raises HTTPConflict: if image already exists :raises HTTPBadRequest: if image metadata is not valid """ location = self._external_source(image_meta, req) scheme = image_meta.get('store') if scheme and scheme not in store.get_known_schemes(): msg = _("Required store %s is invalid") % scheme LOG.warn(msg) raise HTTPBadRequest(explanation=msg, content_type='text/plain') image_meta['status'] = ('active' if image_meta.get('size') == 0 else 'queued') if location: try: backend = store.get_store_from_location(location) except (store.UnknownScheme, store.BadStoreUri): LOG.debug("Invalid location %s", location) msg = _("Invalid location %s") % location raise HTTPBadRequest(explanation=msg, request=req, content_type="text/plain") # check the store exists before we hit the registry, but we # don't actually care what it is at this point self.get_store_or_400(req, backend) # retrieve the image size from remote store (if not provided) image_meta['size'] = self._get_size(req.context, image_meta, location) else: # Ensure that the size attribute is set to zero for directly # uploadable images (if not provided). The size will be set # to a non-zero value during upload image_meta['size'] = image_meta.get('size', 0) try: image_meta = registry.add_image_metadata(req.context, image_meta) self.notifier.info("image.create", redact_loc(image_meta)) return image_meta except exception.Duplicate: msg = (_("An image with identifier %s already exists") % image_meta['id']) LOG.warn(msg) raise HTTPConflict(explanation=msg, request=req, content_type="text/plain") except exception.Invalid as e: msg = (_("Failed to reserve image. 
Got error: %s") % encodeutils.exception_to_unicode(e)) LOG.exception(msg) raise HTTPBadRequest(explanation=msg, request=req, content_type="text/plain") except exception.Forbidden: msg = _("Forbidden to reserve image.") LOG.warn(msg) raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") def _upload(self, req, image_meta): """ Uploads the payload of the request to a backend store in Glance. If the `x-image-meta-store` header is set, Glance will attempt to use that scheme; if not, Glance will use the scheme set by the flag `default_store` to find the backing store. :param req: The WSGI/Webob Request object :param image_meta: Mapping of metadata about image :raises HTTPConflict: if image already exists :returns: The location where the image was stored """ scheme = req.headers.get('x-image-meta-store', CONF.glance_store.default_store) store = self.get_store_or_400(req, scheme) copy_from = self._copy_from(req) if copy_from: try: image_data, image_size = self._get_from_store(req.context, copy_from, dest=store) except Exception: upload_utils.safe_kill(req, image_meta['id'], 'queued') msg = (_LE("Copy from external source '%(scheme)s' failed for " "image: %(image)s") % {'scheme': scheme, 'image': image_meta['id']}) LOG.exception(msg) return image_meta['size'] = image_size or image_meta['size'] else: try: req.get_content_type(('application/octet-stream',)) except exception.InvalidContentType: upload_utils.safe_kill(req, image_meta['id'], 'queued') msg = _("Content-Type must be application/octet-stream") LOG.warn(msg) raise HTTPBadRequest(explanation=msg) image_data = req.body_file image_id = image_meta['id'] LOG.debug("Setting image %s to status 'saving'", image_id) registry.update_image_metadata(req.context, image_id, {'status': 'saving'}) LOG.debug("Uploading image data for image %(image_id)s " "to %(scheme)s store", {'image_id': image_id, 'scheme': scheme}) self.notifier.info("image.prepare", redact_loc(image_meta)) image_meta, location_data = 
upload_utils.upload_data_to_store( req, image_meta, image_data, store, self.notifier) self.notifier.info('image.upload', redact_loc(image_meta)) return location_data def _activate(self, req, image_id, location_data, from_state=None): """ Sets the image status to `active` and the image's location attribute. :param req: The WSGI/Webob Request object :param image_id: Opaque image identifier :param location_data: Location of where Glance stored this image """ image_meta = { 'location': location_data['url'], 'status': 'active', 'location_data': [location_data] } try: s = from_state image_meta_data = registry.update_image_metadata(req.context, image_id, image_meta, from_state=s) self.notifier.info("image.activate", redact_loc(image_meta_data)) self.notifier.info("image.update", redact_loc(image_meta_data)) return image_meta_data except exception.Duplicate: with excutils.save_and_reraise_exception(): # Delete image data since it has been superseded by another # upload and re-raise. LOG.debug("duplicate operation - deleting image data for " " %(id)s (location:%(location)s)", {'id': image_id, 'location': image_meta['location']}) upload_utils.initiate_deletion(req, location_data, image_id) except exception.Invalid as e: msg = (_("Failed to activate image. Got error: %s") % encodeutils.exception_to_unicode(e)) LOG.warn(msg) raise HTTPBadRequest(explanation=msg, request=req, content_type="text/plain") def _upload_and_activate(self, req, image_meta): """ Safely uploads the image data in the request payload and activates the image in the registry after a successful upload. 
:param req: The WSGI/Webob Request object :param image_meta: Mapping of metadata about image :returns: Mapping of updated image data """ location_data = self._upload(req, image_meta) image_id = image_meta['id'] LOG.info(_LI("Uploaded data of image %s from request " "payload successfully."), image_id) if location_data: try: image_meta = self._activate(req, image_id, location_data, from_state='saving') except exception.Duplicate: raise except Exception: with excutils.save_and_reraise_exception(): # NOTE(zhiyan): Delete image data since it has already # been added to store by above _upload() call. LOG.warn(_LW("Failed to activate image %s in " "registry. About to delete image " "bits from store and update status " "to 'killed'.") % image_id) upload_utils.initiate_deletion(req, location_data, image_id) upload_utils.safe_kill(req, image_id, 'saving') else: image_meta = None return image_meta def _get_size(self, context, image_meta, location): # retrieve the image size from remote store (if not provided) try: return (image_meta.get('size', 0) or store.get_size_from_backend(location, context=context)) except store.NotFound as e: # NOTE(rajesht): The exception is logged as debug message because # the image is located at third-party server and it has nothing to # do with glance. If log.exception is used here, in that case the # log file might be flooded with exception log messages if # malicious user keeps on trying image-create using non-existent # location url. Used log.debug because administrator can # disable debug logs. 
LOG.debug(encodeutils.exception_to_unicode(e)) raise HTTPNotFound(explanation=e.msg, content_type="text/plain") except (store.UnknownScheme, store.BadStoreUri) as e: # NOTE(rajesht): See above note of store.NotFound LOG.debug(encodeutils.exception_to_unicode(e)) raise HTTPBadRequest(explanation=e.msg, content_type="text/plain") def _handle_source(self, req, image_id, image_meta, image_data): copy_from = self._copy_from(req) location = image_meta.get('location') sources = [obj for obj in (copy_from, location, image_data) if obj] if len(sources) >= 2: msg = _("It's invalid to provide multiple image sources.") LOG.warn(msg) raise HTTPBadRequest(explanation=msg, request=req, content_type="text/plain") if len(sources) == 0: return image_meta if image_data: image_meta = self._validate_image_for_activation(req, image_id, image_meta) image_meta = self._upload_and_activate(req, image_meta) elif copy_from: msg = _LI('Triggering asynchronous copy from external source') LOG.info(msg) pool = common.get_thread_pool("copy_from_eventlet_pool") pool.spawn_n(self._upload_and_activate, req, image_meta) else: if location: self._validate_image_for_activation(req, image_id, image_meta) image_size_meta = image_meta.get('size') if image_size_meta: try: image_size_store = store.get_size_from_backend( location, req.context) except (store.BadStoreUri, store.UnknownScheme) as e: LOG.debug(encodeutils.exception_to_unicode(e)) raise HTTPBadRequest(explanation=e.msg, request=req, content_type="text/plain") # NOTE(zhiyan): A returned size of zero usually means # the driver encountered an error. In this case the # size provided by the client will be used as-is. if (image_size_store and image_size_store != image_size_meta): msg = (_("Provided image size must match the stored" " image size. 
(provided size: %(ps)d, " "stored size: %(ss)d)") % {"ps": image_size_meta, "ss": image_size_store}) LOG.warn(msg) raise HTTPConflict(explanation=msg, request=req, content_type="text/plain") location_data = {'url': location, 'metadata': {}, 'status': 'active'} image_meta = self._activate(req, image_id, location_data) return image_meta def _validate_image_for_activation(self, req, id, values): """Ensures that all required image metadata values are valid.""" image = self.get_image_meta_or_404(req, id) if values['disk_format'] is None: if not image['disk_format']: msg = _("Disk format is not specified.") raise HTTPBadRequest(explanation=msg, request=req) values['disk_format'] = image['disk_format'] if values['container_format'] is None: if not image['container_format']: msg = _("Container format is not specified.") raise HTTPBadRequest(explanation=msg, request=req) values['container_format'] = image['container_format'] if 'name' not in values: values['name'] = image['name'] values = validate_image_meta(req, values) return values @utils.mutating def create(self, req, image_meta, image_data): """ Adds a new image to Glance. Four scenarios exist when creating an image: 1. If the image data is available directly for upload, create can be passed the image data as the request body and the metadata as the request headers. The image will initially be 'queued', during upload it will be in the 'saving' status, and then 'killed' or 'active' depending on whether the upload completed successfully. 2. If the image data exists somewhere else, you can upload indirectly from the external source using the x-glance-api-copy-from header. Once the image is uploaded, the external store is not subsequently consulted, i.e. the image content is served out from the configured glance image store. State transitions are as for option #1. 3. If the image data exists somewhere else, you can reference the source using the x-image-meta-location header. 
The image content will be served out from the external store, i.e. is never uploaded to the configured glance image store. 4. If the image data is not available yet, but you'd like reserve a spot for it, you can omit the data and a record will be created in the 'queued' state. This exists primarily to maintain backwards compatibility with OpenStack/Rackspace API semantics. The request body *must* be encoded as application/octet-stream, otherwise an HTTPBadRequest is returned. Upon a successful save of the image data and metadata, a response containing metadata about the image is returned, including its opaque identifier. :param req: The WSGI/Webob Request object :param image_meta: Mapping of metadata about image :param image_data: Actual image data that is to be stored :raises HTTPBadRequest: if x-image-meta-location is missing and the request body is not application/octet-stream image data. """ self._enforce(req, 'add_image') is_public = image_meta.get('is_public') if is_public: self._enforce(req, 'publicize_image') if Controller._copy_from(req): self._enforce(req, 'copy_from') if image_data or Controller._copy_from(req): self._enforce(req, 'upload_image') self._enforce_create_protected_props(image_meta['properties'].keys(), req) self._enforce_image_property_quota(image_meta, req=req) image_meta = self._reserve(req, image_meta) id = image_meta['id'] image_meta = self._handle_source(req, id, image_meta, image_data) location_uri = image_meta.get('location') if location_uri: self.update_store_acls(req, id, location_uri, public=is_public) # Prevent client from learning the location, as it # could contain security credentials image_meta = redact_loc(image_meta) return {'image_meta': image_meta} @utils.mutating def update(self, req, id, image_meta, image_data): """ Updates an existing image with the registry. 
:param request: The WSGI/Webob Request object :param id: The opaque image identifier :returns: Returns the updated image information as a mapping """ self._enforce(req, 'modify_image') is_public = image_meta.get('is_public') if is_public: self._enforce(req, 'publicize_image') if Controller._copy_from(req): self._enforce(req, 'copy_from') if image_data or Controller._copy_from(req): self._enforce(req, 'upload_image') orig_image_meta = self.get_image_meta_or_404(req, id) orig_status = orig_image_meta['status'] # Do not allow any updates on a deleted image. # Fix for LP Bug #1060930 if orig_status == 'deleted': msg = _("Forbidden to update deleted image.") raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") if req.context.is_admin is False: # Once an image is 'active' only an admin can # modify certain core metadata keys for key in ACTIVE_IMMUTABLE: if ((orig_status == 'active' or orig_status == 'deactivated') and key in image_meta and image_meta.get(key) != orig_image_meta.get(key)): msg = _("Forbidden to modify '%(key)s' of %(status)s " "image.") % {'key': key, 'status': orig_status} raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") for key in IMMUTABLE: if (key in image_meta and image_meta.get(key) != orig_image_meta.get(key)): msg = _("Forbidden to modify '%s' of image.") % key raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") # The default behaviour for a PUT /images/ is to # override any properties that were previously set. This, however, # leads to a number of issues for the common use case where a caller # registers an image with some properties and then almost immediately # uploads an image file along with some more properties. Here, we # check for a special header value to be false in order to force # properties NOT to be purged. However we also disable purging of # properties if an image file is being uploaded... 
purge_props = req.headers.get('x-glance-registry-purge-props', True) purge_props = (strutils.bool_from_string(purge_props) and image_data is None) if image_data is not None and orig_status != 'queued': raise HTTPConflict(_("Cannot upload to an unqueued image")) # Only allow the Location|Copy-From fields to be modified if the # image is in queued status, which indicates that the user called # POST /images but originally supply neither a Location|Copy-From # field NOR image data location = self._external_source(image_meta, req) reactivating = orig_status != 'queued' and location activating = orig_status == 'queued' and (location or image_data) # Make image public in the backend store (if implemented) orig_or_updated_loc = location or orig_image_meta.get('location') if orig_or_updated_loc: try: if is_public is not None or location is not None: self.update_store_acls(req, id, orig_or_updated_loc, public=is_public) except store.BadStoreUri: msg = _("Invalid location: %s") % location LOG.warn(msg) raise HTTPBadRequest(explanation=msg, request=req, content_type="text/plain") if reactivating: msg = _("Attempted to update Location field for an image " "not in queued status.") raise HTTPBadRequest(explanation=msg, request=req, content_type="text/plain") # ensure requester has permissions to create/update/delete properties # according to property-protections.conf orig_keys = set(orig_image_meta['properties']) new_keys = set(image_meta['properties']) self._enforce_update_protected_props( orig_keys.intersection(new_keys), image_meta, orig_image_meta, req) self._enforce_create_protected_props( new_keys.difference(orig_keys), req) if purge_props: self._enforce_delete_protected_props( orig_keys.difference(new_keys), image_meta, orig_image_meta, req) self._enforce_image_property_quota(image_meta, orig_image_meta=orig_image_meta, purge_props=purge_props, req=req) try: if location: image_meta['size'] = self._get_size(req.context, image_meta, location) image_meta = 
registry.update_image_metadata(req.context, id, image_meta, purge_props) if activating: image_meta = self._handle_source(req, id, image_meta, image_data) except exception.Invalid as e: msg = (_("Failed to update image metadata. Got error: %s") % encodeutils.exception_to_unicode(e)) LOG.warn(msg) raise HTTPBadRequest(explanation=msg, request=req, content_type="text/plain") except exception.ImageNotFound as e: msg = (_("Failed to find image to update: %s") % encodeutils.exception_to_unicode(e)) LOG.warn(msg) raise HTTPNotFound(explanation=msg, request=req, content_type="text/plain") except exception.Forbidden as e: msg = (_("Forbidden to update image: %s") % encodeutils.exception_to_unicode(e)) LOG.warn(msg) raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") except (exception.Conflict, exception.Duplicate) as e: LOG.warn(encodeutils.exception_to_unicode(e)) raise HTTPConflict(body=_('Image operation conflicts'), request=req, content_type='text/plain') else: self.notifier.info('image.update', redact_loc(image_meta)) # Prevent client from learning the location, as it # could contain security credentials image_meta = redact_loc(image_meta) self._enforce_read_protected_props(image_meta, req) return {'image_meta': image_meta} @utils.mutating def delete(self, req, id): """ Deletes the image and all its chunks from the Glance :param req: The WSGI/Webob Request object :param id: The opaque image identifier :raises HttpBadRequest: if image registry is invalid :raises HttpNotFound: if image or any chunk is not available :raises HttpUnauthorized: if image or any chunk is not deleteable by the requesting user """ self._enforce(req, 'delete_image') image = self.get_image_meta_or_404(req, id) if image['protected']: msg = _("Image is protected") LOG.warn(msg) raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") if image['status'] == 'pending_delete': msg = (_("Forbidden to delete a %s image.") % image['status']) LOG.warn(msg) raise 
HTTPForbidden(explanation=msg, request=req, content_type="text/plain") elif image['status'] == 'deleted': msg = _("Image %s not found.") % id LOG.warn(msg) raise HTTPNotFound(explanation=msg, request=req, content_type="text/plain") if image['location'] and CONF.delayed_delete: status = 'pending_delete' else: status = 'deleted' ori_status = image['status'] try: # Update the image from the registry first, since we rely on it # for authorization checks. # See https://bugs.launchpad.net/glance/+bug/1065187 image = registry.update_image_metadata(req.context, id, {'status': status}) try: # The image's location field may be None in the case # of a saving or queued image, therefore don't ask a backend # to delete the image if the backend doesn't yet store it. # See https://bugs.launchpad.net/glance/+bug/747799 if image['location']: for loc_data in image['location_data']: if loc_data['status'] == 'active': upload_utils.initiate_deletion(req, loc_data, id) except Exception: with excutils.save_and_reraise_exception(): registry.update_image_metadata(req.context, id, {'status': ori_status}) registry.delete_image_metadata(req.context, id) except exception.ImageNotFound as e: msg = (_("Failed to find image to delete: %s") % encodeutils.exception_to_unicode(e)) LOG.warn(msg) raise HTTPNotFound(explanation=msg, request=req, content_type="text/plain") except exception.Forbidden as e: msg = (_("Forbidden to delete image: %s") % encodeutils.exception_to_unicode(e)) LOG.warn(msg) raise HTTPForbidden(explanation=msg, request=req, content_type="text/plain") except store.InUseByStore as e: msg = (_("Image %(id)s could not be deleted because it is in use: " "%(exc)s") % {"id": id, "exc": encodeutils.exception_to_unicode(e)}) LOG.warn(msg) raise HTTPConflict(explanation=msg, request=req, content_type="text/plain") else: self.notifier.info('image.delete', redact_loc(image)) return Response(body='', status=200) def get_store_or_400(self, request, scheme): """ Grabs the storage backend for the 
supplied store name or raises an HTTPBadRequest (400) response :param request: The WSGI/Webob Request object :param scheme: The backend store scheme :raises HTTPBadRequest: if store does not exist """ try: return store.get_store_from_scheme(scheme) except store.UnknownScheme: msg = _("Store for scheme %s not found") % scheme LOG.warn(msg) raise HTTPBadRequest(explanation=msg, request=request, content_type='text/plain') class ImageDeserializer(wsgi.JSONRequestDeserializer): """Handles deserialization of specific controller method requests.""" def _deserialize(self, request): result = {} try: result['image_meta'] = utils.get_image_meta_from_headers(request) except exception.InvalidParameterValue as e: msg = encodeutils.exception_to_unicode(e) LOG.warn(msg, exc_info=True) raise HTTPBadRequest(explanation=e.msg, request=request) image_meta = result['image_meta'] image_meta = validate_image_meta(request, image_meta) if request.content_length: image_size = request.content_length elif 'size' in image_meta: image_size = image_meta['size'] else: image_size = None data = request.body_file if self.has_body(request) else None if image_size is None and data is not None: data = utils.LimitingReader(data, CONF.image_size_cap) # NOTE(bcwaldon): this is a hack to make sure the downstream code # gets the correct image data request.body_file = data elif image_size is not None and image_size > CONF.image_size_cap: max_image_size = CONF.image_size_cap msg = (_("Denying attempt to upload image larger than %d" " bytes.") % max_image_size) LOG.warn(msg) raise HTTPBadRequest(explanation=msg, request=request) result['image_data'] = data return result def create(self, request): return self._deserialize(request) def update(self, request): return self._deserialize(request) class ImageSerializer(wsgi.JSONResponseSerializer): """Handles serialization of specific controller method responses.""" def __init__(self): self.notifier = notifier.Notifier() def _inject_location_header(self, response, 
image_meta): location = self._get_image_location(image_meta) if six.PY2: location = location.encode('utf-8') response.headers['Location'] = location def _inject_checksum_header(self, response, image_meta): if image_meta['checksum'] is not None: checksum = image_meta['checksum'] if six.PY2: checksum = checksum.encode('utf-8') response.headers['ETag'] = checksum def _inject_image_meta_headers(self, response, image_meta): """ Given a response and mapping of image metadata, injects the Response with a set of HTTP headers for the image metadata. Each main image metadata field is injected as a HTTP header with key 'x-image-meta-' except for the properties field, which is further broken out into a set of 'x-image-meta-property-' headers :param response: The Webob Response object :param image_meta: Mapping of image metadata """ headers = utils.image_meta_to_http_headers(image_meta) for k, v in headers.items(): if six.PY3: response.headers[str(k)] = str(v) else: response.headers[k.encode('utf-8')] = v.encode('utf-8') def _get_image_location(self, image_meta): """Build a relative url to reach the image defined by image_meta.""" return "/v1/images/%s" % image_meta['id'] def meta(self, response, result): image_meta = result['image_meta'] self._inject_image_meta_headers(response, image_meta) self._inject_checksum_header(response, image_meta) return response def show(self, response, result): image_meta = result['image_meta'] image_iter = result['image_iterator'] # image_meta['size'] should be an int, but could possibly be a str expected_size = int(image_meta['size']) response.app_iter = common.size_checked_iter( response, image_meta, expected_size, image_iter, self.notifier) # Using app_iter blanks content-length, so we set it here... 
        response.headers['Content-Length'] = str(image_meta['size'])
        response.headers['Content-Type'] = 'application/octet-stream'
        self._inject_image_meta_headers(response, image_meta)
        self._inject_checksum_header(response, image_meta)
        return response

    def update(self, response, result):
        image_meta = result['image_meta']
        response.body = self.to_json(dict(image=image_meta))
        response.headers['Content-Type'] = 'application/json'
        self._inject_checksum_header(response, image_meta)
        return response

    def create(self, response, result):
        image_meta = result['image_meta']
        response.status = 201
        response.headers['Content-Type'] = 'application/json'
        response.body = self.to_json(dict(image=image_meta))
        self._inject_location_header(response, image_meta)
        self._inject_checksum_header(response, image_meta)
        return response


def create_resource():
    """Images resource factory method"""
    deserializer = ImageDeserializer()
    serializer = ImageSerializer()
    return wsgi.Resource(Controller(), deserializer, serializer)

glance-16.0.0/glance/api/v1/__init__.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
SUPPORTED_FILTERS = ['name', 'status', 'container_format', 'disk_format',
                     'min_ram', 'min_disk', 'size_min', 'size_max',
                     'is_public', 'changes-since', 'protected']

SUPPORTED_PARAMS = ('limit', 'marker', 'sort_key', 'sort_dir')

# Metadata which only an admin can change once the image is active
ACTIVE_IMMUTABLE = ('size', 'checksum')

# Metadata which cannot be changed (irrespective of the current image state)
IMMUTABLE = ('status', 'id')

glance-16.0.0/glance/api/v1/controller.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
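The ACTIVE_IMMUTABLE and IMMUTABLE tuples above drive the update checks in the images controller (see the non-admin branch of Controller.update). Below is a simplified, self-contained sketch of that gating logic — the helper name `forbidden_changes` is illustrative and not part of Glance; it only mirrors how the tuples are consulted.

```python
# Illustrative sketch (not Glance's actual controller code) of how the
# IMMUTABLE / ACTIVE_IMMUTABLE tuples gate metadata updates for non-admins.

ACTIVE_IMMUTABLE = ('size', 'checksum')
IMMUTABLE = ('status', 'id')


def forbidden_changes(orig_meta, new_meta, is_admin=False):
    """Return the metadata keys this caller may not modify."""
    if is_admin:
        # The controller applies both checks only when is_admin is False.
        return []
    forbidden = []
    if orig_meta.get('status') in ('active', 'deactivated'):
        # Core keys frozen once the image is active (or deactivated).
        forbidden.extend(k for k in ACTIVE_IMMUTABLE
                         if k in new_meta and new_meta[k] != orig_meta.get(k))
    # Keys frozen irrespective of the current image state.
    forbidden.extend(k for k in IMMUTABLE
                     if k in new_meta and new_meta[k] != orig_meta.get(k))
    return forbidden


orig = {'id': 'abc', 'status': 'active', 'size': 512, 'checksum': 'd41d'}
print(forbidden_changes(orig, {'size': 1024}))                 # ['size']
print(forbidden_changes(orig, {'size': 1024}, is_admin=True))  # []
print(forbidden_changes(orig, {'status': 'queued'}))           # ['status']
```

Any key returned here would cause the real controller to raise HTTPForbidden rather than apply the update.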
import glance_store as store
from oslo_log import log as logging
import webob.exc

from glance.common import exception
from glance.i18n import _
import glance.registry.client.v1.api as registry

LOG = logging.getLogger(__name__)


class BaseController(object):
    def get_image_meta_or_404(self, request, image_id):
        """
        Grabs the image metadata for an image with a supplied
        identifier or raises an HTTPNotFound (404) response

        :param request: The WSGI/Webob Request object
        :param image_id: The opaque image identifier

        :raises HTTPNotFound: if image does not exist
        """
        context = request.context
        try:
            return registry.get_image_metadata(context, image_id)
        except exception.NotFound:
            LOG.debug("Image with identifier %s not found", image_id)
            msg = _("Image with identifier %s not found") % image_id
            raise webob.exc.HTTPNotFound(
                msg, request=request, content_type='text/plain')
        except exception.Forbidden:
            LOG.debug("Forbidden image access")
            raise webob.exc.HTTPForbidden(_("Forbidden image access"),
                                          request=request,
                                          content_type='text/plain')

    def get_active_image_meta_or_error(self, request, image_id):
        """
        Same as get_image_meta_or_404 except that it will raise a 403 if the
        image is deactivated or 404 if the image is otherwise not 'active'.
""" image = self.get_image_meta_or_404(request, image_id) if image['status'] == 'deactivated': LOG.debug("Image %s is deactivated", image_id) msg = _("Image %s is deactivated") % image_id raise webob.exc.HTTPForbidden( msg, request=request, content_type='text/plain') if image['status'] != 'active': LOG.debug("Image %s is not active", image_id) msg = _("Image %s is not active") % image_id raise webob.exc.HTTPNotFound( msg, request=request, content_type='text/plain') return image def update_store_acls(self, req, image_id, location_uri, public=False): if location_uri: try: read_tenants = [] write_tenants = [] members = registry.get_image_members(req.context, image_id) if members: for member in members: if member['can_share']: write_tenants.append(member['member_id']) else: read_tenants.append(member['member_id']) store.set_acls(location_uri, public=public, read_tenants=read_tenants, write_tenants=write_tenants, context=req.context) except store.UnknownScheme: msg = _("Store for image_id not found: %s") % image_id raise webob.exc.HTTPBadRequest(explanation=msg, request=req, content_type='text/plain') except store.NotFound: msg = _("Data for image_id not found: %s") % image_id raise webob.exc.HTTPNotFound(explanation=msg, request=req, content_type='text/plain') glance-16.0.0/glance/api/v1/filters.py0000666000175100017510000000255313245511421017432 0ustar zuulzuul00000000000000# Copyright 2012, Piston Cloud Computing, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.


def validate(filter, value):
    return FILTER_FUNCTIONS.get(filter, lambda v: True)(value)


def validate_int_in_range(min=0, max=None):
    def _validator(v):
        try:
            if max is None:
                return min <= int(v)
            return min <= int(v) <= max
        except ValueError:
            return False
    return _validator


def validate_boolean(v):
    return v.lower() in ('none', 'true', 'false', '1', '0')


FILTER_FUNCTIONS = {'size_max': validate_int_in_range(),  # build validator
                    'size_min': validate_int_in_range(),  # build validator
                    'min_ram': validate_int_in_range(),  # build validator
                    'protected': validate_boolean,
                    'is_public': validate_boolean,
                    }

glance-16.0.0/glance/api/v1/upload_utils.py

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import glance_store as store_api
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
from oslo_utils import excutils
import webob.exc

from glance.common import exception
from glance.common import store_utils
from glance.common import utils
import glance.db
from glance.i18n import _, _LE, _LI
import glance.registry.client.v1.api as registry

CONF = cfg.CONF
LOG = logging.getLogger(__name__)


def initiate_deletion(req, location_data, id):
    """
    Deletes image data from the location of backend store.
    :param req: The WSGI/Webob Request object
    :param location_data: Location to the image data in a data store
    :param id: Opaque image identifier
    """
    store_utils.delete_image_location_from_backend(req.context,
                                                   id,
                                                   location_data)


def _kill(req, image_id, from_state):
    """
    Marks the image status to `killed`.

    :param req: The WSGI/Webob Request object
    :param image_id: Opaque image identifier
    :param from_state: Permitted current status for transition to 'killed'
    """
    # TODO(dosaboy): http://docs.openstack.org/developer/glance/statuses.html
    # needs updating to reflect the fact that queued->killed and saving->killed
    # are both allowed.
    registry.update_image_metadata(req.context, image_id,
                                   {'status': 'killed'},
                                   from_state=from_state)


def safe_kill(req, image_id, from_state):
    """
    Mark image killed without raising exceptions if it fails.

    Since _kill is meant to be called from exceptions handlers, it should
    not raise itself, rather it should just log its error.

    :param req: The WSGI/Webob Request object
    :param image_id: Opaque image identifier
    :param from_state: Permitted current status for transition to 'killed'
    """
    try:
        _kill(req, image_id, from_state)
    except Exception:
        LOG.exception(_LE("Unable to kill image %(id)s: ") % {'id': image_id})


def upload_data_to_store(req, image_meta, image_data, store, notifier):
    """
    Upload image data to specified store.

    Upload image data to the store and cleans up on error.
    """
    image_id = image_meta['id']

    db_api = glance.db.get_api(v1_mode=True)
    image_size = image_meta.get('size')

    try:
        # By default image_data will be passed as CooperativeReader object.
        # But if 'user_storage_quota' is enabled and 'remaining' is not None
        # then it will be passed as object of LimitingReader to
        # 'store_add_to_backend' method.
image_data = utils.CooperativeReader(image_data) remaining = glance.api.common.check_quota( req.context, image_size, db_api, image_id=image_id) if remaining is not None: image_data = utils.LimitingReader(image_data, remaining) (uri, size, checksum, location_metadata) = store_api.store_add_to_backend( image_meta['id'], image_data, image_meta['size'], store, context=req.context) location_data = {'url': uri, 'metadata': location_metadata, 'status': 'active'} try: # recheck the quota in case there were simultaneous uploads that # did not provide the size glance.api.common.check_quota( req.context, size, db_api, image_id=image_id) except exception.StorageQuotaFull: with excutils.save_and_reraise_exception(): LOG.info(_LI('Cleaning up %s after exceeding ' 'the quota'), image_id) store_utils.safe_delete_from_backend( req.context, image_meta['id'], location_data) def _kill_mismatched(image_meta, attr, actual): supplied = image_meta.get(attr) if supplied and supplied != actual: msg = (_("Supplied %(attr)s (%(supplied)s) and " "%(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image " "status to 'killed'.") % {'attr': attr, 'supplied': supplied, 'actual': actual}) LOG.error(msg) safe_kill(req, image_id, 'saving') initiate_deletion(req, location_data, image_id) raise webob.exc.HTTPBadRequest(explanation=msg, content_type="text/plain", request=req) # Verify any supplied size/checksum value matches size/checksum # returned from store when adding image _kill_mismatched(image_meta, 'size', size) _kill_mismatched(image_meta, 'checksum', checksum) # Update the database with the checksum returned # from the backend store LOG.debug("Updating image %(image_id)s data. 
" "Checksum set to %(checksum)s, size set " "to %(size)d", {'image_id': image_id, 'checksum': checksum, 'size': size}) update_data = {'checksum': checksum, 'size': size} try: try: state = 'saving' image_meta = registry.update_image_metadata(req.context, image_id, update_data, from_state=state) except exception.Duplicate: image = registry.get_image_metadata(req.context, image_id) if image['status'] == 'deleted': raise exception.ImageNotFound() else: raise except exception.NotAuthenticated as e: # Delete image data due to possible token expiration. LOG.debug("Authentication error - the token may have " "expired during file upload. Deleting image data for " " %s " % image_id) initiate_deletion(req, location_data, image_id) raise webob.exc.HTTPUnauthorized(explanation=e.msg, request=req) except exception.ImageNotFound: msg = _("Image %s could not be found after upload. The image may" " have been deleted during the upload.") % image_id LOG.info(msg) # NOTE(jculp): we need to clean up the datastore if an image # resource is deleted while the image data is being uploaded # # We get "location_data" from above call to store.add(), any # exceptions that occur there handle this same issue internally, # Since this is store-agnostic, should apply to all stores. initiate_deletion(req, location_data, image_id) raise webob.exc.HTTPPreconditionFailed(explanation=msg, request=req, content_type='text/plain') except store_api.StoreAddDisabled: msg = _("Error in store configuration. 
Adding images to store " "is disabled.") LOG.exception(msg) safe_kill(req, image_id, 'saving') notifier.error('image.upload', msg) raise webob.exc.HTTPGone(explanation=msg, request=req, content_type='text/plain') except (store_api.Duplicate, exception.Duplicate) as e: msg = (_("Attempt to upload duplicate image: %s") % encodeutils.exception_to_unicode(e)) LOG.warn(msg) # NOTE(dosaboy): do not delete the image since it is likely that this # conflict is a result of another concurrent upload that will be # successful. notifier.error('image.upload', msg) raise webob.exc.HTTPConflict(explanation=msg, request=req, content_type="text/plain") except exception.Forbidden as e: msg = (_("Forbidden upload attempt: %s") % encodeutils.exception_to_unicode(e)) LOG.warn(msg) safe_kill(req, image_id, 'saving') notifier.error('image.upload', msg) raise webob.exc.HTTPForbidden(explanation=msg, request=req, content_type="text/plain") except store_api.StorageFull as e: msg = (_("Image storage media is full: %s") % encodeutils.exception_to_unicode(e)) LOG.error(msg) safe_kill(req, image_id, 'saving') notifier.error('image.upload', msg) raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg, request=req, content_type='text/plain') except store_api.StorageWriteDenied as e: msg = (_("Insufficient permissions on image storage media: %s") % encodeutils.exception_to_unicode(e)) LOG.error(msg) safe_kill(req, image_id, 'saving') notifier.error('image.upload', msg) raise webob.exc.HTTPServiceUnavailable(explanation=msg, request=req, content_type='text/plain') except exception.ImageSizeLimitExceeded as e: msg = (_("Denying attempt to upload image larger than %d bytes.") % CONF.image_size_cap) LOG.warn(msg) safe_kill(req, image_id, 'saving') notifier.error('image.upload', msg) raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg, request=req, content_type='text/plain') except exception.StorageQuotaFull as e: msg = (_("Denying attempt to upload image because it exceeds the " "quota: %s") % 
encodeutils.exception_to_unicode(e)) LOG.warn(msg) safe_kill(req, image_id, 'saving') notifier.error('image.upload', msg) raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg, request=req, content_type='text/plain') except webob.exc.HTTPError: # NOTE(bcwaldon): Ideally, we would just call 'raise' here, # but something in the above function calls is affecting the # exception context and we must explicitly re-raise the # caught exception. msg = _LE("Received HTTP error while uploading image %s") % image_id notifier.error('image.upload', msg) with excutils.save_and_reraise_exception(): LOG.exception(msg) safe_kill(req, image_id, 'saving') except (ValueError, IOError) as e: msg = _("Client disconnected before sending all data to backend") LOG.warn(msg) safe_kill(req, image_id, 'saving') raise webob.exc.HTTPBadRequest(explanation=msg, content_type="text/plain", request=req) except Exception as e: msg = _("Failed to upload image %s") % image_id LOG.exception(msg) safe_kill(req, image_id, 'saving') notifier.error('image.upload', msg) raise webob.exc.HTTPInternalServerError(explanation=msg, request=req, content_type='text/plain') return image_meta, location_data glance-16.0.0/glance/api/cached_images.py0000666000175100017510000000721113245511421020164 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
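The upload path above translates each storage or engine failure into a specific HTTP error, and "kills" the half-written image in `saving` status for every failure except a duplicate upload, which is kept because the concurrent upload may still succeed. A minimal standalone sketch of that mapping pattern, using illustrative exception classes rather than Glance's real `glance_store`/`webob` types:

```python
# Illustrative stand-ins for the store/engine exceptions handled above.
class StorageFull(Exception): pass
class StorageWriteDenied(Exception): pass
class Duplicate(Exception): pass

# (exception type) -> (HTTP status, should the 'saving' image be killed?)
UPLOAD_ERROR_MAP = {
    StorageFull: (413, True),          # Request Entity Too Large
    StorageWriteDenied: (503, True),   # Service Unavailable
    Duplicate: (409, False),           # Conflict: keep the concurrent upload
}

def classify_upload_error(exc):
    """Return (status, kill) for a failed upload; 500 for anything unknown."""
    for exc_type, outcome in UPLOAD_ERROR_MAP.items():
        if isinstance(exc, exc_type):
            return outcome
    return (500, True)
```

The real handler additionally emits an `image.upload` notification for every branch; that side effect is omitted here to keep the sketch self-contained.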
""" Controller for Image Cache Management API """ from oslo_log import log as logging import webob.exc from glance.api import policy from glance.api.v1 import controller from glance.common import exception from glance.common import wsgi from glance import image_cache LOG = logging.getLogger(__name__) class Controller(controller.BaseController): """ A controller for managing cached images. """ def __init__(self): self.cache = image_cache.ImageCache() self.policy = policy.Enforcer() def _enforce(self, req): """Authorize request against 'manage_image_cache' policy""" try: self.policy.enforce(req.context, 'manage_image_cache', {}) except exception.Forbidden: LOG.debug("User not permitted to manage the image cache") raise webob.exc.HTTPForbidden() def get_cached_images(self, req): """ GET /cached_images Returns a mapping of records about cached images. """ self._enforce(req) images = self.cache.get_cached_images() return dict(cached_images=images) def delete_cached_image(self, req, image_id): """ DELETE /cached_images/ Removes an image from the cache. """ self._enforce(req) self.cache.delete_cached_image(image_id) def delete_cached_images(self, req): """ DELETE /cached_images - Clear all active cached images Removes all images from the cache. """ self._enforce(req) return dict(num_deleted=self.cache.delete_all_cached_images()) def get_queued_images(self, req): """ GET /queued_images Returns a mapping of records about queued images. """ self._enforce(req) images = self.cache.get_queued_images() return dict(queued_images=images) def queue_image(self, req, image_id): """ PUT /queued_images/ Queues an image for caching. We do not check to see if the image is in the registry here. That is done by the prefetcher... """ self._enforce(req) self.cache.queue_image(image_id) def delete_queued_image(self, req, image_id): """ DELETE /queued_images/ Removes an image from the cache. 
""" self._enforce(req) self.cache.delete_queued_image(image_id) def delete_queued_images(self, req): """ DELETE /queued_images - Clear all active queued images Removes all images from the cache. """ self._enforce(req) return dict(num_deleted=self.cache.delete_all_queued_images()) class CachedImageDeserializer(wsgi.JSONRequestDeserializer): pass class CachedImageSerializer(wsgi.JSONResponseSerializer): pass def create_resource(): """Cached Images resource factory method""" deserializer = CachedImageDeserializer() serializer = CachedImageSerializer() return wsgi.Resource(Controller(), deserializer, serializer) glance-16.0.0/glance/db/0000775000175100017510000000000013245511661014675 5ustar zuulzuul00000000000000glance-16.0.0/glance/db/sqlalchemy/0000775000175100017510000000000013245511661017037 5ustar zuulzuul00000000000000glance-16.0.0/glance/db/sqlalchemy/metadef_api/0000775000175100017510000000000013245511661021275 5ustar zuulzuul00000000000000glance-16.0.0/glance/db/sqlalchemy/metadef_api/tag.py0000666000175100017510000001706413245511421022426 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
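Every handler in the cache-management controller above follows the same shape: enforce the single `manage_image_cache` policy, then delegate to the cache object. A toy sketch of that enforce-then-dispatch pattern; the `Forbidden` class and the in-memory cache are stand-ins for `glance.common.exception` and `glance.image_cache.ImageCache`:

```python
class Forbidden(Exception):
    """Stand-in for the policy Forbidden exception."""

class FakeCache:
    """Stand-in for ImageCache, backed by a plain dict."""
    def __init__(self):
        self.cached = {}
    def get_cached_images(self):
        return list(self.cached)
    def delete_all_cached_images(self):
        n = len(self.cached)
        self.cached.clear()
        return n

class CacheController:
    def __init__(self, cache, is_allowed):
        self.cache = cache
        self.is_allowed = is_allowed  # stands in for policy.Enforcer
    def _enforce(self, context):
        # Every public method checks the same policy before doing work.
        if not self.is_allowed(context):
            raise Forbidden("manage_image_cache denied")
    def get_cached_images(self, context):
        self._enforce(context)
        return {'cached_images': self.cache.get_cached_images()}
    def delete_cached_images(self, context):
        self._enforce(context)
        return {'num_deleted': self.cache.delete_all_cached_images()}
```

In the real controller the `Forbidden` is converted to an HTTP 403 by raising `webob.exc.HTTPForbidden`.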
from oslo_db import exception as db_exc from oslo_db.sqlalchemy.utils import paginate_query from oslo_log import log as logging from sqlalchemy import func import sqlalchemy.orm as sa_orm from glance.common import exception as exc from glance.db.sqlalchemy.metadef_api import namespace as namespace_api import glance.db.sqlalchemy.metadef_api.utils as metadef_utils from glance.db.sqlalchemy import models_metadef as models from glance.i18n import _LW LOG = logging.getLogger(__name__) def _get(context, id, session): try: query = (session.query(models.MetadefTag).filter_by(id=id)) metadef_tag = query.one() except sa_orm.exc.NoResultFound: msg = (_LW("Metadata tag not found for id %s") % id) LOG.warn(msg) raise exc.MetadefTagNotFound(message=msg) return metadef_tag def _get_by_name(context, namespace_name, name, session): namespace = namespace_api.get(context, namespace_name, session) try: query = (session.query(models.MetadefTag).filter_by( name=name, namespace_id=namespace['id'])) metadef_tag = query.one() except sa_orm.exc.NoResultFound: LOG.debug("The metadata tag with name=%(name)s" " was not found in namespace=%(namespace_name)s.", {'name': name, 'namespace_name': namespace_name}) raise exc.MetadefTagNotFound(name=name, namespace_name=namespace_name) return metadef_tag def get_all(context, namespace_name, session, filters=None, marker=None, limit=None, sort_key='created_at', sort_dir='desc'): """Get all tags that match zero or more filters. :param filters: dict of filter keys and values. 
:param marker: tag id after which to start page :param limit: maximum number of namespaces to return :param sort_key: namespace attribute by which results should be sorted :param sort_dir: direction in which results should be sorted (asc, desc) """ namespace = namespace_api.get(context, namespace_name, session) query = (session.query(models.MetadefTag).filter_by( namespace_id=namespace['id'])) marker_tag = None if marker is not None: marker_tag = _get(context, marker, session) sort_keys = ['created_at', 'id'] sort_keys.insert(0, sort_key) if sort_key not in sort_keys else sort_keys query = paginate_query(query=query, model=models.MetadefTag, limit=limit, sort_keys=sort_keys, marker=marker_tag, sort_dir=sort_dir) metadef_tag = query.all() metadef_tag_list = [] for tag in metadef_tag: metadef_tag_list.append(tag.to_dict()) return metadef_tag_list def create(context, namespace_name, values, session): namespace = namespace_api.get(context, namespace_name, session) values.update({'namespace_id': namespace['id']}) metadef_tag = models.MetadefTag() metadef_utils.drop_protected_attrs(models.MetadefTag, values) metadef_tag.update(values.copy()) try: metadef_tag.save(session=session) except db_exc.DBDuplicateEntry: LOG.debug("A metadata tag name=%(name)s" " already exists in namespace=%(namespace_name)s." 
          " (Please note that metadata tag names are"
          " case insensitive).",
          {'name': metadef_tag.name,
           'namespace_name': namespace_name})
        raise exc.MetadefDuplicateTag(
            name=metadef_tag.name, namespace_name=namespace_name)

    return metadef_tag.to_dict()


def create_tags(context, namespace_name, tag_list, session):
    metadef_tags_list = []
    if tag_list:
        namespace = namespace_api.get(context, namespace_name, session)
        try:
            with session.begin():
                query = (session.query(models.MetadefTag).filter_by(
                    namespace_id=namespace['id']))
                query.delete(synchronize_session='fetch')

                for value in tag_list:
                    value.update({'namespace_id': namespace['id']})
                    metadef_utils.drop_protected_attrs(
                        models.MetadefTag, value)
                    metadef_tag = models.MetadefTag()
                    metadef_tag.update(value.copy())
                    metadef_tag.save(session=session)
                    metadef_tags_list.append(metadef_tag.to_dict())
        except db_exc.DBDuplicateEntry:
            LOG.debug("A metadata tag name=%(name)s"
                      " in namespace=%(namespace_name)s already exists.",
                      {'name': metadef_tag.name,
                       'namespace_name': namespace_name})
            raise exc.MetadefDuplicateTag(
                name=metadef_tag.name, namespace_name=namespace_name)

    return metadef_tags_list


def get(context, namespace_name, name, session):
    metadef_tag = _get_by_name(context, namespace_name, name, session)
    return metadef_tag.to_dict()


def update(context, namespace_name, id, values, session):
    """Update a tag, raise if ns not found/visible or duplicate result"""
    namespace_api.get(context, namespace_name, session)
    metadata_tag = _get(context, id, session)

    metadef_utils.drop_protected_attrs(models.MetadefTag, values)
    # values['updated_at'] = timeutils.utcnow() - done by TS mixin
    try:
        metadata_tag.update(values.copy())
        metadata_tag.save(session=session)
    except db_exc.DBDuplicateEntry:
        LOG.debug("Invalid update. 
It would result in a duplicate" " metadata tag with same name=%(name)s" " in namespace=%(namespace_name)s.", {'name': values['name'], 'namespace_name': namespace_name}) raise exc.MetadefDuplicateTag( name=values['name'], namespace_name=namespace_name) return metadata_tag.to_dict() def delete(context, namespace_name, name, session): namespace_api.get(context, namespace_name, session) md_tag = _get_by_name(context, namespace_name, name, session) session.delete(md_tag) session.flush() return md_tag.to_dict() def delete_namespace_content(context, namespace_id, session): """Use this def only if the ns for the id has been verified as visible""" count = 0 query = (session.query(models.MetadefTag).filter_by( namespace_id=namespace_id)) count = query.delete(synchronize_session='fetch') return count def delete_by_namespace_name(context, namespace_name, session): namespace = namespace_api.get(context, namespace_name, session) return delete_namespace_content(context, namespace['id'], session) def count(context, namespace_name, session): """Get the count of objects for a namespace, raise if ns not found""" namespace = namespace_api.get(context, namespace_name, session) query = (session.query(func.count(models.MetadefTag.id)).filter_by( namespace_id=namespace['id'])) return query.scalar() glance-16.0.0/glance/db/sqlalchemy/metadef_api/property.py0000666000175100017510000001370213245511421023532 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import exception as db_exc from oslo_log import log as logging from sqlalchemy import func import sqlalchemy.orm as sa_orm from glance.common import exception as exc from glance.db.sqlalchemy.metadef_api import namespace as namespace_api from glance.db.sqlalchemy.metadef_api import utils as metadef_utils from glance.db.sqlalchemy import models_metadef as models from glance.i18n import _ LOG = logging.getLogger(__name__) def _get(context, property_id, session): try: query = session.query(models.MetadefProperty).filter_by(id=property_id) property_rec = query.one() except sa_orm.exc.NoResultFound: msg = (_("Metadata definition property not found for id=%s") % property_id) LOG.warn(msg) raise exc.MetadefPropertyNotFound(msg) return property_rec def _get_by_name(context, namespace_name, name, session): """get a property; raise if ns not found/visible or property not found""" namespace = namespace_api.get(context, namespace_name, session) try: query = session.query(models.MetadefProperty).filter_by( name=name, namespace_id=namespace['id']) property_rec = query.one() except sa_orm.exc.NoResultFound: LOG.debug("The metadata definition property with name=%(name)s" " was not found in namespace=%(namespace_name)s.", {'name': name, 'namespace_name': namespace_name}) raise exc.MetadefPropertyNotFound(property_name=name, namespace_name=namespace_name) return property_rec def get(context, namespace_name, name, session): """get a property; raise if ns not found/visible or property not found""" property_rec = _get_by_name(context, namespace_name, name, session) return property_rec.to_dict() def get_all(context, namespace_name, session): namespace = namespace_api.get(context, namespace_name, session) query = session.query(models.MetadefProperty).filter_by( namespace_id=namespace['id']) properties = query.all() properties_list = [] for prop in properties: 
properties_list.append(prop.to_dict()) return properties_list def create(context, namespace_name, values, session): namespace = namespace_api.get(context, namespace_name, session) values.update({'namespace_id': namespace['id']}) property_rec = models.MetadefProperty() metadef_utils.drop_protected_attrs(models.MetadefProperty, values) property_rec.update(values.copy()) try: property_rec.save(session=session) except db_exc.DBDuplicateEntry: LOG.debug("Can not create metadata definition property. A property" " with name=%(name)s already exists in" " namespace=%(namespace_name)s.", {'name': property_rec.name, 'namespace_name': namespace_name}) raise exc.MetadefDuplicateProperty( property_name=property_rec.name, namespace_name=namespace_name) return property_rec.to_dict() def update(context, namespace_name, property_id, values, session): """Update a property, raise if ns not found/visible or duplicate result""" namespace_api.get(context, namespace_name, session) property_rec = _get(context, property_id, session) metadef_utils.drop_protected_attrs(models.MetadefProperty, values) # values['updated_at'] = timeutils.utcnow() - done by TS mixin try: property_rec.update(values.copy()) property_rec.save(session=session) except db_exc.DBDuplicateEntry: LOG.debug("Invalid update. It would result in a duplicate" " metadata definition property with the same name=%(name)s" " in namespace=%(namespace_name)s.", {'name': property_rec.name, 'namespace_name': namespace_name}) emsg = (_("Invalid update. 
It would result in a duplicate" " metadata definition property with the same name=%(name)s" " in namespace=%(namespace_name)s.") % {'name': property_rec.name, 'namespace_name': namespace_name}) raise exc.MetadefDuplicateProperty(emsg) return property_rec.to_dict() def delete(context, namespace_name, property_name, session): property_rec = _get_by_name( context, namespace_name, property_name, session) if property_rec: session.delete(property_rec) session.flush() return property_rec.to_dict() def delete_namespace_content(context, namespace_id, session): """Use this def only if the ns for the id has been verified as visible""" count = 0 query = session.query(models.MetadefProperty).filter_by( namespace_id=namespace_id) count = query.delete(synchronize_session='fetch') return count def delete_by_namespace_name(context, namespace_name, session): namespace = namespace_api.get(context, namespace_name, session) return delete_namespace_content(context, namespace['id'], session) def count(context, namespace_name, session): """Get the count of properties for a namespace, raise if ns not found""" namespace = namespace_api.get(context, namespace_name, session) query = session.query(func.count(models.MetadefProperty.id)).filter_by( namespace_id=namespace['id']) return query.scalar() glance-16.0.0/glance/db/sqlalchemy/metadef_api/resource_type_association.py0000666000175100017510000002012513245511421027127 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import exception as db_exc from oslo_log import log as logging import sqlalchemy.orm as sa_orm from glance.common import exception as exc from glance.db.sqlalchemy.metadef_api import namespace as namespace_api from glance.db.sqlalchemy.metadef_api import resource_type as resource_type_api from glance.db.sqlalchemy.metadef_api import utils as metadef_utils from glance.db.sqlalchemy import models_metadef as models LOG = logging.getLogger(__name__) def _to_db_dict(namespace_id, resource_type_id, model_dict): """transform a model dict to a metadef_namespace_resource_type dict""" db_dict = {'namespace_id': namespace_id, 'resource_type_id': resource_type_id, 'properties_target': model_dict['properties_target'], 'prefix': model_dict['prefix']} return db_dict def _to_model_dict(resource_type_name, ns_res_type_dict): """transform a metadef_namespace_resource_type dict to a model dict""" model_dict = {'name': resource_type_name, 'properties_target': ns_res_type_dict['properties_target'], 'prefix': ns_res_type_dict['prefix'], 'created_at': ns_res_type_dict['created_at'], 'updated_at': ns_res_type_dict['updated_at']} return model_dict def _set_model_dict(resource_type_name, properties_target, prefix, created_at, updated_at): """return a model dict set with the passed in key values""" model_dict = {'name': resource_type_name, 'properties_target': properties_target, 'prefix': prefix, 'created_at': created_at, 'updated_at': updated_at} return model_dict def _get(context, namespace_name, resource_type_name, namespace_id, resource_type_id, session): """Get a namespace resource_type association""" # visibility check assumed done in calling routine via namespace_get try: query = session.query(models.MetadefNamespaceResourceType).filter_by( namespace_id=namespace_id, resource_type_id=resource_type_id) db_rec = query.one() except sa_orm.exc.NoResultFound: LOG.debug("The 
metadata definition resource-type association of" " resource_type=%(resource_type_name)s to" " namespace_name=%(namespace_name)s was not found.", {'resource_type_name': resource_type_name, 'namespace_name': namespace_name}) raise exc.MetadefResourceTypeAssociationNotFound( resource_type_name=resource_type_name, namespace_name=namespace_name) return db_rec def _create_association( context, namespace_name, resource_type_name, values, session): """Create an association, raise if it already exists.""" namespace_resource_type_rec = models.MetadefNamespaceResourceType() metadef_utils.drop_protected_attrs( models.MetadefNamespaceResourceType, values) # values['updated_at'] = timeutils.utcnow() # TS mixin should do this namespace_resource_type_rec.update(values.copy()) try: namespace_resource_type_rec.save(session=session) except db_exc.DBDuplicateEntry: LOG.debug("The metadata definition resource-type association of" " resource_type=%(resource_type_name)s to" " namespace=%(namespace_name)s, already exists.", {'resource_type_name': resource_type_name, 'namespace_name': namespace_name}) raise exc.MetadefDuplicateResourceTypeAssociation( resource_type_name=resource_type_name, namespace_name=namespace_name) return namespace_resource_type_rec.to_dict() def _delete(context, namespace_name, resource_type_name, namespace_id, resource_type_id, session): """Delete a resource type association or raise if not found.""" db_rec = _get(context, namespace_name, resource_type_name, namespace_id, resource_type_id, session) session.delete(db_rec) session.flush() return db_rec.to_dict() def get(context, namespace_name, resource_type_name, session): """Get a resource_type associations; raise if not found""" namespace = namespace_api.get( context, namespace_name, session) resource_type = resource_type_api.get( context, resource_type_name, session) found = _get(context, namespace_name, resource_type_name, namespace['id'], resource_type['id'], session) return _to_model_dict(resource_type_name, 
found) def get_all_by_namespace(context, namespace_name, session): """List resource_type associations by namespace, raise if not found""" # namespace get raises an exception if not visible namespace = namespace_api.get( context, namespace_name, session) db_recs = ( session.query(models.MetadefResourceType) .join(models.MetadefResourceType.associations) .filter_by(namespace_id=namespace['id']) .values(models.MetadefResourceType.name, models.MetadefNamespaceResourceType.properties_target, models.MetadefNamespaceResourceType.prefix, models.MetadefNamespaceResourceType.created_at, models.MetadefNamespaceResourceType.updated_at)) model_dict_list = [] for name, properties_target, prefix, created_at, updated_at in db_recs: model_dict_list.append( _set_model_dict (name, properties_target, prefix, created_at, updated_at) ) return model_dict_list def create(context, namespace_name, values, session): """Create an association, raise if already exists or ns not found.""" namespace = namespace_api.get( context, namespace_name, session) # if the resource_type does not exist, create it resource_type_name = values['name'] metadef_utils.drop_protected_attrs( models.MetadefNamespaceResourceType, values) try: resource_type = resource_type_api.get( context, resource_type_name, session) except exc.NotFound: resource_type = None LOG.debug("Creating resource-type %s", resource_type_name) if resource_type is None: resource_type_dict = {'name': resource_type_name, 'protected': False} resource_type = resource_type_api.create( context, resource_type_dict, session) # Create the association record, set the field values ns_resource_type_dict = _to_db_dict( namespace['id'], resource_type['id'], values) new_rec = _create_association(context, namespace_name, resource_type_name, ns_resource_type_dict, session) return _to_model_dict(resource_type_name, new_rec) def delete(context, namespace_name, resource_type_name, session): """Delete an association or raise if not found""" namespace = 
namespace_api.get( context, namespace_name, session) resource_type = resource_type_api.get( context, resource_type_name, session) deleted = _delete(context, namespace_name, resource_type_name, namespace['id'], resource_type['id'], session) return _to_model_dict(resource_type_name, deleted) def delete_namespace_content(context, namespace_id, session): """Use this def only if the ns for the id has been verified as visible""" count = 0 query = session.query(models.MetadefNamespaceResourceType).filter_by( namespace_id=namespace_id) count = query.delete(synchronize_session='fetch') return count glance-16.0.0/glance/db/sqlalchemy/metadef_api/object.py0000666000175100017510000001314313245511421023113 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
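The resource-type association module above stores rows keyed by namespace and resource-type ids while the API works with dicts keyed by resource-type name. The two pure helpers below mirror that `_to_db_dict`/`_to_model_dict` translation; timestamps are made optional here so the example is self-contained:

```python
def to_db_dict(namespace_id, resource_type_id, model_dict):
    """Model dict -> metadef_namespace_resource_type row dict."""
    return {'namespace_id': namespace_id,
            'resource_type_id': resource_type_id,
            'properties_target': model_dict['properties_target'],
            'prefix': model_dict['prefix']}

def to_model_dict(resource_type_name, ns_res_type_dict):
    """Row dict -> API-facing model dict keyed by resource-type name."""
    return {'name': resource_type_name,
            'properties_target': ns_res_type_dict['properties_target'],
            'prefix': ns_res_type_dict['prefix'],
            'created_at': ns_res_type_dict.get('created_at'),
            'updated_at': ns_res_type_dict.get('updated_at')}
```

Because the row never stores the resource-type name, callers such as `get()` and `create()` above must thread the name through alongside the row dict, which is why both directions take it as a separate argument.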
from oslo_db import exception as db_exc from oslo_log import log as logging from sqlalchemy import func import sqlalchemy.orm as sa_orm from glance.common import exception as exc from glance.db.sqlalchemy.metadef_api import namespace as namespace_api import glance.db.sqlalchemy.metadef_api.utils as metadef_utils from glance.db.sqlalchemy import models_metadef as models from glance.i18n import _ LOG = logging.getLogger(__name__) def _get(context, object_id, session): try: query = session.query(models.MetadefObject).filter_by(id=object_id) metadef_object = query.one() except sa_orm.exc.NoResultFound: msg = (_("Metadata definition object not found for id=%s") % object_id) LOG.warn(msg) raise exc.MetadefObjectNotFound(msg) return metadef_object def _get_by_name(context, namespace_name, name, session): namespace = namespace_api.get(context, namespace_name, session) try: query = session.query(models.MetadefObject).filter_by( name=name, namespace_id=namespace['id']) metadef_object = query.one() except sa_orm.exc.NoResultFound: LOG.debug("The metadata definition object with name=%(name)s" " was not found in namespace=%(namespace_name)s.", {'name': name, 'namespace_name': namespace_name}) raise exc.MetadefObjectNotFound(object_name=name, namespace_name=namespace_name) return metadef_object def get_all(context, namespace_name, session): namespace = namespace_api.get(context, namespace_name, session) query = session.query(models.MetadefObject).filter_by( namespace_id=namespace['id']) md_objects = query.all() md_objects_list = [] for obj in md_objects: md_objects_list.append(obj.to_dict()) return md_objects_list def create(context, namespace_name, values, session): namespace = namespace_api.get(context, namespace_name, session) values.update({'namespace_id': namespace['id']}) md_object = models.MetadefObject() metadef_utils.drop_protected_attrs(models.MetadefObject, values) md_object.update(values.copy()) try: md_object.save(session=session) except db_exc.DBDuplicateEntry: 
LOG.debug("A metadata definition object with name=%(name)s" " in namespace=%(namespace_name)s already exists.", {'name': md_object.name, 'namespace_name': namespace_name}) raise exc.MetadefDuplicateObject( object_name=md_object.name, namespace_name=namespace_name) return md_object.to_dict() def get(context, namespace_name, name, session): md_object = _get_by_name(context, namespace_name, name, session) return md_object.to_dict() def update(context, namespace_name, object_id, values, session): """Update an object, raise if ns not found/visible or duplicate result""" namespace_api.get(context, namespace_name, session) md_object = _get(context, object_id, session) metadef_utils.drop_protected_attrs(models.MetadefObject, values) # values['updated_at'] = timeutils.utcnow() - done by TS mixin try: md_object.update(values.copy()) md_object.save(session=session) except db_exc.DBDuplicateEntry: LOG.debug("Invalid update. It would result in a duplicate" " metadata definition object with same name=%(name)s" " in namespace=%(namespace_name)s.", {'name': md_object.name, 'namespace_name': namespace_name}) emsg = (_("Invalid update. 
It would result in a duplicate" " metadata definition object with the same name=%(name)s" " in namespace=%(namespace_name)s.") % {'name': md_object.name, 'namespace_name': namespace_name}) raise exc.MetadefDuplicateObject(emsg) return md_object.to_dict() def delete(context, namespace_name, object_name, session): namespace_api.get(context, namespace_name, session) md_object = _get_by_name(context, namespace_name, object_name, session) session.delete(md_object) session.flush() return md_object.to_dict() def delete_namespace_content(context, namespace_id, session): """Use this def only if the ns for the id has been verified as visible""" count = 0 query = session.query(models.MetadefObject).filter_by( namespace_id=namespace_id) count = query.delete(synchronize_session='fetch') return count def delete_by_namespace_name(context, namespace_name, session): namespace = namespace_api.get(context, namespace_name, session) return delete_namespace_content(context, namespace['id'], session) def count(context, namespace_name, session): """Get the count of objects for a namespace, raise if ns not found""" namespace = namespace_api.get(context, namespace_name, session) query = session.query(func.count(models.MetadefObject.id)).filter_by( namespace_id=namespace['id']) return query.scalar() glance-16.0.0/glance/db/sqlalchemy/metadef_api/namespace.py0000666000175100017510000002522713245511421023607 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from oslo_db import exception as db_exc from oslo_db.sqlalchemy.utils import paginate_query from oslo_log import log as logging import sqlalchemy.exc as sa_exc from sqlalchemy import or_ import sqlalchemy.orm as sa_orm from glance.common import exception as exc import glance.db.sqlalchemy.metadef_api as metadef_api from glance.db.sqlalchemy import models_metadef as models from glance.i18n import _ LOG = logging.getLogger(__name__) def _is_namespace_visible(context, namespace, status=None): """Return True if the namespace is visible in this context.""" # Is admin == visible if context.is_admin: return True # No owner == visible if namespace['owner'] is None: return True # Is public == visible if 'visibility' in namespace: if namespace['visibility'] == 'public': return True # context.owner has a value and is the namespace owner == visible if context.owner is not None: if context.owner == namespace['owner']: return True # Private return False def _select_namespaces_query(context, session): """Build the query to get all namespaces based on the context""" LOG.debug("context.is_admin=%(is_admin)s; context.owner=%(owner)s", {'is_admin': context.is_admin, 'owner': context.owner}) # If admin, return everything. query_ns = session.query(models.MetadefNamespace) if context.is_admin: return query_ns else: # If regular user, return only public namespaces. # However, if context.owner has a value, return both # public and private namespaces of the context.owner. 
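The checks in `_is_namespace_visible()` above reduce to a small predicate: admins see everything, ownerless and public namespaces are visible to all, and otherwise only the owner sees the namespace. Restated standalone (the signature is simplified here; the real function takes a request context object):

```python
def is_namespace_visible(is_admin, context_owner, namespace):
    """Apply the same visibility rules as _is_namespace_visible() above."""
    if is_admin:                                   # admin == visible
        return True
    if namespace.get('owner') is None:             # no owner == visible
        return True
    if namespace.get('visibility') == 'public':    # public == visible
        return True
    if context_owner is not None and context_owner == namespace['owner']:
        return True                                # owner match == visible
    return False                                   # private
```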
if context.owner is not None: query = ( query_ns.filter( or_(models.MetadefNamespace.owner == context.owner, models.MetadefNamespace.visibility == 'public'))) else: query = query_ns.filter( models.MetadefNamespace.visibility == 'public') return query def _get(context, namespace_id, session): """Get a namespace by id, raise if not found""" try: query = session.query(models.MetadefNamespace).filter_by( id=namespace_id) namespace_rec = query.one() except sa_orm.exc.NoResultFound: msg = (_("Metadata definition namespace not found for id=%s") % namespace_id) LOG.warn(msg) raise exc.MetadefNamespaceNotFound(msg) # Make sure they are allowed to view it. if not _is_namespace_visible(context, namespace_rec.to_dict()): LOG.debug("Forbidding request, metadata definition namespace=%s" " is not visible.", namespace_rec.namespace) emsg = _("Forbidding request, metadata definition namespace=%s" " is not visible.") % namespace_rec.namespace raise exc.MetadefForbidden(emsg) return namespace_rec def _get_by_name(context, name, session): """Get a namespace by name, raise if not found""" try: query = session.query(models.MetadefNamespace).filter_by( namespace=name) namespace_rec = query.one() except sa_orm.exc.NoResultFound: LOG.debug("Metadata definition namespace=%s was not found.", name) raise exc.MetadefNamespaceNotFound(namespace_name=name) # Make sure they are allowed to view it. if not _is_namespace_visible(context, namespace_rec.to_dict()): LOG.debug("Forbidding request, metadata definition namespace=%s" " is not visible.", name) emsg = _("Forbidding request, metadata definition namespace=%s" " is not visible.") % name raise exc.MetadefForbidden(emsg) return namespace_rec def _get_all(context, session, filters=None, marker=None, limit=None, sort_key='created_at', sort_dir='desc'): """Get all namespaces that match zero or more filters. :param filters: dict of filter keys and values. 
    :param marker: namespace id after which to start page
    :param limit: maximum number of namespaces to return
    :param sort_key: namespace attribute by which results should be sorted
    :param sort_dir: direction in which results should be sorted (asc, desc)
    """
    filters = filters or {}

    query = _select_namespaces_query(context, session)

    # if visibility filter, apply it to the context based query
    visibility = filters.pop('visibility', None)
    if visibility is not None:
        query = query.filter(models.MetadefNamespace.visibility == visibility)

    # if id_list filter, apply it to the context based query
    id_list = filters.pop('id_list', None)
    if id_list is not None:
        query = query.filter(models.MetadefNamespace.id.in_(id_list))

    marker_namespace = None
    if marker is not None:
        marker_namespace = _get(context, marker, session)

    sort_keys = ['created_at', 'id']
    if sort_key not in sort_keys:
        sort_keys.insert(0, sort_key)

    query = paginate_query(query=query,
                           model=models.MetadefNamespace,
                           limit=limit,
                           sort_keys=sort_keys,
                           marker=marker_namespace,
                           sort_dir=sort_dir)

    return query.all()


def _get_all_by_resource_types(context, session, filters, marker=None,
                               limit=None, sort_key=None, sort_dir=None):
    """get all visible namespaces for the specified resource_types"""

    resource_types = filters['resource_types']
    resource_type_list = resource_types.split(',')
    db_recs = (
        session.query(models.MetadefResourceType)
        .join(models.MetadefResourceType.associations)
        .filter(models.MetadefResourceType.name.in_(resource_type_list))
        .values(models.MetadefResourceType.name,
                models.MetadefNamespaceResourceType.namespace_id)
    )

    namespace_id_list = []
    for name, namespace_id in db_recs:
        namespace_id_list.append(namespace_id)

    if not namespace_id_list:
        return []

    filters2 = filters
    filters2.update({'id_list': namespace_id_list})

    return _get_all(context, session, filters2, marker, limit,
                    sort_key, sort_dir)


def get_all(context, session, marker=None, limit=None,
            sort_key=None, sort_dir=None, filters=None):
"""List all visible namespaces""" namespaces = [] filters = filters or {} if 'resource_types' in filters: namespaces = _get_all_by_resource_types( context, session, filters, marker, limit, sort_key, sort_dir) else: namespaces = _get_all( context, session, filters, marker, limit, sort_key, sort_dir) return [ns.to_dict() for ns in namespaces] def get(context, name, session): """Get a namespace by name, raise if not found""" namespace_rec = _get_by_name(context, name, session) return namespace_rec.to_dict() def create(context, values, session): """Create a namespace, raise if namespace already exists.""" namespace_name = values['namespace'] namespace = models.MetadefNamespace() metadef_api.utils.drop_protected_attrs(models.MetadefNamespace, values) namespace.update(values.copy()) try: namespace.save(session=session) except db_exc.DBDuplicateEntry: LOG.debug("Can not create the metadata definition namespace." " Namespace=%s already exists.", namespace_name) raise exc.MetadefDuplicateNamespace( namespace_name=namespace_name) return namespace.to_dict() def update(context, namespace_id, values, session): """Update a namespace, raise if not found/visible or duplicate result""" namespace_rec = _get(context, namespace_id, session) metadef_api.utils.drop_protected_attrs(models.MetadefNamespace, values) try: namespace_rec.update(values.copy()) namespace_rec.save(session=session) except db_exc.DBDuplicateEntry: LOG.debug("Invalid update. It would result in a duplicate" " metadata definition namespace with the same name of %s", values['namespace']) emsg = (_("Invalid update. 
It would result in a duplicate" " metadata definition namespace with the same name of %s") % values['namespace']) raise exc.MetadefDuplicateNamespace(emsg) return namespace_rec.to_dict() def delete(context, name, session): """Raise if not found, has references or not visible""" namespace_rec = _get_by_name(context, name, session) try: session.delete(namespace_rec) session.flush() except db_exc.DBError as e: if isinstance(e.inner_exception, sa_exc.IntegrityError): LOG.debug("Metadata definition namespace=%s not deleted. " "Other records still refer to it.", name) raise exc.MetadefIntegrityError( record_type='namespace', record_name=name) else: raise return namespace_rec.to_dict() def delete_cascade(context, name, session): """Raise if not found, has references or not visible""" namespace_rec = _get_by_name(context, name, session) with session.begin(): try: metadef_api.tag.delete_namespace_content( context, namespace_rec.id, session) metadef_api.object.delete_namespace_content( context, namespace_rec.id, session) metadef_api.property.delete_namespace_content( context, namespace_rec.id, session) metadef_api.resource_type_association.delete_namespace_content( context, namespace_rec.id, session) session.delete(namespace_rec) session.flush() except db_exc.DBError as e: if isinstance(e.inner_exception, sa_exc.IntegrityError): LOG.debug("Metadata definition namespace=%s not deleted. " "Other records still refer to it.", name) raise exc.MetadefIntegrityError( record_type='namespace', record_name=name) else: raise return namespace_rec.to_dict() glance-16.0.0/glance/db/sqlalchemy/metadef_api/__init__.py0000666000175100017510000000000013245511421023370 0ustar zuulzuul00000000000000glance-16.0.0/glance/db/sqlalchemy/metadef_api/utils.py0000666000175100017510000000163113245511421023004 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. 
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


def drop_protected_attrs(model_class, values):
    """
    Remove protected attributes from the values dictionary using the
    model's __protected_attributes__ field.
    """
    for attr in model_class.__protected_attributes__:
        if attr in values:
            del values[attr]
glance-16.0.0/glance/db/sqlalchemy/metadef_api/resource_type.py0000666000175100017510000000706313245511421024541 0ustar zuulzuul00000000000000
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
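The `drop_protected_attrs` helper in `metadef_api/utils.py` above mutates the caller's `values` dict in place, stripping any key named in the model's `__protected_attributes__` set before an insert or update is applied. A minimal standalone sketch of that behavior (the `FakeModel` class is hypothetical, standing in for a SQLAlchemy model such as `models.MetadefNamespace`; the helper body mirrors the one in `utils.py`):

```python
class FakeModel(object):
    # Hypothetical stand-in for a SQLAlchemy model that declares
    # timestamp columns as protected, as the Glance metadef models do.
    __protected_attributes__ = set(['created_at', 'updated_at'])


def drop_protected_attrs(model_class, values):
    # Same logic as glance/db/sqlalchemy/metadef_api/utils.py:
    # delete protected keys from the dict in place.
    for attr in model_class.__protected_attributes__:
        if attr in values:
            del values[attr]


values = {'namespace': 'OS::Compute', 'created_at': 'now', 'updated_at': 'now'}
drop_protected_attrs(FakeModel, values)
print(values)  # {'namespace': 'OS::Compute'}
```

Because the dict is modified in place, callers such as `namespace.create()` can pass the same `values` object straight to `model.update(values.copy())` afterwards without protected columns leaking into the row.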
from oslo_db import exception as db_exc from oslo_log import log as logging import sqlalchemy.exc as sa_exc import sqlalchemy.orm as sa_orm from glance.common import exception as exc import glance.db.sqlalchemy.metadef_api.utils as metadef_utils from glance.db.sqlalchemy import models_metadef as models LOG = logging.getLogger(__name__) def get(context, name, session): """Get a resource type, raise if not found""" try: query = session.query(models.MetadefResourceType).filter_by(name=name) resource_type = query.one() except sa_orm.exc.NoResultFound: LOG.debug("No metadata definition resource-type found with name %s", name) raise exc.MetadefResourceTypeNotFound(resource_type_name=name) return resource_type.to_dict() def get_all(context, session): """Get a list of all resource types""" query = session.query(models.MetadefResourceType) resource_types = query.all() resource_types_list = [] for rt in resource_types: resource_types_list.append(rt.to_dict()) return resource_types_list def create(context, values, session): """Create a resource_type, raise if it already exists.""" resource_type = models.MetadefResourceType() metadef_utils.drop_protected_attrs(models.MetadefResourceType, values) resource_type.update(values.copy()) try: resource_type.save(session=session) except db_exc.DBDuplicateEntry: LOG.debug("Can not create the metadata definition resource-type. 
" "A resource-type with name=%s already exists.", resource_type.name) raise exc.MetadefDuplicateResourceType( resource_type_name=resource_type.name) return resource_type.to_dict() def update(context, values, session): """Update a resource type, raise if not found""" name = values['name'] metadef_utils.drop_protected_attrs(models.MetadefResourceType, values) db_rec = get(context, name, session) db_rec.update(values.copy()) db_rec.save(session=session) return db_rec.to_dict() def delete(context, name, session): """Delete a resource type or raise if not found or is protected""" db_rec = get(context, name, session) if db_rec.protected is True: LOG.debug("Delete forbidden. Metadata definition resource-type %s is a" " seeded-system type and can not be deleted.", name) raise exc.ProtectedMetadefResourceTypeSystemDelete( resource_type_name=name) try: session.delete(db_rec) session.flush() except db_exc.DBError as e: if isinstance(e.inner_exception, sa_exc.IntegrityError): LOG.debug("Could not delete Metadata definition resource-type %s" ". 
It still has content", name) raise exc.MetadefIntegrityError( record_type='resource-type', record_name=name) else: raise return db_rec.to_dict() glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/0000775000175100017510000000000013245511661022667 5ustar zuulzuul00000000000000glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/0000775000175100017510000000000013245511661024537 5ustar zuulzuul00000000000000././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/mitaka01_add_image_created_updated_idx.pyglance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/mitaka01_add_image_created_updated_id0000666000175100017510000000263013245511421033711 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace # Copyright 2013 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add index on created_at and updated_at columns of 'images' table Revision ID: mitaka01 Revises: liberty Create Date: 2016-08-03 17:19:35.306161 """ from alembic import op from sqlalchemy import MetaData, Table, Index # revision identifiers, used by Alembic. 
revision = 'mitaka01' down_revision = 'liberty' branch_labels = None depends_on = None CREATED_AT_INDEX = 'created_at_image_idx' UPDATED_AT_INDEX = 'updated_at_image_idx' def upgrade(): migrate_engine = op.get_bind() meta = MetaData(bind=migrate_engine) images = Table('images', meta, autoload=True) created_index = Index(CREATED_AT_INDEX, images.c.created_at) created_index.create(migrate_engine) updated_index = Index(UPDATED_AT_INDEX, images.c.updated_at) updated_index.create(migrate_engine) glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/pike_expand01_empty.py0000666000175100017510000000156713245511421030764 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """empty expand for symmetry with pike_contract01 Revision ID: pike_expand01 Revises: ocata_expand01 Create Date: 2017-02-09 19:55:16.657499 """ # revision identifiers, used by Alembic. 
revision = 'pike_expand01' down_revision = 'ocata_expand01' branch_labels = None depends_on = None def upgrade(): pass ././@LongLink0000000000000000000000000000015000000000000011211 Lustar 00000000000000glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/pike_contract01_drop_artifacts_tables.pyglance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/pike_contract01_drop_artifacts_tables0000666000175100017510000000247113245511421034066 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """drop glare artifacts tables Revision ID: pike_contract01 Revises: ocata_contract01 Create Date: 2017-02-09 20:32:51.222867 """ from alembic import op # revision identifiers, used by Alembic. revision = 'pike_contract01' down_revision = 'ocata_contract01' branch_labels = None depends_on = 'pike_expand01' def upgrade(): # create list of artifact tables in reverse order of their creation table_names = [] table_names.append('artifact_blob_locations') table_names.append('artifact_properties') table_names.append('artifact_blobs') table_names.append('artifact_dependencies') table_names.append('artifact_tags') table_names.append('artifacts') for table_name in table_names: op.drop_table(table_name=table_name) glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/queens_expand01_empty.py0000666000175100017510000000143413245511421031325 0ustar zuulzuul00000000000000# Copyright (C) 2018 NTT DATA # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # revision identifiers, used by Alembic. revision = 'queens_expand01' down_revision = 'pike_expand01' branch_labels = None depends_on = None def upgrade(): pass glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/ocata_contract01_drop_is_public.py0000666000175100017510000000446313245511421033316 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """remove is_public from images Revision ID: ocata_contract01 Revises: mitaka02 Create Date: 2017-01-27 12:58:16.647499 """ from alembic import op from sqlalchemy import MetaData, Enum from glance.cmd import manage from glance.db import migration # revision identifiers, used by Alembic. 
revision = 'ocata_contract01' down_revision = 'mitaka02' branch_labels = ('ocata01', migration.CONTRACT_BRANCH) depends_on = 'ocata_expand01' MYSQL_DROP_INSERT_TRIGGER = """ DROP TRIGGER insert_visibility; """ MYSQL_DROP_UPDATE_TRIGGER = """ DROP TRIGGER update_visibility; """ def _drop_column(): with op.batch_alter_table('images') as batch_op: batch_op.drop_index('ix_images_is_public') batch_op.drop_column('is_public') def _drop_triggers(engine): engine_name = engine.engine.name if engine_name == "mysql": op.execute(MYSQL_DROP_INSERT_TRIGGER) op.execute(MYSQL_DROP_UPDATE_TRIGGER) def _set_nullability_and_default_on_visibility(meta): # NOTE(hemanthm): setting the default on 'visibility' column # to 'shared'. Also, marking it as non-nullable. # images = Table('images', meta, autoload=True) existing_type = Enum('private', 'public', 'shared', 'community', metadata=meta, name='image_visibility') with op.batch_alter_table('images') as batch_op: batch_op.alter_column('visibility', nullable=False, server_default='shared', existing_type=existing_type) def upgrade(): migrate_engine = op.get_bind() meta = MetaData(bind=migrate_engine) _drop_column() if manage.USE_TRIGGERS: _drop_triggers(migrate_engine) _set_nullability_and_default_on_visibility(meta) glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/liberty_initial.py0000666000175100017510000000241113245511421030266 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace # Copyright 2013 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """liberty initial Revision ID: liberty Revises: Create Date: 2016-08-03 16:06:59.657433 """ from glance.db.sqlalchemy.alembic_migrations import add_artifacts_tables from glance.db.sqlalchemy.alembic_migrations import add_images_tables from glance.db.sqlalchemy.alembic_migrations import add_metadefs_tables from glance.db.sqlalchemy.alembic_migrations import add_tasks_tables # revision identifiers, used by Alembic. revision = 'liberty' down_revision = None branch_labels = None depends_on = None def upgrade(): add_images_tables.upgrade() add_tasks_tables.upgrade() add_metadefs_tables.upgrade() add_artifacts_tables.upgrade() glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/queens_contract01_empty.py0000666000175100017510000000145513245511421031666 0ustar zuulzuul00000000000000# Copyright (C) 2018 NTT DATA # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # revision identifiers, used by Alembic. 
revision = 'queens_contract01' down_revision = 'pike_contract01' branch_labels = None depends_on = 'queens_expand01' def upgrade(): pass glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/__init__.py0000666000175100017510000000000013245511421026632 0ustar zuulzuul00000000000000glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/ocata_expand01_add_visibility.py0000666000175100017510000001330213245511421032752 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """add visibility to images Revision ID: ocata_expand01 Revises: mitaka02 Create Date: 2017-01-27 12:58:16.647499 """ from alembic import op from sqlalchemy import Column, Enum, MetaData, Table from glance.cmd import manage from glance.db import migration # revision identifiers, used by Alembic. revision = 'ocata_expand01' down_revision = 'mitaka02' branch_labels = migration.EXPAND_BRANCH depends_on = None ERROR_MESSAGE = 'Invalid visibility value' MYSQL_INSERT_TRIGGER = """ CREATE TRIGGER insert_visibility BEFORE INSERT ON images FOR EACH ROW BEGIN -- NOTE(abashmak): -- The following IF/ELSE block implements a priority decision tree. -- Strict order MUST be followed to correctly cover all the edge cases. 
-- Edge case: neither is_public nor visibility specified -- (or both specified as NULL): IF NEW.is_public <=> NULL AND NEW.visibility <=> NULL THEN SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = '%s'; -- Edge case: both is_public and visibility specified: ELSEIF NOT(NEW.is_public <=> NULL OR NEW.visibility <=> NULL) THEN SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = '%s'; -- Inserting with is_public, set visibility accordingly: ELSEIF NOT NEW.is_public <=> NULL THEN IF NEW.is_public = 1 THEN SET NEW.visibility = 'public'; ELSE SET NEW.visibility = 'shared'; END IF; -- Inserting with visibility, set is_public accordingly: ELSEIF NOT NEW.visibility <=> NULL THEN IF NEW.visibility = 'public' THEN SET NEW.is_public = 1; ELSE SET NEW.is_public = 0; END IF; -- Edge case: either one of: is_public or visibility, -- is explicitly set to NULL: ELSE SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = '%s'; END IF; END; """ MYSQL_UPDATE_TRIGGER = """ CREATE TRIGGER update_visibility BEFORE UPDATE ON images FOR EACH ROW BEGIN -- Case: new value specified for is_public: IF NOT NEW.is_public <=> OLD.is_public THEN -- Edge case: is_public explicitly set to NULL: IF NEW.is_public <=> NULL THEN SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = '%s'; -- Edge case: new value also specified for visibility ELSEIF NOT NEW.visibility <=> OLD.visibility THEN SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = '%s'; -- Case: visibility not specified or specified as OLD value: -- NOTE(abashmak): There is no way to reliably determine which -- of the above two cases occurred, but allowing to proceed with -- the update in either case does not break the model for both -- N and N-1 services. 
ELSE -- Set visibility according to the value of is_public: IF NEW.is_public <=> 1 THEN SET NEW.visibility = 'public'; ELSE SET NEW.visibility = 'shared'; END IF; END IF; -- Case: new value specified for visibility: ELSEIF NOT NEW.visibility <=> OLD.visibility THEN -- Edge case: visibility explicitly set to NULL: IF NEW.visibility <=> NULL THEN SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = '%s'; -- Edge case: new value also specified for is_public ELSEIF NOT NEW.is_public <=> OLD.is_public THEN SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = '%s'; -- Case: is_public not specified or specified as OLD value: -- NOTE(abashmak): There is no way to reliably determine which -- of the above two cases occurred, but allowing to proceed with -- the update in either case does not break the model for both -- N and N-1 services. ELSE -- Set is_public according to the value of visibility: IF NEW.visibility <=> 'public' THEN SET NEW.is_public = 1; ELSE SET NEW.is_public = 0; END IF; END IF; END IF; END; """ def _add_visibility_column(meta): enum = Enum('private', 'public', 'shared', 'community', metadata=meta, name='image_visibility') enum.create() v_col = Column('visibility', enum, nullable=True, server_default=None) op.add_column('images', v_col) op.create_index('visibility_image_idx', 'images', ['visibility']) def _add_triggers(engine): if engine.engine.name == 'mysql': op.execute(MYSQL_INSERT_TRIGGER % (ERROR_MESSAGE, ERROR_MESSAGE, ERROR_MESSAGE)) op.execute(MYSQL_UPDATE_TRIGGER % (ERROR_MESSAGE, ERROR_MESSAGE, ERROR_MESSAGE, ERROR_MESSAGE)) def _change_nullability_and_default_on_is_public(meta): # NOTE(hemanthm): we mark is_public as nullable so that when new versions # add data only to be visibility column, is_public can be null. 
images = Table('images', meta, autoload=True) images.c.is_public.alter(nullable=True, server_default=None) def upgrade(): migrate_engine = op.get_bind() meta = MetaData(bind=migrate_engine) _add_visibility_column(meta) _change_nullability_and_default_on_is_public(meta) if manage.USE_TRIGGERS: _add_triggers(migrate_engine) ././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/mitaka02_update_metadef_os_nova_server.pyglance-16.0.0/glance/db/sqlalchemy/alembic_migrations/versions/mitaka02_update_metadef_os_nova_serve0000666000175100017510000000237013245511421034047 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace # Copyright 2013 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """update metadef os_nova_server Revision ID: mitaka02 Revises: mitaka01 Create Date: 2016-08-03 17:23:23.041663 """ from alembic import op from sqlalchemy import MetaData, Table # revision identifiers, used by Alembic. 
revision = 'mitaka02' down_revision = 'mitaka01' branch_labels = None depends_on = None def upgrade(): migrate_engine = op.get_bind() meta = MetaData(bind=migrate_engine) resource_types_table = Table('metadef_resource_types', meta, autoload=True) resource_types_table.update(values={'name': 'OS::Nova::Server'}).where( resource_types_table.c.name == 'OS::Nova::Instance').execute() glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/env.py0000666000175100017510000000523013245511421024025 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace # Copyright 2013 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import with_statement from logging import config as log_config from alembic import context from oslo_config import cfg from oslo_db.sqlalchemy import enginefacade from glance.db.sqlalchemy import models from glance.db.sqlalchemy import models_metadef # this is the Alembic Config object, which provides # access to the values within the .ini file in use. config = context.config CONF = cfg.CONF # other values from the config, defined by the needs of env.py, # can be acquired: # my_important_option = config.get_main_option("my_important_option") # ... etc. # Interpret the config file for Python logging. # This line sets up loggers basically. 
log_config.fileConfig(config.config_file_name) # add your model's MetaData object here # for 'autogenerate' support target_metadata = models.BASE.metadata for table in models_metadef.BASE_DICT.metadata.sorted_tables: target_metadata._add_table(table.name, table.schema, table) def run_migrations_offline(): """Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. """ url = CONF.database.connection context.configure( url=url, target_metadata=target_metadata, literal_binds=True) with context.begin_transaction(): context.run_migrations() def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ engine = enginefacade.writer.get_engine() with engine.connect() as connection: context.configure( connection=connection, target_metadata=target_metadata ) with context.begin_transaction(): context.run_migrations() if context.is_offline_mode(): run_migrations_offline() else: run_migrations_online() glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/add_images_tables.py0000666000175100017510000002136513245511421026653 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace # Copyright 2013 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from alembic import op from sqlalchemy import sql from sqlalchemy.schema import ( Column, PrimaryKeyConstraint, ForeignKeyConstraint, UniqueConstraint) from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, Integer, BigInteger, String, Text) # noqa from glance.db.sqlalchemy.models import JSONEncodedDict def _add_images_table(): op.create_table('images', Column('id', String(length=36), nullable=False), Column('name', String(length=255), nullable=True), Column('size', BigInteger(), nullable=True), Column('status', String(length=30), nullable=False), Column('is_public', Boolean(), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), Column('deleted_at', DateTime(), nullable=True), Column('deleted', Boolean(), nullable=False), Column('disk_format', String(length=20), nullable=True), Column('container_format', String(length=20), nullable=True), Column('checksum', String(length=32), nullable=True), Column('owner', String(length=255), nullable=True), Column('min_disk', Integer(), nullable=False), Column('min_ram', Integer(), nullable=False), Column('protected', Boolean(), server_default=sql.false(), nullable=False), Column('virtual_size', BigInteger(), nullable=True), PrimaryKeyConstraint('id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('checksum_image_idx', 'images', ['checksum'], unique=False) op.create_index('ix_images_deleted', 'images', ['deleted'], unique=False) op.create_index('ix_images_is_public', 'images', ['is_public'], unique=False) op.create_index('owner_image_idx', 'images', ['owner'], unique=False) def _add_image_properties_table(): op.create_table('image_properties', Column('id', Integer(), nullable=False), Column('image_id', String(length=36), nullable=False), Column('name', String(length=255), nullable=False), Column('value', Text(), nullable=True), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), 
nullable=True), Column('deleted_at', DateTime(), nullable=True), Column('deleted', Boolean(), nullable=False), PrimaryKeyConstraint('id'), ForeignKeyConstraint(['image_id'], ['images.id'], ), UniqueConstraint('image_id', 'name', name='ix_image_properties_image_id_name'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_image_properties_deleted', 'image_properties', ['deleted'], unique=False) op.create_index('ix_image_properties_image_id', 'image_properties', ['image_id'], unique=False) def _add_image_locations_table(): op.create_table('image_locations', Column('id', Integer(), nullable=False), Column('image_id', String(length=36), nullable=False), Column('value', Text(), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), Column('deleted_at', DateTime(), nullable=True), Column('deleted', Boolean(), nullable=False), Column('meta_data', JSONEncodedDict(), nullable=True), Column('status', String(length=30), server_default='active', nullable=False), PrimaryKeyConstraint('id'), ForeignKeyConstraint(['image_id'], ['images.id'], ), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_image_locations_deleted', 'image_locations', ['deleted'], unique=False) op.create_index('ix_image_locations_image_id', 'image_locations', ['image_id'], unique=False) def _add_image_members_table(): deleted_member_constraint = 'image_members_image_id_member_deleted_at_key' op.create_table('image_members', Column('id', Integer(), nullable=False), Column('image_id', String(length=36), nullable=False), Column('member', String(length=255), nullable=False), Column('can_share', Boolean(), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), Column('deleted_at', DateTime(), nullable=True), Column('deleted', Boolean(), nullable=False), Column('status', String(length=20), server_default='pending', 
nullable=False), ForeignKeyConstraint(['image_id'], ['images.id'], ), PrimaryKeyConstraint('id'), UniqueConstraint('image_id', 'member', 'deleted_at', name=deleted_member_constraint), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_image_members_deleted', 'image_members', ['deleted'], unique=False) op.create_index('ix_image_members_image_id', 'image_members', ['image_id'], unique=False) op.create_index('ix_image_members_image_id_member', 'image_members', ['image_id', 'member'], unique=False) def _add_images_tags_table(): op.create_table('image_tags', Column('id', Integer(), nullable=False), Column('image_id', String(length=36), nullable=False), Column('value', String(length=255), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), Column('deleted_at', DateTime(), nullable=True), Column('deleted', Boolean(), nullable=False), ForeignKeyConstraint(['image_id'], ['images.id'], ), PrimaryKeyConstraint('id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_image_tags_image_id', 'image_tags', ['image_id'], unique=False) op.create_index('ix_image_tags_image_id_tag_value', 'image_tags', ['image_id', 'value'], unique=False) def upgrade(): _add_images_table() _add_image_properties_table() _add_image_locations_table() _add_image_members_table() _add_images_tags_table() glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/README0000666000175100017510000000004713245511421023544 0ustar zuulzuul00000000000000Generic single-database configuration. glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/alembic.ini0000666000175100017510000000302513245511421024760 0ustar zuulzuul00000000000000# A generic, single database configuration. 
[alembic]
# path to migration scripts
script_location = %(here)s

# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; this defaults
# to alembic_migrations/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat alembic_migrations/versions

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

# Uncomment and update to your sql connection string if wishing to run
# alembic directly from command line
#sqlalchemy.url =

# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/add_artifacts_tables.py

# Copyright 2016 Rackspace
# Copyright 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from alembic import op from sqlalchemy.schema import ( Column, PrimaryKeyConstraint, ForeignKeyConstraint) from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, Integer, BigInteger, String, Text, Numeric) # noqa def _add_artifacts_table(): op.create_table('artifacts', Column('id', String(length=36), nullable=False), Column('name', String(length=255), nullable=False), Column('type_name', String(length=255), nullable=False), Column('type_version_prefix', BigInteger(), nullable=False), Column('type_version_suffix', String(length=255), nullable=True), Column('type_version_meta', String(length=255), nullable=True), Column('version_prefix', BigInteger(), nullable=False), Column('version_suffix', String(length=255), nullable=True), Column('version_meta', String(length=255), nullable=True), Column('description', Text(), nullable=True), Column('visibility', String(length=32), nullable=False), Column('state', String(length=32), nullable=False), Column('owner', String(length=255), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), Column('deleted_at', DateTime(), nullable=True), Column('published_at', DateTime(), nullable=True), PrimaryKeyConstraint('id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_artifact_name_and_version', 'artifacts', ['name', 'version_prefix', 'version_suffix'], unique=False) op.create_index('ix_artifact_owner', 'artifacts', ['owner'], unique=False) op.create_index('ix_artifact_state', 'artifacts', ['state'], unique=False) 
op.create_index('ix_artifact_type', 'artifacts', ['type_name', 'type_version_prefix', 'type_version_suffix'], unique=False) op.create_index('ix_artifact_visibility', 'artifacts', ['visibility'], unique=False) def _add_artifact_blobs_table(): op.create_table('artifact_blobs', Column('id', String(length=36), nullable=False), Column('artifact_id', String(length=36), nullable=False), Column('size', BigInteger(), nullable=False), Column('checksum', String(length=32), nullable=True), Column('name', String(length=255), nullable=False), Column('item_key', String(length=329), nullable=True), Column('position', Integer(), nullable=True), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), ForeignKeyConstraint(['artifact_id'], ['artifacts.id'], ), PrimaryKeyConstraint('id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_artifact_blobs_artifact_id', 'artifact_blobs', ['artifact_id'], unique=False) op.create_index('ix_artifact_blobs_name', 'artifact_blobs', ['name'], unique=False) def _add_artifact_dependencies_table(): op.create_table('artifact_dependencies', Column('id', String(length=36), nullable=False), Column('artifact_source', String(length=36), nullable=False), Column('artifact_dest', String(length=36), nullable=False), Column('artifact_origin', String(length=36), nullable=False), Column('is_direct', Boolean(), nullable=False), Column('position', Integer(), nullable=True), Column('name', String(length=36), nullable=True), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), ForeignKeyConstraint(['artifact_dest'], ['artifacts.id'], ), ForeignKeyConstraint(['artifact_origin'], ['artifacts.id'], ), ForeignKeyConstraint(['artifact_source'], ['artifacts.id'], ), PrimaryKeyConstraint('id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_artifact_dependencies_dest_id', 'artifact_dependencies', 
['artifact_dest'], unique=False) op.create_index('ix_artifact_dependencies_direct_dependencies', 'artifact_dependencies', ['artifact_source', 'is_direct'], unique=False) op.create_index('ix_artifact_dependencies_origin_id', 'artifact_dependencies', ['artifact_origin'], unique=False) op.create_index('ix_artifact_dependencies_source_id', 'artifact_dependencies', ['artifact_source'], unique=False) def _add_artifact_properties_table(): op.create_table('artifact_properties', Column('id', String(length=36), nullable=False), Column('artifact_id', String(length=36), nullable=False), Column('name', String(length=255), nullable=False), Column('string_value', String(length=255), nullable=True), Column('int_value', Integer(), nullable=True), Column('numeric_value', Numeric(), nullable=True), Column('bool_value', Boolean(), nullable=True), Column('text_value', Text(), nullable=True), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), Column('position', Integer(), nullable=True), ForeignKeyConstraint(['artifact_id'], ['artifacts.id'], ), PrimaryKeyConstraint('id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_artifact_properties_artifact_id', 'artifact_properties', ['artifact_id'], unique=False) op.create_index('ix_artifact_properties_name', 'artifact_properties', ['name'], unique=False) def _add_artifact_tags_table(): op.create_table('artifact_tags', Column('id', String(length=36), nullable=False), Column('artifact_id', String(length=36), nullable=False), Column('value', String(length=255), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), ForeignKeyConstraint(['artifact_id'], ['artifacts.id'], ), PrimaryKeyConstraint('id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_artifact_tags_artifact_id', 'artifact_tags', ['artifact_id'], unique=False) 
op.create_index('ix_artifact_tags_artifact_id_tag_value', 'artifact_tags', ['artifact_id', 'value'], unique=False) def _add_artifact_blob_locations_table(): op.create_table('artifact_blob_locations', Column('id', String(length=36), nullable=False), Column('blob_id', String(length=36), nullable=False), Column('value', Text(), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), Column('position', Integer(), nullable=True), Column('status', String(length=36), nullable=True), ForeignKeyConstraint(['blob_id'], ['artifact_blobs.id'], ), PrimaryKeyConstraint('id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_artifact_blob_locations_blob_id', 'artifact_blob_locations', ['blob_id'], unique=False) def upgrade(): _add_artifacts_table() _add_artifact_blobs_table() _add_artifact_dependencies_table() _add_artifact_properties_table() _add_artifact_tags_table() _add_artifact_blob_locations_table() glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/__init__.py0000666000175100017510000001073513245511421025002 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace # Copyright 2013 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os
import sys

from alembic import command as alembic_command
from alembic import config as alembic_config
from alembic import migration as alembic_migration
from alembic import script as alembic_script
from sqlalchemy import MetaData, Table

from oslo_db import exception as db_exception
from oslo_db.sqlalchemy import migration as sqla_migration

from glance.db import migration as db_migration
from glance.db.sqlalchemy import api as db_api
from glance.i18n import _


def get_alembic_config(engine=None):
    """Return a valid alembic config object"""
    ini_path = os.path.join(os.path.dirname(__file__), 'alembic.ini')
    config = alembic_config.Config(os.path.abspath(ini_path))
    if engine is None:
        engine = db_api.get_engine()
    config.set_main_option('sqlalchemy.url', str(engine.url))
    return config


def get_current_alembic_heads():
    """Return current heads (if any) from the alembic migration table"""
    engine = db_api.get_engine()
    with engine.connect() as conn:
        context = alembic_migration.MigrationContext.configure(conn)
        heads = context.get_current_heads()

        def update_alembic_version(old, new):
            """Correct the alembic head so the database can be upgraded
            using the expand-migrate-contract (E-M-C) method.

            :param old: actual alembic head
            :param new: expected alembic head to be set
            """
            meta = MetaData(engine)
            alembic_version = Table('alembic_version', meta, autoload=True)
            alembic_version.update().values(
                version_num=new).where(
                alembic_version.c.version_num == old).execute()

        if "pike01" in heads:
            update_alembic_version("pike01", "pike_contract01")
        elif "ocata01" in heads:
            update_alembic_version("ocata01", "ocata_contract01")

        heads = context.get_current_heads()
        return heads


def get_current_legacy_head():
    try:
        legacy_head = sqla_migration.db_version(db_api.get_engine(),
                                                db_migration.MIGRATE_REPO_PATH,
                                                db_migration.INIT_VERSION)
    except db_exception.DBMigrationError:
        legacy_head = None
    return legacy_head


def is_database_under_alembic_control():
    if get_current_alembic_heads():
        return True
    return False


def is_database_under_migrate_control():
    if get_current_legacy_head():
        return True
    return False


def place_database_under_alembic_control():
    a_config = get_alembic_config()

    if not is_database_under_migrate_control():
        return

    if not is_database_under_alembic_control():
        print(_("Database is currently not under Alembic's migration "
                "control."))
        head = get_current_legacy_head()
        if head == 42:
            alembic_version = 'liberty'
        elif head == 43:
            alembic_version = 'mitaka01'
        elif head == 44:
            alembic_version = 'mitaka02'
        elif head == 45:
            alembic_version = 'ocata01'
        elif head in range(1, 42):
            print("Legacy head: ", head)
            sys.exit(_("The current database version is not supported any "
                       "more. Please upgrade to the Liberty release first."))
        else:
            sys.exit(_("Unable to place database under Alembic's migration "
                       "control. Unknown database state, can't proceed "
                       "further."))

        print(_("Placing database under Alembic's migration control at "
                "revision:"), alembic_version)
        alembic_command.stamp(a_config, alembic_version)


def get_alembic_branch_head(branch):
    """Return head revision name for a particular branch"""
    a_config = get_alembic_config()
    script = alembic_script.ScriptDirectory.from_config(a_config)
    return script.revision_map.get_current_head(branch)

glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/add_metadefs_tables.py

# Copyright 2016 Rackspace
# Copyright 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
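The stamping branch in `place_database_under_alembic_control` above is a fixed mapping from the final sqlalchemy-migrate heads to alembic revision labels, plus two error cases. Restated as a table-driven helper (the function and constant names here are illustrative, not part of Glance):

```python
# Final legacy (sqlalchemy-migrate) head -> alembic revision to stamp.
LEGACY_TO_ALEMBIC = {
    42: 'liberty',
    43: 'mitaka01',
    44: 'mitaka02',
    45: 'ocata01',
}


def alembic_revision_for(legacy_head):
    """Mirror the branch logic above: known heads map to a revision,
    pre-Liberty heads (1..41) require an upgrade first, and anything
    else is an unknown database state."""
    if legacy_head in LEGACY_TO_ALEMBIC:
        return LEGACY_TO_ALEMBIC[legacy_head]
    if legacy_head in range(1, 42):
        raise ValueError('upgrade to the Liberty release first')
    raise ValueError('unknown database state')


print(alembic_revision_for(44))  # -> mitaka02
```

A dict keeps the mapping in one place, which is handy when a new release adds another terminal legacy head.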
from alembic import op from sqlalchemy.schema import ( Column, PrimaryKeyConstraint, ForeignKeyConstraint, UniqueConstraint) from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, Integer, String, Text) # noqa from glance.db.sqlalchemy.models import JSONEncodedDict def _add_metadef_namespaces_table(): op.create_table('metadef_namespaces', Column('id', Integer(), nullable=False), Column('namespace', String(length=80), nullable=False), Column('display_name', String(length=80), nullable=True), Column('description', Text(), nullable=True), Column('visibility', String(length=32), nullable=True), Column('protected', Boolean(), nullable=True), Column('owner', String(length=255), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), PrimaryKeyConstraint('id'), UniqueConstraint('namespace', name='uq_metadef_namespaces_namespace'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_metadef_namespaces_owner', 'metadef_namespaces', ['owner'], unique=False) def _add_metadef_resource_types_table(): op.create_table('metadef_resource_types', Column('id', Integer(), nullable=False), Column('name', String(length=80), nullable=False), Column('protected', Boolean(), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), PrimaryKeyConstraint('id'), UniqueConstraint('name', name='uq_metadef_resource_types_name'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) def _add_metadef_namespace_resource_types_table(): op.create_table('metadef_namespace_resource_types', Column('resource_type_id', Integer(), nullable=False), Column('namespace_id', Integer(), nullable=False), Column('properties_target', String(length=80), nullable=True), Column('prefix', String(length=80), nullable=True), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), 
ForeignKeyConstraint(['namespace_id'], ['metadef_namespaces.id'], ), ForeignKeyConstraint(['resource_type_id'], ['metadef_resource_types.id'], ), PrimaryKeyConstraint('resource_type_id', 'namespace_id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_metadef_ns_res_types_namespace_id', 'metadef_namespace_resource_types', ['namespace_id'], unique=False) def _add_metadef_objects_table(): ns_id_name_constraint = 'uq_metadef_objects_namespace_id_name' op.create_table('metadef_objects', Column('id', Integer(), nullable=False), Column('namespace_id', Integer(), nullable=False), Column('name', String(length=80), nullable=False), Column('description', Text(), nullable=True), Column('required', Text(), nullable=True), Column('json_schema', JSONEncodedDict(), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), ForeignKeyConstraint(['namespace_id'], ['metadef_namespaces.id'], ), PrimaryKeyConstraint('id'), UniqueConstraint('namespace_id', 'name', name=ns_id_name_constraint), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_metadef_objects_name', 'metadef_objects', ['name'], unique=False) def _add_metadef_properties_table(): ns_id_name_constraint = 'uq_metadef_properties_namespace_id_name' op.create_table('metadef_properties', Column('id', Integer(), nullable=False), Column('namespace_id', Integer(), nullable=False), Column('name', String(length=80), nullable=False), Column('json_schema', JSONEncodedDict(), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), ForeignKeyConstraint(['namespace_id'], ['metadef_namespaces.id'], ), PrimaryKeyConstraint('id'), UniqueConstraint('namespace_id', 'name', name=ns_id_name_constraint), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_metadef_properties_name', 'metadef_properties', ['name'], 
unique=False) def _add_metadef_tags_table(): op.create_table('metadef_tags', Column('id', Integer(), nullable=False), Column('namespace_id', Integer(), nullable=False), Column('name', String(length=80), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), ForeignKeyConstraint(['namespace_id'], ['metadef_namespaces.id'], ), PrimaryKeyConstraint('id'), UniqueConstraint('namespace_id', 'name', name='uq_metadef_tags_namespace_id_name'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_metadef_tags_name', 'metadef_tags', ['name'], unique=False) def upgrade(): _add_metadef_namespaces_table() _add_metadef_resource_types_table() _add_metadef_namespace_resource_types_table() _add_metadef_objects_table() _add_metadef_properties_table() _add_metadef_tags_table() glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/migrate.cfg0000666000175100017510000000174113245511421024777 0ustar zuulzuul00000000000000[db_settings] # Used to identify which repository this database is versioned under. # You can use the name of your project. repository_id=Glance Migrations # The name of the database table used to track the schema version. # This name shouldn't already be used by your project. # If this is changed once a database is under version control, you'll need to # change the table name in each database too. version_table=alembic_version # When committing a change script, Migrate will attempt to generate the # sql for all supported databases; normally, if one of them fails - probably # because you don't have that database installed - it is ignored and the # commit continues, perhaps ending successfully. # Databases in this list MUST compile successfully during a commit, or the # entire commit will fail. List the databases your application will actually # be using to ensure your updates to that database work properly. 
# This must be a list; example: ['postgres','sqlite'] required_dbs=[] glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/add_tasks_tables.py0000666000175100017510000000545313245511421026533 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace # Copyright 2013 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from alembic import op from sqlalchemy.schema import ( Column, PrimaryKeyConstraint, ForeignKeyConstraint) from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, String, Text) # noqa from glance.db.sqlalchemy.models import JSONEncodedDict def _add_tasks_table(): op.create_table('tasks', Column('id', String(length=36), nullable=False), Column('type', String(length=30), nullable=False), Column('status', String(length=30), nullable=False), Column('owner', String(length=255), nullable=False), Column('expires_at', DateTime(), nullable=True), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=True), Column('deleted_at', DateTime(), nullable=True), Column('deleted', Boolean(), nullable=False), PrimaryKeyConstraint('id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) op.create_index('ix_tasks_deleted', 'tasks', ['deleted'], unique=False) op.create_index('ix_tasks_owner', 'tasks', ['owner'], unique=False) op.create_index('ix_tasks_status', 'tasks', ['status'], unique=False) op.create_index('ix_tasks_type', 'tasks', ['type'], unique=False) op.create_index('ix_tasks_updated_at', 
'tasks', ['updated_at'], unique=False) def _add_task_info_table(): op.create_table('task_info', Column('task_id', String(length=36), nullable=False), Column('input', JSONEncodedDict(), nullable=True), Column('result', JSONEncodedDict(), nullable=True), Column('message', Text(), nullable=True), ForeignKeyConstraint(['task_id'], ['tasks.id'], ), PrimaryKeyConstraint('task_id'), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) def upgrade(): _add_tasks_table() _add_task_info_table() glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/data_migrations/0000775000175100017510000000000013245511661026034 5ustar zuulzuul00000000000000././@LongLink0000000000000000000000000000015200000000000011213 Lustar 00000000000000glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/data_migrations/ocata_migrate01_community_images.pyglance-16.0.0/glance/db/sqlalchemy/alembic_migrations/data_migrations/ocata_migrate01_community_imag0000666000175100017510000000754213245511421034024 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace # Copyright 2016 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, select, Table, and_, not_ def has_migrations(engine): """Returns true if at least one data row can be migrated. There are rows left to migrate if: #1 There exists a row with visibility not set yet. Or #2 There exists a private image with active members but its visibility isn't set to 'shared' yet. 
    Note: This method can return a false positive if data migrations
    are running in the background as it's being called.
    """
    meta = MetaData(engine)
    images = Table('images', meta, autoload=True)

    rows_with_null_visibility = (select([images.c.id])
                                 .where(images.c.visibility == None)
                                 .limit(1)
                                 .execute())
    if rows_with_null_visibility.rowcount == 1:
        return True

    image_members = Table('image_members', meta, autoload=True)
    rows_with_pending_shared = (
        select([images.c.id])
        .where(and_(
            images.c.visibility == 'private',
            images.c.id.in_(
                select([image_members.c.image_id])
                .distinct()
                .where(not_(image_members.c.deleted)))))
        .limit(1)
        .execute())
    if rows_with_pending_shared.rowcount == 1:
        return True

    return False


def _mark_all_public_images_with_public_visibility(images):
    migrated_rows = (images
                     .update().values(visibility='public')
                     .where(images.c.is_public)
                     .execute())
    return migrated_rows.rowcount


def _mark_all_non_public_images_with_private_visibility(images):
    migrated_rows = (images
                     .update().values(visibility='private')
                     .where(not_(images.c.is_public))
                     .execute())
    return migrated_rows.rowcount


def _mark_all_private_images_with_members_as_shared_visibility(
        images, image_members):
    migrated_rows = (images
                     .update().values(visibility='shared')
                     .where(and_(
                         images.c.visibility == 'private',
                         images.c.id.in_(
                             select([image_members.c.image_id])
                             .distinct()
                             .where(not_(image_members.c.deleted)))))
                     .execute())
    return migrated_rows.rowcount


def _migrate_all(engine):
    meta = MetaData(engine)
    images = Table('images', meta, autoload=True)
    image_members = Table('image_members', meta, autoload=True)

    num_rows = _mark_all_public_images_with_public_visibility(images)
    num_rows += _mark_all_non_public_images_with_private_visibility(images)
    num_rows += _mark_all_private_images_with_members_as_shared_visibility(
        images, image_members)
    return num_rows


def migrate(engine):
    """Set visibility column based on is_public and image members."""
    return _migrate_all(engine)
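The three UPDATE passes above apply a simple per-row rule: public images become `public`, non-public images become `private`, and private images with at least one non-deleted member are then promoted to `shared`. The net effect on a single image can be sketched as a pure function (the name and signature are illustrative, not Glance code):

```python
def target_visibility(is_public, has_active_members):
    """Net visibility produced by the three-pass data migration above
    for a single image row."""
    if is_public:
        return 'public'
    # Non-public images are first marked 'private'; those with at least
    # one non-deleted member are promoted to 'shared' by the third pass.
    return 'shared' if has_active_members else 'private'


print(target_visibility(False, True))  # -> shared
```

Note that the pass order matters in the real migration: the `shared` promotion only inspects rows already set to `private`, so it must run after the second pass.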
glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/data_migrations/queens_migrate01_empty.py

# Copyright (C) 2018 NTT DATA
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


def has_migrations(engine):
    """Returns true if at least one data row can be migrated."""
    return False


def migrate(engine):
    """Return the number of rows migrated."""
    return 0

glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/data_migrations/__init__.py

# Copyright 2016 Rackspace
# Copyright 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
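Every module in this `data_migrations` package, including the empty queens migration above, must expose `has_migrations(engine)` and `migrate(engine)`; the package's runner then sums the row counts of the modules that report pending work. A stub-based sketch of that loop (`FakeMigration` is a test double, not Glance code):

```python
class FakeMigration:
    """Test double satisfying the has_migrations/migrate contract."""

    def __init__(self, pending, rows):
        self._pending = pending
        self._rows = rows

    def has_migrations(self, engine):
        return self._pending

    def migrate(self, engine):
        return self._rows


def run_migrations(engine, migrations):
    # Mirrors the package's _run_migrations: only modules that report
    # pending work contribute to the migrated-row total.
    return sum(m.migrate(engine) for m in migrations
               if m.has_migrations(engine))


total = run_migrations(None, [FakeMigration(True, 3),
                              FakeMigration(False, 5),
                              FakeMigration(True, 2)])
print(total)  # -> 5
```

Empty migrations like `queens_migrate01_empty.py` exist so every release has a module matching the discovery prefix, even when a migration is contract-only and moves no data.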
import importlib
import os.path
import pkgutil

from glance.common import exception
from glance.db import migration as db_migrations
from glance.db.sqlalchemy import api as db_api


def _find_migration_modules(release):
    migrations = list()
    for _, module_name, _ in pkgutil.iter_modules([os.path.dirname(__file__)]):
        if module_name.startswith(release):
            migrations.append(module_name)

    migration_modules = list()
    for migration in sorted(migrations):
        module = importlib.import_module('.'.join([__package__, migration]))
        has_migrations_function = getattr(module, 'has_migrations', None)
        migrate_function = getattr(module, 'migrate', None)

        if has_migrations_function is None or migrate_function is None:
            raise exception.InvalidDataMigrationScript(script=module.__name__)

        migration_modules.append(module)

    return migration_modules


def _run_migrations(engine, migrations):
    rows_migrated = 0
    for migration in migrations:
        if migration.has_migrations(engine):
            rows_migrated += migration.migrate(engine)
    return rows_migrated


def has_pending_migrations(engine=None, release=db_migrations.CURRENT_RELEASE):
    if not engine:
        engine = db_api.get_engine()

    migrations = _find_migration_modules(release)
    if not migrations:
        return False
    return any([x.has_migrations(engine) for x in migrations])


def migrate(engine=None, release=db_migrations.CURRENT_RELEASE):
    if not engine:
        engine = db_api.get_engine()

    migrations = _find_migration_modules(release)
    rows_migrated = _run_migrations(engine, migrations)
    return rows_migrated

glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/data_migrations/pike_migrate01_empty.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# NOTE(rosmaita): This file implements the migration interface, but doesn't
# migrate any data. The pike01 migration is contract-only.


def has_migrations(engine):
    """Returns true if at least one data row can be migrated."""
    return False


def migrate(engine):
    """Return the number of rows migrated."""
    return 0

==== glance-16.0.0/glance/db/sqlalchemy/alembic_migrations/script.py.mako ====

"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}


def upgrade():
    ${upgrades if upgrades else "pass"}

==== glance-16.0.0/glance/db/sqlalchemy/models.py ====

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
SQLAlchemy models for glance data
"""

import uuid

from oslo_db.sqlalchemy import models
from oslo_serialization import jsonutils
from sqlalchemy import BigInteger
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import Enum
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import ForeignKey
from sqlalchemy import Index
from sqlalchemy import Integer
from sqlalchemy.orm import backref, relationship
from sqlalchemy import sql
from sqlalchemy import String
from sqlalchemy import Text
from sqlalchemy.types import TypeDecorator
from sqlalchemy import UniqueConstraint

from glance.common import timeutils


BASE = declarative_base()


class JSONEncodedDict(TypeDecorator):
    """Represents an immutable structure as a json-encoded string."""

    impl = Text

    def process_bind_param(self, value, dialect):
        if value is not None:
            value = jsonutils.dumps(value)
        return value

    def process_result_value(self, value, dialect):
        if value is not None:
            value = jsonutils.loads(value)
        return value


class GlanceBase(models.ModelBase, models.TimestampMixin):
    """Base class for Glance Models."""

    __table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'}
    __table_initialized__ = False
    __protected_attributes__ = set([
        "created_at", "updated_at", "deleted_at", "deleted"])

    def save(self, session=None):
        from glance.db.sqlalchemy import api as db_api
        super(GlanceBase, self).save(session or db_api.get_session())

    created_at = Column(DateTime, default=lambda: timeutils.utcnow(),
                        nullable=False)
    # TODO(vsergeyev): Column `updated_at` has no default value in
    #                  OpenStack common code. We should decide whether this
    #                  value is required and make changes in oslo (if
    #                  required) or in glance (if not).
    updated_at = Column(DateTime, default=lambda: timeutils.utcnow(),
                        nullable=True, onupdate=lambda: timeutils.utcnow())
    # TODO(boris-42): Use SoftDeleteMixin instead of deleted Column after
    #                 migration that provides UniqueConstraints and change
    #                 type of this column.
    deleted_at = Column(DateTime)
    deleted = Column(Boolean, nullable=False, default=False)

    def delete(self, session=None):
        """Delete this object."""
        self.deleted = True
        self.deleted_at = timeutils.utcnow()
        self.save(session=session)

    def keys(self):
        return self.__dict__.keys()

    def values(self):
        return self.__dict__.values()

    def items(self):
        return self.__dict__.items()

    def to_dict(self):
        d = self.__dict__.copy()
        # NOTE(flaper87): Remove the private state instance; it is not
        # serializable and causes a CircularReference error.
        d.pop("_sa_instance_state")
        return d


class Image(BASE, GlanceBase):
    """Represents an image in the datastore."""
    __tablename__ = 'images'
    __table_args__ = (Index('checksum_image_idx', 'checksum'),
                      Index('visibility_image_idx', 'visibility'),
                      Index('ix_images_deleted', 'deleted'),
                      Index('owner_image_idx', 'owner'),
                      Index('created_at_image_idx', 'created_at'),
                      Index('updated_at_image_idx', 'updated_at'))

    id = Column(String(36), primary_key=True,
                default=lambda: str(uuid.uuid4()))
    name = Column(String(255))
    disk_format = Column(String(20))
    container_format = Column(String(20))
    size = Column(BigInteger().with_variant(Integer, "sqlite"))
    virtual_size = Column(BigInteger().with_variant(Integer, "sqlite"))
    status = Column(String(30), nullable=False)
    visibility = Column(Enum('private', 'public', 'shared', 'community',
                             name='image_visibility'), nullable=False,
                        server_default='shared')
    checksum = Column(String(32))
    min_disk = Column(Integer, nullable=False, default=0)
    min_ram = Column(Integer, nullable=False, default=0)
    owner = Column(String(255))
    protected = Column(Boolean, nullable=False, default=False,
                       server_default=sql.expression.false())


class ImageProperty(BASE, GlanceBase):
    """Represents image properties in the datastore."""
    __tablename__ = 'image_properties'
    __table_args__ = (Index('ix_image_properties_image_id', 'image_id'),
                      Index('ix_image_properties_deleted', 'deleted'),
                      UniqueConstraint('image_id', 'name',
                                       name='ix_image_properties_'
                                            'image_id_name'),)

    id = Column(Integer, primary_key=True)
    image_id = Column(String(36), ForeignKey('images.id'), nullable=False)
    image = relationship(Image, backref=backref('properties'))

    name = Column(String(255), nullable=False)
    value = Column(Text)


class ImageTag(BASE, GlanceBase):
    """Represents an image tag in the datastore."""
    __tablename__ = 'image_tags'
    __table_args__ = (Index('ix_image_tags_image_id', 'image_id'),
                      Index('ix_image_tags_image_id_tag_value', 'image_id',
                            'value'),)

    id = Column(Integer, primary_key=True, nullable=False)
    image_id = Column(String(36), ForeignKey('images.id'), nullable=False)
    image = relationship(Image, backref=backref('tags'))
    value = Column(String(255), nullable=False)


class ImageLocation(BASE, GlanceBase):
    """Represents an image location in the datastore."""
    __tablename__ = 'image_locations'
    __table_args__ = (Index('ix_image_locations_image_id', 'image_id'),
                      Index('ix_image_locations_deleted', 'deleted'),)

    id = Column(Integer, primary_key=True, nullable=False)
    image_id = Column(String(36), ForeignKey('images.id'), nullable=False)
    image = relationship(Image, backref=backref('locations'))
    value = Column(Text(), nullable=False)
    meta_data = Column(JSONEncodedDict(), default={})
    status = Column(String(30), server_default='active', nullable=False)


class ImageMember(BASE, GlanceBase):
    """Represents image members in the datastore."""
    __tablename__ = 'image_members'
    unique_constraint_key_name = 'image_members_image_id_member_deleted_at_key'
    __table_args__ = (Index('ix_image_members_deleted', 'deleted'),
                      Index('ix_image_members_image_id', 'image_id'),
                      Index('ix_image_members_image_id_member', 'image_id',
                            'member'),
                      UniqueConstraint('image_id', 'member', 'deleted_at',
                                       name=unique_constraint_key_name),)

    id = Column(Integer, primary_key=True)
    image_id = Column(String(36), ForeignKey('images.id'), nullable=False)
    image = relationship(Image, backref=backref('members'))
    member = Column(String(255), nullable=False)
    can_share = Column(Boolean, nullable=False, default=False)
    status = Column(String(20), nullable=False, default="pending",
                    server_default='pending')


class Task(BASE, GlanceBase):
    """Represents a task in the datastore."""
    __tablename__ = 'tasks'
    __table_args__ = (Index('ix_tasks_type', 'type'),
                      Index('ix_tasks_status', 'status'),
                      Index('ix_tasks_owner', 'owner'),
                      Index('ix_tasks_deleted', 'deleted'),
                      Index('ix_tasks_updated_at', 'updated_at'))

    id = Column(String(36), primary_key=True,
                default=lambda: str(uuid.uuid4()))
    type = Column(String(30), nullable=False)
    status = Column(String(30), nullable=False)
    owner = Column(String(255), nullable=False)
    expires_at = Column(DateTime, nullable=True)


class TaskInfo(BASE, models.ModelBase):
    """Represents task info in the datastore."""
    __tablename__ = 'task_info'

    task_id = Column(String(36), ForeignKey('tasks.id'), primary_key=True,
                     nullable=False)
    task = relationship(Task, backref=backref('info', uselist=False))

    # NOTE(nikhil): input and result are stored as text in the DB.
    # SQLAlchemy marshals the data to/from JSON using the custom type
    # JSONEncodedDict. It uses simplejson underneath.
    input = Column(JSONEncodedDict())
    result = Column(JSONEncodedDict())
    message = Column(Text)


def register_models(engine):
    """Create database tables for all models with the given engine."""
    models = (Image, ImageProperty, ImageMember)
    for model in models:
        model.metadata.create_all(engine)


def unregister_models(engine):
    """Drop database tables for all models with the given engine."""
    models = (Image, ImageProperty)
    for model in models:
        model.metadata.drop_all(engine)

==== glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/036_rename_metadef_schema_columns.py ====

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0; full header as in the
# first file above.
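The `JSONEncodedDict` TypeDecorator defined in `models.py` above is what lets `TaskInfo.input`/`result` and `ImageLocation.meta_data` hold dicts while the database stores text. A minimal standalone sketch of that bind/result round-trip, using the stdlib `json` module in place of `oslo_serialization.jsonutils` (a substitution made here only for portability):

```python
import json

# Standalone sketch of the round-trip performed by JSONEncodedDict in
# models.py; json stands in for oslo_serialization.jsonutils.


def process_bind_param(value):
    """Serialize a dict to JSON text before it is written to the DB."""
    if value is not None:
        value = json.dumps(value)
    return value


def process_result_value(value):
    """Deserialize JSON text read from the DB back into a dict."""
    if value is not None:
        value = json.loads(value)
    return value
```

Note that `None` passes through untouched in both directions, so a NULL column value never hits the JSON codec.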
from sqlalchemy.schema import MetaData
from sqlalchemy.schema import Table


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)

    metadef_objects = Table('metadef_objects', meta, autoload=True)
    metadef_objects.c.schema.alter(name='json_schema')

    metadef_properties = Table('metadef_properties', meta, autoload=True)
    metadef_properties.c.schema.alter(name='json_schema')

==== glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/001_add_images_table.py ====

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0; full header as in the
# first file above.
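Migration 036 above renames the `schema` column (a reserved word on some backends) to `json_schema` via sqlalchemy-migrate's `Column.alter()`. The equivalent rename can be sketched in plain SQL; this assumes SQLite 3.25 or newer, where `ALTER TABLE ... RENAME COLUMN` is available:

```python
import sqlite3

# Plain-SQL sketch of the 036 column rename. Assumes SQLite >= 3.25 for
# ALTER TABLE ... RENAME COLUMN support.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metadef_objects "
             "(id INTEGER PRIMARY KEY, schema TEXT NOT NULL)")
conn.execute("INSERT INTO metadef_objects (schema) VALUES ('{}')")

conn.execute("ALTER TABLE metadef_objects "
             "RENAME COLUMN schema TO json_schema")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
cols = [row[1] for row in conn.execute("PRAGMA table_info(metadef_objects)")]
```

A rename leaves the row data untouched, which is why 036 needs no companion data migration.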
from sqlalchemy.schema import (Column, MetaData, Table)

from glance.db.sqlalchemy.migrate_repo.schema import (
    Boolean, DateTime, Integer, String, Text, create_tables)  # noqa


def define_images_table(meta):
    images = Table('images',
                   meta,
                   Column('id', Integer(), primary_key=True, nullable=False),
                   Column('name', String(255)),
                   Column('type', String(30)),
                   Column('size', Integer()),
                   Column('status', String(30), nullable=False),
                   Column('is_public', Boolean(), nullable=False,
                          default=False, index=True),
                   Column('location', Text()),
                   Column('created_at', DateTime(), nullable=False),
                   Column('updated_at', DateTime()),
                   Column('deleted_at', DateTime()),
                   Column('deleted', Boolean(), nullable=False, default=False,
                          index=True),
                   mysql_engine='InnoDB',
                   mysql_charset='utf8',
                   extend_existing=True)

    return images


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    tables = [define_images_table(meta)]
    create_tables(tables)

==== glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/013_add_protected.py ====

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0; full header as in the
# first file above.
from sqlalchemy import MetaData, Table, Column, Boolean

meta = MetaData()

protected = Column('protected', Boolean, default=False)


def upgrade(migrate_engine):
    meta.bind = migrate_engine
    images = Table('images', meta, autoload=True)
    images.create_column(protected)

==== glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/014_add_image_tags_table.py ====

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0; full header as in the
# first file above.
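Migration 013 above adds a boolean `protected` column to an existing `images` table with sqlalchemy-migrate's `create_column()`. The same add-column step can be sketched in plain SQLite DDL; note that `default=False` in the migration is a Python-side default, whereas the `DEFAULT 0` below is a server-side default (a simplification for this sketch):

```python
import sqlite3

# Plain-SQL sketch of migration 013's add-column step: existing rows pick
# up the declared default, mirroring Column('protected', Boolean,
# default=False) on a table that already contains data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images "
             "(id VARCHAR(36) PRIMARY KEY, name VARCHAR(255))")
conn.execute("INSERT INTO images VALUES ('img-1', 'cirros')")

# SQLite requires a default when adding a NOT NULL column to a populated
# table, since every existing row must get a value.
conn.execute("ALTER TABLE images "
             "ADD COLUMN protected BOOLEAN NOT NULL DEFAULT 0")

protected = conn.execute(
    "SELECT protected FROM images WHERE id = 'img-1'").fetchone()[0]
```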
from sqlalchemy import schema

from glance.db.sqlalchemy.migrate_repo import schema as glance_schema


def define_image_tags_table(meta):
    # Load the images table so the foreign key can be set up properly
    schema.Table('images', meta, autoload=True)

    image_tags = schema.Table('image_tags',
                              meta,
                              schema.Column('id',
                                            glance_schema.Integer(),
                                            primary_key=True,
                                            nullable=False),
                              schema.Column('image_id',
                                            glance_schema.String(36),
                                            schema.ForeignKey('images.id'),
                                            nullable=False),
                              schema.Column('value',
                                            glance_schema.String(255),
                                            nullable=False),
                              schema.Column('created_at',
                                            glance_schema.DateTime(),
                                            nullable=False),
                              schema.Column('updated_at',
                                            glance_schema.DateTime()),
                              schema.Column('deleted_at',
                                            glance_schema.DateTime()),
                              schema.Column('deleted',
                                            glance_schema.Boolean(),
                                            nullable=False,
                                            default=False),
                              mysql_engine='InnoDB',
                              mysql_charset='utf8')

    schema.Index('ix_image_tags_image_id', image_tags.c.image_id)
    schema.Index('ix_image_tags_image_id_tag_value',
                 image_tags.c.image_id,
                 image_tags.c.value)

    return image_tags


def upgrade(migrate_engine):
    meta = schema.MetaData()
    meta.bind = migrate_engine
    tables = [define_image_tags_table(meta)]
    glance_schema.create_tables(tables)

==== glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/035_add_metadef_tables.py ====

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0; full header as in the
# first file above.
import sqlalchemy
from sqlalchemy.schema import (
    Column, ForeignKey, Index, MetaData, Table, UniqueConstraint)  # noqa

from glance.common import timeutils
from glance.db.sqlalchemy.migrate_repo.schema import (
    Boolean, DateTime, Integer, String, Text, create_tables)  # noqa


RESOURCE_TYPES = [u'OS::Glance::Image', u'OS::Cinder::Volume',
                  u'OS::Nova::Flavor', u'OS::Nova::Aggregate',
                  u'OS::Nova::Server']


def _get_metadef_resource_types_table(meta):
    return sqlalchemy.Table('metadef_resource_types', meta, autoload=True)


def _populate_resource_types(resource_types_table):
    now = timeutils.utcnow()
    for resource_type in RESOURCE_TYPES:
        values = {
            'name': resource_type,
            'protected': True,
            'created_at': now,
            'updated_at': now
        }
        resource_types_table.insert(values=values).execute()


def define_metadef_namespaces_table(meta):
    # NOTE: For DB2 if UniqueConstraint is used when creating a table
    # an index will automatically be created. So, for DB2 specify the
    # index name up front. If not DB2 then create the Index.
    _constr_kwargs = {}
    if meta.bind.name == 'ibm_db_sa':
        _constr_kwargs['name'] = 'ix_namespaces_namespace'

    namespaces = Table('metadef_namespaces',
                       meta,
                       Column('id', Integer(), primary_key=True,
                              nullable=False),
                       Column('namespace', String(80), nullable=False),
                       Column('display_name', String(80)),
                       Column('description', Text()),
                       Column('visibility', String(32)),
                       Column('protected', Boolean()),
                       Column('owner', String(255), nullable=False),
                       Column('created_at', DateTime(), nullable=False),
                       Column('updated_at', DateTime()),
                       UniqueConstraint('namespace', **_constr_kwargs),
                       mysql_engine='InnoDB',
                       mysql_charset='utf8',
                       extend_existing=True)

    if meta.bind.name != 'ibm_db_sa':
        Index('ix_namespaces_namespace', namespaces.c.namespace)

    return namespaces


def define_metadef_objects_table(meta):
    _constr_kwargs = {}
    if meta.bind.name == 'ibm_db_sa':
        _constr_kwargs['name'] = 'ix_objects_namespace_id_name'

    objects = Table('metadef_objects',
                    meta,
                    Column('id', Integer(), primary_key=True, nullable=False),
                    Column('namespace_id', Integer(),
                           ForeignKey('metadef_namespaces.id'),
                           nullable=False),
                    Column('name', String(80), nullable=False),
                    Column('description', Text()),
                    Column('required', Text()),
                    Column('schema', Text(), nullable=False),
                    Column('created_at', DateTime(), nullable=False),
                    Column('updated_at', DateTime()),
                    UniqueConstraint('namespace_id', 'name',
                                     **_constr_kwargs),
                    mysql_engine='InnoDB',
                    mysql_charset='utf8',
                    extend_existing=True)

    if meta.bind.name != 'ibm_db_sa':
        Index('ix_objects_namespace_id_name',
              objects.c.namespace_id,
              objects.c.name)

    return objects


def define_metadef_properties_table(meta):
    _constr_kwargs = {}
    if meta.bind.name == 'ibm_db_sa':
        _constr_kwargs['name'] = 'ix_metadef_properties_namespace_id_name'

    metadef_properties = Table(
        'metadef_properties',
        meta,
        Column('id', Integer(), primary_key=True, nullable=False),
        Column('namespace_id', Integer(),
               ForeignKey('metadef_namespaces.id'), nullable=False),
        Column('name', String(80), nullable=False),
        Column('schema', Text(), nullable=False),
        Column('created_at', DateTime(), nullable=False),
        Column('updated_at', DateTime()),
        UniqueConstraint('namespace_id', 'name', **_constr_kwargs),
        mysql_engine='InnoDB',
        mysql_charset='utf8',
        extend_existing=True)

    if meta.bind.name != 'ibm_db_sa':
        Index('ix_metadef_properties_namespace_id_name',
              metadef_properties.c.namespace_id,
              metadef_properties.c.name)

    return metadef_properties


def define_metadef_resource_types_table(meta):
    _constr_kwargs = {}
    if meta.bind.name == 'ibm_db_sa':
        _constr_kwargs['name'] = 'ix_metadef_resource_types_name'

    metadef_res_types = Table(
        'metadef_resource_types',
        meta,
        Column('id', Integer(), primary_key=True, nullable=False),
        Column('name', String(80), nullable=False),
        Column('protected', Boolean(), nullable=False, default=False),
        Column('created_at', DateTime(), nullable=False),
        Column('updated_at', DateTime()),
        UniqueConstraint('name', **_constr_kwargs),
        mysql_engine='InnoDB',
        mysql_charset='utf8',
        extend_existing=True)

    if meta.bind.name != 'ibm_db_sa':
        Index('ix_metadef_resource_types_name',
              metadef_res_types.c.name)

    return metadef_res_types


def define_metadef_namespace_resource_types_table(meta):
    _constr_kwargs = {}
    if meta.bind.name == 'ibm_db_sa':
        _constr_kwargs['name'] = 'ix_metadef_ns_res_types_res_type_id_ns_id'

    metadef_associations = Table(
        'metadef_namespace_resource_types',
        meta,
        Column('resource_type_id', Integer(),
               ForeignKey('metadef_resource_types.id'),
               primary_key=True, nullable=False),
        Column('namespace_id', Integer(),
               ForeignKey('metadef_namespaces.id'),
               primary_key=True, nullable=False),
        Column('properties_target', String(80)),
        Column('prefix', String(80)),
        Column('created_at', DateTime(), nullable=False),
        Column('updated_at', DateTime()),
        UniqueConstraint('resource_type_id', 'namespace_id',
                         **_constr_kwargs),
        mysql_engine='InnoDB',
        mysql_charset='utf8',
        extend_existing=True)

    if meta.bind.name != 'ibm_db_sa':
        Index('ix_metadef_ns_res_types_res_type_id_ns_id',
              metadef_associations.c.resource_type_id,
              metadef_associations.c.namespace_id)

    return metadef_associations


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    tables = [define_metadef_namespaces_table(meta),
              define_metadef_objects_table(meta),
              define_metadef_properties_table(meta),
              define_metadef_resource_types_table(meta),
              define_metadef_namespace_resource_types_table(meta)]
    create_tables(tables)

    resource_types_table = _get_metadef_resource_types_table(meta)
    _populate_resource_types(resource_types_table)

==== glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/045_sqlite_upgrade.sql ====

CREATE TEMPORARY TABLE images_backup (
    id VARCHAR(36) NOT NULL,
    name VARCHAR(255),
    size INTEGER,
    status VARCHAR(30) NOT NULL,
    is_public BOOLEAN NOT NULL,
    created_at DATETIME NOT NULL,
    updated_at DATETIME,
    deleted_at DATETIME,
    deleted BOOLEAN NOT NULL,
    disk_format VARCHAR(20),
    container_format VARCHAR(20),
    checksum VARCHAR(32),
    owner VARCHAR(255),
    min_disk INTEGER NOT NULL,
    min_ram INTEGER NOT NULL,
    protected BOOLEAN DEFAULT 0 NOT NULL,
    virtual_size INTEGER,
    PRIMARY KEY (id),
    CHECK (is_public IN (0, 1)),
    CHECK (deleted IN (0, 1)),
    CHECK (protected IN (0, 1))
);

INSERT INTO images_backup
    SELECT id, name, size, status, is_public, created_at, updated_at,
           deleted_at, deleted, disk_format, container_format, checksum,
           owner, min_disk, min_ram, protected, virtual_size
    FROM images;

DROP TABLE images;

CREATE TABLE images (
    id VARCHAR(36) NOT NULL,
    name VARCHAR(255),
    size INTEGER,
    status VARCHAR(30) NOT NULL,
    created_at DATETIME NOT NULL,
    updated_at DATETIME,
    deleted_at DATETIME,
    deleted BOOLEAN NOT NULL,
    disk_format VARCHAR(20),
    container_format VARCHAR(20),
    checksum VARCHAR(32),
    owner VARCHAR(255),
    min_disk INTEGER NOT NULL,
    min_ram INTEGER NOT NULL,
    protected BOOLEAN DEFAULT 0 NOT NULL,
    virtual_size INTEGER,
    visibility VARCHAR(9) DEFAULT 'shared' NOT NULL,
    PRIMARY KEY (id),
    CHECK (deleted IN (0, 1)),
    CHECK (protected IN (0, 1)),
    CONSTRAINT image_visibility CHECK (visibility IN ('private', 'public',
                                                      'shared', 'community'))
);

CREATE INDEX checksum_image_idx ON images (checksum);
CREATE INDEX visibility_image_idx ON images (visibility);
CREATE INDEX ix_images_deleted ON images (deleted);
CREATE INDEX owner_image_idx ON images (owner);
CREATE INDEX created_at_image_idx ON images (created_at);
CREATE INDEX updated_at_image_idx ON images (updated_at);

-- Copy over all the 'public' rows
INSERT INTO images (
    id, name, size, status, created_at, updated_at, deleted_at, deleted,
    disk_format, container_format, checksum, owner, min_disk, min_ram,
    protected, virtual_size
)
SELECT id, name, size, status, created_at, updated_at, deleted_at, deleted,
       disk_format, container_format, checksum, owner, min_disk, min_ram,
       protected, virtual_size
FROM images_backup
WHERE is_public=1;

UPDATE images SET visibility='public';

-- Now copy over the 'private' rows
INSERT INTO images (
    id, name, size, status, created_at, updated_at, deleted_at, deleted,
    disk_format, container_format, checksum, owner, min_disk, min_ram,
    protected, virtual_size
)
SELECT id, name, size, status, created_at, updated_at, deleted_at, deleted,
       disk_format, container_format, checksum, owner, min_disk, min_ram,
       protected, virtual_size
FROM images_backup
WHERE is_public=0;

UPDATE images SET visibility='private' WHERE visibility='shared';

UPDATE images SET visibility='shared'
WHERE visibility='private'
AND id IN (SELECT DISTINCT image_id FROM image_members WHERE deleted != 1);

DROP TABLE images_backup;

==== glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/026_add_location_storage_information.py ====

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sqlalchemy

from glance.db.sqlalchemy.migrate_repo import schema


def upgrade(migrate_engine):
    meta = sqlalchemy.schema.MetaData()
    meta.bind = migrate_engine

    image_locations_table = sqlalchemy.Table('image_locations', meta,
                                             autoload=True)

    meta_data = sqlalchemy.Column('meta_data', schema.PickleType(),
                                  default={})
    meta_data.create(image_locations_table)

==== glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/007_add_owner.py ====

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0; full header as in the
# first file above.

from sqlalchemy import *  # noqa

from glance.db.sqlalchemy.migrate_repo.schema import (
    Boolean, DateTime, BigInteger, Integer, String, Text)  # noqa


def get_images_table(meta):
    """
    Returns the Table object for the images table that
    corresponds to the images table definition of this version.
    """
    images = Table('images',
                   meta,
                   Column('id', Integer(), primary_key=True, nullable=False),
                   Column('name', String(255)),
                   Column('disk_format', String(20)),
                   Column('container_format', String(20)),
                   Column('size', BigInteger()),
                   Column('status', String(30), nullable=False),
                   Column('is_public', Boolean(), nullable=False,
                          default=False, index=True),
                   Column('location', Text()),
                   Column('created_at', DateTime(), nullable=False),
                   Column('updated_at', DateTime()),
                   Column('deleted_at', DateTime()),
                   Column('deleted', Boolean(), nullable=False, default=False,
                          index=True),
                   Column('checksum', String(32)),
                   Column('owner', String(255)),
                   mysql_engine='InnoDB',
                   extend_existing=True)

    return images


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    images = get_images_table(meta)

    owner = Column('owner', String(255))
    owner.create(images)

==== glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/042_add_changes_to_reinstall_unique_metadef_constraints.py ====

# Licensed under the Apache License, Version 2.0; full header as in the
# first file above.
import migrate
import sqlalchemy
from sqlalchemy import (func, Index, inspect, orm, String, Table, type_coerce)


# The _upgrade...get_duplicate() def's are separate functions to
# accommodate sqlite, which locks the database against updates as long as
# db_recs is active.
# In addition, sqlite doesn't support the function 'concat' between
# Strings and Integers, so the updating of records is also adjusted.
def _upgrade_metadef_namespaces_get_duplicates(migrate_engine):
    meta = sqlalchemy.schema.MetaData(migrate_engine)
    metadef_namespaces = Table('metadef_namespaces', meta, autoload=True)

    session = orm.sessionmaker(bind=migrate_engine)()
    db_recs = (session.query(func.min(metadef_namespaces.c.id),
                             metadef_namespaces.c.namespace)
               .group_by(metadef_namespaces.c.namespace)
               .having(func.count(metadef_namespaces.c.namespace) > 1))
    dbrecs = []
    for row in db_recs:
        dbrecs.append({'id': row[0], 'namespace': row[1]})
    session.close()
    return dbrecs


def _upgrade_metadef_objects_get_duplicates(migrate_engine):
    meta = sqlalchemy.schema.MetaData(migrate_engine)
    metadef_objects = Table('metadef_objects', meta, autoload=True)

    session = orm.sessionmaker(bind=migrate_engine)()
    db_recs = (session.query(func.min(metadef_objects.c.id),
                             metadef_objects.c.namespace_id,
                             metadef_objects.c.name)
               .group_by(metadef_objects.c.namespace_id,
                         metadef_objects.c.name)
               .having(func.count() > 1))
    dbrecs = []
    for row in db_recs:
        dbrecs.append({'id': row[0], 'namespace_id': row[1], 'name': row[2]})
    session.close()
    return dbrecs


def _upgrade_metadef_properties_get_duplicates(migrate_engine):
    meta = sqlalchemy.schema.MetaData(migrate_engine)
    metadef_properties = Table('metadef_properties', meta, autoload=True)

    session = orm.sessionmaker(bind=migrate_engine)()
    db_recs = (session.query(func.min(metadef_properties.c.id),
                             metadef_properties.c.namespace_id,
                             metadef_properties.c.name)
               .group_by(metadef_properties.c.namespace_id,
                         metadef_properties.c.name)
               .having(func.count() > 1))
    dbrecs = []
    for row in db_recs:
        dbrecs.append({'id': row[0], 'namespace_id': row[1], 'name': row[2]})
    session.close()
    return dbrecs


def _upgrade_metadef_tags_get_duplicates(migrate_engine):
    meta = sqlalchemy.schema.MetaData(migrate_engine)
    metadef_tags = Table('metadef_tags', meta, autoload=True)

    session = orm.sessionmaker(bind=migrate_engine)()
    db_recs = (session.query(func.min(metadef_tags.c.id),
                             metadef_tags.c.namespace_id,
                             metadef_tags.c.name)
               .group_by(metadef_tags.c.namespace_id,
                         metadef_tags.c.name)
               .having(func.count() > 1))
    dbrecs = []
    for row in db_recs:
        dbrecs.append({'id': row[0], 'namespace_id': row[1], 'name': row[2]})
    session.close()
    return dbrecs


def _upgrade_metadef_resource_types_get_duplicates(migrate_engine):
    meta = sqlalchemy.schema.MetaData(migrate_engine)
    metadef_resource_types = Table('metadef_resource_types', meta,
                                   autoload=True)

    session = orm.sessionmaker(bind=migrate_engine)()
    db_recs = (session.query(func.min(metadef_resource_types.c.id),
                             metadef_resource_types.c.name)
               .group_by(metadef_resource_types.c.name)
               .having(func.count(metadef_resource_types.c.name) > 1))
    dbrecs = []
    for row in db_recs:
        dbrecs.append({'id': row[0], 'name': row[1]})
    session.close()
    return dbrecs


def _upgrade_data(migrate_engine):
    # Rename duplicates to be unique.
meta = sqlalchemy.schema.MetaData(migrate_engine) # ORM tables metadef_namespaces = Table('metadef_namespaces', meta, autoload=True) metadef_objects = Table('metadef_objects', meta, autoload=True) metadef_properties = Table('metadef_properties', meta, autoload=True) metadef_tags = Table('metadef_tags', meta, autoload=True) metadef_resource_types = Table('metadef_resource_types', meta, autoload=True) # Fix duplicate metadef_namespaces # Update the non-first record(s) with an unique namespace value dbrecs = _upgrade_metadef_namespaces_get_duplicates(migrate_engine) for row in dbrecs: s = (metadef_namespaces.update() .where(metadef_namespaces.c.id > row['id']) .where(metadef_namespaces.c.namespace == row['namespace']) ) if migrate_engine.name == 'sqlite': s = (s.values(namespace=(row['namespace'] + '-DUPL-' + type_coerce(metadef_namespaces.c.id, String)), display_name=(row['namespace'] + '-DUPL-' + type_coerce(metadef_namespaces.c.id, String)))) else: s = s.values(namespace=func.concat(row['namespace'], '-DUPL-', metadef_namespaces.c.id), display_name=func.concat(row['namespace'], '-DUPL-', metadef_namespaces.c.id)) s.execute() # Fix duplicate metadef_objects dbrecs = _upgrade_metadef_objects_get_duplicates(migrate_engine) for row in dbrecs: s = (metadef_objects.update() .where(metadef_objects.c.id > row['id']) .where(metadef_objects.c.namespace_id == row['namespace_id']) .where(metadef_objects.c.name == str(row['name'])) ) if migrate_engine.name == 'sqlite': s = (s.values(name=(row['name'] + '-DUPL-' + type_coerce(metadef_objects.c.id, String)))) else: s = s.values(name=func.concat(row['name'], '-DUPL-', metadef_objects.c.id)) s.execute() # Fix duplicate metadef_properties dbrecs = _upgrade_metadef_properties_get_duplicates(migrate_engine) for row in dbrecs: s = (metadef_properties.update() .where(metadef_properties.c.id > row['id']) .where(metadef_properties.c.namespace_id == row['namespace_id']) .where(metadef_properties.c.name == str(row['name'])) ) if 
migrate_engine.name == 'sqlite': s = (s.values(name=(row['name'] + '-DUPL-' + type_coerce(metadef_properties.c.id, String))) ) else: s = s.values(name=func.concat(row['name'], '-DUPL-', metadef_properties.c.id)) s.execute() # Fix duplicate metadef_tags dbrecs = _upgrade_metadef_tags_get_duplicates(migrate_engine) for row in dbrecs: s = (metadef_tags.update() .where(metadef_tags.c.id > row['id']) .where(metadef_tags.c.namespace_id == row['namespace_id']) .where(metadef_tags.c.name == str(row['name'])) ) if migrate_engine.name == 'sqlite': s = (s.values(name=(row['name'] + '-DUPL-' + type_coerce(metadef_tags.c.id, String))) ) else: s = s.values(name=func.concat(row['name'], '-DUPL-', metadef_tags.c.id)) s.execute() # Fix duplicate metadef_resource_types dbrecs = _upgrade_metadef_resource_types_get_duplicates(migrate_engine) for row in dbrecs: s = (metadef_resource_types.update() .where(metadef_resource_types.c.id > row['id']) .where(metadef_resource_types.c.name == str(row['name'])) ) if migrate_engine.name == 'sqlite': s = (s.values(name=(row['name'] + '-DUPL-' + type_coerce(metadef_resource_types.c.id, String))) ) else: s = s.values(name=func.concat(row['name'], '-DUPL-', metadef_resource_types.c.id)) s.execute() def _update_sqlite_namespace_id_name_constraint(metadef, metadef_namespaces, new_constraint_name, new_fk_name): migrate.UniqueConstraint( metadef.c.namespace_id, metadef.c.name).drop() migrate.UniqueConstraint( metadef.c.namespace_id, metadef.c.name, name=new_constraint_name).create() migrate.ForeignKeyConstraint( [metadef.c.namespace_id], [metadef_namespaces.c.id], name=new_fk_name).create() def _drop_unique_constraint_if_exists(inspector, table_name, metadef): name = _get_unique_constraint_name(inspector, table_name, ['namespace_id', 'name']) if name: migrate.UniqueConstraint(metadef.c.namespace_id, metadef.c.name, name=name).drop() def _drop_index_with_fk_constraint(metadef, metadef_namespaces, index_name, fk_old_name, fk_new_name): fkc = 
migrate.ForeignKeyConstraint([metadef.c.namespace_id], [metadef_namespaces.c.id], name=fk_old_name) fkc.drop() if index_name: Index(index_name, metadef.c.namespace_id).drop() # Rename the fk for consistency across all db's fkc = migrate.ForeignKeyConstraint([metadef.c.namespace_id], [metadef_namespaces.c.id], name=fk_new_name) fkc.create() def _get_unique_constraint_name(inspector, table_name, columns): constraints = inspector.get_unique_constraints(table_name) for constraint in constraints: if set(constraint['column_names']) == set(columns): return constraint['name'] return None def _get_fk_constraint_name(inspector, table_name, columns): constraints = inspector.get_foreign_keys(table_name) for constraint in constraints: if set(constraint['constrained_columns']) == set(columns): return constraint['name'] return None def upgrade(migrate_engine): _upgrade_data(migrate_engine) meta = sqlalchemy.MetaData() meta.bind = migrate_engine inspector = inspect(migrate_engine) # ORM tables metadef_namespaces = Table('metadef_namespaces', meta, autoload=True) metadef_objects = Table('metadef_objects', meta, autoload=True) metadef_properties = Table('metadef_properties', meta, autoload=True) metadef_tags = Table('metadef_tags', meta, autoload=True) metadef_ns_res_types = Table('metadef_namespace_resource_types', meta, autoload=True) metadef_resource_types = Table('metadef_resource_types', meta, autoload=True) # Drop the bad, non-unique indices. if migrate_engine.name == 'sqlite': # For sqlite: # Only after the unique constraints have been added should the indices # be dropped. If done the other way, sqlite complains during # constraint adding/dropping that the index does/does not exist. # Note: The _get_unique_constraint_name, _get_fk_constraint_name # return None for constraints that do in fact exist. Also, # get_index_names returns names, but, the names can not be used with # the Index(name, blah).drop() command, so, putting sqlite into # it's own section. 
# Objects _update_sqlite_namespace_id_name_constraint( metadef_objects, metadef_namespaces, 'uq_metadef_objects_namespace_id_name', 'metadef_objects_fk_1') # Properties _update_sqlite_namespace_id_name_constraint( metadef_properties, metadef_namespaces, 'uq_metadef_properties_namespace_id_name', 'metadef_properties_fk_1') # Tags _update_sqlite_namespace_id_name_constraint( metadef_tags, metadef_namespaces, 'uq_metadef_tags_namespace_id_name', 'metadef_tags_fk_1') # Namespaces migrate.UniqueConstraint( metadef_namespaces.c.namespace).drop() migrate.UniqueConstraint( metadef_namespaces.c.namespace, name='uq_metadef_namespaces_namespace').create() # ResourceTypes migrate.UniqueConstraint( metadef_resource_types.c.name).drop() migrate.UniqueConstraint( metadef_resource_types.c.name, name='uq_metadef_resource_types_name').create() # Now drop the bad indices Index('ix_metadef_objects_namespace_id', metadef_objects.c.namespace_id, metadef_objects.c.name).drop() Index('ix_metadef_properties_namespace_id', metadef_properties.c.namespace_id, metadef_properties.c.name).drop() Index('ix_metadef_tags_namespace_id', metadef_tags.c.namespace_id, metadef_tags.c.name).drop() else: # First drop the bad non-unique indices. # To do that (for mysql), must first drop foreign key constraints # BY NAME and then drop the bad indices. # Finally, re-create the foreign key constraints with a consistent # name. # DB2 still has unique constraints, but, they are badly named. # Drop them, they will be recreated at the final step. 
name = _get_unique_constraint_name(inspector, 'metadef_namespaces', ['namespace']) if name: migrate.UniqueConstraint(metadef_namespaces.c.namespace, name=name).drop() _drop_unique_constraint_if_exists(inspector, 'metadef_objects', metadef_objects) _drop_unique_constraint_if_exists(inspector, 'metadef_properties', metadef_properties) _drop_unique_constraint_if_exists(inspector, 'metadef_tags', metadef_tags) name = _get_unique_constraint_name(inspector, 'metadef_resource_types', ['name']) if name: migrate.UniqueConstraint(metadef_resource_types.c.name, name=name).drop() # Objects _drop_index_with_fk_constraint( metadef_objects, metadef_namespaces, 'ix_metadef_objects_namespace_id', _get_fk_constraint_name( inspector, 'metadef_objects', ['namespace_id']), 'metadef_objects_fk_1') # Properties _drop_index_with_fk_constraint( metadef_properties, metadef_namespaces, 'ix_metadef_properties_namespace_id', _get_fk_constraint_name( inspector, 'metadef_properties', ['namespace_id']), 'metadef_properties_fk_1') # Tags _drop_index_with_fk_constraint( metadef_tags, metadef_namespaces, 'ix_metadef_tags_namespace_id', _get_fk_constraint_name( inspector, 'metadef_tags', ['namespace_id']), 'metadef_tags_fk_1') # Drop Others without fk constraints. Index('ix_metadef_namespaces_namespace', metadef_namespaces.c.namespace).drop() # The next two don't exist in ibm_db_sa, but, drop them everywhere else. 
if migrate_engine.name != 'ibm_db_sa': Index('ix_metadef_resource_types_name', metadef_resource_types.c.name).drop() # Not needed due to primary key on same columns Index('ix_metadef_ns_res_types_res_type_id_ns_id', metadef_ns_res_types.c.resource_type_id, metadef_ns_res_types.c.namespace_id).drop() # Now, add back the dropped indexes as unique constraints if migrate_engine.name != 'sqlite': # Namespaces migrate.UniqueConstraint( metadef_namespaces.c.namespace, name='uq_metadef_namespaces_namespace').create() # Objects migrate.UniqueConstraint( metadef_objects.c.namespace_id, metadef_objects.c.name, name='uq_metadef_objects_namespace_id_name').create() # Properties migrate.UniqueConstraint( metadef_properties.c.namespace_id, metadef_properties.c.name, name='uq_metadef_properties_namespace_id_name').create() # Tags migrate.UniqueConstraint( metadef_tags.c.namespace_id, metadef_tags.c.name, name='uq_metadef_tags_namespace_id_name').create() # Resource Types migrate.UniqueConstraint( metadef_resource_types.c.name, name='uq_metadef_resource_types_name').create() glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/032_add_task_info_table.py0000666000175100017510000000452513245511421030260 0ustar zuulzuul00000000000000# Copyright 2013 Rackspace # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from sqlalchemy.schema import (Column, ForeignKey, MetaData, Table) from glance.db.sqlalchemy.migrate_repo.schema import (String, Text, create_tables) # noqa TASKS_MIGRATE_COLUMNS = ['input', 'message', 'result'] def define_task_info_table(meta): Table('tasks', meta, autoload=True) # NOTE(nikhil): input and result are stored as text in the DB. # SQLAlchemy marshals the data to/from JSON using custom type # JSONEncodedDict. It uses simplejson underneath. task_info = Table('task_info', meta, Column('task_id', String(36), ForeignKey('tasks.id'), primary_key=True, nullable=False), Column('input', Text()), Column('result', Text()), Column('message', Text()), mysql_engine='InnoDB', mysql_charset='utf8') return task_info def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine tables = [define_task_info_table(meta)] create_tables(tables) tasks_table = Table('tasks', meta, autoload=True) task_info_table = Table('task_info', meta, autoload=True) tasks = tasks_table.select().execute().fetchall() for task in tasks: values = { 'task_id': task.id, 'input': task.input, 'result': task.result, 'message': task.message, } task_info_table.insert(values=values).execute() for col_name in TASKS_MIGRATE_COLUMNS: tasks_table.columns[col_name].drop() glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/023_placeholder.py0000666000175100017510000000130213245511421026574 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. def upgrade(migrate_engine): pass glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/033_add_location_status.py0000666000175100017510000000324213245511421030343 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six import sqlalchemy from glance.db.sqlalchemy.migrate_repo import schema def upgrade(migrate_engine): meta = sqlalchemy.schema.MetaData() meta.bind = migrate_engine images_table = sqlalchemy.Table('images', meta, autoload=True) image_locations_table = sqlalchemy.Table('image_locations', meta, autoload=True) # Create 'status' column for image_locations table status = sqlalchemy.Column('status', schema.String(30), server_default='active', nullable=False) status.create(image_locations_table) # Set 'status' column initial value for image_locations table mapping = {'active': 'active', 'pending_delete': 'pending_delete', 'deleted': 'deleted', 'killed': 'deleted'} for src, dst in six.iteritems(mapping): subq = sqlalchemy.sql.select([images_table.c.id]).where( images_table.c.status == src) image_locations_table.update(values={'status': dst}).where( image_locations_table.c.image_id.in_(subq)).execute() glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/043_add_image_created_updated_idx.py0000666000175100017510000000203513245511421032253 0ustar zuulzuul00000000000000# Licensed under the Apache License, 
Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import MetaData, Table, Index CREATED_AT_INDEX = 'created_at_image_idx' UPDATED_AT_INDEX = 'updated_at_image_idx' def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine images = Table('images', meta, autoload=True) created_index = Index(CREATED_AT_INDEX, images.c.created_at) created_index.create(migrate_engine) updated_index = Index(UPDATED_AT_INDEX, images.c.updated_at) updated_index.create(migrate_engine) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/037_sqlite_upgrade.sql0000666000175100017510000001133213245511421027502 0ustar zuulzuul00000000000000UPDATE images SET protected = 0 WHERE protected is NULL; UPDATE image_members SET status = 'pending' WHERE status is NULL; CREATE TEMPORARY TABLE images_backup ( id VARCHAR(36) NOT NULL, name VARCHAR(255), size INTEGER, status VARCHAR(30) NOT NULL, is_public BOOLEAN NOT NULL, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, disk_format VARCHAR(20), container_format VARCHAR(20), checksum VARCHAR(32), owner VARCHAR(255), min_disk INTEGER, min_ram INTEGER, protected BOOLEAN NOT NULL DEFAULT 0, virtual_size INTEGER, PRIMARY KEY (id), CHECK (is_public IN (0, 1)), CHECK (deleted IN (0, 1)) ); INSERT INTO images_backup SELECT id, name, size, status, is_public, created_at, updated_at, deleted_at, deleted, disk_format, container_format, checksum, owner, min_disk, min_ram, protected, virtual_size FROM images; DROP TABLE images; CREATE TABLE 
images ( id VARCHAR(36) NOT NULL, name VARCHAR(255), size INTEGER, status VARCHAR(30) NOT NULL, is_public BOOLEAN NOT NULL, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, disk_format VARCHAR(20), container_format VARCHAR(20), checksum VARCHAR(32), owner VARCHAR(255), min_disk INTEGER NOT NULL, min_ram INTEGER NOT NULL, protected BOOLEAN NOT NULL DEFAULT 0, virtual_size INTEGER, PRIMARY KEY (id), CHECK (is_public IN (0, 1)), CHECK (deleted IN (0, 1)) ); CREATE INDEX ix_images_deleted ON images (deleted); CREATE INDEX ix_images_is_public ON images (is_public); CREATE INDEX owner_image_idx ON images (owner); CREATE INDEX checksum_image_idx ON images (checksum); INSERT INTO images SELECT id, name, size, status, is_public, created_at, updated_at, deleted_at, deleted, disk_format, container_format, checksum, owner, min_disk, min_ram, protected, virtual_size FROM images_backup; DROP TABLE images_backup; CREATE TEMPORARY TABLE image_members_backup ( id INTEGER NOT NULL, image_id VARCHAR(36) NOT NULL, member VARCHAR(255) NOT NULL, can_share BOOLEAN NOT NULL, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, status VARCHAR(20) NOT NULL DEFAULT 'pending', PRIMARY KEY (id), UNIQUE (image_id, member), CHECK (can_share IN (0, 1)), CHECK (deleted IN (0, 1)), FOREIGN KEY(image_id) REFERENCES images (id) ); INSERT INTO image_members_backup SELECT id, image_id, member, can_share, created_at, updated_at, deleted_at, deleted, status FROM image_members; DROP TABLE image_members; CREATE TABLE image_members ( id INTEGER NOT NULL, image_id VARCHAR(36) NOT NULL, member VARCHAR(255) NOT NULL, can_share BOOLEAN NOT NULL, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, status VARCHAR(20) NOT NULL DEFAULT 'pending', PRIMARY KEY (id), UNIQUE (image_id, member), CHECK (can_share IN (0, 1)), CHECK (deleted IN (0, 1)), FOREIGN KEY(image_id) REFERENCES 
images (id), CONSTRAINT image_members_image_id_member_deleted_at_key UNIQUE (image_id, member, deleted_at) ); CREATE INDEX ix_image_members_deleted ON image_members (deleted); CREATE INDEX ix_image_members_image_id ON image_members (image_id); CREATE INDEX ix_image_members_image_id_member ON image_members (image_id, member); INSERT INTO image_members SELECT id, image_id, member, can_share, created_at, updated_at, deleted_at, deleted, status FROM image_members_backup; DROP TABLE image_members_backup; CREATE TEMPORARY TABLE image_properties_backup ( id INTEGER NOT NULL, image_id VARCHAR(36) NOT NULL, name VARCHAR(255) NOT NULL, value TEXT, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, PRIMARY KEY (id) ); INSERT INTO image_properties_backup SELECT id, image_id, name, value, created_at, updated_at, deleted_at, deleted FROM image_properties; DROP TABLE image_properties; CREATE TABLE image_properties ( id INTEGER NOT NULL, image_id VARCHAR(36) NOT NULL, name VARCHAR(255) NOT NULL, value TEXT, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, PRIMARY KEY (id), CHECK (deleted IN (0, 1)), FOREIGN KEY(image_id) REFERENCES images (id), CONSTRAINT ix_image_properties_image_id_name UNIQUE (image_id, name) ); CREATE INDEX ix_image_properties_deleted ON image_properties (deleted); CREATE INDEX ix_image_properties_image_id ON image_properties (image_id); INSERT INTO image_properties (id, image_id, name, value, created_at, updated_at, deleted_at, deleted) SELECT id, image_id, name, value, created_at, updated_at, deleted_at, deleted FROM image_properties_backup; DROP TABLE image_properties_backup; glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/002_add_image_properties_table.py0000666000175100017510000000631413245511421031634 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy.schema import ( Column, ForeignKey, Index, MetaData, Table, UniqueConstraint) from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, Integer, String, Text, create_tables, from_migration_import) # noqa def define_image_properties_table(meta): (define_images_table,) = from_migration_import( '001_add_images_table', ['define_images_table']) images = define_images_table(meta) # noqa # NOTE(dperaza) DB2: specify the UniqueConstraint option when creating the # table will cause an index being created to specify the index # name and skip the step of creating another index with the same columns. # The index name is needed so it can be dropped and re-created later on. 
constr_kwargs = {} if meta.bind.name == 'ibm_db_sa': constr_kwargs['name'] = 'ix_image_properties_image_id_key' image_properties = Table('image_properties', meta, Column('id', Integer(), primary_key=True, nullable=False), Column('image_id', Integer(), ForeignKey('images.id'), nullable=False, index=True), Column('key', String(255), nullable=False), Column('value', Text()), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime()), Column('deleted_at', DateTime()), Column('deleted', Boolean(), nullable=False, default=False, index=True), UniqueConstraint('image_id', 'key', **constr_kwargs), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) if meta.bind.name != 'ibm_db_sa': Index('ix_image_properties_image_id_key', image_properties.c.image_id, image_properties.c.key) return image_properties def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine tables = [define_image_properties_table(meta)] create_tables(tables) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/018_add_image_locations_table.py0000666000175100017510000000411413245511421031436 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
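The metadef migration earlier in this section finds duplicate rows with a GROUP BY / HAVING COUNT query, keeps the row with the smallest id, and renames the rest by appending ``-DUPL-<id>``. The following standalone sketch (not Glance code; it uses an in-memory sqlite database and made-up namespace values) illustrates that pattern, including the sqlite-specific use of ``||`` in place of ``concat()``:

```python
# Standalone sketch of the duplicate-detection/rename pattern used by the
# metadef migration above: keep the lowest-id row, rename the others.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE metadef_namespaces '
             '(id INTEGER PRIMARY KEY, namespace TEXT)')
conn.executemany('INSERT INTO metadef_namespaces (namespace) VALUES (?)',
                 [('OS::Compute',), ('OS::Compute',), ('OS::Glance',)])

# GROUP BY / HAVING mirrors _upgrade_metadef_namespaces_get_duplicates()
dupes = conn.execute(
    'SELECT MIN(id), namespace FROM metadef_namespaces '
    'GROUP BY namespace HAVING COUNT(namespace) > 1').fetchall()

for min_id, namespace in dupes:
    # sqlite has no concat(); use || just as the migration special-cases it
    conn.execute(
        "UPDATE metadef_namespaces "
        "SET namespace = namespace || '-DUPL-' || id "
        "WHERE id > ? AND namespace = ?", (min_id, namespace))

print(sorted(row[1] for row in
             conn.execute('SELECT id, namespace FROM metadef_namespaces')))
# -> ['OS::Compute', 'OS::Compute-DUPL-2', 'OS::Glance']
```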
import sqlalchemy

from glance.db.sqlalchemy.migrate_repo import schema


def upgrade(migrate_engine):
    meta = sqlalchemy.schema.MetaData(migrate_engine)

    # NOTE(bcwaldon): load the images table for the ForeignKey below
    sqlalchemy.Table('images', meta, autoload=True)

    image_locations_table = sqlalchemy.Table(
        'image_locations', meta,
        sqlalchemy.Column('id',
                          schema.Integer(),
                          primary_key=True,
                          nullable=False),
        sqlalchemy.Column('image_id',
                          schema.String(36),
                          sqlalchemy.ForeignKey('images.id'),
                          nullable=False,
                          index=True),
        sqlalchemy.Column('value',
                          schema.Text(),
                          nullable=False),
        sqlalchemy.Column('created_at',
                          schema.DateTime(),
                          nullable=False),
        sqlalchemy.Column('updated_at',
                          schema.DateTime()),
        sqlalchemy.Column('deleted_at',
                          schema.DateTime()),
        sqlalchemy.Column('deleted',
                          schema.Boolean(),
                          nullable=False,
                          default=False,
                          index=True),
        mysql_engine='InnoDB',
        mysql_charset='utf8',
    )

    schema.create_tables([image_locations_table])
glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/017_quote_encrypted_swift_credentials.py0000666000175100017510000002075113245511421033331 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
This migration handles migrating encrypted image location values from the
unquoted form to the quoted form.

If 'metadata_encryption_key' is specified in the config then this migration
performs the following steps for every entry in the images table:

1. Decrypt the location value with the metadata_encryption_key
2. Change the value to its quoted form
3. Encrypt the new value with the metadata_encryption_key
4. Insert the new value back into the row

Fixes bug #1081043
"""
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
import six.moves.urllib.parse as urlparse
import sqlalchemy

from glance.common import crypt
from glance.common import exception
from glance.i18n import _, _LE, _LI, _LW

LOG = logging.getLogger(__name__)

CONF = cfg.CONF
CONF.import_opt('metadata_encryption_key', 'glance.common.config')


def upgrade(migrate_engine):
    migrate_location_credentials(migrate_engine, to_quoted=True)


def migrate_location_credentials(migrate_engine, to_quoted):
    """
    Migrate location credentials for encrypted swift uri's between the
    quoted and unquoted forms.

    :param migrate_engine: The configured db engine
    :param to_quoted: If True, migrate location credentials from unquoted
                      to quoted form. If False, do the reverse.
    """
    if not CONF.metadata_encryption_key:
        msg = _LI("'metadata_encryption_key' was not specified in the config"
                  " file or a config file was not specified. This means that"
                  " this migration is a NOOP.")
        LOG.info(msg)
        return

    meta = sqlalchemy.schema.MetaData()
    meta.bind = migrate_engine

    images_table = sqlalchemy.Table('images', meta, autoload=True)

    images = list(images_table.select().execute())

    for image in images:
        try:
            fixed_uri = fix_uri_credentials(image['location'], to_quoted)
            images_table.update().where(
                images_table.c.id == image['id']).values(
                    location=fixed_uri).execute()
        except exception.Invalid:
            msg = _LW("Failed to decrypt location value for image"
                      " %(image_id)s") % {'image_id': image['id']}
            LOG.warn(msg)
        except exception.BadStoreUri as e:
            reason = encodeutils.exception_to_unicode(e)
            msg = _LE("Invalid store uri for image: %(image_id)s. "
                      "Details: %(reason)s") % {'image_id': image.id,
                                                'reason': reason}
            LOG.exception(msg)
            raise


def decrypt_location(uri):
    return crypt.urlsafe_decrypt(CONF.metadata_encryption_key, uri)


def encrypt_location(uri):
    return crypt.urlsafe_encrypt(CONF.metadata_encryption_key, uri, 64)


def fix_uri_credentials(uri, to_quoted):
    """
    Fix the given uri's embedded credentials by round-tripping with
    StoreLocation.

    If to_quoted is True, the uri is assumed to have credentials that
    have not been quoted, and the resulting uri will contain quoted
    credentials.

    If to_quoted is False, the uri is assumed to have credentials that
    have been quoted, and the resulting uri will contain credentials
    that have not been quoted.
    """
    if not uri:
        return
    try:
        decrypted_uri = decrypt_location(uri)
    # NOTE(ameade): If a uri is not encrypted or is incorrectly encoded
    # then we raise an exception.
    except (TypeError, ValueError) as e:
        raise exception.Invalid(str(e))

    return legacy_parse_uri(decrypted_uri, to_quoted)


def legacy_parse_uri(uri, to_quote):
    """
    Parse URLs. This method fixes an issue where credentials specified
    in the URL are interpreted differently in Python 2.6.1+ than prior
    versions of Python. It also deals with the peculiarity that new-style
    Swift URIs have where a username can contain a ':', like so:

        swift://account:user:pass@authurl.com/container/obj

    If to_quote is True, the uri is assumed to have credentials that
    have not been quoted, and the resulting uri will contain quoted
    credentials.

    If to_quote is False, the uri is assumed to have credentials that
    have been quoted, and the resulting uri will contain credentials
    that have not been quoted.
    """
    # Make sure that URIs that contain multiple schemes, such as:
    # swift://user:pass@http://authurl.com/v1/container/obj
    # are immediately rejected.
    if uri.count('://') != 1:
        reason = _("URI cannot contain more than one occurrence of a "
                   "scheme. If you have specified a URI like "
                   "swift://user:pass@http://authurl.com/v1/container/obj"
                   ", you need to change it to use the swift+http:// scheme, "
                   "like so: "
                   "swift+http://user:pass@authurl.com/v1/container/obj")
        raise exception.BadStoreUri(message=reason)

    pieces = urlparse.urlparse(uri)
    if pieces.scheme not in ('swift', 'swift+http', 'swift+https'):
        raise exception.BadStoreUri(message="Unacceptable scheme: '%s'" %
                                    pieces.scheme)
    scheme = pieces.scheme
    netloc = pieces.netloc
    path = pieces.path.lstrip('/')
    if netloc != '':
        # > Python 2.6.1
        if '@' in netloc:
            creds, netloc = netloc.split('@')
        else:
            creds = None
    else:
        # Python 2.6.1 compat
        # see lp659445 and Python issue7904
        if '@' in path:
            creds, path = path.split('@')
        else:
            creds = None
        netloc = path[0:path.find('/')].strip('/')
        path = path[path.find('/'):].strip('/')
    if creds:
        cred_parts = creds.split(':')

        # User can be account:user, in which case cred_parts[0:2] will be
        # the account and user. Combine them into a single username of
        # account:user
        if to_quote:
            if len(cred_parts) == 1:
                reason = (_("Badly formed credentials '%(creds)s' in Swift "
                            "URI") % {'creds': creds})
                raise exception.BadStoreUri(message=reason)
            elif len(cred_parts) == 3:
                user = ':'.join(cred_parts[0:2])
            else:
                user = cred_parts[0]
            key = cred_parts[-1]
        else:
            if len(cred_parts) != 2:
                reason = (_("Badly formed credentials in Swift URI."))
                raise exception.BadStoreUri(message=reason)
            user, key = cred_parts
            user = urlparse.unquote(user)
            key = urlparse.unquote(key)
    else:
        user = None
        key = None
    path_parts = path.split('/')
    try:
        obj = path_parts.pop()
        container = path_parts.pop()
        if not netloc.startswith('http'):
            # push hostname back into the remaining to build full authurl
            path_parts.insert(0, netloc)
        auth_or_store_url = '/'.join(path_parts)
    except IndexError:
        reason = _("Badly formed Swift URI: %(uri)s") % {'uri': uri}
        raise exception.BadStoreUri(message=reason)

    if auth_or_store_url.startswith('http://'):
        auth_or_store_url = auth_or_store_url[len('http://'):]
    elif auth_or_store_url.startswith('https://'):
        auth_or_store_url = auth_or_store_url[len('https://'):]

    credstring = ''
    if user and key:
        if to_quote:
            quote_user = urlparse.quote(user)
            quote_key = urlparse.quote(key)
        else:
            quote_user = user
            quote_key = key
        credstring = '%s:%s@' % (quote_user, quote_key)

    auth_or_store_url = auth_or_store_url.strip('/')
    container = container.strip('/')
    obj = obj.strip('/')

    uri = '%s://%s%s/%s/%s' % (scheme, credstring, auth_or_store_url,
                               container, obj)
    return encrypt_location(uri)
glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/022_image_member_index.py0000666000175100017510000000420613245511421030117 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
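The 017 migration above exists because a ':' or '@' inside embedded Swift credentials makes naive URI splitting ambiguous. A minimal illustration (not Glance code; the sample account, password, and host are made up) of the quote/unquote round trip that the ``to_quote=True`` path performs with ``urlparse.quote``:

```python
# Why 017 quotes credentials: after percent-quoting, the URI contains
# exactly one '@' and one credential ':', so splitting is unambiguous,
# and unquoting recovers the original values.
from urllib.parse import quote, unquote

user, key = 'account:user', 'p@ss:word'

# Percent-quote each credential before embedding it in the URI
uri = 'swift://%s:%s@example.com/v1/container/obj' % (quote(user), quote(key))
assert uri.count('@') == 1

# Split credentials off the netloc, then separate user from key
creds = uri[len('swift://'):].split('@', 1)[0]
q_user, q_key = creds.rsplit(':', 1)

# Round trip: unquoting restores the ':' in the user and '@' in the key
assert (unquote(q_user), unquote(q_key)) == (user, key)
```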
import re from migrate.changeset import UniqueConstraint from oslo_db import exception as db_exception from sqlalchemy import MetaData, Table from sqlalchemy.exc import OperationalError, ProgrammingError NEW_KEYNAME = 'image_members_image_id_member_deleted_at_key' ORIGINAL_KEYNAME_RE = re.compile('image_members_image_id.*_key') def upgrade(migrate_engine): image_members = _get_image_members_table(migrate_engine) if migrate_engine.name in ('mysql', 'postgresql'): try: UniqueConstraint('image_id', name=_get_original_keyname(migrate_engine.name), table=image_members).drop() except (OperationalError, ProgrammingError, db_exception.DBError): UniqueConstraint('image_id', name=_infer_original_keyname(image_members), table=image_members).drop() UniqueConstraint('image_id', 'member', 'deleted_at', name=NEW_KEYNAME, table=image_members).create() def _get_image_members_table(migrate_engine): meta = MetaData() meta.bind = migrate_engine return Table('image_members', meta, autoload=True) def _get_original_keyname(db): return {'mysql': 'image_id', 'postgresql': 'image_members_image_id_member_key'}[db] def _infer_original_keyname(table): for i in table.indexes: if ORIGINAL_KEYNAME_RE.match(i.name): return i.name glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/008_add_image_members_table.py0000666000175100017510000000567013245511421031104 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from sqlalchemy import * # noqa from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, Integer, String, create_tables, from_migration_import) # noqa def get_images_table(meta): """ No changes to the images table from 007... """ (get_images_table,) = from_migration_import( '007_add_owner', ['get_images_table']) images = get_images_table(meta) return images def get_image_members_table(meta): images = get_images_table(meta) # noqa image_members = Table('image_members', meta, Column('id', Integer(), primary_key=True, nullable=False), Column('image_id', Integer(), ForeignKey('images.id'), nullable=False, index=True), Column('member', String(255), nullable=False), Column('can_share', Boolean(), nullable=False, default=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime()), Column('deleted_at', DateTime()), Column('deleted', Boolean(), nullable=False, default=False, index=True), UniqueConstraint('image_id', 'member'), mysql_charset='utf8', mysql_engine='InnoDB', extend_existing=True) # DB2: an index has already been created for the UniqueConstraint option # specified on the Table() statement above. if meta.bind.name != "ibm_db_sa": Index('ix_image_members_image_id_member', image_members.c.image_id, image_members.c.member) return image_members def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine tables = [get_image_members_table(meta)] create_tables(tables) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/034_add_virtual_size.py0000666000175100017510000000167413245511421027660 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sqlalchemy


def upgrade(migrate_engine):
    meta = sqlalchemy.MetaData()
    meta.bind = migrate_engine

    images = sqlalchemy.Table('images', meta, autoload=True)
    virtual_size = sqlalchemy.Column('virtual_size',
                                     sqlalchemy.BigInteger)
    images.create_column(virtual_size)

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/037_add_changes_to_satisfy_models.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy
from sqlalchemy import Table, Index, UniqueConstraint
from sqlalchemy.schema import (AddConstraint, DropConstraint,
                               ForeignKeyConstraint)
from sqlalchemy import sql
from sqlalchemy import update


def upgrade(migrate_engine):
    meta = sqlalchemy.MetaData()
    meta.bind = migrate_engine

    if migrate_engine.name not in ['mysql', 'postgresql']:
        return

    image_properties = Table('image_properties', meta, autoload=True)
    image_members = Table('image_members', meta, autoload=True)
    images = Table('images', meta, autoload=True)

    # We have to ensure that there are no NULL values left, since we are
    # going to set nullable=False.
    migrate_engine.execute(
        update(image_members)
        .where(image_members.c.status == sql.expression.null())
        .values(status='pending'))

    migrate_engine.execute(
        update(images)
        .where(images.c.protected == sql.expression.null())
        .values(protected=sql.expression.false()))

    image_members.c.status.alter(nullable=False, server_default='pending')
    images.c.protected.alter(
        nullable=False, server_default=sql.expression.false())

    if migrate_engine.name == 'postgresql':
        Index('ix_image_properties_image_id_name',
              image_properties.c.image_id,
              image_properties.c.name).drop()

        # This constraint has different names in different versions of
        # PostgreSQL. Since it is the only constraint on this table, we can
        # look it up in the following way.
        name = migrate_engine.execute(
            """SELECT conname
               FROM pg_constraint
               WHERE conrelid =
                   (SELECT oid
                    FROM pg_class
                    WHERE relname LIKE 'image_properties')
                   AND contype = 'u';""").scalar()
        constraint = UniqueConstraint(image_properties.c.image_id,
                                      image_properties.c.name,
                                      name='%s' % name)
        migrate_engine.execute(DropConstraint(constraint))

        constraint = UniqueConstraint(image_properties.c.image_id,
                                      image_properties.c.name,
                                      name='ix_image_properties_image_id_name')
        migrate_engine.execute(AddConstraint(constraint))

        images.c.id.alter(server_default=None)

    if migrate_engine.name == 'mysql':
        constraint = UniqueConstraint(image_properties.c.image_id,
                                      image_properties.c.name,
                                      name='image_id')
        migrate_engine.execute(DropConstraint(constraint))

    image_locations = Table('image_locations', meta, autoload=True)

    if len(image_locations.foreign_keys) == 0:
        migrate_engine.execute(AddConstraint(ForeignKeyConstraint(
            [image_locations.c.image_id], [images.c.id])))

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/040_add_changes_to_satisfy_metadefs_tags.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy
from sqlalchemy import (Table, Index)


def upgrade(migrate_engine):
    if migrate_engine.name == 'mysql':
        meta = sqlalchemy.MetaData()
        meta.bind = migrate_engine
        metadef_tags = Table('metadef_tags', meta, autoload=True)
        Index('namespace_id', metadef_tags.c.namespace_id,
              metadef_tags.c.name).drop()

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/006_mysql_upgrade.sql

--
-- This file is necessary because MySQL does not support
-- renaming indexes.
--
DROP INDEX ix_image_properties_image_id_key ON image_properties;

-- Rename the `key` column to `name`
ALTER TABLE image_properties
CHANGE COLUMN `key` name VARCHAR(255) NOT NULL;

CREATE UNIQUE INDEX ix_image_properties_image_id_name
ON image_properties (image_id, name);

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/021_set_engine_mysql_innodb.py

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData

tables = ['image_locations']


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    if migrate_engine.name == "mysql":
        d = migrate_engine.execute("SHOW TABLE STATUS WHERE Engine!='InnoDB';")
        for row in d.fetchall():
            table_name = row[0]
            if table_name in tables:
                migrate_engine.execute("ALTER TABLE %s Engine=InnoDB" %
                                       table_name)

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/003_add_disk_format.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import *  # noqa

from glance.db.sqlalchemy.migrate_repo.schema import (
    Boolean, DateTime, Integer, String, Text, from_migration_import)  # noqa


def get_images_table(meta):
    """
    Returns the Table object for the images table that
    corresponds to the images table definition of this version.
""" images = Table('images', meta, Column('id', Integer(), primary_key=True, nullable=False), Column('name', String(255)), Column('disk_format', String(20)), Column('container_format', String(20)), Column('size', Integer()), Column('status', String(30), nullable=False), Column('is_public', Boolean(), nullable=False, default=False, index=True), Column('location', Text()), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime()), Column('deleted_at', DateTime()), Column('deleted', Boolean(), nullable=False, default=False, index=True), mysql_engine='InnoDB', extend_existing=True) return images def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine (define_images_table,) = from_migration_import( '001_add_images_table', ['define_images_table']) (define_image_properties_table,) = from_migration_import( '002_add_image_properties_table', ['define_image_properties_table']) conn = migrate_engine.connect() images = define_images_table(meta) image_properties = define_image_properties_table(meta) # Steps to take, in this order: # 1) Move the existing type column from Image into # ImageProperty for all image records that have a non-NULL # type column # 2) Drop the type column in images # 3) Add the new columns to images # The below wackiness correlates to the following ANSI SQL: # SELECT images.* FROM images # LEFT JOIN image_properties # ON images.id = image_properties.image_id # AND image_properties.key = 'type' # WHERE image_properties.image_id IS NULL # AND images.type IS NOT NULL # # which returns all the images that have a type set # but that DO NOT yet have an image_property record # with key of type. 
from_stmt = [ images.outerjoin(image_properties, and_(images.c.id == image_properties.c.image_id, image_properties.c.key == 'type')) ] and_stmt = and_(image_properties.c.image_id == None, images.c.type != None) sel = select([images], from_obj=from_stmt).where(and_stmt) image_records = conn.execute(sel).fetchall() property_insert = image_properties.insert() for record in image_records: conn.execute(property_insert, image_id=record.id, key='type', created_at=record.created_at, deleted=False, value=record.type) conn.close() disk_format = Column('disk_format', String(20)) disk_format.create(images) container_format = Column('container_format', String(20)) container_format.create(images) images.columns['type'].drop() glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/031_remove_duplicated_locations.py0000666000175100017510000000562413245511421032072 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sqlalchemy from sqlalchemy import func from sqlalchemy import orm from sqlalchemy import sql from sqlalchemy import Table def upgrade(migrate_engine): meta = sqlalchemy.schema.MetaData(migrate_engine) image_locations = Table('image_locations', meta, autoload=True) if migrate_engine.name == "ibm_db_sa": il = orm.aliased(image_locations) # NOTE(wenchma): Get all duplicated rows. 
qry = (sql.select([il.c.id]) .where(il.c.id > (sql.select([func.min(image_locations.c.id)]) .where(image_locations.c.image_id == il.c.image_id) .where(image_locations.c.value == il.c.value) .where(image_locations.c.meta_data == il.c.meta_data) .where(image_locations.c.deleted == False))) .where(il.c.deleted == False) .execute() ) for row in qry: stmt = (image_locations.delete() .where(image_locations.c.id == row[0]) .where(image_locations.c.deleted == False)) stmt.execute() else: session = orm.sessionmaker(bind=migrate_engine)() # NOTE(flaper87): Lets group by # image_id, location and metadata. grp = [image_locations.c.image_id, image_locations.c.value, image_locations.c.meta_data] # NOTE(flaper87): Get all duplicated rows qry = (session.query(*grp) .filter(image_locations.c.deleted == False) .group_by(*grp) .having(func.count() > 1)) for row in qry: # NOTE(flaper87): Not the fastest way to do it. # This is the best way to do it since sqlalchemy # has a bug around delete + limit. s = (sql.select([image_locations.c.id]) .where(image_locations.c.image_id == row[0]) .where(image_locations.c.value == row[1]) .where(image_locations.c.meta_data == row[2]) .where(image_locations.c.deleted == False) .limit(1).execute()) stmt = (image_locations.delete() .where(image_locations.c.id == s.first()[0])) stmt.execute() session.close() glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/012_id_to_uuid.py0000666000175100017510000003077713245511421026456 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ While SQLAlchemy/sqlalchemy-migrate should abstract this correctly, there are known issues with these libraries so SQLite and non-SQLite migrations must be done separately. """ import uuid import migrate import sqlalchemy and_ = sqlalchemy.and_ or_ = sqlalchemy.or_ def upgrade(migrate_engine): """ Call the correct dialect-specific upgrade. """ meta = sqlalchemy.MetaData() meta.bind = migrate_engine t_images = _get_table('images', meta) t_image_members = _get_table('image_members', meta) t_image_properties = _get_table('image_properties', meta) dialect = migrate_engine.url.get_dialect().name if dialect == "sqlite": _upgrade_sqlite(meta, t_images, t_image_members, t_image_properties) _update_all_ids_to_uuids(t_images, t_image_members, t_image_properties) elif dialect == "ibm_db_sa": _upgrade_db2(meta, t_images, t_image_members, t_image_properties) _update_all_ids_to_uuids(t_images, t_image_members, t_image_properties) _add_db2_constraints(meta) else: _upgrade_other(t_images, t_image_members, t_image_properties, dialect) def _upgrade_sqlite(meta, t_images, t_image_members, t_image_properties): """ Upgrade 011 -> 012 with special SQLite-compatible logic. 
""" sql_commands = [ """CREATE TABLE images_backup ( id VARCHAR(36) NOT NULL, name VARCHAR(255), size INTEGER, status VARCHAR(30) NOT NULL, is_public BOOLEAN NOT NULL, location TEXT, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, disk_format VARCHAR(20), container_format VARCHAR(20), checksum VARCHAR(32), owner VARCHAR(255), min_disk INTEGER NOT NULL, min_ram INTEGER NOT NULL, PRIMARY KEY (id), CHECK (is_public IN (0, 1)), CHECK (deleted IN (0, 1)) );""", """INSERT INTO images_backup SELECT * FROM images;""", """CREATE TABLE image_members_backup ( id INTEGER NOT NULL, image_id VARCHAR(36) NOT NULL, member VARCHAR(255) NOT NULL, can_share BOOLEAN NOT NULL, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, PRIMARY KEY (id), UNIQUE (image_id, member), CHECK (can_share IN (0, 1)), CHECK (deleted IN (0, 1)), FOREIGN KEY(image_id) REFERENCES images (id) );""", """INSERT INTO image_members_backup SELECT * FROM image_members;""", """CREATE TABLE image_properties_backup ( id INTEGER NOT NULL, image_id VARCHAR(36) NOT NULL, name VARCHAR(255) NOT NULL, value TEXT, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, PRIMARY KEY (id), CHECK (deleted IN (0, 1)), UNIQUE (image_id, name), FOREIGN KEY(image_id) REFERENCES images (id) );""", """INSERT INTO image_properties_backup SELECT * FROM image_properties;""", ] for command in sql_commands: meta.bind.execute(command) _sqlite_table_swap(meta, t_image_members, t_image_properties, t_images) def _upgrade_db2(meta, t_images, t_image_members, t_image_properties): """ Upgrade for DB2. 
""" t_images.c.id.alter(sqlalchemy.String(36), primary_key=True) image_members_backup = sqlalchemy.Table( 'image_members_backup', meta, sqlalchemy.Column('id', sqlalchemy.Integer(), primary_key=True, nullable=False), sqlalchemy.Column('image_id', sqlalchemy.String(36), nullable=False, index=True), sqlalchemy.Column('member', sqlalchemy.String(255), nullable=False), sqlalchemy.Column('can_share', sqlalchemy.Boolean(), nullable=False, default=False), sqlalchemy.Column('created_at', sqlalchemy.DateTime(), nullable=False), sqlalchemy.Column('updated_at', sqlalchemy.DateTime()), sqlalchemy.Column('deleted_at', sqlalchemy.DateTime()), sqlalchemy.Column('deleted', sqlalchemy.Boolean(), nullable=False, default=False, index=True), sqlalchemy.UniqueConstraint('image_id', 'member'), extend_existing=True) image_properties_backup = sqlalchemy.Table( 'image_properties_backup', meta, sqlalchemy.Column('id', sqlalchemy.Integer(), primary_key=True, nullable=False), sqlalchemy.Column('image_id', sqlalchemy.String(36), nullable=False, index=True), sqlalchemy.Column('name', sqlalchemy.String(255), nullable=False), sqlalchemy.Column('value', sqlalchemy.Text()), sqlalchemy.Column('created_at', sqlalchemy.DateTime(), nullable=False), sqlalchemy.Column('updated_at', sqlalchemy.DateTime()), sqlalchemy.Column('deleted_at', sqlalchemy.DateTime()), sqlalchemy.Column('deleted', sqlalchemy.Boolean(), nullable=False, default=False, index=True), sqlalchemy.UniqueConstraint( 'image_id', 'name', name='ix_image_properties_image_id_name'), extend_existing=True) image_members_backup.create() image_properties_backup.create() sql_commands = [ """INSERT INTO image_members_backup SELECT * FROM image_members;""", """INSERT INTO image_properties_backup SELECT * FROM image_properties;""", ] for command in sql_commands: meta.bind.execute(command) t_image_members.drop() t_image_properties.drop() image_members_backup.rename(name='image_members') image_properties_backup.rename(name='image_properties') def 
_add_db2_constraints(meta): # Create the foreign keys sql_commands = [ """ALTER TABLE image_members ADD CONSTRAINT member_image_id FOREIGN KEY (image_id) REFERENCES images (id);""", """ALTER TABLE image_properties ADD CONSTRAINT property_image_id FOREIGN KEY (image_id) REFERENCES images (id);""", ] for command in sql_commands: meta.bind.execute(command) def _upgrade_other(t_images, t_image_members, t_image_properties, dialect): """ Upgrade 011 -> 012 with logic for non-SQLite databases. """ foreign_keys = _get_foreign_keys(t_images, t_image_members, t_image_properties, dialect) for fk in foreign_keys: fk.drop() t_images.c.id.alter(sqlalchemy.String(36), primary_key=True) t_image_members.c.image_id.alter(sqlalchemy.String(36)) t_image_properties.c.image_id.alter(sqlalchemy.String(36)) _update_all_ids_to_uuids(t_images, t_image_members, t_image_properties) for fk in foreign_keys: fk.create() def _sqlite_table_swap(meta, t_image_members, t_image_properties, t_images): t_image_members.drop() t_image_properties.drop() t_images.drop() meta.bind.execute("ALTER TABLE images_backup " "RENAME TO images") meta.bind.execute("ALTER TABLE image_members_backup " "RENAME TO image_members") meta.bind.execute("ALTER TABLE image_properties_backup " "RENAME TO image_properties") meta.bind.execute("""CREATE INDEX ix_image_properties_deleted ON image_properties (deleted);""") meta.bind.execute("""CREATE INDEX ix_image_properties_name ON image_properties (name);""") def _get_table(table_name, metadata): """Return a sqlalchemy Table definition with associated metadata.""" return sqlalchemy.Table(table_name, metadata, autoload=True) def _get_foreign_keys(t_images, t_image_members, t_image_properties, dialect): """Retrieve and return foreign keys for members/properties tables.""" foreign_keys = [] if t_image_members.foreign_keys: img_members_fk_name = list(t_image_members.foreign_keys)[0].name if dialect == 'mysql': fk1 = migrate.ForeignKeyConstraint([t_image_members.c.image_id], 
[t_images.c.id], name=img_members_fk_name) else: fk1 = migrate.ForeignKeyConstraint([t_image_members.c.image_id], [t_images.c.id]) foreign_keys.append(fk1) if t_image_properties.foreign_keys: img_properties_fk_name = list(t_image_properties.foreign_keys)[0].name if dialect == 'mysql': fk2 = migrate.ForeignKeyConstraint([t_image_properties.c.image_id], [t_images.c.id], name=img_properties_fk_name) else: fk2 = migrate.ForeignKeyConstraint([t_image_properties.c.image_id], [t_images.c.id]) foreign_keys.append(fk2) return foreign_keys def _update_all_ids_to_uuids(t_images, t_image_members, t_image_properties): """Transition from INTEGER id to VARCHAR(36) id.""" images = list(t_images.select().execute()) for image in images: old_id = image["id"] new_id = str(uuid.uuid4()) t_images.update().where( t_images.c.id == old_id).values(id=new_id).execute() t_image_members.update().where( t_image_members.c.image_id == old_id).values( image_id=new_id).execute() t_image_properties.update().where( t_image_properties.c.image_id == old_id).values( image_id=new_id).execute() t_image_properties.update().where( and_(or_(t_image_properties.c.name == 'kernel_id', t_image_properties.c.name == 'ramdisk_id'), t_image_properties.c.value == old_id)).values( value=new_id).execute() def _update_all_uuids_to_ids(t_images, t_image_members, t_image_properties): """Transition from VARCHAR(36) id to INTEGER id.""" images = list(t_images.select().execute()) new_id = 1 for image in images: old_id = image["id"] t_images.update().where( t_images.c.id == old_id).values( id=str(new_id)).execute() t_image_members.update().where( t_image_members.c.image_id == old_id).values( image_id=str(new_id)).execute() t_image_properties.update().where( t_image_properties.c.image_id == old_id).values( image_id=str(new_id)).execute() t_image_properties.update().where( and_(or_(t_image_properties.c.name == 'kernel_id', t_image_properties.c.name == 'ramdisk_id'), t_image_properties.c.value == old_id)).values( 
value=str(new_id)).execute() new_id += 1 glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/009_add_mindisk_and_minram.py0000666000175100017510000000501713245511421030760 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import * # noqa from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, Integer, String, Text) # noqa def get_images_table(meta): """ Returns the Table object for the images table that corresponds to the images table definition of this version. 
""" images = Table('images', meta, Column('id', Integer(), primary_key=True, nullable=False), Column('name', String(255)), Column('disk_format', String(20)), Column('container_format', String(20)), Column('size', Integer()), Column('status', String(30), nullable=False), Column('is_public', Boolean(), nullable=False, default=False, index=True), Column('location', Text()), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime()), Column('deleted_at', DateTime()), Column('deleted', Boolean(), nullable=False, default=False, index=True), Column('checksum', String(32)), Column('owner', String(255)), Column('min_disk', Integer(), default=0), Column('min_ram', Integer(), default=0), mysql_engine='InnoDB', extend_existing=True) return images def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine images = get_images_table(meta) min_disk = Column('min_disk', Integer(), default=0) min_disk.create(images) min_ram = Column('min_ram', Integer(), default=0) min_ram.create(images) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/024_placeholder.py0000666000175100017510000000130213245511421026575 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
def upgrade(migrate_engine):
    pass

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/039_add_changes_to_satisfy_models_metadef.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import migrate
import sqlalchemy
from sqlalchemy import inspect
from sqlalchemy import (Table, Index, UniqueConstraint)
from sqlalchemy.schema import (DropConstraint)


def _change_db2_unique_constraint(operation_type, constraint_name, *columns):
    constraint = migrate.UniqueConstraint(*columns, name=constraint_name)
    operation = getattr(constraint, operation_type)
    operation()


def upgrade(migrate_engine):
    meta = sqlalchemy.MetaData()
    meta.bind = migrate_engine
    inspector = inspect(migrate_engine)

    metadef_namespaces = Table('metadef_namespaces', meta, autoload=True)
    metadef_properties = Table('metadef_properties', meta, autoload=True)
    metadef_objects = Table('metadef_objects', meta, autoload=True)
    metadef_ns_res_types = Table('metadef_namespace_resource_types',
                                 meta, autoload=True)
    metadef_resource_types = Table('metadef_resource_types', meta,
                                   autoload=True)
    metadef_tags = Table('metadef_tags', meta, autoload=True)
    constraints = [('ix_namespaces_namespace',
                    [metadef_namespaces.c.namespace]),
                   ('ix_objects_namespace_id_name',
                    [metadef_objects.c.namespace_id,
                     metadef_objects.c.name]),
                   ('ix_metadef_properties_namespace_id_name',
                    [metadef_properties.c.namespace_id,
                     metadef_properties.c.name])]
    metadef_tags_constraints = inspector.get_unique_constraints('metadef_tags')
    for constraint in metadef_tags_constraints:
        if set(constraint['column_names']) == set(['namespace_id', 'name']):
            constraints.append((constraint['name'],
                                [metadef_tags.c.namespace_id,
                                 metadef_tags.c.name]))
    if meta.bind.name == "ibm_db_sa":
        # For db2, the following constraints need to be dropped first,
        # otherwise indexes like ix_metadef_ns_res_types_namespace_id
        # will fail to create. These constraints will be added back at
        # the end. This should not affect the original logic for other
        # database backends.
        for (constraint_name, cols) in constraints:
            _change_db2_unique_constraint('drop', constraint_name, *cols)
    else:
        Index('ix_namespaces_namespace',
              metadef_namespaces.c.namespace).drop()
        Index('ix_objects_namespace_id_name',
              metadef_objects.c.namespace_id,
              metadef_objects.c.name).drop()
        Index('ix_metadef_properties_namespace_id_name',
              metadef_properties.c.namespace_id,
              metadef_properties.c.name).drop()

    fkc = migrate.ForeignKeyConstraint([metadef_tags.c.namespace_id],
                                       [metadef_namespaces.c.id])
    fkc.create()

    # The `migrate` module removes the unique constraint after adding a
    # foreign key to a table in sqlite. The reason is that it isn't
    # possible to add an fkc to an existing table in sqlite; instead the
    # table has to be recreated with the fkc in its declaration. The
    # migrate package provides that possibility, but unfortunately it
    # recreates the table without constraints, so create the unique
    # constraint manually.
if migrate_engine.name == 'sqlite' and len( inspector.get_unique_constraints('metadef_tags')) == 0: uc = migrate.UniqueConstraint(metadef_tags.c.namespace_id, metadef_tags.c.name) uc.create() if meta.bind.name != "ibm_db_sa": Index('ix_tags_namespace_id_name', metadef_tags.c.namespace_id, metadef_tags.c.name).drop() Index('ix_metadef_tags_name', metadef_tags.c.name).create() Index('ix_metadef_tags_namespace_id', metadef_tags.c.namespace_id, metadef_tags.c.name).create() if migrate_engine.name == 'mysql': # We need to drop some foreign keys first because unique constraints # that we want to delete depend on them. So drop the fk and recreate # it again after unique constraint deletion. fkc = migrate.ForeignKeyConstraint([metadef_properties.c.namespace_id], [metadef_namespaces.c.id], name='metadef_properties_ibfk_1') fkc.drop() constraint = UniqueConstraint(metadef_properties.c.namespace_id, metadef_properties.c.name, name='namespace_id') migrate_engine.execute(DropConstraint(constraint)) fkc.create() fkc = migrate.ForeignKeyConstraint([metadef_objects.c.namespace_id], [metadef_namespaces.c.id], name='metadef_objects_ibfk_1') fkc.drop() constraint = UniqueConstraint(metadef_objects.c.namespace_id, metadef_objects.c.name, name='namespace_id') migrate_engine.execute(DropConstraint(constraint)) fkc.create() constraint = UniqueConstraint(metadef_ns_res_types.c.resource_type_id, metadef_ns_res_types.c.namespace_id, name='resource_type_id') migrate_engine.execute(DropConstraint(constraint)) constraint = UniqueConstraint(metadef_namespaces.c.namespace, name='namespace') migrate_engine.execute(DropConstraint(constraint)) constraint = UniqueConstraint(metadef_resource_types.c.name, name='name') migrate_engine.execute(DropConstraint(constraint)) if migrate_engine.name == 'postgresql': met_obj_index_name = ( inspector.get_unique_constraints('metadef_objects')[0]['name']) constraint = UniqueConstraint( metadef_objects.c.namespace_id, metadef_objects.c.name, 
name=met_obj_index_name) migrate_engine.execute(DropConstraint(constraint)) met_prop_index_name = ( inspector.get_unique_constraints('metadef_properties')[0]['name']) constraint = UniqueConstraint( metadef_properties.c.namespace_id, metadef_properties.c.name, name=met_prop_index_name) migrate_engine.execute(DropConstraint(constraint)) metadef_namespaces_name = ( inspector.get_unique_constraints( 'metadef_namespaces')[0]['name']) constraint = UniqueConstraint( metadef_namespaces.c.namespace, name=metadef_namespaces_name) migrate_engine.execute(DropConstraint(constraint)) metadef_resource_types_name = (inspector.get_unique_constraints( 'metadef_resource_types')[0]['name']) constraint = UniqueConstraint( metadef_resource_types.c.name, name=metadef_resource_types_name) migrate_engine.execute(DropConstraint(constraint)) constraint = UniqueConstraint( metadef_tags.c.namespace_id, metadef_tags.c.name, name='metadef_tags_namespace_id_name_key') migrate_engine.execute(DropConstraint(constraint)) Index('ix_metadef_ns_res_types_namespace_id', metadef_ns_res_types.c.namespace_id).create() Index('ix_metadef_namespaces_namespace', metadef_namespaces.c.namespace).create() Index('ix_metadef_namespaces_owner', metadef_namespaces.c.owner).create() Index('ix_metadef_objects_name', metadef_objects.c.name).create() Index('ix_metadef_objects_namespace_id', metadef_objects.c.namespace_id).create() Index('ix_metadef_properties_name', metadef_properties.c.name).create() Index('ix_metadef_properties_namespace_id', metadef_properties.c.namespace_id).create() if meta.bind.name == "ibm_db_sa": # For db2, add these constraints back. It should not affect the # origional logic for other database backends. 
for (constraint_name, cols) in constraints: _change_db2_unique_constraint('create', constraint_name, *cols) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/025_placeholder.py0000666000175100017510000000130213245511421026576 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def upgrade(migrate_engine): pass glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/015_quote_swift_credentials.py0000666000175100017510000001471513245511421031255 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
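The 015 migration that follows converts swift location URIs between quoted and unquoted credential forms. A minimal standard-library sketch of that round trip (the credential values are made up for illustration; the real migration also parses and rebuilds the full `swift://` URI):

```python
from urllib.parse import quote, unquote

# Illustrative credentials only: a new-style swift username containing
# ':' and a key with URI-unsafe characters.
user = "account:user"
key = "p@ss word"

# Quoting direction (to_quoted=True): make both pieces safe to embed in
# a swift://user:key@authurl/container/obj location.
credstring = "%s:%s@" % (quote(user), quote(key))
# credstring is now 'account%3Auser:p%40ss%20word@'

# Unquoting direction (to_quoted=False): split user from key at the
# last ':' and restore the original characters.
quoted_user, quoted_key = credstring[:-1].rsplit(":", 1)
assert (unquote(quoted_user), unquote(quoted_key)) == (user, key)
```

Because the unquoted username may itself contain `:`, the split must happen on the last colon, which is why the migration joins `cred_parts[0:2]` back into a single `account:user` name before quoting.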
from oslo_log import log as logging
from oslo_utils import encodeutils
import six.moves.urllib.parse as urlparse
import sqlalchemy

from glance.common import exception
from glance.i18n import _, _LE

LOG = logging.getLogger(__name__)


def upgrade(migrate_engine):
    migrate_location_credentials(migrate_engine, to_quoted=True)


def migrate_location_credentials(migrate_engine, to_quoted):
    """
    Migrate location credentials for swift uri's between the quoted
    and unquoted forms.

    :param migrate_engine: The configured db engine
    :param to_quoted: If True, migrate location credentials from
                      unquoted to quoted form. If False, do the reverse.
    """
    meta = sqlalchemy.schema.MetaData()
    meta.bind = migrate_engine

    images_table = sqlalchemy.Table('images', meta, autoload=True)

    images = list(images_table.select(images_table.c.location.startswith(
        'swift')).execute())

    for image in images:
        try:
            fixed_uri = legacy_parse_uri(image['location'], to_quoted)
            images_table.update().where(
                images_table.c.id == image['id']).values(
                    location=fixed_uri).execute()
        except exception.BadStoreUri as e:
            reason = encodeutils.exception_to_unicode(e)
            msg = _LE("Invalid store uri for image: %(image_id)s. "
                      "Details: %(reason)s") % {'image_id': image.id,
                                                'reason': reason}
            LOG.exception(msg)
            raise


def legacy_parse_uri(uri, to_quote):
    """
    Parse URLs. This method fixes an issue where credentials specified
    in the URL are interpreted differently in Python 2.6.1+ than in
    prior versions of Python. It also deals with the peculiarity that
    new-style Swift URIs have, where a username can contain a ':',
    like so:

        swift://account:user:pass@authurl.com/container/obj

    If to_quote is True, the uri is assumed to have credentials that
    have not been quoted, and the resulting uri will contain quoted
    credentials.

    If to_quote is False, the uri is assumed to have credentials that
    have been quoted, and the resulting uri will contain credentials
    that have not been quoted.
    """
    # Make sure that URIs that contain multiple schemes, such as:
    # swift://user:pass@http://authurl.com/v1/container/obj
    # are immediately rejected.
    if uri.count('://') != 1:
        reason = _("URI cannot contain more than one occurrence of a "
                   "scheme. If you have specified a URI like "
                   "swift://user:pass@http://authurl.com/v1/container/obj, "
                   "you need to change it to use the swift+http:// scheme, "
                   "like so: "
                   "swift+http://user:pass@authurl.com/v1/container/obj")
        raise exception.BadStoreUri(message=reason)

    pieces = urlparse.urlparse(uri)
    if pieces.scheme not in ('swift', 'swift+http', 'swift+https'):
        raise exception.BadStoreUri(message="Unacceptable scheme: '%s'" %
                                    pieces.scheme)
    scheme = pieces.scheme
    netloc = pieces.netloc
    path = pieces.path.lstrip('/')
    if netloc != '':
        # > Python 2.6.1
        if '@' in netloc:
            creds, netloc = netloc.split('@')
        else:
            creds = None
    else:
        # Python 2.6.1 compat
        # see lp659445 and Python issue7904
        if '@' in path:
            creds, path = path.split('@')
        else:
            creds = None
        netloc = path[0:path.find('/')].strip('/')
        path = path[path.find('/'):].strip('/')
    if creds:
        cred_parts = creds.split(':')

        # User can be account:user, in which case cred_parts[0:2] will be
        # the account and user. Combine them into a single username of
        # account:user.
        if to_quote:
            if len(cred_parts) == 1:
                reason = (_("Badly formed credentials '%(creds)s' in Swift "
                            "URI") % {'creds': creds})
                raise exception.BadStoreUri(message=reason)
            elif len(cred_parts) == 3:
                user = ':'.join(cred_parts[0:2])
            else:
                user = cred_parts[0]
            key = cred_parts[-1]
        else:
            if len(cred_parts) != 2:
                reason = (_("Badly formed credentials in Swift URI."))
                raise exception.BadStoreUri(message=reason)
            user, key = cred_parts
            user = urlparse.unquote(user)
            key = urlparse.unquote(key)
    else:
        user = None
        key = None
    path_parts = path.split('/')
    try:
        obj = path_parts.pop()
        container = path_parts.pop()
        if not netloc.startswith('http'):
            # push hostname back into the remaining to build full authurl
            path_parts.insert(0, netloc)
        auth_or_store_url = '/'.join(path_parts)
    except IndexError:
        reason = _("Badly formed S3 URI: %(uri)s") % {'uri': uri}
        raise exception.BadStoreUri(message=reason)

    if auth_or_store_url.startswith('http://'):
        auth_or_store_url = auth_or_store_url[len('http://'):]
    elif auth_or_store_url.startswith('https://'):
        auth_or_store_url = auth_or_store_url[len('https://'):]

    credstring = ''
    if user and key:
        if to_quote:
            quote_user = urlparse.quote(user)
            quote_key = urlparse.quote(key)
        else:
            quote_user = user
            quote_key = key
        credstring = '%s:%s@' % (quote_user, quote_key)

    auth_or_store_url = auth_or_store_url.strip('/')
    container = container.strip('/')
    obj = obj.strip('/')

    return '%s://%s%s/%s/%s' % (scheme, credstring, auth_or_store_url,
                                container, obj)

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/016_add_status_image_member.py

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import MetaData, Table, Column, String

meta = MetaData()
status = Column('status', String(20), default="pending")


def upgrade(migrate_engine):
    meta.bind = migrate_engine
    image_members = Table('image_members', meta, autoload=True)
    image_members.create_column(status)

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/027_checksum_index.py

# Copyright 2013 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
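The 027 migration below adds a secondary index on `images.checksum` so that checksum lookups avoid a full table scan. A toy SQLite demonstration of the same effect (illustrative schema, not Glance's full images table):

```python
import sqlite3

# Build a miniature images table and the index 027 creates.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE images (id INTEGER PRIMARY KEY, checksum VARCHAR(32))")
conn.execute("CREATE INDEX checksum_image_idx ON images (checksum)")

# The query plan for a checksum lookup should now use the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM images WHERE checksum = 'abc'"
).fetchall()
assert any("checksum_image_idx" in row[-1] for row in plan)
```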
from sqlalchemy import MetaData, Table, Index

INDEX_NAME = 'checksum_image_idx'


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    images = Table('images', meta, autoload=True)
    index = Index(INDEX_NAME, images.c.checksum)
    index.create(migrate_engine)

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/__init__.py

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/019_migrate_image_locations.py

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
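The 019 migration that follows copies each non-null `images.location` into a row of the new `image_locations` table, carrying the audit columns along. A plain-data sketch of that copy (the dict rows are illustrative, not the real schema objects):

```python
# Toy image rows standing in for the result of images_table.select().
images = [
    {'id': 'a', 'location': 'file:///a', 'created_at': 1, 'updated_at': 2,
     'deleted': False, 'deleted_at': None},
    {'id': 'b', 'location': None, 'created_at': 1, 'updated_at': 1,
     'deleted': False, 'deleted_at': None},
]

# One image_locations row per image that actually has a location.
image_locations = [
    {'image_id': img['id'], 'value': img['location'],
     'created_at': img['created_at'], 'updated_at': img['updated_at'],
     'deleted': img['deleted'], 'deleted_at': img['deleted_at']}
    for img in images if img['location'] is not None
]
assert [loc['image_id'] for loc in image_locations] == ['a']
```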
import sqlalchemy


def get_images_table(meta):
    return sqlalchemy.Table('images', meta, autoload=True)


def get_image_locations_table(meta):
    return sqlalchemy.Table('image_locations', meta, autoload=True)


def upgrade(migrate_engine):
    meta = sqlalchemy.schema.MetaData(migrate_engine)
    images_table = get_images_table(meta)
    image_locations_table = get_image_locations_table(meta)

    image_records = images_table.select().execute().fetchall()
    for image in image_records:
        if image.location is not None:
            values = {
                'image_id': image.id,
                'value': image.location,
                'created_at': image.created_at,
                'updated_at': image.updated_at,
                'deleted': image.deleted,
                'deleted_at': image.deleted_at,
            }
            image_locations_table.insert(values=values).execute()

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/006_sqlite_upgrade.sql

--
-- This is necessary because SQLite does not support
-- RENAME INDEX or ALTER TABLE CHANGE COLUMN.
--
CREATE TEMPORARY TABLE image_properties_backup (
    id INTEGER NOT NULL,
    image_id INTEGER NOT NULL,
    name VARCHAR(255) NOT NULL,
    value TEXT,
    created_at DATETIME NOT NULL,
    updated_at DATETIME,
    deleted_at DATETIME,
    deleted BOOLEAN NOT NULL,
    PRIMARY KEY (id)
);

INSERT INTO image_properties_backup
SELECT id, image_id, key, value, created_at, updated_at, deleted_at, deleted
FROM image_properties;

DROP TABLE image_properties;

CREATE TABLE image_properties (
    id INTEGER NOT NULL,
    image_id INTEGER NOT NULL,
    name VARCHAR(255) NOT NULL,
    value TEXT,
    created_at DATETIME NOT NULL,
    updated_at DATETIME,
    deleted_at DATETIME,
    deleted BOOLEAN NOT NULL,
    PRIMARY KEY (id),
    CHECK (deleted IN (0, 1)),
    UNIQUE (image_id, name),
    FOREIGN KEY(image_id) REFERENCES images (id)
);

CREATE INDEX ix_image_properties_name ON image_properties (name);
CREATE INDEX ix_image_properties_deleted ON image_properties (deleted);

INSERT INTO image_properties (id, image_id, name, value, created_at,
                              updated_at, deleted_at, deleted)
SELECT id, image_id, name, value, created_at, updated_at, deleted_at, deleted
FROM image_properties_backup;

DROP TABLE image_properties_backup;

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/044_update_metadef_os_nova_server.py

# Copyright (c) 2016 Hewlett Packard Enterprise Software, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import MetaData, Table


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    resource_types_table = Table('metadef_resource_types', meta,
                                 autoload=True)

    resource_types_table.update(values={'name': 'OS::Nova::Server'}).where(
        resource_types_table.c.name == 'OS::Nova::Instance').execute()

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/028_owner_index.py

# Copyright 2013 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData, Table, Index

INDEX_NAME = 'owner_image_idx'


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    images = Table('images', meta, autoload=True)
    index = Index(INDEX_NAME, images.c.owner)
    index.create(migrate_engine)

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/020_drop_images_table_location.py

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sqlalchemy


def get_images_table(meta):
    return sqlalchemy.Table('images', meta, autoload=True)


def upgrade(migrate_engine):
    meta = sqlalchemy.schema.MetaData(migrate_engine)
    images_table = get_images_table(meta)
    images_table.columns['location'].drop()

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/038_add_metadef_tags_table.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy.schema import (
    Column, Index, MetaData, Table, UniqueConstraint)  # noqa

from glance.db.sqlalchemy.migrate_repo.schema import (
    DateTime, Integer, String, create_tables)  # noqa


def define_metadef_tags_table(meta):
    _constr_kwargs = {}
    metadef_tags = Table('metadef_tags',
                         meta,
                         Column('id', Integer(), primary_key=True,
                                nullable=False),
                         Column('namespace_id', Integer(), nullable=False),
                         Column('name', String(80), nullable=False),
                         Column('created_at', DateTime(), nullable=False),
                         Column('updated_at', DateTime()),
                         UniqueConstraint('namespace_id', 'name',
                                          **_constr_kwargs),
                         mysql_engine='InnoDB',
                         mysql_charset='utf8',
                         extend_existing=False)

    if meta.bind.name != 'ibm_db_sa':
        Index('ix_tags_namespace_id_name',
              metadef_tags.c.namespace_id,
              metadef_tags.c.name)

    return metadef_tags


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    tables = [define_metadef_tags_table(meta)]
    create_tables(tables)

glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/004_add_checksum.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import * # noqa from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, Integer, String, Text, from_migration_import) # noqa def get_images_table(meta): """ Returns the Table object for the images table that corresponds to the images table definition of this version. """ images = Table('images', meta, Column('id', Integer(), primary_key=True, nullable=False), Column('name', String(255)), Column('disk_format', String(20)), Column('container_format', String(20)), Column('size', Integer()), Column('status', String(30), nullable=False), Column('is_public', Boolean(), nullable=False, default=False, index=True), Column('location', Text()), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime()), Column('deleted_at', DateTime()), Column('deleted', Boolean(), nullable=False, default=False, index=True), Column('checksum', String(32)), mysql_engine='InnoDB', extend_existing=True) return images def get_image_properties_table(meta): """ No changes to the image properties table from 002... """ (define_image_properties_table,) = from_migration_import( '002_add_image_properties_table', ['define_image_properties_table']) image_properties = define_image_properties_table(meta) return image_properties def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine images = get_images_table(meta) checksum = Column('checksum', String(32)) checksum.create(images) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/041_add_artifact_tables.py0000666000175100017510000002417713245511421030270 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy.schema import (Column, ForeignKey, Index, MetaData, Table) from glance.db.sqlalchemy.migrate_repo.schema import ( BigInteger, Boolean, DateTime, Integer, Numeric, String, Text, create_tables) # noqa def define_artifacts_table(meta): artifacts = Table('artifacts', meta, Column('id', String(36), primary_key=True, nullable=False), Column('name', String(255), nullable=False), Column('type_name', String(255), nullable=False), Column('type_version_prefix', BigInteger(), nullable=False), Column('type_version_suffix', String(255)), Column('type_version_meta', String(255)), Column('version_prefix', BigInteger(), nullable=False), Column('version_suffix', String(255)), Column('version_meta', String(255)), Column('description', Text()), Column('visibility', String(32), nullable=False), Column('state', String(32), nullable=False), Column('owner', String(255), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), Column('deleted_at', DateTime()), Column('published_at', DateTime()), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) Index('ix_artifact_name_and_version', artifacts.c.name, artifacts.c.version_prefix, artifacts.c.version_suffix) Index('ix_artifact_type', artifacts.c.type_name, artifacts.c.type_version_prefix, artifacts.c.type_version_suffix) Index('ix_artifact_state', artifacts.c.state) Index('ix_artifact_owner', artifacts.c.owner) Index('ix_artifact_visibility', artifacts.c.visibility) return artifacts def define_artifact_tags_table(meta): artifact_tags = 
Table('artifact_tags', meta, Column('id', String(36), primary_key=True, nullable=False), Column('artifact_id', String(36), ForeignKey('artifacts.id'), nullable=False), Column('value', String(255), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) Index('ix_artifact_tags_artifact_id', artifact_tags.c.artifact_id) Index('ix_artifact_tags_artifact_id_tag_value', artifact_tags.c.artifact_id, artifact_tags.c.value) return artifact_tags def define_artifact_dependencies_table(meta): artifact_dependencies = Table('artifact_dependencies', meta, Column('id', String(36), primary_key=True, nullable=False), Column('artifact_source', String(36), ForeignKey('artifacts.id'), nullable=False), Column('artifact_dest', String(36), ForeignKey('artifacts.id'), nullable=False), Column('artifact_origin', String(36), ForeignKey('artifacts.id'), nullable=False), Column('is_direct', Boolean(), nullable=False), Column('position', Integer()), Column('name', String(36)), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) Index('ix_artifact_dependencies_source_id', artifact_dependencies.c.artifact_source) Index('ix_artifact_dependencies_dest_id', artifact_dependencies.c.artifact_dest), Index('ix_artifact_dependencies_origin_id', artifact_dependencies.c.artifact_origin) Index('ix_artifact_dependencies_direct_dependencies', artifact_dependencies.c.artifact_source, artifact_dependencies.c.is_direct) return artifact_dependencies def define_artifact_blobs_table(meta): artifact_blobs = Table('artifact_blobs', meta, Column('id', String(36), primary_key=True, nullable=False), Column('artifact_id', String(36), ForeignKey('artifacts.id'), nullable=False), Column('size', BigInteger(), nullable=False), Column('checksum', String(32)), Column('name', String(255), 
nullable=False), Column('item_key', String(329)), Column('position', Integer()), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) Index('ix_artifact_blobs_artifact_id', artifact_blobs.c.artifact_id) Index('ix_artifact_blobs_name', artifact_blobs.c.name) return artifact_blobs def define_artifact_properties_table(meta): artifact_properties = Table('artifact_properties', meta, Column('id', String(36), primary_key=True, nullable=False), Column('artifact_id', String(36), ForeignKey('artifacts.id'), nullable=False), Column('name', String(255), nullable=False), Column('string_value', String(255)), Column('int_value', Integer()), Column('numeric_value', Numeric()), Column('bool_value', Boolean()), Column('text_value', Text()), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), Column('position', Integer()), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) Index('ix_artifact_properties_artifact_id', artifact_properties.c.artifact_id) Index('ix_artifact_properties_name', artifact_properties.c.name) return artifact_properties def define_artifact_blob_locations_table(meta): artifact_blob_locations = Table('artifact_blob_locations', meta, Column('id', String(36), primary_key=True, nullable=False), Column('blob_id', String(36), ForeignKey('artifact_blobs.id'), nullable=False), Column('value', Text(), nullable=False), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime(), nullable=False), Column('position', Integer()), Column('status', String(36), nullable=True), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) Index('ix_artifact_blob_locations_blob_id', artifact_blob_locations.c.blob_id) return artifact_blob_locations def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine tables = [define_artifacts_table(meta), 
define_artifact_tags_table(meta), define_artifact_properties_table(meta), define_artifact_blobs_table(meta), define_artifact_blob_locations_table(meta), define_artifact_dependencies_table(meta)] create_tables(tables) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/010_default_update_at.py0000666000175100017510000000254613245511421027773 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import * # noqa from glance.db.sqlalchemy.migrate_repo.schema import from_migration_import def get_images_table(meta): """ No changes to the images table from 008... """ (get_images_table,) = from_migration_import( '008_add_image_members_table', ['get_images_table']) images = get_images_table(meta) return images def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine images_table = get_images_table(meta) # set updated_at to created_at if equal to None conn = migrate_engine.connect() conn.execute( images_table.update( images_table.c.updated_at == None, {images_table.c.updated_at: images_table.c.created_at})) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/045_add_visibility.py0000666000175100017510000000410013245511421027314 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from sqlalchemy import Column, Enum, Index, MetaData, Table, select, not_, and_ from sqlalchemy.engine import reflection def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) images = Table('images', meta, autoload=True) enum = Enum('private', 'public', 'shared', 'community', metadata=meta, name='image_visibility') enum.create() images.create_column(Column('visibility', enum, nullable=False, server_default='shared')) visibility_index = Index('visibility_image_idx', images.c.visibility) visibility_index.create(migrate_engine) images.update(values={'visibility': 'public'}).where( images.c.is_public).execute() image_members = Table('image_members', meta, autoload=True) # NOTE(dharinic): Mark all the non-public images as 'private' first images.update().values(visibility='private').where( not_(images.c.is_public)).execute() # NOTE(dharinic): Identify 'shared' images from the above images.update().values(visibility='shared').where(and_( images.c.visibility == 'private', images.c.id.in_(select( [image_members.c.image_id]).distinct().where( not_(image_members.c.deleted))))).execute() insp = reflection.Inspector.from_engine(migrate_engine) for index in insp.get_indexes('images'): if 'ix_images_is_public' == index['name']: Index('ix_images_is_public', images.c.is_public).drop() break images.c.is_public.drop() glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/029_location_meta_data_pickle_to_string.py0000666000175100017510000000317413245511421033557 0ustar zuulzuul00000000000000# Copyright 2013 Rackspace Hosting # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import pickle import sqlalchemy from sqlalchemy import Table, Column # noqa from glance.db.sqlalchemy import models def upgrade(migrate_engine): meta = sqlalchemy.schema.MetaData(migrate_engine) image_locations = Table('image_locations', meta, autoload=True) new_meta_data = Column('storage_meta_data', models.JSONEncodedDict, default={}) new_meta_data.create(image_locations) noe = pickle.dumps({}) s = sqlalchemy.sql.select([image_locations]).where( image_locations.c.meta_data != noe) conn = migrate_engine.connect() res = conn.execute(s) for row in res: meta_data = row['meta_data'] x = pickle.loads(meta_data) if x != {}: stmt = image_locations.update().where( image_locations.c.id == row['id']).values(storage_meta_data=x) conn.execute(stmt) conn.close() image_locations.columns['meta_data'].drop() image_locations.columns['storage_meta_data'].alter(name='meta_data') glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/011_sqlite_upgrade.sql0000666000175100017510000000320013245511421027465 0ustar zuulzuul00000000000000CREATE TEMPORARY TABLE images_backup ( id INTEGER NOT NULL, name VARCHAR(255), size INTEGER, status VARCHAR(30) NOT NULL, is_public BOOLEAN NOT NULL, location TEXT, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, disk_format VARCHAR(20), container_format VARCHAR(20), checksum VARCHAR(32), owner VARCHAR(255), min_disk INTEGER, min_ram INTEGER, PRIMARY KEY (id), CHECK 
(is_public IN (0, 1)), CHECK (deleted IN (0, 1)) ); INSERT INTO images_backup SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted, disk_format, container_format, checksum, owner, min_disk, min_ram FROM images; DROP TABLE images; CREATE TABLE images ( id INTEGER NOT NULL, name VARCHAR(255), size INTEGER, status VARCHAR(30) NOT NULL, is_public BOOLEAN NOT NULL, location TEXT, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, disk_format VARCHAR(20), container_format VARCHAR(20), checksum VARCHAR(32), owner VARCHAR(255), min_disk INTEGER NOT NULL, min_ram INTEGER NOT NULL, PRIMARY KEY (id), CHECK (is_public IN (0, 1)), CHECK (deleted IN (0, 1)) ); CREATE INDEX ix_images_deleted ON images (deleted); CREATE INDEX ix_images_is_public ON images (is_public); INSERT INTO images SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted, disk_format, container_format, checksum, owner, min_disk, min_ram FROM images_backup; DROP TABLE images_backup; glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/006_key_to_name.py0000666000175100017510000000426213245511421026615 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
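The pickle-to-JSON conversion performed by `029_location_meta_data_pickle_to_string.py` above can be sketched in isolation. This is an illustrative sketch only, not part of the surrounding migration: the table layout and column names are simplified assumptions, and stdlib `sqlite3` stands in for the real engine/SQLAlchemy plumbing. The shape of the work is the same, though: read each pickled `meta_data` blob, decode it, and rewrite it as JSON text in a new column.

```python
# NOTE: illustrative sketch only -- mirrors the approach of
# 029_location_meta_data_pickle_to_string.py above. Table and column
# names are simplified assumptions, not Glance's real schema.
import json
import pickle
import sqlite3


def pickle_to_json(conn):
    """Convert pickled meta_data blobs to JSON text, row by row."""
    conn.execute(
        "ALTER TABLE image_locations ADD COLUMN meta_data_json TEXT")
    # Materialize the rows first so we are not updating the table
    # while still stepping a cursor over it.
    rows = conn.execute("SELECT id, meta_data FROM image_locations").fetchall()
    for rowid, blob in rows:
        meta = pickle.loads(blob) if blob is not None else {}
        conn.execute(
            "UPDATE image_locations SET meta_data_json = ? WHERE id = ?",
            (json.dumps(meta), rowid))
    conn.commit()


conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE image_locations (id INTEGER PRIMARY KEY, meta_data BLOB)")
conn.execute("INSERT INTO image_locations (meta_data) VALUES (?)",
             (pickle.dumps({'store': 'file'}),))
pickle_to_json(conn)
row = conn.execute("SELECT meta_data_json FROM image_locations").fetchone()
print(row[0])  # {"store": "file"}
```

The real migration additionally drops the old column and renames the new one back to `meta_data`, which the sketch omits.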
from sqlalchemy import * # noqa from glance.db.sqlalchemy.migrate_repo.schema import from_migration_import def get_images_table(meta): """ No changes to the image properties table from 002... """ (get_images_table,) = from_migration_import( '004_add_checksum', ['get_images_table']) images = get_images_table(meta) return images def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine (get_image_properties_table,) = from_migration_import( '004_add_checksum', ['get_image_properties_table']) image_properties = get_image_properties_table(meta) if migrate_engine.name == "ibm_db_sa": # NOTE(dperaza) ibm db2 does not allow ALTER INDEX so we will drop # the index, rename the column, then re-create the index sql_commands = [ """ALTER TABLE image_properties DROP UNIQUE ix_image_properties_image_id_key;""", """ALTER TABLE image_properties RENAME COLUMN \"key\" to name;""", """ALTER TABLE image_properties ADD CONSTRAINT ix_image_properties_image_id_name UNIQUE(image_id, name);""", ] for command in sql_commands: meta.bind.execute(command) else: index = Index('ix_image_properties_image_id_key', image_properties.c.image_id, image_properties.c.key) index.rename('ix_image_properties_image_id_name') image_properties = get_image_properties_table(meta) image_properties.columns['key'].alter(name="name") glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/003_sqlite_upgrade.sql0000666000175100017510000000321013245511421027467 0ustar zuulzuul00000000000000-- Move type column from base images table -- to be records in image_properties table CREATE TEMPORARY TABLE tmp_type_records (id INTEGER NOT NULL, type VARCHAR(30) NOT NULL); INSERT INTO tmp_type_records SELECT id, type FROM images WHERE type IS NOT NULL; REPLACE INTO image_properties (image_id, key, value, created_at, deleted) SELECT id, 'type', type, date('now'), 0 FROM tmp_type_records; DROP TABLE tmp_type_records; -- Make changes to the base images table CREATE TEMPORARY TABLE images_backup ( id INTEGER NOT 
NULL, name VARCHAR(255), size INTEGER, status VARCHAR(30) NOT NULL, is_public BOOLEAN NOT NULL, location TEXT, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, PRIMARY KEY (id) ); INSERT INTO images_backup SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted FROM images; DROP TABLE images; CREATE TABLE images ( id INTEGER NOT NULL, name VARCHAR(255), size INTEGER, status VARCHAR(30) NOT NULL, is_public BOOLEAN NOT NULL, location TEXT, created_at DATETIME NOT NULL, updated_at DATETIME, deleted_at DATETIME, deleted BOOLEAN NOT NULL, disk_format VARCHAR(20), container_format VARCHAR(20), PRIMARY KEY (id), CHECK (is_public IN (0, 1)), CHECK (deleted IN (0, 1)) ); CREATE INDEX ix_images_deleted ON images (deleted); CREATE INDEX ix_images_is_public ON images (is_public); INSERT INTO images (id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted) SELECT id, name, size, status, is_public, location, created_at, updated_at, deleted_at, deleted FROM images_backup; DROP TABLE images_backup; glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/011_make_mindisk_and_minram_notnull.py0000666000175100017510000000157713245511421032720 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
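The `*_sqlite_upgrade.sql` scripts nearby (003 and 011) both rely on the same copy-and-rebuild pattern: SQLite cannot tighten a column constraint in place, so the data is copied into a temporary table, the original table is dropped and recreated with the new definition, and the rows are copied back. A minimal sketch of that pattern, using stdlib `sqlite3` and a deliberately tiny stand-in schema:

```python
# Illustrative sketch of the copy-and-rebuild pattern used by the
# *_sqlite_upgrade.sql scripts nearby; the two-column schema here is a
# stand-in, not Glance's real images table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE images (id INTEGER PRIMARY KEY, min_disk INTEGER);
    INSERT INTO images (min_disk) VALUES (0), (10);

    -- Copy out, drop, recreate with the tightened constraint, copy back.
    CREATE TEMPORARY TABLE images_backup AS SELECT id, min_disk FROM images;
    DROP TABLE images;
    CREATE TABLE images (id INTEGER PRIMARY KEY, min_disk INTEGER NOT NULL);
    INSERT INTO images SELECT id, min_disk FROM images_backup;
    DROP TABLE images_backup;
""")
print(conn.execute("SELECT COUNT(*) FROM images").fetchone()[0])  # 2
```

The real scripts also recreate the table's indexes after the rebuild, since `DROP TABLE` discards them along with the table.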
import sqlalchemy meta = sqlalchemy.MetaData() def upgrade(migrate_engine): meta.bind = migrate_engine images = sqlalchemy.Table('images', meta, autoload=True) images.c.min_disk.alter(nullable=False) images.c.min_ram.alter(nullable=False) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/030_add_tasks_table.py0000666000175100017510000000432713245511421027426 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
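The tasks table defined below carries `deleted` and `deleted_at` columns because Glance uses soft deletion: rows are never physically removed by the API layer; instead `deleted` is flipped and `deleted_at` stamped, and every read filters on `deleted = 0`. A minimal sketch of that convention, assuming a cut-down tasks schema and stdlib `sqlite3` in place of the real models:

```python
# Illustrative sketch of the soft-delete convention used by the tasks
# table defined below (and by Glance tables generally). The schema is a
# simplified assumption, not the full table definition.
import datetime
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tasks (
    id TEXT PRIMARY KEY, status TEXT NOT NULL,
    deleted INTEGER NOT NULL DEFAULT 0, deleted_at TEXT)""")
conn.execute(
    "INSERT INTO tasks (id, status) VALUES ('t1', 'success'), ('t2', 'pending')")


def soft_delete(conn, task_id):
    # Mark the row deleted instead of removing it; the data persists.
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    conn.execute("UPDATE tasks SET deleted = 1, deleted_at = ? WHERE id = ?",
                 (now, task_id))


soft_delete(conn, 't1')
visible = [r[0] for r in conn.execute(
    "SELECT id FROM tasks WHERE deleted = 0 ORDER BY id")]
print(visible)  # ['t2']
```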
from sqlalchemy.schema import (Column, MetaData, Table, Index) from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, String, Text, create_tables) # noqa def define_tasks_table(meta): tasks = Table('tasks', meta, Column('id', String(36), primary_key=True, nullable=False), Column('type', String(30), nullable=False), Column('status', String(30), nullable=False), Column('owner', String(255), nullable=False), Column('input', Text()), # json blob Column('result', Text()), # json blob Column('message', Text()), Column('expires_at', DateTime(), nullable=True), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime()), Column('deleted_at', DateTime()), Column('deleted', Boolean(), nullable=False, default=False), mysql_engine='InnoDB', mysql_charset='utf8', extend_existing=True) Index('ix_tasks_type', tasks.c.type) Index('ix_tasks_status', tasks.c.status) Index('ix_tasks_owner', tasks.c.owner) Index('ix_tasks_deleted', tasks.c.deleted) Index('ix_tasks_updated_at', tasks.c.updated_at) return tasks def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine tables = [define_tasks_table(meta)] create_tables(tables) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/versions/005_size_big_integer.py0000666000175100017510000000543213245511421027632 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from sqlalchemy import * # noqa from glance.db.sqlalchemy.migrate_repo.schema import ( Boolean, DateTime, BigInteger, Integer, String, Text, from_migration_import) # noqa def get_images_table(meta): """ Returns the Table object for the images table that corresponds to the images table definition of this version. """ images = Table('images', meta, Column('id', Integer(), primary_key=True, nullable=False), Column('name', String(255)), Column('disk_format', String(20)), Column('container_format', String(20)), Column('size', BigInteger()), Column('status', String(30), nullable=False), Column('is_public', Boolean(), nullable=False, default=False, index=True), Column('location', Text()), Column('created_at', DateTime(), nullable=False), Column('updated_at', DateTime()), Column('deleted_at', DateTime()), Column('deleted', Boolean(), nullable=False, default=False, index=True), mysql_engine='InnoDB', extend_existing=True) return images def upgrade(migrate_engine): meta = MetaData() meta.bind = migrate_engine # No changes to SQLite stores are necessary, since # there is no BIG INTEGER type in SQLite. Unfortunately, # running the Python 005_size_big_integer.py migration script # on a SQLite datastore results in an error in the sa-migrate # code that does the workarounds for SQLite not having # ALTER TABLE MODIFY COLUMN ability dialect = migrate_engine.url.get_dialect().name if not dialect.startswith('sqlite'): (get_images_table,) = from_migration_import( '003_add_disk_format', ['get_images_table']) images = get_images_table(meta) images.columns['size'].alter(type=BigInteger()) glance-16.0.0/glance/db/sqlalchemy/migrate_repo/schema.py0000666000175100017510000000626313245511421023331 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Various conveniences used for migration scripts """ from oslo_log import log as logging import sqlalchemy.types from glance.i18n import _LI LOG = logging.getLogger(__name__) String = lambda length: sqlalchemy.types.String( length=length, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False) Text = lambda: sqlalchemy.types.Text( length=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False) Boolean = lambda: sqlalchemy.types.Boolean(create_constraint=True, name=None) DateTime = lambda: sqlalchemy.types.DateTime(timezone=False) Integer = lambda: sqlalchemy.types.Integer() BigInteger = lambda: sqlalchemy.types.BigInteger() PickleType = lambda: sqlalchemy.types.PickleType() Numeric = lambda: sqlalchemy.types.Numeric() def from_migration_import(module_name, fromlist): """ Import a migration file and return the module :param module_name: name of migration module to import from (ex: 001_add_images_table) :param fromlist: list of items to import (ex: define_images_table) :returns: module object This bit of ugliness warrants an explanation: As you're writing migrations, you'll frequently want to refer to tables defined in previous migrations. In the interest of not repeating yourself, you need a way of importing that table into a 'future' migration. However, tables are bound to metadata, so what you need to import is really a table factory, which you can late-bind to your current metadata object. Moreover, migrations begin with a number (001...), which means they aren't valid Python identifiers. 
This means we can't perform a 'normal' import on them (the Python lexer will 'splode). Instead, we need to use __import__ magic to bring the table-factory into our namespace. Example Usage: (define_images_table,) = from_migration_import( '001_add_images_table', ['define_images_table']) images = define_images_table(meta) # Refer to images table """ module_path = 'glance.db.sqlalchemy.migrate_repo.versions.%s' % module_name module = __import__(module_path, globals(), locals(), fromlist, 0) return [getattr(module, item) for item in fromlist] def create_tables(tables): for table in tables: LOG.info(_LI("creating table %(table)s"), {'table': table}) table.create() def drop_tables(tables): for table in tables: LOG.info(_LI("dropping table %(table)s"), {'table': table}) table.drop() glance-16.0.0/glance/db/sqlalchemy/migrate_repo/README0000666000175100017510000000017313245511421022371 0ustar zuulzuul00000000000000This is a database migration repository. More information at https://git.openstack.org/cgit/openstack/sqlalchemy-migrate/ glance-16.0.0/glance/db/sqlalchemy/migrate_repo/__init__.py0000666000175100017510000000000013245511421023607 0ustar zuulzuul00000000000000glance-16.0.0/glance/db/sqlalchemy/migrate_repo/migrate.cfg0000666000175100017510000000174113245511421023624 0ustar zuulzuul00000000000000[db_settings] # Used to identify which repository this database is versioned under. # You can use the name of your project. repository_id=Glance Migrations # The name of the database table used to track the schema version. # This name shouldn't already be used by your project. # If this is changed once a database is under version control, you'll need to # change the table name in each database too. 
version_table=migrate_version # When committing a change script, Migrate will attempt to generate the # sql for all supported databases; normally, if one of them fails - probably # because you don't have that database installed - it is ignored and the # commit continues, perhaps ending successfully. # Databases in this list MUST compile successfully during a commit, or the # entire commit will fail. List the databases your application will actually # be using to ensure your updates to that database work properly. # This must be a list; example: ['postgres','sqlite'] required_dbs=[] glance-16.0.0/glance/db/sqlalchemy/migrate_repo/manage.py0000666000175100017510000000141213245511421023310 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. from migrate.versioning.shell import main # This should probably be a console script entry point. if __name__ == '__main__': main(debug='False', repository='.') glance-16.0.0/glance/db/sqlalchemy/api.py0000666000175100017510000020630413245511421020163 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2010-2011 OpenStack Foundation # Copyright 2012 Justin Santa Barbara # Copyright 2013 IBM Corp. # Copyright 2015 Mirantis, Inc. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Defines interface for DB access.""" import datetime import threading from oslo_config import cfg from oslo_db import exception as db_exception from oslo_db.sqlalchemy import session from oslo_log import log as logging from oslo_utils import excutils import osprofiler.sqlalchemy from retrying import retry import six # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range import sqlalchemy from sqlalchemy.ext.compiler import compiles from sqlalchemy import MetaData, Table import sqlalchemy.orm as sa_orm from sqlalchemy import sql import sqlalchemy.sql as sa_sql from glance.common import exception from glance.common import timeutils from glance.common import utils from glance.db.sqlalchemy.metadef_api import (resource_type as metadef_resource_type_api) from glance.db.sqlalchemy.metadef_api import (resource_type_association as metadef_association_api) from glance.db.sqlalchemy.metadef_api import namespace as metadef_namespace_api from glance.db.sqlalchemy.metadef_api import object as metadef_object_api from glance.db.sqlalchemy.metadef_api import property as metadef_property_api from glance.db.sqlalchemy.metadef_api import tag as metadef_tag_api from glance.db.sqlalchemy import models from glance.db import utils as db_utils from glance.i18n import _, _LW, _LI, _LE sa_logger = None LOG = logging.getLogger(__name__) STATUSES = ['active', 'saving', 'queued', 'killed', 'pending_delete', 'deleted', 
'deactivated', 'importing', 'uploading'] CONF = cfg.CONF CONF.import_group("profiler", "glance.common.wsgi") _FACADE = None _LOCK = threading.Lock() def _retry_on_deadlock(exc): """Decorator to retry a DB API call if Deadlock was received.""" if isinstance(exc, db_exception.DBDeadlock): LOG.warn(_LW("Deadlock detected. Retrying...")) return True return False def _create_facade_lazily(): global _LOCK, _FACADE if _FACADE is None: with _LOCK: if _FACADE is None: _FACADE = session.EngineFacade.from_config(CONF) if CONF.profiler.enabled and CONF.profiler.trace_sqlalchemy: osprofiler.sqlalchemy.add_tracing(sqlalchemy, _FACADE.get_engine(), "db") return _FACADE def get_engine(): facade = _create_facade_lazily() return facade.get_engine() def get_session(autocommit=True, expire_on_commit=False): facade = _create_facade_lazily() return facade.get_session(autocommit=autocommit, expire_on_commit=expire_on_commit) def _validate_db_int(**kwargs): """Make sure that all arguments are less than or equal to 2 ** 31 - 1. This limitation is introduced because databases stores INT in 4 bytes. If the validation fails for some argument, exception.Invalid is raised with appropriate information. """ max_int = (2 ** 31) - 1 for param_key, param_value in kwargs.items(): if param_value and param_value > max_int: msg = _("'%(param)s' value out of range, " "must not exceed %(max)d.") % {"param": param_key, "max": max_int} raise exception.Invalid(msg) def clear_db_env(): """ Unset global configuration variables for database. 
""" global _FACADE _FACADE = None def _check_mutate_authorization(context, image_ref): if not is_image_mutable(context, image_ref): LOG.warn(_LW("Attempted to modify image user did not own.")) msg = _("You do not own this image") if image_ref.visibility in ['private', 'shared']: exc_class = exception.Forbidden else: # 'public', or 'community' exc_class = exception.ForbiddenPublicImage raise exc_class(msg) def image_create(context, values, v1_mode=False): """Create an image from the values dictionary.""" image = _image_update(context, values, None, purge_props=False) if v1_mode: image = db_utils.mutate_image_dict_to_v1(image) return image def image_update(context, image_id, values, purge_props=False, from_state=None, v1_mode=False): """ Set the given properties on an image and update it. :raises: ImageNotFound if image does not exist. """ image = _image_update(context, values, image_id, purge_props, from_state=from_state) if v1_mode: image = db_utils.mutate_image_dict_to_v1(image) return image @retry(retry_on_exception=_retry_on_deadlock, wait_fixed=500, stop_max_attempt_number=50) def image_destroy(context, image_id): """Destroy the image or raise if it does not exist.""" session = get_session() with session.begin(): image_ref = _image_get(context, image_id, session=session) # Perform authorization check _check_mutate_authorization(context, image_ref) image_ref.delete(session=session) delete_time = image_ref.deleted_at _image_locations_delete_all(context, image_id, delete_time, session) _image_property_delete_all(context, image_id, delete_time, session) _image_member_delete_all(context, image_id, delete_time, session) _image_tag_delete_all(context, image_id, delete_time, session) return _normalize_locations(context, image_ref) def _normalize_locations(context, image, force_show_deleted=False): """ Generate suitable dictionary list for locations field of image. We don't need to set other data fields of location record which return from image query. 
""" if image['status'] == 'deactivated' and not context.is_admin: # Locations are not returned for a deactivated image for non-admin user image['locations'] = [] return image if force_show_deleted: locations = image['locations'] else: locations = [x for x in image['locations'] if not x.deleted] image['locations'] = [{'id': loc['id'], 'url': loc['value'], 'metadata': loc['meta_data'], 'status': loc['status']} for loc in locations] return image def _normalize_tags(image): undeleted_tags = [x for x in image['tags'] if not x.deleted] image['tags'] = [tag['value'] for tag in undeleted_tags] return image def image_get(context, image_id, session=None, force_show_deleted=False, v1_mode=False): image = _image_get(context, image_id, session=session, force_show_deleted=force_show_deleted) image = _normalize_locations(context, image.to_dict(), force_show_deleted=force_show_deleted) if v1_mode: image = db_utils.mutate_image_dict_to_v1(image) return image def _check_image_id(image_id): """ check if the given image id is valid before executing operations. For now, we only check its length. The original purpose of this method is wrapping the different behaviors between MySql and DB2 when the image id length is longer than the defined length in database model. 
:param image_id: The id of the image we want to check :returns: Raise NoFound exception if given image id is invalid """ if (image_id and len(image_id) > models.Image.id.property.columns[0].type.length): raise exception.ImageNotFound() def _image_get(context, image_id, session=None, force_show_deleted=False): """Get an image or raise if it does not exist.""" _check_image_id(image_id) session = session or get_session() try: query = session.query(models.Image).options( sa_orm.joinedload(models.Image.properties)).options( sa_orm.joinedload( models.Image.locations)).filter_by(id=image_id) # filter out deleted images if context disallows it if not force_show_deleted and not context.can_see_deleted: query = query.filter_by(deleted=False) image = query.one() except sa_orm.exc.NoResultFound: msg = "No image found with ID %s" % image_id LOG.debug(msg) raise exception.ImageNotFound(msg) # Make sure they can look at it if not is_image_visible(context, image): msg = "Forbidding request, image %s not visible" % image_id LOG.debug(msg) raise exception.Forbidden(msg) return image def is_image_mutable(context, image): """Return True if the image is mutable in this context.""" # Is admin == image mutable if context.is_admin: return True # No owner == image not mutable if image['owner'] is None or context.owner is None: return False # Image only mutable by its owner return image['owner'] == context.owner def is_image_visible(context, image, status=None): """Return True if the image is visible in this context.""" return db_utils.is_image_visible(context, image, image_member_find, status) def _get_default_column_value(column_type): """Return the default value of the columns from DB table In postgreDB case, if no right default values are being set, an psycopg2.DataError will be thrown. 
""" type_schema = { 'datetime': None, 'big_integer': 0, 'integer': 0, 'string': '' } if isinstance(column_type, sa_sql.type_api.Variant): return _get_default_column_value(column_type.impl) return type_schema[column_type.__visit_name__] def _paginate_query(query, model, limit, sort_keys, marker=None, sort_dir=None, sort_dirs=None): """Returns a query with sorting / pagination criteria added. Pagination works by requiring a unique sort_key, specified by sort_keys. (If sort_keys is not unique, then we risk looping through values.) We use the last row in the previous page as the 'marker' for pagination. So we must return values that follow the passed marker in the order. With a single-valued sort_key, this would be easy: sort_key > X. With a compound-values sort_key, (k1, k2, k3) we must do this to repeat the lexicographical ordering: (k1 > X1) or (k1 == X1 && k2 > X2) or (k1 == X1 && k2 == X2 && k3 > X3) We also have to cope with different sort_directions. Typically, the id of the last row is used as the client-facing pagination marker, then the actual marker object must be fetched from the db and passed in to us as marker. :param query: the query object to which we should add paging/sorting :param model: the ORM model class :param limit: maximum number of items to return :param sort_keys: array of attributes by which results should be sorted :param marker: the last item of the previous page; we returns the next results after this value. :param sort_dir: direction in which results should be sorted (asc, desc) :param sort_dirs: per-column array of sort_dirs, corresponding to sort_keys :rtype: sqlalchemy.orm.query.Query :returns: The query with sorting/pagination added. 
""" if 'id' not in sort_keys: # TODO(justinsb): If this ever gives a false-positive, check # the actual primary key, rather than assuming its id LOG.warn(_LW('Id not in sort_keys; is sort_keys unique?')) assert(not (sort_dir and sort_dirs)) # nosec # nosec: This function runs safely if the assertion fails. # Default the sort direction to ascending if sort_dir is None: sort_dir = 'asc' # Ensure a per-column sort direction if sort_dirs is None: sort_dirs = [sort_dir] * len(sort_keys) assert(len(sort_dirs) == len(sort_keys)) # nosec # nosec: This function runs safely if the assertion fails. if len(sort_dirs) < len(sort_keys): sort_dirs += [sort_dir] * (len(sort_keys) - len(sort_dirs)) # Add sorting for current_sort_key, current_sort_dir in zip(sort_keys, sort_dirs): sort_dir_func = { 'asc': sqlalchemy.asc, 'desc': sqlalchemy.desc, }[current_sort_dir] try: sort_key_attr = getattr(model, current_sort_key) except AttributeError: raise exception.InvalidSortKey() query = query.order_by(sort_dir_func(sort_key_attr)) default = '' # Default to an empty string if NULL # Add pagination if marker is not None: marker_values = [] for sort_key in sort_keys: v = getattr(marker, sort_key) if v is None: v = default marker_values.append(v) # Build up an array of sort criteria as in the docstring criteria_list = [] for i in range(len(sort_keys)): crit_attrs = [] for j in range(i): model_attr = getattr(model, sort_keys[j]) default = _get_default_column_value( model_attr.property.columns[0].type) attr = sa_sql.expression.case([(model_attr != None, model_attr), ], else_=default) crit_attrs.append((attr == marker_values[j])) model_attr = getattr(model, sort_keys[i]) default = _get_default_column_value( model_attr.property.columns[0].type) attr = sa_sql.expression.case([(model_attr != None, model_attr), ], else_=default) if sort_dirs[i] == 'desc': crit_attrs.append((attr < marker_values[i])) elif sort_dirs[i] == 'asc': crit_attrs.append((attr > marker_values[i])) else: raise 
ValueError(_("Unknown sort direction, "
             "must be 'desc' or 'asc'"))
        criteria = sa_sql.and_(*crit_attrs)
        criteria_list.append(criteria)

    f = sa_sql.or_(*criteria_list)
    query = query.filter(f)

    if limit is not None:
        query = query.limit(limit)

    return query


def _make_conditions_from_filters(filters, is_public=None):
    # NOTE(venkatesh) make a copy of the filters, as they are to be altered
    # in this method.
    filters = filters.copy()

    image_conditions = []
    prop_conditions = []
    tag_conditions = []

    if is_public is not None:
        if is_public:
            image_conditions.append(models.Image.visibility == 'public')
        else:
            image_conditions.append(models.Image.visibility != 'public')

    if 'checksum' in filters:
        checksum = filters.pop('checksum')
        image_conditions.append(models.Image.checksum == checksum)

    for (k, v) in filters.pop('properties', {}).items():
        prop_filters = _make_image_property_condition(key=k, value=v)
        prop_conditions.append(prop_filters)

    if 'changes-since' in filters:
        # normalize timestamp to UTC, as sqlalchemy doesn't appear to
        # respect timezone offsets
        changes_since = timeutils.normalize_time(filters.pop('changes-since'))
        image_conditions.append(models.Image.updated_at > changes_since)

    if 'deleted' in filters:
        deleted_filter = filters.pop('deleted')
        image_conditions.append(models.Image.deleted == deleted_filter)
        # TODO(bcwaldon): handle this logic in registry server
        if not deleted_filter:
            image_statuses = [s for s in STATUSES if s != 'killed']
            image_conditions.append(models.Image.status.in_(image_statuses))

    if 'tags' in filters:
        tags = filters.pop('tags')
        for tag in tags:
            tag_filters = [models.ImageTag.deleted == False]
            tag_filters.extend([models.ImageTag.value == tag])
            tag_conditions.append(tag_filters)

    filters = {k: v for k, v in filters.items() if v is not None}

    # need to copy items because filters is modified in the loop body
    # (filters.pop(k))
    keys = list(filters.keys())
    for k in keys:
        key = k
        if k.endswith('_min') or k.endswith('_max'):
            key = key[0:-4]
            try:
                v = int(filters.pop(k))
            except ValueError:
                msg = _("Unable to filter on a range "
                        "with a non-numeric value.")
                raise exception.InvalidFilterRangeValue(msg)

            if k.endswith('_min'):
                image_conditions.append(getattr(models.Image, key) >= v)
            if k.endswith('_max'):
                image_conditions.append(getattr(models.Image, key) <= v)
        elif k in ['created_at', 'updated_at']:
            attr_value = getattr(models.Image, key)
            operator, isotime = utils.split_filter_op(filters.pop(k))
            try:
                parsed_time = timeutils.parse_isotime(isotime)
                threshold = timeutils.normalize_time(parsed_time)
            except ValueError:
                msg = (_("Bad \"%s\" query filter format. "
                         "Use ISO 8601 DateTime notation.") % k)
                raise exception.InvalidParameterValue(msg)

            comparison = utils.evaluate_filter_op(attr_value, operator,
                                                  threshold)
            image_conditions.append(comparison)
        elif k in ['name', 'id', 'status', 'container_format', 'disk_format']:
            attr_value = getattr(models.Image, key)
            operator, list_value = utils.split_filter_op(filters.pop(k))
            if operator == 'in':
                threshold = utils.split_filter_value_for_quotes(list_value)
                comparison = attr_value.in_(threshold)
                image_conditions.append(comparison)
            elif operator == 'eq':
                image_conditions.append(attr_value == list_value)
            else:
                msg = (_("Unable to filter by unknown operator '%s'.")
                       % operator)
                raise exception.InvalidFilterOperatorValue(msg)

    for (k, value) in filters.items():
        if hasattr(models.Image, k):
            image_conditions.append(getattr(models.Image, k) == value)
        else:
            prop_filters = _make_image_property_condition(key=k, value=value)
            prop_conditions.append(prop_filters)

    return image_conditions, prop_conditions, tag_conditions


def _make_image_property_condition(key, value):
    prop_filters = [models.ImageProperty.deleted == False]
    prop_filters.extend([models.ImageProperty.name == key])
    prop_filters.extend([models.ImageProperty.value == value])
    return prop_filters


def _select_images_query(context, image_conditions, admin_as_user,
                         member_status, visibility):
    session = get_session()

    img_conditional_clause = sa_sql.and_(*image_conditions)

    regular_user = (not context.is_admin) or admin_as_user

    query_member = session.query(models.Image).join(
        models.Image.members).filter(img_conditional_clause)
    if regular_user:
        member_filters = [models.ImageMember.deleted == False]
        member_filters.extend([models.Image.visibility == 'shared'])
        if context.owner is not None:
            member_filters.extend([models.ImageMember.member == context.owner])
            if member_status != 'all':
                member_filters.extend([
                    models.ImageMember.status == member_status])
        query_member = query_member.filter(sa_sql.and_(*member_filters))

    query_image = session.query(models.Image).filter(img_conditional_clause)
    if regular_user:
        visibility_filters = [
            models.Image.visibility == 'public',
            models.Image.visibility == 'community',
        ]
        query_image = query_image.filter(sa_sql.or_(*visibility_filters))
        query_image_owner = None
        if context.owner is not None:
            query_image_owner = session.query(models.Image).filter(
                models.Image.owner == context.owner).filter(
                    img_conditional_clause)
        if query_image_owner is not None:
            query = query_image.union(query_image_owner, query_member)
        else:
            query = query_image.union(query_member)
        return query
    else:
        # Admin user
        return query_image


def image_get_all(context, filters=None, marker=None, limit=None,
                  sort_key=None, sort_dir=None,
                  member_status='accepted', is_public=None,
                  admin_as_user=False, return_tag=False, v1_mode=False):
    """
    Get all images that match zero or more filters.

    :param filters: dict of filter keys and values. If a 'properties'
                    key is present, it is treated as a dict of key/value
                    filters on the image properties attribute
    :param marker: image id after which to start page
    :param limit: maximum number of images to return
    :param sort_key: list of image attributes by which results should be
                     sorted
    :param sort_dir: directions in which results should be sorted (asc, desc)
    :param member_status: only return shared images that have this membership
                          status
    :param is_public: If true, return only public images. If false, return
                      only private and shared images.
    :param admin_as_user: For backwards compatibility. If true, then return to
                          an admin the equivalent set of images which it would
                          see if it was a regular user
    :param return_tag: Indicates whether an image entry in the result should
                       include its relevant tag entries. This can improve
                       upper-layer query performance by avoiding separate
                       calls
    :param v1_mode: If true, mutates the 'visibility' value of each image
                    into the v1-compatible field 'is_public'
    """
    sort_key = ['created_at'] if not sort_key else sort_key

    default_sort_dir = 'desc'

    if not sort_dir:
        sort_dir = [default_sort_dir] * len(sort_key)
    elif len(sort_dir) == 1:
        default_sort_dir = sort_dir[0]
        sort_dir *= len(sort_key)

    filters = filters or {}

    visibility = filters.pop('visibility', None)
    showing_deleted = 'changes-since' in filters or filters.get('deleted',
                                                                False)

    img_cond, prop_cond, tag_cond = _make_conditions_from_filters(
        filters, is_public)

    query = _select_images_query(context,
                                 img_cond,
                                 admin_as_user,
                                 member_status,
                                 visibility)

    if visibility is not None:
        # with a visibility, we always and only include images with that
        # visibility
        query = query.filter(models.Image.visibility == visibility)
    elif context.owner is None:
        # without either a visibility or an owner, we never include
        # 'community' images
        query = query.filter(models.Image.visibility != 'community')
    else:
        # without a visibility and with an owner, we only want to include
        # 'community' images if and only if they are owned by this owner
        community_filters = [
            models.Image.owner == context.owner,
            models.Image.visibility != 'community',
        ]
        query = query.filter(sa_sql.or_(*community_filters))

    if prop_cond:
        for prop_condition in prop_cond:
            query = query.join(models.ImageProperty, aliased=True).filter(
                sa_sql.and_(*prop_condition))

    if tag_cond:
        for tag_condition in tag_cond:
            query = query.join(models.ImageTag, aliased=True).filter(
                sa_sql.and_(*tag_condition))

    marker_image = None
    if marker is not None:
        marker_image = _image_get(context,
                                  marker,
                                  force_show_deleted=showing_deleted)

    for key in ['created_at', 'id']:
        if key not in sort_key:
            sort_key.append(key)
            sort_dir.append(default_sort_dir)

    query = _paginate_query(query, models.Image, limit,
                            sort_key,
                            marker=marker_image,
                            sort_dir=None,
                            sort_dirs=sort_dir)

    query = query.options(sa_orm.joinedload(
        models.Image.properties)).options(
            sa_orm.joinedload(models.Image.locations))
    if return_tag:
        query = query.options(sa_orm.joinedload(models.Image.tags))

    images = []
    for image in query.all():
        image_dict = image.to_dict()
        image_dict = _normalize_locations(context, image_dict,
                                          force_show_deleted=showing_deleted)
        if return_tag:
            image_dict = _normalize_tags(image_dict)
        if v1_mode:
            image_dict = db_utils.mutate_image_dict_to_v1(image_dict)
        images.append(image_dict)
    return images


def _drop_protected_attrs(model_class, values):
    """
    Remove protected attributes from the values dictionary using the model's
    __protected_attributes__ field.
    """
    for attr in model_class.__protected_attributes__:
        if attr in values:
            del values[attr]


def _image_get_disk_usage_by_owner(owner, session, image_id=None):
    query = session.query(models.Image)
    query = query.filter(models.Image.owner == owner)
    if image_id is not None:
        query = query.filter(models.Image.id != image_id)
    query = query.filter(models.Image.size > 0)
    query = query.filter(~models.Image.status.in_(['killed', 'deleted']))
    images = query.all()

    total = 0
    for i in images:
        locations = [l for l in i.locations if l['status'] != 'deleted']
        total += (i.size * len(locations))
    return total


def _validate_image(values, mandatory_status=True):
    """
    Validates the incoming data and raises an Invalid exception
    if anything is out of order.

    :param values: Mapping of image metadata to check
    :param mandatory_status: Whether to validate status from values
    """
    if mandatory_status:
        status = values.get('status')
        if not status:
            msg = "Image status is required."
            raise exception.Invalid(msg)

        if status not in STATUSES:
            msg = "Invalid image status '%s' for image." % status
            raise exception.Invalid(msg)

    # validate integer values to eliminate DBError on save
    _validate_db_int(min_disk=values.get('min_disk'),
                     min_ram=values.get('min_ram'))

    return values


def _update_values(image_ref, values):
    for k in values:
        if getattr(image_ref, k) != values[k]:
            setattr(image_ref, k, values[k])


@retry(retry_on_exception=_retry_on_deadlock, wait_fixed=500,
       stop_max_attempt_number=50)
@utils.no_4byte_params
def _image_update(context, values, image_id, purge_props=False,
                  from_state=None):
    """
    Used internally by image_create and image_update

    :param context: Request context
    :param values: A dict of attributes to set
    :param image_id: If None, create the image, otherwise, find and update it
    """
    # NOTE(jbresnah) values is altered in this so a copy is needed
    values = values.copy()

    session = get_session()
    with session.begin():
        # Remove the properties passed in the values mapping. We
        # handle properties separately from base image attributes,
        # and leaving properties in the values mapping will cause
        # a SQLAlchemy model error because SQLAlchemy expects the
        # properties attribute of an Image model to be a list and
        # not a dict.
        properties = values.pop('properties', {})

        location_data = values.pop('locations', None)

        new_status = values.get('status')
        if image_id:
            image_ref = _image_get(context, image_id, session=session)
            current = image_ref.status

            # Perform authorization check
            _check_mutate_authorization(context, image_ref)
        else:
            if values.get('size') is not None:
                values['size'] = int(values['size'])

            if 'min_ram' in values:
                values['min_ram'] = int(values['min_ram'] or 0)

            if 'min_disk' in values:
                values['min_disk'] = int(values['min_disk'] or 0)

            values['protected'] = bool(values.get('protected', False))
            image_ref = models.Image()
            values = db_utils.ensure_image_dict_v2_compliant(values)

        # Need to canonicalize ownership
        if 'owner' in values and not values['owner']:
            values['owner'] = None

        if image_id:
            # Don't drop created_at if we're passing it in...
            _drop_protected_attrs(models.Image, values)
            # NOTE(iccha-sethi): updated_at must be explicitly set in case
            #                    only ImageProperty table was modified
            values['updated_at'] = timeutils.utcnow()

        if image_id:
            query = session.query(models.Image).filter_by(id=image_id)
            if from_state:
                query = query.filter_by(status=from_state)

            mandatory_status = True if new_status else False
            _validate_image(values, mandatory_status=mandatory_status)

            # Validate fields for Images table. This is similar to what is
            # done for the query result update except that we need to do it
            # prior in this case.
            values = {key: values[key] for key in values
                      if key in image_ref.to_dict()}
            updated = query.update(values, synchronize_session='fetch')

            if not updated:
                msg = (_('cannot transition from %(current)s to '
                         '%(next)s in update (wanted '
                         'from_state=%(from)s)') %
                       {'current': current, 'next': new_status,
                        'from': from_state})
                raise exception.Conflict(msg)

            image_ref = _image_get(context, image_id, session=session)
        else:
            image_ref.update(values)
            # Validate the attributes before we go any further. From my
            # investigation, the @validates decorator does not validate
            # on new records, only on existing records, which is, well,
            # idiotic.
            values = _validate_image(image_ref.to_dict())
            _update_values(image_ref, values)

            try:
                image_ref.save(session=session)
            except db_exception.DBDuplicateEntry:
                raise exception.Duplicate("Image ID %s already exists!"
                                          % values['id'])

        _set_properties_for_image(context, image_ref, properties, purge_props,
                                  session)

        if location_data:
            _image_locations_set(context, image_ref.id, location_data,
                                 session=session)

    return image_get(context, image_ref.id)


@utils.no_4byte_params
def image_location_add(context, image_id, location, session=None):
    deleted = location['status'] in ('deleted', 'pending_delete')
    delete_time = timeutils.utcnow() if deleted else None
    location_ref = models.ImageLocation(image_id=image_id,
                                        value=location['url'],
                                        meta_data=location['metadata'],
                                        status=location['status'],
                                        deleted=deleted,
                                        deleted_at=delete_time)
    session = session or get_session()
    location_ref.save(session=session)


@utils.no_4byte_params
def image_location_update(context, image_id, location, session=None):
    loc_id = location.get('id')
    if loc_id is None:
        # NOTE: %s rather than %d, since loc_id is None on this path
        msg = _("The location data has an invalid ID: %s") % loc_id
        raise exception.Invalid(msg)

    try:
        session = session or get_session()
        location_ref = session.query(models.ImageLocation).filter_by(
            id=loc_id).filter_by(image_id=image_id).one()

        deleted = location['status'] in ('deleted', 'pending_delete')
        updated_time = timeutils.utcnow()
        delete_time = updated_time if deleted else None

        location_ref.update({"value": location['url'],
                             "meta_data": location['metadata'],
                             "status": location['status'],
                             "deleted": deleted,
                             "updated_at": updated_time,
                             "deleted_at": delete_time})
        location_ref.save(session=session)
    except sa_orm.exc.NoResultFound:
        msg = (_("No location found with ID %(loc)s from image %(img)s") %
               dict(loc=loc_id, img=image_id))
        LOG.warn(msg)
        raise exception.NotFound(msg)


def image_location_delete(context, image_id, location_id, status,
                          delete_time=None, session=None):
    if status not in ('deleted', 'pending_delete'):
        msg = _("The status of deleted image location can only be set to "
                "'pending_delete' or 'deleted'")
        raise exception.Invalid(msg)

    try:
        session = session or get_session()
        location_ref = session.query(models.ImageLocation).filter_by(
            id=location_id).filter_by(image_id=image_id).one()

        delete_time = delete_time or timeutils.utcnow()

        location_ref.update({"deleted": True,
                             "status": status,
                             "updated_at": delete_time,
                             "deleted_at": delete_time})
        location_ref.save(session=session)
    except sa_orm.exc.NoResultFound:
        msg = (_("No location found with ID %(loc)s from image %(img)s") %
               dict(loc=location_id, img=image_id))
        LOG.warn(msg)
        raise exception.NotFound(msg)


def _image_locations_set(context, image_id, locations, session=None):
    # NOTE(zhiyan): 1. Remove records from DB for deleted locations
    session = session or get_session()
    query = session.query(models.ImageLocation).filter_by(
        image_id=image_id).filter_by(deleted=False)

    loc_ids = [loc['id'] for loc in locations if loc.get('id')]
    if loc_ids:
        query = query.filter(~models.ImageLocation.id.in_(loc_ids))

    for loc_id in [loc_ref.id for loc_ref in query.all()]:
        image_location_delete(context, image_id, loc_id, 'deleted',
                              session=session)

    # NOTE(zhiyan): 2.
Adding or update locations for loc in locations: if loc.get('id') is None: image_location_add(context, image_id, loc, session=session) else: image_location_update(context, image_id, loc, session=session) def _image_locations_delete_all(context, image_id, delete_time=None, session=None): """Delete all image locations for given image""" session = session or get_session() location_refs = session.query(models.ImageLocation).filter_by( image_id=image_id).filter_by(deleted=False).all() for loc_id in [loc_ref.id for loc_ref in location_refs]: image_location_delete(context, image_id, loc_id, 'deleted', delete_time=delete_time, session=session) @utils.no_4byte_params def _set_properties_for_image(context, image_ref, properties, purge_props=False, session=None): """ Create or update a set of image_properties for a given image :param context: Request context :param image_ref: An Image object :param properties: A dict of properties to set :param session: A SQLAlchemy session to use (if present) """ orig_properties = {} for prop_ref in image_ref.properties: orig_properties[prop_ref.name] = prop_ref for name, value in six.iteritems(properties): prop_values = {'image_id': image_ref.id, 'name': name, 'value': value} if name in orig_properties: prop_ref = orig_properties[name] _image_property_update(context, prop_ref, prop_values, session=session) else: image_property_create(context, prop_values, session=session) if purge_props: for key in orig_properties.keys(): if key not in properties: prop_ref = orig_properties[key] image_property_delete(context, prop_ref.name, image_ref.id, session=session) def _image_child_entry_delete_all(child_model_cls, image_id, delete_time=None, session=None): """Deletes all the child entries for the given image id. Deletes all the child entries of the given child entry ORM model class using the parent image's id. The child entry ORM model class can be one of the following: model.ImageLocation, model.ImageProperty, model.ImageMember and model.ImageTag. 
:param child_model_cls: the ORM model class. :param image_id: id of the image whose child entries are to be deleted. :param delete_time: datetime of deletion to be set. If None, uses current datetime. :param session: A SQLAlchemy session to use (if present) :rtype: int :returns: The number of child entries got soft-deleted. """ session = session or get_session() query = session.query(child_model_cls).filter_by( image_id=image_id).filter_by(deleted=False) delete_time = delete_time or timeutils.utcnow() count = query.update({"deleted": True, "deleted_at": delete_time}) return count def image_property_create(context, values, session=None): """Create an ImageProperty object.""" prop_ref = models.ImageProperty() prop = _image_property_update(context, prop_ref, values, session=session) return prop.to_dict() def _image_property_update(context, prop_ref, values, session=None): """ Used internally by image_property_create and image_property_update. """ _drop_protected_attrs(models.ImageProperty, values) values["deleted"] = False prop_ref.update(values) prop_ref.save(session=session) return prop_ref def image_property_delete(context, prop_ref, image_ref, session=None): """ Used internally by image_property_create and image_property_update. 
""" session = session or get_session() prop = session.query(models.ImageProperty).filter_by(image_id=image_ref, name=prop_ref).one() prop.delete(session=session) return prop def _image_property_delete_all(context, image_id, delete_time=None, session=None): """Delete all image properties for given image""" props_updated_count = _image_child_entry_delete_all(models.ImageProperty, image_id, delete_time, session) return props_updated_count @utils.no_4byte_params def image_member_create(context, values, session=None): """Create an ImageMember object.""" memb_ref = models.ImageMember() _image_member_update(context, memb_ref, values, session=session) return _image_member_format(memb_ref) def _image_member_format(member_ref): """Format a member ref for consumption outside of this module.""" return { 'id': member_ref['id'], 'image_id': member_ref['image_id'], 'member': member_ref['member'], 'can_share': member_ref['can_share'], 'status': member_ref['status'], 'created_at': member_ref['created_at'], 'updated_at': member_ref['updated_at'], 'deleted': member_ref['deleted'] } def image_member_update(context, memb_id, values): """Update an ImageMember object.""" session = get_session() memb_ref = _image_member_get(context, memb_id, session) _image_member_update(context, memb_ref, values, session) return _image_member_format(memb_ref) def _image_member_update(context, memb_ref, values, session=None): """Apply supplied dictionary of values to a Member object.""" _drop_protected_attrs(models.ImageMember, values) values["deleted"] = False values.setdefault('can_share', False) memb_ref.update(values) memb_ref.save(session=session) return memb_ref def image_member_delete(context, memb_id, session=None): """Delete an ImageMember object.""" session = session or get_session() member_ref = _image_member_get(context, memb_id, session) _image_member_delete(context, member_ref, session) def _image_member_delete(context, memb_ref, session): memb_ref.delete(session=session) def 
_image_member_delete_all(context, image_id, delete_time=None, session=None): """Delete all image members for given image""" members_updated_count = _image_child_entry_delete_all(models.ImageMember, image_id, delete_time, session) return members_updated_count def _image_member_get(context, memb_id, session): """Fetch an ImageMember entity by id.""" query = session.query(models.ImageMember) query = query.filter_by(id=memb_id) return query.one() def image_member_find(context, image_id=None, member=None, status=None, include_deleted=False): """Find all members that meet the given criteria. Note, currently include_deleted should be true only when create a new image membership, as there may be a deleted image membership between the same image and tenant, the membership will be reused in this case. It should be false in other cases. :param image_id: identifier of image entity :param member: tenant to which membership has been granted :include_deleted: A boolean indicating whether the result should include the deleted record of image member """ session = get_session() members = _image_member_find(context, session, image_id, member, status, include_deleted) return [_image_member_format(m) for m in members] def _image_member_find(context, session, image_id=None, member=None, status=None, include_deleted=False): query = session.query(models.ImageMember) if not include_deleted: query = query.filter_by(deleted=False) if not context.is_admin: query = query.join(models.Image) filters = [ models.Image.owner == context.owner, models.ImageMember.member == context.owner, ] query = query.filter(sa_sql.or_(*filters)) if image_id is not None: query = query.filter(models.ImageMember.image_id == image_id) if member is not None: query = query.filter(models.ImageMember.member == member) if status is not None: query = query.filter(models.ImageMember.status == status) return query.all() def image_member_count(context, image_id): """Return the number of image members for this image :param 
image_id: identifier of image entity """ session = get_session() if not image_id: msg = _("Image id is required.") raise exception.Invalid(msg) query = session.query(models.ImageMember) query = query.filter_by(deleted=False) query = query.filter(models.ImageMember.image_id == str(image_id)) return query.count() def image_tag_set_all(context, image_id, tags): # NOTE(kragniz): tag ordering should match exactly what was provided, so a # subsequent call to image_tag_get_all returns them in the correct order session = get_session() existing_tags = image_tag_get_all(context, image_id, session) tags_created = [] for tag in tags: if tag not in tags_created and tag not in existing_tags: tags_created.append(tag) image_tag_create(context, image_id, tag, session) for tag in existing_tags: if tag not in tags: image_tag_delete(context, image_id, tag, session) @utils.no_4byte_params def image_tag_create(context, image_id, value, session=None): """Create an image tag.""" session = session or get_session() tag_ref = models.ImageTag(image_id=image_id, value=value) tag_ref.save(session=session) return tag_ref['value'] def image_tag_delete(context, image_id, value, session=None): """Delete an image tag.""" _check_image_id(image_id) session = session or get_session() query = session.query(models.ImageTag).filter_by( image_id=image_id).filter_by( value=value).filter_by(deleted=False) try: tag_ref = query.one() except sa_orm.exc.NoResultFound: raise exception.NotFound() tag_ref.delete(session=session) def _image_tag_delete_all(context, image_id, delete_time=None, session=None): """Delete all image tags for given image""" tags_updated_count = _image_child_entry_delete_all(models.ImageTag, image_id, delete_time, session) return tags_updated_count def image_tag_get_all(context, image_id, session=None): """Get a list of tags for a specific image.""" _check_image_id(image_id) session = session or get_session() tags = session.query(models.ImageTag.value).filter_by( 
image_id=image_id).filter_by(deleted=False).all() return [tag[0] for tag in tags] class DeleteFromSelect(sa_sql.expression.UpdateBase): def __init__(self, table, select, column): self.table = table self.select = select self.column = column # NOTE(abhishekk): MySQL doesn't yet support subquery with # 'LIMIT & IN/ALL/ANY/SOME' We need work around this with nesting select. @compiles(DeleteFromSelect) def visit_delete_from_select(element, compiler, **kw): return "DELETE FROM %s WHERE %s in (SELECT T1.%s FROM (%s) as T1)" % ( compiler.process(element.table, asfrom=True), compiler.process(element.column), element.column.name, compiler.process(element.select)) def purge_deleted_rows(context, age_in_days, max_rows, session=None): """Purges soft deleted rows Deletes rows of table images, table tasks and all dependent tables according to given age for relevant models. """ # check max_rows for its maximum limit _validate_db_int(max_rows=max_rows) session = session or get_session() metadata = MetaData(get_engine()) deleted_age = timeutils.utcnow() - datetime.timedelta(days=age_in_days) tables = [] for model_class in models.__dict__.values(): if not hasattr(model_class, '__tablename__'): continue if hasattr(model_class, 'deleted'): tables.append(model_class.__tablename__) # get rid of FK constraints for tbl in ('images', 'tasks'): try: tables.remove(tbl) except ValueError: LOG.warning(_LW('Expected table %(tbl)s was not found in DB.'), {'tbl': tbl}) else: tables.append(tbl) for tbl in tables: tab = Table(tbl, metadata, autoload=True) LOG.info( _LI('Purging deleted rows older than %(age_in_days)d day(s) ' 'from table %(tbl)s'), {'age_in_days': age_in_days, 'tbl': tbl}) column = tab.c.id deleted_at_column = tab.c.deleted_at query_delete = sql.select( [column], deleted_at_column < deleted_age).order_by( deleted_at_column).limit(max_rows) delete_statement = DeleteFromSelect(tab, query_delete, column) try: with session.begin(): result = session.execute(delete_statement) except 
db_exception.DBReferenceError as ex: with excutils.save_and_reraise_exception(): LOG.error(_LE('DBError detected when purging from ' "%(tablename)s: %(error)s"), {'tablename': tbl, 'error': six.text_type(ex)}) rows = result.rowcount LOG.info(_LI('Deleted %(rows)d row(s) from table %(tbl)s'), {'rows': rows, 'tbl': tbl}) def user_get_storage_usage(context, owner_id, image_id=None, session=None): _check_image_id(image_id) session = session or get_session() total_size = _image_get_disk_usage_by_owner( owner_id, session, image_id=image_id) return total_size def _task_info_format(task_info_ref): """Format a task info ref for consumption outside of this module""" if task_info_ref is None: return {} return { 'task_id': task_info_ref['task_id'], 'input': task_info_ref['input'], 'result': task_info_ref['result'], 'message': task_info_ref['message'], } def _task_info_create(context, task_id, values, session=None): """Create an TaskInfo object""" session = session or get_session() task_info_ref = models.TaskInfo() task_info_ref.task_id = task_id task_info_ref.update(values) task_info_ref.save(session=session) return _task_info_format(task_info_ref) def _task_info_update(context, task_id, values, session=None): """Update an TaskInfo object""" session = session or get_session() task_info_ref = _task_info_get(context, task_id, session=session) if task_info_ref: task_info_ref.update(values) task_info_ref.save(session=session) return _task_info_format(task_info_ref) def _task_info_get(context, task_id, session=None): """Fetch an TaskInfo entity by task_id""" session = session or get_session() query = session.query(models.TaskInfo) query = query.filter_by(task_id=task_id) try: task_info_ref = query.one() except sa_orm.exc.NoResultFound: LOG.debug("TaskInfo was not found for task with id %(task_id)s", {'task_id': task_id}) task_info_ref = None return task_info_ref def task_create(context, values, session=None): """Create a task object""" values = values.copy() session = session or 
get_session() with session.begin(): task_info_values = _pop_task_info_values(values) task_ref = models.Task() _task_update(context, task_ref, values, session=session) _task_info_create(context, task_ref.id, task_info_values, session=session) return task_get(context, task_ref.id, session) def _pop_task_info_values(values): task_info_values = {} for k, v in list(values.items()): if k in ['input', 'result', 'message']: values.pop(k) task_info_values[k] = v return task_info_values def task_update(context, task_id, values, session=None): """Update a task object""" session = session or get_session() with session.begin(): task_info_values = _pop_task_info_values(values) task_ref = _task_get(context, task_id, session) _drop_protected_attrs(models.Task, values) values['updated_at'] = timeutils.utcnow() _task_update(context, task_ref, values, session) if task_info_values: _task_info_update(context, task_id, task_info_values, session) return task_get(context, task_id, session) def task_get(context, task_id, session=None, force_show_deleted=False): """Fetch a task entity by id""" task_ref = _task_get(context, task_id, session=session, force_show_deleted=force_show_deleted) return _task_format(task_ref, task_ref.info) def task_delete(context, task_id, session=None): """Delete a task""" session = session or get_session() task_ref = _task_get(context, task_id, session=session) task_ref.delete(session=session) return _task_format(task_ref, task_ref.info) def _task_soft_delete(context, session=None): """Scrub task entities which are expired """ expires_at = models.Task.expires_at session = session or get_session() query = session.query(models.Task) query = (query.filter(models.Task.owner == context.owner) .filter_by(deleted=False) .filter(expires_at <= timeutils.utcnow())) values = {'deleted': True, 'deleted_at': timeutils.utcnow()} with session.begin(): query.update(values) def task_get_all(context, filters=None, marker=None, limit=None, sort_key='created_at', sort_dir='desc', 
admin_as_user=False): """ Get all tasks that match zero or more filters. :param filters: dict of filter keys and values. :param marker: task id after which to start page :param limit: maximum number of tasks to return :param sort_key: task attribute by which results should be sorted :param sort_dir: direction in which results should be sorted (asc, desc) :param admin_as_user: For backwards compatibility. If true, then return to an admin the equivalent set of tasks which it would see if it were a regular user :returns: tasks set """ filters = filters or {} session = get_session() query = session.query(models.Task) if not (context.is_admin or admin_as_user) and context.owner is not None: query = query.filter(models.Task.owner == context.owner) _task_soft_delete(context, session=session) showing_deleted = False if 'deleted' in filters: deleted_filter = filters.pop('deleted') query = query.filter_by(deleted=deleted_filter) showing_deleted = deleted_filter for (k, v) in filters.items(): if v is not None: key = k if hasattr(models.Task, key): query = query.filter(getattr(models.Task, key) == v) marker_task = None if marker is not None: marker_task = _task_get(context, marker, force_show_deleted=showing_deleted) sort_keys = ['created_at', 'id'] if sort_key not in sort_keys: sort_keys.insert(0, sort_key) query = _paginate_query(query, models.Task, limit, sort_keys, marker=marker_task, sort_dir=sort_dir) task_refs = query.all() tasks = [] for task_ref in task_refs: tasks.append(_task_format(task_ref, task_info_ref=None)) return tasks def _is_task_visible(context, task): """Return True if the task is visible in this context.""" # Is admin == task visible if context.is_admin: return True # No owner == task visible if task['owner'] is None: return True # Perform tests based on whether we have an owner if context.owner is not None: if context.owner == task['owner']: return True return False def _task_get(context, task_id, session=None, force_show_deleted=False): """Fetch a task 
entity by id""" session = session or get_session() query = session.query(models.Task).options( sa_orm.joinedload(models.Task.info) ).filter_by(id=task_id) if not force_show_deleted and not context.can_see_deleted: query = query.filter_by(deleted=False) try: task_ref = query.one() except sa_orm.exc.NoResultFound: LOG.debug("No task found with ID %s", task_id) raise exception.TaskNotFound(task_id=task_id) # Make sure the task is visible if not _is_task_visible(context, task_ref): msg = "Forbidding request, task %s is not visible" % task_id LOG.debug(msg) raise exception.Forbidden(msg) return task_ref def _task_update(context, task_ref, values, session=None): """Apply supplied dictionary of values to a task object.""" if 'deleted' not in values: values["deleted"] = False task_ref.update(values) task_ref.save(session=session) return task_ref def _task_format(task_ref, task_info_ref=None): """Format a task ref for consumption outside of this module""" task_dict = { 'id': task_ref['id'], 'type': task_ref['type'], 'status': task_ref['status'], 'owner': task_ref['owner'], 'expires_at': task_ref['expires_at'], 'created_at': task_ref['created_at'], 'updated_at': task_ref['updated_at'], 'deleted_at': task_ref['deleted_at'], 'deleted': task_ref['deleted'] } if task_info_ref: task_info_dict = { 'input': task_info_ref['input'], 'result': task_info_ref['result'], 'message': task_info_ref['message'], } task_dict.update(task_info_dict) return task_dict def metadef_namespace_get_all(context, marker=None, limit=None, sort_key=None, sort_dir=None, filters=None, session=None): """List all available namespaces.""" session = session or get_session() namespaces = metadef_namespace_api.get_all( context, session, marker, limit, sort_key, sort_dir, filters) return namespaces def metadef_namespace_get(context, namespace_name, session=None): """Get a namespace or raise if it does not exist or is not visible.""" session = session or get_session() return metadef_namespace_api.get( context, 
    namespace_name, session)


@utils.no_4byte_params
def metadef_namespace_create(context, values, session=None):
    """Create a namespace or raise if it already exists."""
    session = session or get_session()
    return metadef_namespace_api.create(context, values, session)


@utils.no_4byte_params
def metadef_namespace_update(context, namespace_id, namespace_dict,
                             session=None):
    """Update a namespace or raise if it does not exist or is not visible."""
    session = session or get_session()
    return metadef_namespace_api.update(
        context, namespace_id, namespace_dict, session)


def metadef_namespace_delete(context, namespace_name, session=None):
    """Delete the namespace and all foreign references."""
    session = session or get_session()
    return metadef_namespace_api.delete_cascade(
        context, namespace_name, session)


def metadef_object_get_all(context, namespace_name, session=None):
    """Get all metadata-schema objects or raise if namespace doesn't exist."""
    session = session or get_session()
    return metadef_object_api.get_all(
        context, namespace_name, session)


def metadef_object_get(context, namespace_name, object_name, session=None):
    """Get a metadata-schema object or raise if it does not exist."""
    session = session or get_session()
    return metadef_object_api.get(
        context, namespace_name, object_name, session)


@utils.no_4byte_params
def metadef_object_create(context, namespace_name, object_dict,
                          session=None):
    """Create a metadata-schema object or raise if it already exists."""
    session = session or get_session()
    return metadef_object_api.create(
        context, namespace_name, object_dict, session)


@utils.no_4byte_params
def metadef_object_update(context, namespace_name, object_id, object_dict,
                          session=None):
    """Update an object or raise if it does not exist or is not visible."""
    session = session or get_session()
    return metadef_object_api.update(
        context, namespace_name, object_id, object_dict, session)


def metadef_object_delete(context, namespace_name, object_name,
                          session=None):
    """Delete an object or raise if namespace or object doesn't exist."""
    session = session or get_session()
    return metadef_object_api.delete(
        context, namespace_name, object_name, session)


def metadef_object_delete_namespace_content(
        context, namespace_name, session=None):
    """Delete all objects in a namespace or raise if it doesn't exist."""
    session = session or get_session()
    return metadef_object_api.delete_by_namespace_name(
        context, namespace_name, session)


def metadef_object_count(context, namespace_name, session=None):
    """Get count of objects for a namespace, raise if ns doesn't exist."""
    session = session or get_session()
    return metadef_object_api.count(context, namespace_name, session)


def metadef_property_get_all(context, namespace_name, session=None):
    """Get all metadef properties or raise if namespace doesn't exist."""
    session = session or get_session()
    return metadef_property_api.get_all(context, namespace_name, session)


def metadef_property_get(context, namespace_name,
                         property_name, session=None):
    """Get a metadef property or raise if it does not exist."""
    session = session or get_session()
    return metadef_property_api.get(
        context, namespace_name, property_name, session)


@utils.no_4byte_params
def metadef_property_create(context, namespace_name, property_dict,
                            session=None):
    """Create a metadef property or raise if it already exists."""
    session = session or get_session()
    return metadef_property_api.create(
        context, namespace_name, property_dict, session)


@utils.no_4byte_params
def metadef_property_update(context, namespace_name, property_id,
                            property_dict, session=None):
    """Update a property or raise if it does not exist or is not visible."""
    session = session or get_session()
    return metadef_property_api.update(
        context, namespace_name, property_id, property_dict, session)


def metadef_property_delete(context, namespace_name, property_name,
                            session=None):
    """Delete a property or raise if it or namespace doesn't exist."""
    session = session or get_session()
    return metadef_property_api.delete(
    context, namespace_name, property_name, session)


def metadef_property_delete_namespace_content(
        context, namespace_name, session=None):
    """Delete all properties in a namespace or raise if it doesn't exist."""
    session = session or get_session()
    return metadef_property_api.delete_by_namespace_name(
        context, namespace_name, session)


def metadef_property_count(context, namespace_name, session=None):
    """Get count of properties for a namespace, raise if ns doesn't exist."""
    session = session or get_session()
    return metadef_property_api.count(context, namespace_name, session)


def metadef_resource_type_create(context, values, session=None):
    """Create a resource_type."""
    session = session or get_session()
    return metadef_resource_type_api.create(
        context, values, session)


def metadef_resource_type_get(context, resource_type_name, session=None):
    """Get a resource_type."""
    session = session or get_session()
    return metadef_resource_type_api.get(
        context, resource_type_name, session)


def metadef_resource_type_get_all(context, session=None):
    """List all resource_types."""
    session = session or get_session()
    return metadef_resource_type_api.get_all(context, session)


def metadef_resource_type_delete(context, resource_type_name, session=None):
    """Delete a resource_type."""
    session = session or get_session()
    return metadef_resource_type_api.delete(
        context, resource_type_name, session)


def metadef_resource_type_association_get(
        context, namespace_name, resource_type_name, session=None):
    session = session or get_session()
    return metadef_association_api.get(
        context, namespace_name, resource_type_name, session)


def metadef_resource_type_association_create(
        context, namespace_name, values, session=None):
    session = session or get_session()
    return metadef_association_api.create(
        context, namespace_name, values, session)


def metadef_resource_type_association_delete(
        context, namespace_name, resource_type_name, session=None):
    session = session or get_session()
    return metadef_association_api.delete(
        context, namespace_name, resource_type_name, session)


def metadef_resource_type_association_get_all_by_namespace(
        context, namespace_name, session=None):
    session = session or get_session()
    return metadef_association_api.get_all_by_namespace(
        context, namespace_name, session)


def metadef_tag_get_all(
        context, namespace_name, filters=None, marker=None, limit=None,
        sort_key=None, sort_dir=None, session=None):
    """Get metadata-schema tags or raise if none exist."""
    session = session or get_session()
    return metadef_tag_api.get_all(
        context, namespace_name, session,
        filters, marker, limit, sort_key, sort_dir)


def metadef_tag_get(context, namespace_name, name, session=None):
    """Get a metadata-schema tag or raise if it does not exist."""
    session = session or get_session()
    return metadef_tag_api.get(
        context, namespace_name, name, session)


@utils.no_4byte_params
def metadef_tag_create(context, namespace_name, tag_dict,
                       session=None):
    """Create a metadata-schema tag or raise if it already exists."""
    session = session or get_session()
    return metadef_tag_api.create(
        context, namespace_name, tag_dict, session)


def metadef_tag_create_tags(context, namespace_name, tag_list,
                            session=None):
    """Create metadata-schema tags or raise if any already exist."""
    # Only create a new session when the caller did not supply one, matching
    # the behavior of every other function in this module.
    session = session or get_session()
    return metadef_tag_api.create_tags(
        context, namespace_name, tag_list, session)


@utils.no_4byte_params
def metadef_tag_update(context, namespace_name, id, tag_dict,
                       session=None):
    """Update a tag or raise if it does not exist or is not visible."""
    session = session or get_session()
    return metadef_tag_api.update(
        context, namespace_name, id, tag_dict, session)


def metadef_tag_delete(context, namespace_name, name, session=None):
    """Delete a tag or raise if namespace or tag doesn't exist."""
    session = session or get_session()
    return metadef_tag_api.delete(
        context, namespace_name, name, session)


def metadef_tag_delete_namespace_content(
        context, namespace_name, session=None):
    """Delete all tags or raise if
    namespace or tag doesn't exist."""
    session = session or get_session()
    return metadef_tag_api.delete_by_namespace_name(
        context, namespace_name, session)


def metadef_tag_count(context, namespace_name, session=None):
    """Get count of tags for a namespace, raise if ns doesn't exist."""
    session = session or get_session()
    return metadef_tag_api.count(context, namespace_name, session)

glance-16.0.0/glance/db/sqlalchemy/__init__.py (empty)
glance-16.0.0/glance/db/sqlalchemy/models_metadef.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
""" SQLAlchemy models for glance metadata schema """ from oslo_db.sqlalchemy import models from sqlalchemy import Boolean from sqlalchemy import Column from sqlalchemy import DateTime from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import ForeignKey from sqlalchemy import Index from sqlalchemy import Integer from sqlalchemy.orm import relationship from sqlalchemy import String from sqlalchemy import Text from sqlalchemy import UniqueConstraint from glance.common import timeutils from glance.db.sqlalchemy.models import JSONEncodedDict class DictionaryBase(models.ModelBase): metadata = None def to_dict(self): d = {} for c in self.__table__.columns: d[c.name] = self[c.name] return d BASE_DICT = declarative_base(cls=DictionaryBase) class GlanceMetadefBase(models.TimestampMixin): """Base class for Glance Metadef Models.""" __table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'} __table_initialized__ = False __protected_attributes__ = set(["created_at", "updated_at"]) created_at = Column(DateTime, default=lambda: timeutils.utcnow(), nullable=False) # TODO(wko): Column `updated_at` have no default value in # OpenStack common code. We should decide, is this value # required and make changes in oslo (if required) or # in glance (if not). 
updated_at = Column(DateTime, default=lambda: timeutils.utcnow(), nullable=True, onupdate=lambda: timeutils.utcnow()) class MetadefNamespace(BASE_DICT, GlanceMetadefBase): """Represents a metadata-schema namespace in the datastore.""" __tablename__ = 'metadef_namespaces' __table_args__ = (UniqueConstraint('namespace', name='uq_metadef_namespaces' '_namespace'), Index('ix_metadef_namespaces_owner', 'owner') ) id = Column(Integer, primary_key=True, nullable=False) namespace = Column(String(80), nullable=False) display_name = Column(String(80)) description = Column(Text()) visibility = Column(String(32)) protected = Column(Boolean) owner = Column(String(255), nullable=False) class MetadefObject(BASE_DICT, GlanceMetadefBase): """Represents a metadata-schema object in the datastore.""" __tablename__ = 'metadef_objects' __table_args__ = (UniqueConstraint('namespace_id', 'name', name='uq_metadef_objects_namespace_id' '_name'), Index('ix_metadef_objects_name', 'name') ) id = Column(Integer, primary_key=True, nullable=False) namespace_id = Column(Integer(), ForeignKey('metadef_namespaces.id'), nullable=False) name = Column(String(80), nullable=False) description = Column(Text()) required = Column(Text()) json_schema = Column(JSONEncodedDict(), default={}, nullable=False) class MetadefProperty(BASE_DICT, GlanceMetadefBase): """Represents a metadata-schema namespace-property in the datastore.""" __tablename__ = 'metadef_properties' __table_args__ = (UniqueConstraint('namespace_id', 'name', name='uq_metadef_properties_namespace' '_id_name'), Index('ix_metadef_properties_name', 'name') ) id = Column(Integer, primary_key=True, nullable=False) namespace_id = Column(Integer(), ForeignKey('metadef_namespaces.id'), nullable=False) name = Column(String(80), nullable=False) json_schema = Column(JSONEncodedDict(), default={}, nullable=False) class MetadefNamespaceResourceType(BASE_DICT, GlanceMetadefBase): """Represents a metadata-schema namespace-property in the datastore.""" 
__tablename__ = 'metadef_namespace_resource_types' __table_args__ = (Index('ix_metadef_ns_res_types_namespace_id', 'namespace_id'), ) resource_type_id = Column(Integer, ForeignKey('metadef_resource_types.id'), primary_key=True, nullable=False) namespace_id = Column(Integer, ForeignKey('metadef_namespaces.id'), primary_key=True, nullable=False) properties_target = Column(String(80)) prefix = Column(String(80)) class MetadefResourceType(BASE_DICT, GlanceMetadefBase): """Represents a metadata-schema resource type in the datastore.""" __tablename__ = 'metadef_resource_types' __table_args__ = (UniqueConstraint('name', name='uq_metadef_resource_types_name'), ) id = Column(Integer, primary_key=True, nullable=False) name = Column(String(80), nullable=False) protected = Column(Boolean, nullable=False, default=False) associations = relationship( "MetadefNamespaceResourceType", primaryjoin=id == MetadefNamespaceResourceType.resource_type_id) class MetadefTag(BASE_DICT, GlanceMetadefBase): """Represents a metadata-schema tag in the data store.""" __tablename__ = 'metadef_tags' __table_args__ = (UniqueConstraint('namespace_id', 'name', name='uq_metadef_tags_namespace_id' '_name'), Index('ix_metadef_tags_name', 'name') ) id = Column(Integer, primary_key=True, nullable=False) namespace_id = Column(Integer(), ForeignKey('metadef_namespaces.id'), nullable=False) name = Column(String(80), nullable=False) def register_models(engine): """Create database tables for all models with the given engine.""" models = (MetadefNamespace, MetadefObject, MetadefProperty, MetadefTag, MetadefResourceType, MetadefNamespaceResourceType) for model in models: model.metadata.create_all(engine) def unregister_models(engine): """Drop database tables for all models with the given engine.""" models = (MetadefObject, MetadefProperty, MetadefNamespaceResourceType, MetadefTag, MetadefNamespace, MetadefResourceType) for model in models: model.metadata.drop_all(engine) 
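The `to_dict()` helper on `DictionaryBase` above walks `self.__table__.columns` and copies one entry per column. A pure-Python sketch of that loop, with hand-rolled stand-ins for SQLAlchemy's column metadata (`FakeColumn` and `FakeNamespace` are invented for illustration):

```python
class FakeColumn(object):
    """Stand-in for a SQLAlchemy Column; only the .name attribute matters."""
    def __init__(self, name):
        self.name = name


class FakeNamespace(object):
    # Mirrors a declarative model's __table__.columns collection.
    columns = [FakeColumn('id'), FakeColumn('namespace')]

    def __init__(self, **values):
        self._values = values

    def __getitem__(self, key):
        # Models support item access, which to_dict() relies on.
        return self._values[key]

    def to_dict(self):
        # Same loop as DictionaryBase.to_dict(): one dict entry per column.
        d = {}
        for c in self.columns:
            d[c.name] = self[c.name]
        return d


ns = FakeNamespace(id=1, namespace='OS::Compute::Hypervisor')
print(ns.to_dict())  # {'id': 1, 'namespace': 'OS::Compute::Hypervisor'}
```

Driving the conversion off the column metadata means new columns are picked up automatically, without keeping a separate field list in sync.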
glance-16.0.0/glance/db/sqlalchemy/metadata.py

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Copyright 2013 OpenStack Foundation
# Copyright 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import os
from os.path import isfile
from os.path import join
import re

from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
import six
import sqlalchemy
from sqlalchemy import and_
from sqlalchemy.schema import MetaData
from sqlalchemy.sql import select

from glance.common import timeutils
from glance.i18n import _, _LE, _LI, _LW

LOG = logging.getLogger(__name__)

metadata_opts = [
    cfg.StrOpt('metadata_source_path',
               default='/etc/glance/metadefs/',
               help=_("""
Absolute path to the directory where JSON metadefs files are stored.

Glance Metadata Definitions ("metadefs") are served from the database,
but are stored in files in the JSON format. The files in this
directory are used to initialize the metadefs in the database.
Additionally, when metadefs are exported from the database, the files
are written to this directory.

NOTE: If you plan to export metadefs, make sure that this directory
has write permissions set for the user being used to run the
glance-api service.
Possible values: * String value representing a valid absolute pathname Related options: * None """)), ] CONF = cfg.CONF CONF.register_opts(metadata_opts) def get_metadef_namespaces_table(meta): return sqlalchemy.Table('metadef_namespaces', meta, autoload=True) def get_metadef_resource_types_table(meta): return sqlalchemy.Table('metadef_resource_types', meta, autoload=True) def get_metadef_namespace_resource_types_table(meta): return sqlalchemy.Table('metadef_namespace_resource_types', meta, autoload=True) def get_metadef_properties_table(meta): return sqlalchemy.Table('metadef_properties', meta, autoload=True) def get_metadef_objects_table(meta): return sqlalchemy.Table('metadef_objects', meta, autoload=True) def get_metadef_tags_table(meta): return sqlalchemy.Table('metadef_tags', meta, autoload=True) def _get_resource_type_id(meta, name): rt_table = get_metadef_resource_types_table(meta) resource_type = ( select([rt_table.c.id]). where(rt_table.c.name == name). select_from(rt_table). execute().fetchone()) if resource_type: return resource_type[0] return None def _get_resource_type(meta, resource_type_id): rt_table = get_metadef_resource_types_table(meta) return ( rt_table.select(). where(rt_table.c.id == resource_type_id). execute().fetchone()) def _get_namespace_resource_types(meta, namespace_id): namespace_resource_types_table = ( get_metadef_namespace_resource_types_table(meta)) return ( namespace_resource_types_table.select(). where(namespace_resource_types_table.c.namespace_id == namespace_id). execute().fetchall()) def _get_namespace_resource_type_by_ids(meta, namespace_id, rt_id): namespace_resource_types_table = ( get_metadef_namespace_resource_types_table(meta)) return ( namespace_resource_types_table.select(). where(and_( namespace_resource_types_table.c.namespace_id == namespace_id, namespace_resource_types_table.c.resource_type_id == rt_id)). 
execute().fetchone()) def _get_properties(meta, namespace_id): properties_table = get_metadef_properties_table(meta) return ( properties_table.select(). where(properties_table.c.namespace_id == namespace_id). execute().fetchall()) def _get_objects(meta, namespace_id): objects_table = get_metadef_objects_table(meta) return ( objects_table.select(). where(objects_table.c.namespace_id == namespace_id). execute().fetchall()) def _get_tags(meta, namespace_id): tags_table = get_metadef_tags_table(meta) return ( tags_table.select(). where(tags_table.c.namespace_id == namespace_id). execute().fetchall()) def _get_resource_id(table, namespace_id, resource_name): resource = ( select([table.c.id]). where(and_(table.c.namespace_id == namespace_id, table.c.name == resource_name)). select_from(table). execute().fetchone()) if resource: return resource[0] return None def _clear_metadata(meta): metadef_tables = [get_metadef_properties_table(meta), get_metadef_objects_table(meta), get_metadef_tags_table(meta), get_metadef_namespace_resource_types_table(meta), get_metadef_namespaces_table(meta), get_metadef_resource_types_table(meta)] for table in metadef_tables: table.delete().execute() LOG.info(_LI("Table %s has been cleared"), table) def _clear_namespace_metadata(meta, namespace_id): metadef_tables = [get_metadef_properties_table(meta), get_metadef_objects_table(meta), get_metadef_tags_table(meta), get_metadef_namespace_resource_types_table(meta)] namespaces_table = get_metadef_namespaces_table(meta) for table in metadef_tables: table.delete().where(table.c.namespace_id == namespace_id).execute() namespaces_table.delete().where( namespaces_table.c.id == namespace_id).execute() def _populate_metadata(meta, metadata_path=None, merge=False, prefer_new=False, overwrite=False): if not metadata_path: metadata_path = CONF.metadata_source_path try: if isfile(metadata_path): json_schema_files = [metadata_path] else: json_schema_files = [f for f in os.listdir(metadata_path) if 
isfile(join(metadata_path, f)) and f.endswith('.json')] except OSError as e: LOG.error(encodeutils.exception_to_unicode(e)) return if not json_schema_files: LOG.error(_LE("Json schema files not found in %s. Aborting."), metadata_path) return namespaces_table = get_metadef_namespaces_table(meta) namespace_rt_table = get_metadef_namespace_resource_types_table(meta) objects_table = get_metadef_objects_table(meta) tags_table = get_metadef_tags_table(meta) properties_table = get_metadef_properties_table(meta) resource_types_table = get_metadef_resource_types_table(meta) for json_schema_file in json_schema_files: try: file = join(metadata_path, json_schema_file) with open(file) as json_file: metadata = json.load(json_file) except Exception as e: LOG.error(_LE("Failed to parse json file %(file_path)s while " "populating metadata due to: %(error_msg)s"), {"file_path": file, "error_msg": encodeutils.exception_to_unicode(e)}) continue values = { 'namespace': metadata.get('namespace'), 'display_name': metadata.get('display_name'), 'description': metadata.get('description'), 'visibility': metadata.get('visibility'), 'protected': metadata.get('protected'), 'owner': metadata.get('owner', 'admin') } db_namespace = select( [namespaces_table.c.id] ).where( namespaces_table.c.namespace == values['namespace'] ).select_from( namespaces_table ).execute().fetchone() if db_namespace and overwrite: LOG.info(_LI("Overwriting namespace %s"), values['namespace']) _clear_namespace_metadata(meta, db_namespace[0]) db_namespace = None if not db_namespace: values.update({'created_at': timeutils.utcnow()}) _insert_data_to_db(namespaces_table, values) db_namespace = select( [namespaces_table.c.id] ).where( namespaces_table.c.namespace == values['namespace'] ).select_from( namespaces_table ).execute().fetchone() elif not merge: LOG.info(_LI("Skipping namespace %s. 
It already exists in the " "database."), values['namespace']) continue elif prefer_new: values.update({'updated_at': timeutils.utcnow()}) _update_data_in_db(namespaces_table, values, namespaces_table.c.id, db_namespace[0]) namespace_id = db_namespace[0] for resource_type in metadata.get('resource_type_associations', []): rt_id = _get_resource_type_id(meta, resource_type['name']) if not rt_id: val = { 'name': resource_type['name'], 'created_at': timeutils.utcnow(), 'protected': True } _insert_data_to_db(resource_types_table, val) rt_id = _get_resource_type_id(meta, resource_type['name']) elif prefer_new: val = {'updated_at': timeutils.utcnow()} _update_data_in_db(resource_types_table, val, resource_types_table.c.id, rt_id) values = { 'namespace_id': namespace_id, 'resource_type_id': rt_id, 'properties_target': resource_type.get( 'properties_target'), 'prefix': resource_type.get('prefix') } namespace_resource_type = _get_namespace_resource_type_by_ids( meta, namespace_id, rt_id) if not namespace_resource_type: values.update({'created_at': timeutils.utcnow()}) _insert_data_to_db(namespace_rt_table, values) elif prefer_new: values.update({'updated_at': timeutils.utcnow()}) _update_rt_association(namespace_rt_table, values, rt_id, namespace_id) for property, schema in six.iteritems(metadata.get('properties', {})): values = { 'name': property, 'namespace_id': namespace_id, 'json_schema': json.dumps(schema) } property_id = _get_resource_id(properties_table, namespace_id, property) if not property_id: values.update({'created_at': timeutils.utcnow()}) _insert_data_to_db(properties_table, values) elif prefer_new: values.update({'updated_at': timeutils.utcnow()}) _update_data_in_db(properties_table, values, properties_table.c.id, property_id) for object in metadata.get('objects', []): values = { 'name': object['name'], 'description': object.get('description'), 'namespace_id': namespace_id, 'json_schema': json.dumps( object.get('properties')) } object_id = 
_get_resource_id(objects_table, namespace_id, object['name']) if not object_id: values.update({'created_at': timeutils.utcnow()}) _insert_data_to_db(objects_table, values) elif prefer_new: values.update({'updated_at': timeutils.utcnow()}) _update_data_in_db(objects_table, values, objects_table.c.id, object_id) for tag in metadata.get('tags', []): values = { 'name': tag.get('name'), 'namespace_id': namespace_id, } tag_id = _get_resource_id(tags_table, namespace_id, tag['name']) if not tag_id: values.update({'created_at': timeutils.utcnow()}) _insert_data_to_db(tags_table, values) elif prefer_new: values.update({'updated_at': timeutils.utcnow()}) _update_data_in_db(tags_table, values, tags_table.c.id, tag_id) LOG.info(_LI("File %s loaded to database."), file) LOG.info(_LI("Metadata loading finished")) def _insert_data_to_db(table, values, log_exception=True): try: table.insert(values=values).execute() except sqlalchemy.exc.IntegrityError: if log_exception: LOG.warning(_LW("Duplicate entry for values: %s"), values) def _update_data_in_db(table, values, column, value): try: (table.update(values=values). where(column == value).execute()) except sqlalchemy.exc.IntegrityError: LOG.warning(_LW("Duplicate entry for values: %s"), values) def _update_rt_association(table, values, rt_id, namespace_id): try: (table.update(values=values). 
where(and_(table.c.resource_type_id == rt_id, table.c.namespace_id == namespace_id)).execute()) except sqlalchemy.exc.IntegrityError: LOG.warning(_LW("Duplicate entry for values: %s"), values) def _export_data_to_file(meta, path): if not path: path = CONF.metadata_source_path namespace_table = get_metadef_namespaces_table(meta) namespaces = namespace_table.select().execute().fetchall() pattern = re.compile('[\W_]+', re.UNICODE) for id, namespace in enumerate(namespaces, start=1): namespace_id = namespace['id'] namespace_file_name = pattern.sub('', namespace['display_name']) values = { 'namespace': namespace['namespace'], 'display_name': namespace['display_name'], 'description': namespace['description'], 'visibility': namespace['visibility'], 'protected': namespace['protected'], 'resource_type_associations': [], 'properties': {}, 'objects': [], 'tags': [] } namespace_resource_types = _get_namespace_resource_types(meta, namespace_id) db_objects = _get_objects(meta, namespace_id) db_properties = _get_properties(meta, namespace_id) db_tags = _get_tags(meta, namespace_id) resource_types = [] for namespace_resource_type in namespace_resource_types: resource_type = _get_resource_type( meta, namespace_resource_type['resource_type_id']) resource_types.append({ 'name': resource_type['name'], 'prefix': namespace_resource_type['prefix'], 'properties_target': namespace_resource_type[ 'properties_target'] }) values.update({ 'resource_type_associations': resource_types }) objects = [] for object in db_objects: objects.append({ "name": object['name'], "description": object['description'], "properties": json.loads(object['json_schema']) }) values.update({ 'objects': objects }) properties = {} for property in db_properties: properties.update({ property['name']: json.loads(property['json_schema']) }) values.update({ 'properties': properties }) tags = [] for tag in db_tags: tags.append({ "name": tag['name'] }) values.update({ 'tags': tags }) try: file_name = ''.join([path, 
    namespace_file_name, '.json'])
            if isfile(file_name):
                LOG.info(_LI("Overwriting: %s"), file_name)
            with open(file_name, 'w') as json_file:
                json_file.write(json.dumps(values))
        except Exception as e:
            LOG.exception(encodeutils.exception_to_unicode(e))

        LOG.info(_LI("Namespace %(namespace)s saved in %(file)s"), {
            'namespace': namespace_file_name,
            'file': file_name})


def db_load_metadefs(engine, metadata_path=None, merge=False,
                     prefer_new=False, overwrite=False):
    meta = MetaData()
    meta.bind = engine

    if not merge and (prefer_new or overwrite):
        LOG.error(_LE("To use --prefer_new or --overwrite you must combine "
                      "these options with the --merge option."))
        return

    if prefer_new and overwrite and merge:
        LOG.error(_LE("Please provide no more than one of the following "
                      "options: --prefer_new, --overwrite"))
        return

    _populate_metadata(meta, metadata_path, merge, prefer_new, overwrite)


def db_unload_metadefs(engine):
    meta = MetaData()
    meta.bind = engine

    _clear_metadata(meta)


def db_export_metadefs(engine, metadata_path=None):
    meta = MetaData()
    meta.bind = engine

    _export_data_to_file(meta, metadata_path)

glance-16.0.0/glance/db/simple/api.py

# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
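The `_populate_metadata()` loader above reads JSON files from `metadata_source_path` and pulls out the keys shown in its `values` dict. A sketch of a metadef source file in that shape (the namespace name, property, and tag below are made up for illustration; the top-level keys mirror those the loader reads):

```python
import json

# Hypothetical metadef JSON source, shaped like the files _populate_metadata()
# parses: namespace attributes plus resource-type associations, properties,
# objects, and tags.
metadef_source = json.dumps({
    "namespace": "OS::Example::Demo",
    "display_name": "Demo",
    "description": "Illustrative namespace",
    "visibility": "public",
    "protected": False,
    "resource_type_associations": [
        {"name": "OS::Glance::Image", "prefix": "demo_"}
    ],
    "properties": {"demo_level": {"type": "string", "title": "Level"}},
    "objects": [],
    "tags": [{"name": "demo"}],
})

metadata = json.loads(metadef_source)
# The loader defaults a missing 'owner' to 'admin', as in
# metadata.get('owner', 'admin') above.
print(metadata.get('owner', 'admin'), metadata['namespace'])
```

Exporting with `db_export_metadefs` writes the same structure back out, one `<display_name>.json` file per namespace, so the format round-trips.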
import copy
import functools
import uuid

from oslo_config import cfg
from oslo_log import log as logging
import six

from glance.common import exception
from glance.common import timeutils
from glance.common import utils
from glance.db import utils as db_utils
from glance.i18n import _, _LI, _LW

CONF = cfg.CONF
LOG = logging.getLogger(__name__)

DATA = {
    'images': {},
    'members': {},
    'metadef_namespace_resource_types': [],
    'metadef_namespaces': [],
    'metadef_objects': [],
    'metadef_properties': [],
    'metadef_resource_types': [],
    'metadef_tags': [],
    'tags': {},
    'locations': [],
    'tasks': {},
    'task_info': {},
}

INDEX = 0


def log_call(func):
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        LOG.info(_LI('Calling %(funcname)s: args=%(args)s, '
                     'kwargs=%(kwargs)s'),
                 {"funcname": func.__name__,
                  "args": args,
                  "kwargs": kwargs})
        output = func(*args, **kwargs)
        LOG.info(_LI('Returning %(funcname)s: %(output)s'),
                 {"funcname": func.__name__,
                  "output": output})
        return output
    return wrapped


def configure():
    if CONF.workers not in [0, 1]:
        msg = _('CONF.workers should be set to 0 or 1 when using the '
                'db.simple.api backend. For more info, see '
                'https://bugs.launchpad.net/glance/+bug/1619508')
        LOG.critical(msg)
        raise SystemExit(msg)


def reset():
    global DATA
    DATA = {
        'images': {},
        'members': [],
        'metadef_namespace_resource_types': [],
        'metadef_namespaces': [],
        'metadef_objects': [],
        'metadef_properties': [],
        'metadef_resource_types': [],
        'metadef_tags': [],
        'tags': {},
        'locations': [],
        'tasks': {},
        'task_info': {},
    }


def clear_db_env(*args, **kwargs):
    """
    Setup global environment configuration variables.

    We have no connection-oriented environment variables, so this is a
    NOOP.
""" pass def _get_session(): return DATA @utils.no_4byte_params def _image_location_format(image_id, value, meta_data, status, deleted=False): dt = timeutils.utcnow() return { 'id': str(uuid.uuid4()), 'image_id': image_id, 'created_at': dt, 'updated_at': dt, 'deleted_at': dt if deleted else None, 'deleted': deleted, 'url': value, 'metadata': meta_data, 'status': status, } def _image_property_format(image_id, name, value): return { 'image_id': image_id, 'name': name, 'value': value, 'deleted': False, 'deleted_at': None, } def _image_member_format(image_id, tenant_id, can_share, status='pending', deleted=False): dt = timeutils.utcnow() return { 'id': str(uuid.uuid4()), 'image_id': image_id, 'member': tenant_id, 'can_share': can_share, 'status': status, 'created_at': dt, 'updated_at': dt, 'deleted': deleted, } def _pop_task_info_values(values): task_info_values = {} for k, v in list(values.items()): if k in ['input', 'result', 'message']: values.pop(k) task_info_values[k] = v return task_info_values def _format_task_from_db(task_ref, task_info_ref): task = copy.deepcopy(task_ref) if task_info_ref: task_info = copy.deepcopy(task_info_ref) task_info_values = _pop_task_info_values(task_info) task.update(task_info_values) return task def _task_format(task_id, **values): dt = timeutils.utcnow() task = { 'id': task_id, 'type': 'import', 'status': 'pending', 'owner': None, 'expires_at': None, 'created_at': dt, 'updated_at': dt, 'deleted_at': None, 'deleted': False, } task.update(values) return task def _task_info_format(task_id, **values): task_info = { 'task_id': task_id, 'input': None, 'result': None, 'message': None, } task_info.update(values) return task_info @utils.no_4byte_params def _image_update(image, values, properties): # NOTE(bcwaldon): store properties as a list to match sqlalchemy driver properties = [{'name': k, 'value': v, 'image_id': image['id'], 'deleted': False} for k, v in properties.items()] if 'properties' not in image.keys(): image['properties'] = [] 
    image['properties'].extend(properties)

    values = db_utils.ensure_image_dict_v2_compliant(values)
    image.update(values)
    return image


def _image_format(image_id, **values):
    dt = timeutils.utcnow()
    image = {
        'id': image_id,
        'name': None,
        'owner': None,
        'locations': [],
        'status': 'queued',
        'protected': False,
        'visibility': 'shared',
        'container_format': None,
        'disk_format': None,
        'min_ram': 0,
        'min_disk': 0,
        'size': None,
        'virtual_size': None,
        'checksum': None,
        'tags': [],
        'created_at': dt,
        'updated_at': dt,
        'deleted_at': None,
        'deleted': False,
    }

    locations = values.pop('locations', None)
    if locations is not None:
        image['locations'] = []
        for location in locations:
            location_ref = _image_location_format(image_id,
                                                  location['url'],
                                                  location['metadata'],
                                                  location['status'])
            image['locations'].append(location_ref)
            DATA['locations'].append(location_ref)

    return _image_update(image, values, values.pop('properties', {}))


def _filter_images(images, filters, context, status='accepted',
                   is_public=None, admin_as_user=False):
    filtered_images = []
    if 'properties' in filters:
        prop_filter = filters.pop('properties')
        filters.update(prop_filter)

    if status == 'all':
        status = None

    visibility = filters.pop('visibility', None)

    for image in images:
        member = image_member_find(context, image_id=image['id'],
                                   member=context.owner, status=status)
        is_member = len(member) > 0
        has_ownership = context.owner and image['owner'] == context.owner
        image_is_public = image['visibility'] == 'public'
        image_is_community = image['visibility'] == 'community'
        image_is_shared = image['visibility'] == 'shared'
        acts_as_admin = context.is_admin and not admin_as_user
        can_see = (image_is_public
                   or image_is_community
                   or has_ownership
                   or (is_member and image_is_shared)
                   or acts_as_admin)
        if not can_see:
            continue

        if visibility:
            if visibility == 'public':
                if not image_is_public:
                    continue
            elif visibility == 'private':
                if not (image['visibility'] == 'private'):
                    continue
                if not (has_ownership or acts_as_admin):
                    continue
            elif visibility == 'shared':
                if not image_is_shared:
                    continue
            elif visibility == 'community':
                if not image_is_community:
                    continue
        else:
            if (not has_ownership) and image_is_community:
                continue

        if is_public is not None:
            if not image_is_public == is_public:
                continue

        to_add = True
        for k, value in six.iteritems(filters):
            key = k
            if k.endswith('_min') or k.endswith('_max'):
                key = key[0:-4]
                try:
                    value = int(value)
                except ValueError:
                    msg = _("Unable to filter on a range "
                            "with a non-numeric value.")
                    raise exception.InvalidFilterRangeValue(msg)
            if k.endswith('_min'):
                to_add = image.get(key) >= value
            elif k.endswith('_max'):
                to_add = image.get(key) <= value
            elif k in ['created_at', 'updated_at']:
                attr_value = image.get(key)
                operator, isotime = utils.split_filter_op(value)
                parsed_time = timeutils.parse_isotime(isotime)
                threshold = timeutils.normalize_time(parsed_time)
                to_add = utils.evaluate_filter_op(attr_value, operator,
                                                  threshold)
            elif k in ['name', 'id', 'status',
                       'container_format', 'disk_format']:
                attr_value = image.get(key)
                operator, list_value = utils.split_filter_op(value)
                if operator == 'in':
                    threshold = utils.split_filter_value_for_quotes(list_value)
                    to_add = attr_value in threshold
                elif operator == 'eq':
                    to_add = (attr_value == list_value)
                else:
                    msg = (_("Unable to filter by unknown operator '%s'.")
                           % operator)
                    raise exception.InvalidFilterOperatorValue(msg)
            elif k != 'is_public' and image.get(k) is not None:
                to_add = image.get(key) == value
            elif k == 'tags':
                filter_tags = value
                image_tags = image_tag_get_all(context, image['id'])
                for tag in filter_tags:
                    if tag not in image_tags:
                        to_add = False
                        break
            else:
                to_add = False
                for p in image['properties']:
                    properties = {p['name']: p['value'],
                                  'deleted': p['deleted']}
                    to_add |= (properties.get(key) == value and
                               properties.get('deleted') is False)

            if not to_add:
                break

        if to_add:
            filtered_images.append(image)

    return filtered_images


def _do_pagination(context, images, marker, limit, show_deleted,
                   status='accepted'):
    start = 0
    end = -1
    if marker is None:
        start = 0
    else:
        # Check that the image is accessible
        _image_get(context, marker, force_show_deleted=show_deleted,
                   status=status)

        for i, image in enumerate(images):
            if image['id'] == marker:
                start = i + 1
                break
        else:
            raise exception.ImageNotFound()

    end = start + limit if limit is not None else None
    return images[start:end]


def _sort_images(images, sort_key, sort_dir):
    sort_key = ['created_at'] if not sort_key else sort_key

    default_sort_dir = 'desc'

    if not sort_dir:
        sort_dir = [default_sort_dir] * len(sort_key)
    elif len(sort_dir) == 1:
        default_sort_dir = sort_dir[0]
        sort_dir *= len(sort_key)

    for key in ['created_at', 'id']:
        if key not in sort_key:
            sort_key.append(key)
            sort_dir.append(default_sort_dir)

    for key in sort_key:
        if images and not (key in images[0]):
            raise exception.InvalidSortKey()

    if any(dir for dir in sort_dir if dir not in ['asc', 'desc']):
        raise exception.InvalidSortDir()

    if len(sort_key) != len(sort_dir):
        raise exception.Invalid(message='Number of sort dirs does not match '
                                        'the number of sort keys')

    for key, dir in reversed(list(zip(sort_key, sort_dir))):
        reverse = dir == 'desc'
        images.sort(key=lambda x: x[key] or '', reverse=reverse)

    return images


def _image_get(context, image_id, force_show_deleted=False, status=None):
    try:
        image = DATA['images'][image_id]
    except KeyError:
        LOG.warn(_LW('Could not find image %s') % image_id)
        raise exception.ImageNotFound()

    if image['deleted'] and not (force_show_deleted
                                 or context.can_see_deleted):
        LOG.warn(_LW('Unable to get deleted image'))
        raise exception.ImageNotFound()

    if not is_image_visible(context, image):
        LOG.warn(_LW('Unable to get unowned image'))
        raise exception.Forbidden("Image not visible to you")

    return image


@log_call
def image_get(context, image_id, session=None, force_show_deleted=False,
              v1_mode=False):
    image = copy.deepcopy(_image_get(context, image_id, force_show_deleted))
    image = _normalize_locations(context, image,
                                 force_show_deleted=force_show_deleted)
    if v1_mode:
        image = db_utils.mutate_image_dict_to_v1(image)
    return image


@log_call
def image_get_all(context, filters=None, marker=None, limit=None,
                  sort_key=None, sort_dir=None,
                  member_status='accepted', is_public=None,
                  admin_as_user=False, return_tag=False, v1_mode=False):
    filters = filters or {}
    images = DATA['images'].values()
    images = _filter_images(images, filters, context, member_status,
                            is_public, admin_as_user)
    images = _sort_images(images, sort_key, sort_dir)
    images = _do_pagination(context, images, marker, limit,
                            filters.get('deleted'))

    force_show_deleted = True if filters.get('deleted') else False
    res = []
    for image in images:
        img = _normalize_locations(context, copy.deepcopy(image),
                                   force_show_deleted=force_show_deleted)
        if return_tag:
            img['tags'] = image_tag_get_all(context, img['id'])
        if v1_mode:
            img = db_utils.mutate_image_dict_to_v1(img)
        res.append(img)
    return res


@log_call
def image_property_create(context, values):
    image = _image_get(context, values['image_id'])
    prop = _image_property_format(values['image_id'],
                                  values['name'],
                                  values['value'])
    image['properties'].append(prop)
    return prop


@log_call
def image_property_delete(context, prop_ref, image_ref):
    prop = None
    for p in DATA['images'][image_ref]['properties']:
        if p['name'] == prop_ref:
            prop = p
    if not prop:
        raise exception.NotFound()
    prop['deleted_at'] = timeutils.utcnow()
    prop['deleted'] = True
    return prop


@log_call
def image_member_find(context, image_id=None, member=None,
                      status=None, include_deleted=False):
    filters = []
    images = DATA['images']
    members = DATA['members']

    def is_visible(member):
        return (member['member'] == context.owner or
                images[member['image_id']]['owner'] == context.owner)

    if not context.is_admin:
        filters.append(is_visible)

    if image_id is not None:
        filters.append(lambda m: m['image_id'] == image_id)
    if member is not None:
        filters.append(lambda m: m['member'] == member)
    if status is not None:
        filters.append(lambda m: m['status'] == status)

    for f in filters:
        members = filter(f, members)

    return [copy.deepcopy(m) for m in members]


@log_call
def image_member_count(context, image_id):
    """Return the number of image members for this image

    :param image_id: identifier of image entity
    """
    if not image_id:
        msg = _("Image id is required.")
        raise exception.Invalid(msg)

    members = DATA['members']
    return len([x for x in members if x['image_id'] == image_id])


@log_call
@utils.no_4byte_params
def image_member_create(context, values):
    member = _image_member_format(values['image_id'],
                                  values['member'],
                                  values.get('can_share', False),
                                  values.get('status', 'pending'),
                                  values.get('deleted', False))
    global DATA
    DATA['members'].append(member)
    return copy.deepcopy(member)


@log_call
def image_member_update(context, member_id, values):
    global DATA
    for member in DATA['members']:
        if member['id'] == member_id:
            member.update(values)
            member['updated_at'] = timeutils.utcnow()
            return copy.deepcopy(member)
    else:
        raise exception.NotFound()


@log_call
def image_member_delete(context, member_id):
    global DATA
    for i, member in enumerate(DATA['members']):
        if member['id'] == member_id:
            del DATA['members'][i]
            break
    else:
        raise exception.NotFound()


@log_call
@utils.no_4byte_params
def image_location_add(context, image_id, location):
    deleted = location['status'] in ('deleted', 'pending_delete')
    location_ref = _image_location_format(image_id,
                                          value=location['url'],
                                          meta_data=location['metadata'],
                                          status=location['status'],
                                          deleted=deleted)
    DATA['locations'].append(location_ref)
    image = DATA['images'][image_id]
    image.setdefault('locations', []).append(location_ref)


@log_call
@utils.no_4byte_params
def image_location_update(context, image_id, location):
    loc_id = location.get('id')
    if loc_id is None:
        msg = _("The location data has an invalid ID: %d") % loc_id
        raise exception.Invalid(msg)

    deleted = location['status'] in ('deleted', 'pending_delete')
    updated_time = timeutils.utcnow()
    delete_time = updated_time if deleted else None

    updated = False
    for loc in DATA['locations']:
        if loc['id'] == loc_id and loc['image_id'] == image_id:
            loc.update({"value": location['url'],
                        "meta_data": location['metadata'],
                        "status": location['status'],
                        "deleted": deleted,
                        "updated_at": updated_time,
                        "deleted_at": delete_time})
            updated = True
            break

    if not updated:
        msg = (_("No location found with ID %(loc)s from image %(img)s") %
               dict(loc=loc_id, img=image_id))
        LOG.warn(msg)
        raise exception.NotFound(msg)


@log_call
def image_location_delete(context, image_id, location_id, status,
                          delete_time=None):
    if status not in ('deleted', 'pending_delete'):
        msg = _("The status of deleted image location can only be set to "
                "'pending_delete' or 'deleted'.")
        raise exception.Invalid(msg)

    deleted = False
    for loc in DATA['locations']:
        if loc['id'] == location_id and loc['image_id'] == image_id:
            deleted = True
            delete_time = delete_time or timeutils.utcnow()
            loc.update({"deleted": deleted,
                        "status": status,
                        "updated_at": delete_time,
                        "deleted_at": delete_time})
            break

    if not deleted:
        msg = (_("No location found with ID %(loc)s from image %(img)s") %
               dict(loc=location_id, img=image_id))
        LOG.warn(msg)
        raise exception.NotFound(msg)


def _image_locations_set(context, image_id, locations):
    # NOTE(zhiyan): 1. Remove records from DB for deleted locations
    used_loc_ids = [loc['id'] for loc in locations if loc.get('id')]
    image = DATA['images'][image_id]
    for loc in image['locations']:
        if loc['id'] not in used_loc_ids and not loc['deleted']:
            image_location_delete(context, image_id, loc['id'], 'deleted')

    for i, loc in enumerate(DATA['locations']):
        if (loc['image_id'] == image_id and
                loc['id'] not in used_loc_ids and
                not loc['deleted']):
            del DATA['locations'][i]

    # NOTE(zhiyan): 2. Adding or update locations
    for loc in locations:
        if loc.get('id') is None:
            image_location_add(context, image_id, loc)
        else:
            image_location_update(context, image_id, loc)


def _image_locations_delete_all(context, image_id, delete_time=None):
    image = DATA['images'][image_id]
    for loc in image['locations']:
        if not loc['deleted']:
            image_location_delete(context, image_id, loc['id'], 'deleted',
                                  delete_time=delete_time)

    for i, loc in enumerate(DATA['locations']):
        if image_id == loc['image_id'] and not loc['deleted']:
            del DATA['locations'][i]


def _normalize_locations(context, image, force_show_deleted=False):
    """
    Generate suitable dictionary list for locations field of image.

    We don't need to set other data fields of location record which return
    from image query.
    """
    if image['status'] == 'deactivated' and not context.is_admin:
        # Locations are not returned for a deactivated image for
        # non-admin user
        image['locations'] = []
        return image

    if force_show_deleted:
        locations = image['locations']
    else:
        locations = [x for x in image['locations'] if not x['deleted']]
    image['locations'] = [{'id': loc['id'],
                           'url': loc['url'],
                           'metadata': loc['metadata'],
                           'status': loc['status']}
                          for loc in locations]
    return image


@log_call
def image_create(context, image_values, v1_mode=False):
    global DATA
    image_id = image_values.get('id', str(uuid.uuid4()))

    if image_id in DATA['images']:
        raise exception.Duplicate()

    if 'status' not in image_values:
        raise exception.Invalid('status is a required attribute')

    allowed_keys = set(['id', 'name', 'status', 'min_ram', 'min_disk', 'size',
                        'virtual_size', 'checksum', 'locations', 'owner',
                        'protected', 'is_public', 'container_format',
                        'disk_format', 'created_at', 'updated_at', 'deleted',
                        'deleted_at', 'properties', 'tags', 'visibility'])

    incorrect_keys = set(image_values.keys()) - allowed_keys
    if incorrect_keys:
        raise exception.Invalid(
            'The keys %s are not valid' % str(incorrect_keys))

    image = _image_format(image_id, **image_values)
    DATA['images'][image_id] = image
    DATA['tags'][image_id] = image.pop('tags', [])

    image = _normalize_locations(context, copy.deepcopy(image))
    if v1_mode:
        image = db_utils.mutate_image_dict_to_v1(image)
    return image


@log_call
def image_update(context, image_id, image_values, purge_props=False,
                 from_state=None, v1_mode=False):
    global DATA
    try:
        image = DATA['images'][image_id]
    except KeyError:
        raise exception.ImageNotFound()

    location_data = image_values.pop('locations', None)
    if location_data is not None:
        _image_locations_set(context, image_id, location_data)

    # replace values for properties that already exist
    new_properties = image_values.pop('properties', {})
    for prop in image['properties']:
        if prop['name'] in new_properties:
            prop['value'] = new_properties.pop(prop['name'])
        elif purge_props:
            # this matches weirdness in the sqlalchemy api
            prop['deleted'] = True

    image['updated_at'] = timeutils.utcnow()
    _image_update(image, image_values, new_properties)
    DATA['images'][image_id] = image

    image = _normalize_locations(context, copy.deepcopy(image))
    if v1_mode:
        image = db_utils.mutate_image_dict_to_v1(image)
    return image


@log_call
def image_destroy(context, image_id):
    global DATA
    try:
        delete_time = timeutils.utcnow()
        DATA['images'][image_id]['deleted'] = True
        DATA['images'][image_id]['deleted_at'] = delete_time

        # NOTE(flaper87): Move the image to one of the deleted statuses
        # if it hasn't been done yet.
        if (DATA['images'][image_id]['status'] not in
                ['deleted', 'pending_delete']):
            DATA['images'][image_id]['status'] = 'deleted'

        _image_locations_delete_all(context, image_id,
                                    delete_time=delete_time)

        for prop in DATA['images'][image_id]['properties']:
            image_property_delete(context, prop['name'], image_id)

        members = image_member_find(context, image_id=image_id)
        for member in members:
            image_member_delete(context, member['id'])

        tags = image_tag_get_all(context, image_id)
        for tag in tags:
            image_tag_delete(context, image_id, tag)

        return _normalize_locations(context,
                                    copy.deepcopy(DATA['images'][image_id]))
    except KeyError:
        raise exception.ImageNotFound()


@log_call
def image_tag_get_all(context, image_id):
    return DATA['tags'].get(image_id, [])


@log_call
def image_tag_get(context, image_id, value):
    tags = image_tag_get_all(context, image_id)
    if value in tags:
        return value
    else:
        raise exception.NotFound()


@log_call
def image_tag_set_all(context, image_id, values):
    global DATA
    DATA['tags'][image_id] = list(values)


@log_call
@utils.no_4byte_params
def image_tag_create(context, image_id, value):
    global DATA
    DATA['tags'][image_id].append(value)
    return value


@log_call
def image_tag_delete(context, image_id, value):
    global DATA
    try:
        DATA['tags'][image_id].remove(value)
    except ValueError:
        raise exception.NotFound()


def is_image_visible(context, image, status=None):
    if status == 'all':
        status = None
    return db_utils.is_image_visible(context, image, image_member_find,
                                     status)


def user_get_storage_usage(context, owner_id, image_id=None, session=None):
    images = image_get_all(context, filters={'owner': owner_id})
    total = 0
    for image in images:
        if image['status'] in ['killed', 'deleted']:
            continue

        if image['id'] != image_id:
            locations = [loc for loc in image['locations']
                         if loc.get('status') != 'deleted']
            total += (image['size'] * len(locations))
    return total


@log_call
def task_create(context, values):
    """Create a task object"""
    global DATA

    task_values = copy.deepcopy(values)
    task_id = task_values.get('id', str(uuid.uuid4()))
    required_attributes = ['type', 'status', 'input']
    allowed_attributes = ['id', 'type', 'status', 'input', 'result', 'owner',
                          'message', 'expires_at', 'created_at',
                          'updated_at', 'deleted_at', 'deleted']

    if task_id in DATA['tasks']:
        raise exception.Duplicate()

    for key in required_attributes:
        if key not in task_values:
            raise exception.Invalid('%s is a required attribute' % key)

    incorrect_keys = set(task_values.keys()) - set(allowed_attributes)
    if incorrect_keys:
        raise exception.Invalid(
            'The keys %s are not valid' % str(incorrect_keys))

    task_info_values = _pop_task_info_values(task_values)
    task = _task_format(task_id, **task_values)
    DATA['tasks'][task_id] = task
    task_info = _task_info_create(task['id'], task_info_values)

    return _format_task_from_db(task, task_info)


@log_call
def task_update(context, task_id, values):
    """Update a task object"""
    global DATA
    task_values = copy.deepcopy(values)
    task_info_values = _pop_task_info_values(task_values)
    try:
        task = DATA['tasks'][task_id]
    except KeyError:
        LOG.debug("No task found with ID %s", task_id)
        raise exception.TaskNotFound(task_id=task_id)

    task.update(task_values)
    task['updated_at'] = timeutils.utcnow()
    DATA['tasks'][task_id] = task
    task_info = _task_info_update(task['id'], task_info_values)

    return _format_task_from_db(task, task_info)


@log_call
def task_get(context, task_id, force_show_deleted=False):
    task, task_info = _task_get(context, task_id, force_show_deleted)
    return _format_task_from_db(task, task_info)


def _task_get(context, task_id, force_show_deleted=False):
    try:
        task = DATA['tasks'][task_id]
    except KeyError:
        msg = _LW('Could not find task %s') % task_id
        LOG.warn(msg)
        raise exception.TaskNotFound(task_id=task_id)

    if task['deleted'] and not (force_show_deleted
                                or context.can_see_deleted):
        msg = _LW('Unable to get deleted task %s') % task_id
        LOG.warn(msg)
        raise exception.TaskNotFound(task_id=task_id)

    if not _is_task_visible(context, task):
        LOG.debug("Forbidding request, task %s is not visible", task_id)
        msg = _("Forbidding request, task %s is not visible") % task_id
        raise exception.Forbidden(msg)

    task_info = _task_info_get(task_id)

    return task, task_info


@log_call
def task_delete(context, task_id):
    global DATA
    try:
        DATA['tasks'][task_id]['deleted'] = True
        DATA['tasks'][task_id]['deleted_at'] = timeutils.utcnow()
        DATA['tasks'][task_id]['updated_at'] = timeutils.utcnow()
        return copy.deepcopy(DATA['tasks'][task_id])
    except KeyError:
        LOG.debug("No task found with ID %s", task_id)
        raise exception.TaskNotFound(task_id=task_id)


def _task_soft_delete(context):
    """Scrub task entities which are expired"""
    global DATA
    now = timeutils.utcnow()
    tasks = DATA['tasks'].values()

    for task in tasks:
        if (task['owner'] == context.owner and
                not task['deleted'] and
                task['expires_at'] <= now):
            task['deleted'] = True
            task['deleted_at'] = timeutils.utcnow()


@log_call
def task_get_all(context, filters=None, marker=None, limit=None,
                 sort_key='created_at', sort_dir='desc'):
    """
    Get all tasks that match zero or more filters.

    :param filters: dict of filter keys and values.
    :param marker: task id after which to start page
    :param limit: maximum number of tasks to return
    :param sort_key: task attribute by which results should be sorted
    :param sort_dir: direction in which results should be sorted (asc, desc)
    :returns: tasks set
    """
    _task_soft_delete(context)
    filters = filters or {}
    tasks = DATA['tasks'].values()
    tasks = _filter_tasks(tasks, filters, context)
    tasks = _sort_tasks(tasks, sort_key, sort_dir)
    tasks = _paginate_tasks(context, tasks, marker, limit,
                            filters.get('deleted'))

    filtered_tasks = []
    for task in tasks:
        filtered_tasks.append(_format_task_from_db(task, task_info_ref=None))

    return filtered_tasks


def _is_task_visible(context, task):
    """Return True if the task is visible in this context."""
    # Is admin == task visible
    if context.is_admin:
        return True

    # No owner == task visible
    if task['owner'] is None:
        return True

    # Perform tests based on whether we have an owner
    if context.owner is not None:
        if context.owner == task['owner']:
            return True

    return False


def _filter_tasks(tasks, filters, context, admin_as_user=False):
    filtered_tasks = []

    for task in tasks:
        has_ownership = context.owner and task['owner'] == context.owner
        can_see = (has_ownership or
                   (context.is_admin and not admin_as_user))
        if not can_see:
            continue

        add = True
        for k, value in six.iteritems(filters):
            add = task[k] == value and task['deleted'] is False
            if not add:
                break

        if add:
            filtered_tasks.append(task)

    return filtered_tasks


def _sort_tasks(tasks, sort_key, sort_dir):
    reverse = False
    if tasks and not (sort_key in tasks[0]):
        raise exception.InvalidSortKey()
    keyfn = lambda x: (x[sort_key] if x[sort_key] is not None else '',
                       x['created_at'], x['id'])
    reverse = sort_dir == 'desc'
    tasks.sort(key=keyfn, reverse=reverse)
    return tasks


def _paginate_tasks(context, tasks, marker, limit, show_deleted):
    start = 0
    end = -1
    if marker is None:
        start = 0
    else:
        # Check that the task is accessible
        _task_get(context, marker, force_show_deleted=show_deleted)

        for i, task in enumerate(tasks):
            if task['id'] == marker:
                start = i + 1
                break
        else:
            if task:
                raise exception.TaskNotFound(task_id=task['id'])
            else:
                msg = _("Task does not exist")
                raise exception.NotFound(message=msg)

    end = start + limit if limit is not None else None
    return tasks[start:end]


def _task_info_create(task_id, values):
    """Create a Task Info for Task with given task ID"""
    global DATA
    task_info = _task_info_format(task_id, **values)
    DATA['task_info'][task_id] = task_info
    return task_info


def _task_info_update(task_id, values):
    """Update Task Info for Task with given task ID and updated values"""
    global DATA
    try:
        task_info = DATA['task_info'][task_id]
    except KeyError:
        LOG.debug("No task info found with task id %s", task_id)
        raise exception.TaskNotFound(task_id=task_id)

    task_info.update(values)
    DATA['task_info'][task_id] = task_info
    return task_info


def _task_info_get(task_id):
    """Get Task Info for Task with given task ID"""
    global DATA
    try:
        task_info = DATA['task_info'][task_id]
    except KeyError:
        msg = _LW('Could not find task info %s') % task_id
        LOG.warn(msg)
        raise exception.TaskNotFound(task_id=task_id)

    return task_info


def _metadef_delete_namespace_content(get_func, key, context, namespace_name):
    global DATA
    metadefs = get_func(context, namespace_name)
    data = DATA[key]
    for metadef in metadefs:
        data.remove(metadef)
    return metadefs


@log_call
@utils.no_4byte_params
def metadef_namespace_create(context, values):
    """Create a namespace object"""
    global DATA

    namespace_values = copy.deepcopy(values)
    namespace_name = namespace_values.get('namespace')
    required_attributes = ['namespace', 'owner']
    allowed_attributes = ['namespace', 'owner', 'display_name', 'description',
                          'visibility', 'protected']

    for namespace in DATA['metadef_namespaces']:
        if namespace['namespace'] == namespace_name:
            LOG.debug("Can not create the metadata definition namespace. "
                      "Namespace=%s already exists.", namespace_name)
            raise exception.MetadefDuplicateNamespace(
                namespace_name=namespace_name)

    for key in required_attributes:
        if key not in namespace_values:
            raise exception.Invalid('%s is a required attribute' % key)

    incorrect_keys = set(namespace_values.keys()) - set(allowed_attributes)
    if incorrect_keys:
        raise exception.Invalid(
            'The keys %s are not valid' % str(incorrect_keys))

    namespace = _format_namespace(namespace_values)
    DATA['metadef_namespaces'].append(namespace)

    return namespace


@log_call
@utils.no_4byte_params
def metadef_namespace_update(context, namespace_id, values):
    """Update a namespace object"""
    global DATA
    namespace_values = copy.deepcopy(values)

    namespace = metadef_namespace_get_by_id(context, namespace_id)
    if namespace['namespace'] != values['namespace']:
        for db_namespace in DATA['metadef_namespaces']:
            if db_namespace['namespace'] == values['namespace']:
                LOG.debug("Invalid update. It would result in a duplicate "
                          "metadata definition namespace with the same "
                          "name of %s", values['namespace'])
                emsg = (_("Invalid update. It would result in a duplicate"
                          " metadata definition namespace with the same"
                          " name of %s") % values['namespace'])
                raise exception.MetadefDuplicateNamespace(emsg)
    DATA['metadef_namespaces'].remove(namespace)

    namespace.update(namespace_values)
    namespace['updated_at'] = timeutils.utcnow()
    DATA['metadef_namespaces'].append(namespace)

    return namespace


@log_call
def metadef_namespace_get_by_id(context, namespace_id):
    """Get a namespace object"""
    try:
        namespace = next(namespace for namespace in DATA['metadef_namespaces']
                         if namespace['id'] == namespace_id)
    except StopIteration:
        msg = (_("Metadata definition namespace not found for id=%s")
               % namespace_id)
        LOG.warn(msg)
        raise exception.MetadefNamespaceNotFound(msg)

    if not _is_namespace_visible(context, namespace):
        LOG.debug("Forbidding request, metadata definition namespace=%s "
                  "is not visible.", namespace.namespace)
        emsg = _("Forbidding request, metadata definition namespace=%s "
                 "is not visible.") % namespace.namespace
        raise exception.MetadefForbidden(emsg)

    return namespace


@log_call
def metadef_namespace_get(context, namespace_name):
    """Get a namespace object"""
    try:
        namespace = next(namespace for namespace in DATA['metadef_namespaces']
                         if namespace['namespace'] == namespace_name)
    except StopIteration:
        LOG.debug("No namespace found with name %s", namespace_name)
        raise exception.MetadefNamespaceNotFound(
            namespace_name=namespace_name)

    _check_namespace_visibility(context, namespace, namespace_name)

    return namespace


@log_call
def metadef_namespace_get_all(context, marker=None, limit=None,
                              sort_key='created_at', sort_dir='desc',
                              filters=None):
    """Get a namespaces list"""
    resource_types = filters.get('resource_types', []) if filters else []
    visibility = filters.get('visibility') if filters else None

    namespaces = []
    for namespace in DATA['metadef_namespaces']:
        if not _is_namespace_visible(context, namespace):
            continue

        if visibility and namespace['visibility'] != visibility:
            continue

        if resource_types:
            for association in DATA['metadef_namespace_resource_types']:
                if association['namespace_id'] == namespace['id']:
                    if association['name'] in resource_types:
                        break
            else:
                continue

        namespaces.append(namespace)

    return namespaces


@log_call
def metadef_namespace_delete(context, namespace_name):
    """Delete a namespace object"""
    global DATA
    namespace = metadef_namespace_get(context, namespace_name)
    DATA['metadef_namespaces'].remove(namespace)
    return namespace


@log_call
def metadef_namespace_delete_content(context, namespace_name):
    """Delete a namespace content"""
    global DATA
    namespace = metadef_namespace_get(context, namespace_name)
    namespace_id = namespace['id']

    objects = []
    for object in DATA['metadef_objects']:
        if object['namespace_id'] != namespace_id:
            objects.append(object)
    DATA['metadef_objects'] = objects

    properties = []
    for property in DATA['metadef_properties']:
        if property['namespace_id'] != namespace_id:
            properties.append(property)
    DATA['metadef_properties'] = properties

    return namespace


@log_call
def metadef_object_get(context, namespace_name, object_name):
    """Get a metadef object"""
    namespace = metadef_namespace_get(context, namespace_name)

    _check_namespace_visibility(context, namespace, namespace_name)

    for object in DATA['metadef_objects']:
        if (object['namespace_id'] == namespace['id'] and
                object['name'] == object_name):
            return object
    else:
        LOG.debug("The metadata definition object with name=%(name)s"
                  " was not found in namespace=%(namespace_name)s.",
                  {'name': object_name, 'namespace_name': namespace_name})
        raise exception.MetadefObjectNotFound(namespace_name=namespace_name,
                                              object_name=object_name)


@log_call
def metadef_object_get_by_id(context, namespace_name, object_id):
    """Get a metadef object"""
    namespace = metadef_namespace_get(context, namespace_name)

    _check_namespace_visibility(context, namespace, namespace_name)

    for object in DATA['metadef_objects']:
        if (object['namespace_id'] == namespace['id'] and
                object['id'] == object_id):
            return object
    else:
        msg = (_("Metadata definition object not found for id=%s")
               % object_id)
        LOG.warn(msg)
        raise exception.MetadefObjectNotFound(msg)


@log_call
def metadef_object_get_all(context, namespace_name):
    """Get a metadef objects list"""
    namespace = metadef_namespace_get(context, namespace_name)

    objects = []

    _check_namespace_visibility(context, namespace, namespace_name)

    for object in DATA['metadef_objects']:
        if object['namespace_id'] == namespace['id']:
            objects.append(object)

    return objects


@log_call
@utils.no_4byte_params
def metadef_object_create(context, namespace_name, values):
    """Create a metadef object"""
    global DATA

    object_values = copy.deepcopy(values)
    object_name = object_values['name']
    required_attributes = ['name']
    allowed_attributes = ['name', 'description', 'json_schema', 'required']

    namespace = metadef_namespace_get(context, namespace_name)

    for object in DATA['metadef_objects']:
        if (object['name'] == object_name and
                object['namespace_id'] == namespace['id']):
            LOG.debug("A metadata definition object with name=%(name)s "
                      "in namespace=%(namespace_name)s already exists.",
                      {'name': object_name,
                       'namespace_name': namespace_name})
            raise exception.MetadefDuplicateObject(
                object_name=object_name, namespace_name=namespace_name)

    for key in required_attributes:
        if key not in object_values:
            raise exception.Invalid('%s is a required attribute' % key)

    incorrect_keys = set(object_values.keys()) - set(allowed_attributes)
    if incorrect_keys:
        raise exception.Invalid(
            'The keys %s are not valid' % str(incorrect_keys))

    object_values['namespace_id'] = namespace['id']

    _check_namespace_visibility(context, namespace, namespace_name)

    object = _format_object(object_values)
    DATA['metadef_objects'].append(object)

    return object


@log_call
@utils.no_4byte_params
def metadef_object_update(context, namespace_name, object_id, values):
    """Update a metadef object"""
    global DATA

    namespace = metadef_namespace_get(context, namespace_name)

    _check_namespace_visibility(context, namespace, namespace_name)

    object = metadef_object_get_by_id(context, namespace_name, object_id)
    if object['name'] != values['name']:
        for db_object in DATA['metadef_objects']:
            if (db_object['name'] == values['name'] and
                    db_object['namespace_id'] == namespace['id']):
                LOG.debug("Invalid update. It would result in a duplicate "
                          "metadata definition object with same "
                          "name=%(name)s "
                          "in namespace=%(namespace_name)s.",
                          {'name': object['name'],
                           'namespace_name': namespace_name})
                emsg = (_("Invalid update. It would result in a duplicate"
                          " metadata definition object with the same"
                          " name=%(name)s "
                          " in namespace=%(namespace_name)s.")
                        % {'name': object['name'],
                           'namespace_name': namespace_name})
                raise exception.MetadefDuplicateObject(emsg)
    DATA['metadef_objects'].remove(object)

    object.update(values)
    object['updated_at'] = timeutils.utcnow()
    DATA['metadef_objects'].append(object)

    return object


@log_call
def metadef_object_delete(context, namespace_name, object_name):
    """Delete a metadef object"""
    global DATA
    object = metadef_object_get(context, namespace_name, object_name)
    DATA['metadef_objects'].remove(object)

    return object


def metadef_object_delete_namespace_content(context, namespace_name,
                                            session=None):
    """Delete an object or raise if namespace or object doesn't exist."""
    return _metadef_delete_namespace_content(
        metadef_object_get_all, 'metadef_objects', context, namespace_name)


@log_call
def metadef_object_count(context, namespace_name):
    """Get metadef object count in a namespace"""
    namespace = metadef_namespace_get(context, namespace_name)

    _check_namespace_visibility(context, namespace, namespace_name)

    count = 0
    for object in DATA['metadef_objects']:
        if object['namespace_id'] == namespace['id']:
            count = count + 1

    return count


@log_call
def metadef_property_count(context, namespace_name):
    """Get properties count in a namespace"""
    namespace = metadef_namespace_get(context, namespace_name)

    _check_namespace_visibility(context, namespace, namespace_name)

    count = 0
    for property in DATA['metadef_properties']:
if property['namespace_id'] == namespace['id']: count = count + 1 return count @log_call @utils.no_4byte_params def metadef_property_create(context, namespace_name, values): """Create a metadef property""" global DATA property_values = copy.deepcopy(values) property_name = property_values['name'] required_attributes = ['name'] allowed_attributes = ['name', 'description', 'json_schema', 'required'] namespace = metadef_namespace_get(context, namespace_name) for property in DATA['metadef_properties']: if (property['name'] == property_name and property['namespace_id'] == namespace['id']): LOG.debug("Can not create metadata definition property. A property" " with name=%(name)s already exists in" " namespace=%(namespace_name)s.", {'name': property_name, 'namespace_name': namespace_name}) raise exception.MetadefDuplicateProperty( property_name=property_name, namespace_name=namespace_name) for key in required_attributes: if key not in property_values: raise exception.Invalid('%s is a required attribute' % key) incorrect_keys = set(property_values.keys()) - set(allowed_attributes) if incorrect_keys: raise exception.Invalid( 'The keys %s are not valid' % str(incorrect_keys)) property_values['namespace_id'] = namespace['id'] _check_namespace_visibility(context, namespace, namespace_name) property = _format_property(property_values) DATA['metadef_properties'].append(property) return property @log_call @utils.no_4byte_params def metadef_property_update(context, namespace_name, property_id, values): """Update a metadef property""" global DATA namespace = metadef_namespace_get(context, namespace_name) _check_namespace_visibility(context, namespace, namespace_name) property = metadef_property_get_by_id(context, namespace_name, property_id) if property['name'] != values['name']: for db_property in DATA['metadef_properties']: if (db_property['name'] == values['name'] and db_property['namespace_id'] == namespace['id']): LOG.debug("Invalid update. 
It would result in a duplicate" " metadata definition property with the same" " name=%(name)s" " in namespace=%(namespace_name)s.", {'name': property['name'], 'namespace_name': namespace_name}) emsg = (_("Invalid update. It would result in a duplicate" " metadata definition property with the same" " name=%(name)s" " in namespace=%(namespace_name)s.") % {'name': property['name'], 'namespace_name': namespace_name}) raise exception.MetadefDuplicateProperty(emsg) DATA['metadef_properties'].remove(property) property.update(values) property['updated_at'] = timeutils.utcnow() DATA['metadef_properties'].append(property) return property @log_call def metadef_property_get_all(context, namespace_name): """Get a metadef properties list""" namespace = metadef_namespace_get(context, namespace_name) properties = [] _check_namespace_visibility(context, namespace, namespace_name) for property in DATA['metadef_properties']: if property['namespace_id'] == namespace['id']: properties.append(property) return properties @log_call def metadef_property_get_by_id(context, namespace_name, property_id): """Get a metadef property""" namespace = metadef_namespace_get(context, namespace_name) _check_namespace_visibility(context, namespace, namespace_name) for property in DATA['metadef_properties']: if (property['namespace_id'] == namespace['id'] and property['id'] == property_id): return property else: msg = (_("Metadata definition property not found for id=%s") % property_id) LOG.warn(msg) raise exception.MetadefPropertyNotFound(msg) @log_call def metadef_property_get(context, namespace_name, property_name): """Get a metadef property""" namespace = metadef_namespace_get(context, namespace_name) _check_namespace_visibility(context, namespace, namespace_name) for property in DATA['metadef_properties']: if (property['namespace_id'] == namespace['id'] and property['name'] == property_name): return property else: LOG.debug("No property found with name=%(name)s in" " namespace=%(namespace_name)s ", 
{'name': property_name, 'namespace_name': namespace_name}) raise exception.MetadefPropertyNotFound(namespace_name=namespace_name, property_name=property_name) @log_call def metadef_property_delete(context, namespace_name, property_name): """Delete a metadef property""" global DATA property = metadef_property_get(context, namespace_name, property_name) DATA['metadef_properties'].remove(property) return property def metadef_property_delete_namespace_content(context, namespace_name, session=None): """Delete a property or raise if it or namespace doesn't exist.""" return _metadef_delete_namespace_content( metadef_property_get_all, 'metadef_properties', context, namespace_name) @log_call def metadef_resource_type_create(context, values): """Create a metadef resource type""" global DATA resource_type_values = copy.deepcopy(values) resource_type_name = resource_type_values['name'] allowed_attributes = ['name', 'protected'] for resource_type in DATA['metadef_resource_types']: if resource_type['name'] == resource_type_name: raise exception.Duplicate() incorrect_keys = set(resource_type_values.keys()) - set(allowed_attributes) if incorrect_keys: raise exception.Invalid( 'The keys %s are not valid' % str(incorrect_keys)) resource_type = _format_resource_type(resource_type_values) DATA['metadef_resource_types'].append(resource_type) return resource_type @log_call def metadef_resource_type_get_all(context): """List all resource types""" return DATA['metadef_resource_types'] @log_call def metadef_resource_type_get(context, resource_type_name): """Get a resource type""" try: resource_type = next(resource_type for resource_type in DATA['metadef_resource_types'] if resource_type['name'] == resource_type_name) except StopIteration: LOG.debug("No resource type found with name %s", resource_type_name) raise exception.MetadefResourceTypeNotFound( resource_type_name=resource_type_name) return resource_type @log_call def metadef_resource_type_association_create(context, namespace_name, 
values): global DATA association_values = copy.deepcopy(values) namespace = metadef_namespace_get(context, namespace_name) resource_type_name = association_values['name'] resource_type = metadef_resource_type_get(context, resource_type_name) required_attributes = ['name', 'properties_target', 'prefix'] allowed_attributes = copy.deepcopy(required_attributes) for association in DATA['metadef_namespace_resource_types']: if (association['namespace_id'] == namespace['id'] and association['resource_type'] == resource_type['id']): LOG.debug("The metadata definition resource-type association of" " resource_type=%(resource_type_name)s to" " namespace=%(namespace_name)s, already exists.", {'resource_type_name': resource_type_name, 'namespace_name': namespace_name}) raise exception.MetadefDuplicateResourceTypeAssociation( resource_type_name=resource_type_name, namespace_name=namespace_name) for key in required_attributes: if key not in association_values: raise exception.Invalid('%s is a required attribute' % key) incorrect_keys = set(association_values.keys()) - set(allowed_attributes) if incorrect_keys: raise exception.Invalid( 'The keys %s are not valid' % str(incorrect_keys)) association = _format_association(namespace, resource_type, association_values) DATA['metadef_namespace_resource_types'].append(association) return association @log_call def metadef_resource_type_association_get(context, namespace_name, resource_type_name): namespace = metadef_namespace_get(context, namespace_name) resource_type = metadef_resource_type_get(context, resource_type_name) for association in DATA['metadef_namespace_resource_types']: if (association['namespace_id'] == namespace['id'] and association['resource_type'] == resource_type['id']): return association else: LOG.debug("No resource type association found associated with " "namespace %s and resource type %s", namespace_name, resource_type_name) raise exception.MetadefResourceTypeAssociationNotFound( 
resource_type_name=resource_type_name, namespace_name=namespace_name) @log_call def metadef_resource_type_association_get_all_by_namespace(context, namespace_name): namespace = metadef_namespace_get(context, namespace_name) namespace_resource_types = [] for resource_type in DATA['metadef_namespace_resource_types']: if resource_type['namespace_id'] == namespace['id']: namespace_resource_types.append(resource_type) return namespace_resource_types @log_call def metadef_resource_type_association_delete(context, namespace_name, resource_type_name): global DATA resource_type = metadef_resource_type_association_get(context, namespace_name, resource_type_name) DATA['metadef_namespace_resource_types'].remove(resource_type) return resource_type @log_call def metadef_tag_get(context, namespace_name, name): """Get a metadef tag""" namespace = metadef_namespace_get(context, namespace_name) _check_namespace_visibility(context, namespace, namespace_name) for tag in DATA['metadef_tags']: if tag['namespace_id'] == namespace['id'] and tag['name'] == name: return tag else: LOG.debug("The metadata definition tag with name=%(name)s" " was not found in namespace=%(namespace_name)s.", {'name': name, 'namespace_name': namespace_name}) raise exception.MetadefTagNotFound(name=name, namespace_name=namespace_name) @log_call def metadef_tag_get_by_id(context, namespace_name, id): """Get a metadef tag""" namespace = metadef_namespace_get(context, namespace_name) _check_namespace_visibility(context, namespace, namespace_name) for tag in DATA['metadef_tags']: if tag['namespace_id'] == namespace['id'] and tag['id'] == id: return tag else: msg = (_("Metadata definition tag not found for id=%s") % id) LOG.warn(msg) raise exception.MetadefTagNotFound(msg) @log_call def metadef_tag_get_all(context, namespace_name, filters=None, marker=None, limit=None, sort_key='created_at', sort_dir=None, session=None): """Get a metadef tags list""" namespace = metadef_namespace_get(context, namespace_name) 
_check_namespace_visibility(context, namespace, namespace_name) tags = [] for tag in DATA['metadef_tags']: if tag['namespace_id'] == namespace['id']: tags.append(tag) return tags @log_call @utils.no_4byte_params def metadef_tag_create(context, namespace_name, values): """Create a metadef tag""" global DATA tag_values = copy.deepcopy(values) tag_name = tag_values['name'] required_attributes = ['name'] allowed_attributes = ['name'] namespace = metadef_namespace_get(context, namespace_name) for tag in DATA['metadef_tags']: if tag['name'] == tag_name and tag['namespace_id'] == namespace['id']: LOG.debug("A metadata definition tag with name=%(name)s" " in namespace=%(namespace_name)s already exists.", {'name': tag_name, 'namespace_name': namespace_name}) raise exception.MetadefDuplicateTag( name=tag_name, namespace_name=namespace_name) for key in required_attributes: if key not in tag_values: raise exception.Invalid('%s is a required attribute' % key) incorrect_keys = set(tag_values.keys()) - set(allowed_attributes) if incorrect_keys: raise exception.Invalid( 'The keys %s are not valid' % str(incorrect_keys)) tag_values['namespace_id'] = namespace['id'] _check_namespace_visibility(context, namespace, namespace_name) tag = _format_tag(tag_values) DATA['metadef_tags'].append(tag) return tag @log_call def metadef_tag_create_tags(context, namespace_name, tag_list): """Create a metadef tag""" global DATA namespace = metadef_namespace_get(context, namespace_name) _check_namespace_visibility(context, namespace, namespace_name) required_attributes = ['name'] allowed_attributes = ['name'] data_tag_list = [] tag_name_list = [] for tag_value in tag_list: tag_values = copy.deepcopy(tag_value) tag_name = tag_values['name'] for key in required_attributes: if key not in tag_values: raise exception.Invalid('%s is a required attribute' % key) incorrect_keys = set(tag_values.keys()) - set(allowed_attributes) if incorrect_keys: raise exception.Invalid( 'The keys %s are not valid' % 
str(incorrect_keys)) if tag_name in tag_name_list: LOG.debug("A metadata definition tag with name=%(name)s" " in namespace=%(namespace_name)s already exists.", {'name': tag_name, 'namespace_name': namespace_name}) raise exception.MetadefDuplicateTag( name=tag_name, namespace_name=namespace_name) else: tag_name_list.append(tag_name) tag_values['namespace_id'] = namespace['id'] data_tag_list.append(_format_tag(tag_values)) DATA['metadef_tags'] = [] for tag in data_tag_list: DATA['metadef_tags'].append(tag) return data_tag_list @log_call @utils.no_4byte_params def metadef_tag_update(context, namespace_name, id, values): """Update a metadef tag""" global DATA namespace = metadef_namespace_get(context, namespace_name) _check_namespace_visibility(context, namespace, namespace_name) tag = metadef_tag_get_by_id(context, namespace_name, id) if tag['name'] != values['name']: for db_tag in DATA['metadef_tags']: if (db_tag['name'] == values['name'] and db_tag['namespace_id'] == namespace['id']): LOG.debug("Invalid update. 
It would result in a duplicate" " metadata definition tag with same name=%(name)s " " in namespace=%(namespace_name)s.", {'name': tag['name'], 'namespace_name': namespace_name}) raise exception.MetadefDuplicateTag( name=tag['name'], namespace_name=namespace_name) DATA['metadef_tags'].remove(tag) tag.update(values) tag['updated_at'] = timeutils.utcnow() DATA['metadef_tags'].append(tag) return tag @log_call def metadef_tag_delete(context, namespace_name, name): """Delete a metadef tag""" global DATA tags = metadef_tag_get(context, namespace_name, name) DATA['metadef_tags'].remove(tags) return tags def metadef_tag_delete_namespace_content(context, namespace_name, session=None): """Delete a tag or raise if namespace or tag doesn't exist.""" return _metadef_delete_namespace_content( metadef_tag_get_all, 'metadef_tags', context, namespace_name) @log_call def metadef_tag_count(context, namespace_name): """Get metadef tag count in a namespace""" namespace = metadef_namespace_get(context, namespace_name) _check_namespace_visibility(context, namespace, namespace_name) count = 0 for tag in DATA['metadef_tags']: if tag['namespace_id'] == namespace['id']: count = count + 1 return count def _format_association(namespace, resource_type, association_values): association = { 'namespace_id': namespace['id'], 'resource_type': resource_type['id'], 'properties_target': None, 'prefix': None, 'created_at': timeutils.utcnow(), 'updated_at': timeutils.utcnow() } association.update(association_values) return association def _format_resource_type(values): dt = timeutils.utcnow() resource_type = { 'id': _get_metadef_id(), 'name': values['name'], 'protected': True, 'created_at': dt, 'updated_at': dt } resource_type.update(values) return resource_type def _format_property(values): property = { 'id': _get_metadef_id(), 'namespace_id': None, 'name': None, 'json_schema': None } property.update(values) return property def _format_namespace(values): dt = timeutils.utcnow() namespace = { 'id': 
_get_metadef_id(), 'namespace': None, 'display_name': None, 'description': None, 'visibility': 'private', 'protected': False, 'owner': None, 'created_at': dt, 'updated_at': dt } namespace.update(values) return namespace def _format_object(values): dt = timeutils.utcnow() object = { 'id': _get_metadef_id(), 'namespace_id': None, 'name': None, 'description': None, 'json_schema': None, 'required': None, 'created_at': dt, 'updated_at': dt } object.update(values) return object def _format_tag(values): dt = timeutils.utcnow() tag = { 'id': _get_metadef_id(), 'namespace_id': None, 'name': None, 'created_at': dt, 'updated_at': dt } tag.update(values) return tag def _is_namespace_visible(context, namespace): """Return true if namespace is visible in this context""" if context.is_admin: return True if namespace.get('visibility', '') == 'public': return True if namespace['owner'] is None: return True if context.owner is not None: if context.owner == namespace['owner']: return True return False def _check_namespace_visibility(context, namespace, namespace_name): if not _is_namespace_visible(context, namespace): LOG.debug("Forbidding request, metadata definition namespace=%s " "is not visible.", namespace_name) emsg = _("Forbidding request, metadata definition namespace=%s" " is not visible.") % namespace_name raise exception.MetadefForbidden(emsg) def _get_metadef_id(): global INDEX INDEX += 1 return INDEX glance-16.0.0/glance/db/simple/__init__.py glance-16.0.0/glance/db/registry/api.py # Copyright 2013 Red Hat, Inc. # Copyright 2015 Mirantis, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ This is the Registry's Driver API. This API relies on the registry RPC client (version >= 2). The functions below work as a proxy for the database back-end configured in the registry service, which means that everything returned by that back-end will also be returned by this API. This API exists for supporting deployments not willing to put database credentials in glance-api. Those deployments can rely on this registry driver that will talk to a remote registry service, which will then access the database back-end. """ import functools from glance.db import utils as db_utils from glance.registry.client.v2 import api def configure(): api.configure_registry_client() def _get_client(func): """Injects a client instance into each function This decorator creates an instance of the Registry client and passes it as an argument to each function in this API. """ @functools.wraps(func) def wrapper(context, *args, **kwargs): client = api.get_registry_client(context) return func(client, *args, **kwargs) return wrapper @_get_client def image_create(client, values, v1_mode=False): """Create an image from the values dictionary.""" return client.image_create(values=values, v1_mode=v1_mode) @_get_client def image_update(client, image_id, values, purge_props=False, from_state=None, v1_mode=False): """ Set the given properties on an image and update it. :raises ImageNotFound: if image does not exist. 
""" return client.image_update(values=values, image_id=image_id, purge_props=purge_props, from_state=from_state, v1_mode=v1_mode) @_get_client def image_destroy(client, image_id): """Destroy the image or raise if it does not exist.""" return client.image_destroy(image_id=image_id) @_get_client def image_get(client, image_id, force_show_deleted=False, v1_mode=False): return client.image_get(image_id=image_id, force_show_deleted=force_show_deleted, v1_mode=v1_mode) def is_image_visible(context, image, status=None): """Return True if the image is visible in this context.""" return db_utils.is_image_visible(context, image, image_member_find, status) @_get_client def image_get_all(client, filters=None, marker=None, limit=None, sort_key=None, sort_dir=None, member_status='accepted', is_public=None, admin_as_user=False, return_tag=False, v1_mode=False): """ Get all images that match zero or more filters. :param filters: dict of filter keys and values. If a 'properties' key is present, it is treated as a dict of key/value filters on the image properties attribute :param marker: image id after which to start page :param limit: maximum number of images to return :param sort_key: image attribute by which results should be sorted :param sort_dir: direction in which results should be sorted (asc, desc) :param member_status: only return shared images that have this membership status :param is_public: If true, return only public images. If false, return only private and shared images. :param admin_as_user: For backwards compatibility. If true, then return to an admin the equivalent set of images which it would see if it were a regular user :param return_tag: To indicates whether image entry in result includes it relevant tag entries. 
This could improve upper-layer query performance, avoiding separate calls :param v1_mode: If true, mutates the 'visibility' value of each image into the v1-compatible field 'is_public' """ sort_key = ['created_at'] if not sort_key else sort_key sort_dir = ['desc'] if not sort_dir else sort_dir return client.image_get_all(filters=filters, marker=marker, limit=limit, sort_key=sort_key, sort_dir=sort_dir, member_status=member_status, is_public=is_public, admin_as_user=admin_as_user, return_tag=return_tag, v1_mode=v1_mode) @_get_client def image_property_create(client, values, session=None): """Create an ImageProperty object""" return client.image_property_create(values=values) @_get_client def image_property_delete(client, prop_ref, image_ref, session=None): """ Used internally by _image_property_create and image_property_update """ return client.image_property_delete(prop_ref=prop_ref, image_ref=image_ref) @_get_client def image_member_create(client, values, session=None): """Create an ImageMember object""" return client.image_member_create(values=values) @_get_client def image_member_update(client, memb_id, values): """Update an ImageMember object""" return client.image_member_update(memb_id=memb_id, values=values) @_get_client def image_member_delete(client, memb_id, session=None): """Delete an ImageMember object""" client.image_member_delete(memb_id=memb_id) @_get_client def image_member_find(client, image_id=None, member=None, status=None, include_deleted=False): """Find all members that meet the given criteria. Note, currently include_deleted should be true only when creating a new image membership, as there may be a deleted image membership between the same image and tenant; in that case the membership will be reused. It should be false in other cases. 
:param image_id: identifier of image entity :param member: tenant to which membership has been granted :include_deleted: A boolean indicating whether the result should include the deleted record of image member """ return client.image_member_find(image_id=image_id, member=member, status=status, include_deleted=include_deleted) @_get_client def image_member_count(client, image_id): """Return the number of image members for this image :param image_id: identifier of image entity """ return client.image_member_count(image_id=image_id) @_get_client def image_tag_set_all(client, image_id, tags): client.image_tag_set_all(image_id=image_id, tags=tags) @_get_client def image_tag_create(client, image_id, value, session=None): """Create an image tag.""" return client.image_tag_create(image_id=image_id, value=value) @_get_client def image_tag_delete(client, image_id, value, session=None): """Delete an image tag.""" client.image_tag_delete(image_id=image_id, value=value) @_get_client def image_tag_get_all(client, image_id, session=None): """Get a list of tags for a specific image.""" return client.image_tag_get_all(image_id=image_id) @_get_client def image_location_delete(client, image_id, location_id, status, session=None): """Delete an image location.""" client.image_location_delete(image_id=image_id, location_id=location_id, status=status) @_get_client def image_location_update(client, image_id, location, session=None): """Update image location.""" client.image_location_update(image_id=image_id, location=location) @_get_client def user_get_storage_usage(client, owner_id, image_id=None, session=None): return client.user_get_storage_usage(owner_id=owner_id, image_id=image_id) @_get_client def task_get(client, task_id, session=None, force_show_deleted=False): """Get a single task object :returns: task dictionary """ return client.task_get(task_id=task_id, session=session, force_show_deleted=force_show_deleted) @_get_client def task_get_all(client, filters=None, marker=None, 
limit=None, sort_key='created_at', sort_dir='desc', admin_as_user=False): """Get all tasks that match zero or more filters. :param filters: dict of filter keys and values. :param marker: task id after which to start page :param limit: maximum number of tasks to return :param sort_key: task attribute by which results should be sorted :param sort_dir: direction in which results should be sorted (asc, desc) :param admin_as_user: For backwards compatibility. If true, then return to an admin the equivalent set of tasks which it would see if it were a regular user :returns: tasks set """ return client.task_get_all(filters=filters, marker=marker, limit=limit, sort_key=sort_key, sort_dir=sort_dir, admin_as_user=admin_as_user) @_get_client def task_create(client, values, session=None): """Create a task object""" return client.task_create(values=values, session=session) @_get_client def task_delete(client, task_id, session=None): """Delete a task object""" return client.task_delete(task_id=task_id, session=session) @_get_client def task_update(client, task_id, values, session=None): return client.task_update(task_id=task_id, values=values, session=session) # Metadef @_get_client def metadef_namespace_get_all( client, marker=None, limit=None, sort_key='created_at', sort_dir=None, filters=None, session=None): return client.metadef_namespace_get_all( marker=marker, limit=limit, sort_key=sort_key, sort_dir=sort_dir, filters=filters) @_get_client def metadef_namespace_get(client, namespace_name, session=None): return client.metadef_namespace_get(namespace_name=namespace_name) @_get_client def metadef_namespace_create(client, values, session=None): return client.metadef_namespace_create(values=values) @_get_client def metadef_namespace_update( client, namespace_id, namespace_dict, session=None): return client.metadef_namespace_update( namespace_id=namespace_id, namespace_dict=namespace_dict) @_get_client def metadef_namespace_delete(client, namespace_name, session=None): return 
client.metadef_namespace_delete( namespace_name=namespace_name) @_get_client def metadef_object_get_all(client, namespace_name, session=None): return client.metadef_object_get_all( namespace_name=namespace_name) @_get_client def metadef_object_get( client, namespace_name, object_name, session=None): return client.metadef_object_get( namespace_name=namespace_name, object_name=object_name) @_get_client def metadef_object_create( client, namespace_name, object_dict, session=None): return client.metadef_object_create( namespace_name=namespace_name, object_dict=object_dict) @_get_client def metadef_object_update( client, namespace_name, object_id, object_dict, session=None): return client.metadef_object_update( namespace_name=namespace_name, object_id=object_id, object_dict=object_dict) @_get_client def metadef_object_delete( client, namespace_name, object_name, session=None): return client.metadef_object_delete( namespace_name=namespace_name, object_name=object_name) @_get_client def metadef_object_delete_namespace_content( client, namespace_name, session=None): return client.metadef_object_delete_namespace_content( namespace_name=namespace_name) @_get_client def metadef_object_count( client, namespace_name, session=None): return client.metadef_object_count( namespace_name=namespace_name) @_get_client def metadef_property_get_all( client, namespace_name, session=None): return client.metadef_property_get_all( namespace_name=namespace_name) @_get_client def metadef_property_get( client, namespace_name, property_name, session=None): return client.metadef_property_get( namespace_name=namespace_name, property_name=property_name) @_get_client def metadef_property_create( client, namespace_name, property_dict, session=None): return client.metadef_property_create( namespace_name=namespace_name, property_dict=property_dict) @_get_client def metadef_property_update( client, namespace_name, property_id, property_dict, session=None): return client.metadef_property_update( 
namespace_name=namespace_name, property_id=property_id, property_dict=property_dict) @_get_client def metadef_property_delete( client, namespace_name, property_name, session=None): return client.metadef_property_delete( namespace_name=namespace_name, property_name=property_name) @_get_client def metadef_property_delete_namespace_content( client, namespace_name, session=None): return client.metadef_property_delete_namespace_content( namespace_name=namespace_name) @_get_client def metadef_property_count( client, namespace_name, session=None): return client.metadef_property_count( namespace_name=namespace_name) @_get_client def metadef_resource_type_create(client, values, session=None): return client.metadef_resource_type_create(values=values) @_get_client def metadef_resource_type_get( client, resource_type_name, session=None): return client.metadef_resource_type_get( resource_type_name=resource_type_name) @_get_client def metadef_resource_type_get_all(client, session=None): return client.metadef_resource_type_get_all() @_get_client def metadef_resource_type_delete( client, resource_type_name, session=None): return client.metadef_resource_type_delete( resource_type_name=resource_type_name) @_get_client def metadef_resource_type_association_get( client, namespace_name, resource_type_name, session=None): return client.metadef_resource_type_association_get( namespace_name=namespace_name, resource_type_name=resource_type_name) @_get_client def metadef_resource_type_association_create( client, namespace_name, values, session=None): return client.metadef_resource_type_association_create( namespace_name=namespace_name, values=values) @_get_client def metadef_resource_type_association_delete( client, namespace_name, resource_type_name, session=None): return client.metadef_resource_type_association_delete( namespace_name=namespace_name, resource_type_name=resource_type_name) @_get_client def metadef_resource_type_association_get_all_by_namespace( client, namespace_name, 
session=None): return client.metadef_resource_type_association_get_all_by_namespace( namespace_name=namespace_name) @_get_client def metadef_tag_get_all(client, namespace_name, filters=None, marker=None, limit=None, sort_key='created_at', sort_dir=None, session=None): return client.metadef_tag_get_all( namespace_name=namespace_name, filters=filters, marker=marker, limit=limit, sort_key=sort_key, sort_dir=sort_dir, session=session) @_get_client def metadef_tag_get(client, namespace_name, name, session=None): return client.metadef_tag_get( namespace_name=namespace_name, name=name) @_get_client def metadef_tag_create( client, namespace_name, tag_dict, session=None): return client.metadef_tag_create( namespace_name=namespace_name, tag_dict=tag_dict) @_get_client def metadef_tag_create_tags( client, namespace_name, tag_list, session=None): return client.metadef_tag_create_tags( namespace_name=namespace_name, tag_list=tag_list) @_get_client def metadef_tag_update( client, namespace_name, id, tag_dict, session=None): return client.metadef_tag_update( namespace_name=namespace_name, id=id, tag_dict=tag_dict) @_get_client def metadef_tag_delete( client, namespace_name, name, session=None): return client.metadef_tag_delete( namespace_name=namespace_name, name=name) @_get_client def metadef_tag_delete_namespace_content( client, namespace_name, session=None): return client.metadef_tag_delete_namespace_content( namespace_name=namespace_name) @_get_client def metadef_tag_count(client, namespace_name, session=None): return client.metadef_tag_count(namespace_name=namespace_name) glance-16.0.0/glance/db/registry/__init__.py glance-16.0.0/glance/db/__init__.py # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. 
# Copyright 2010-2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# Copyright 2015 Mirantis, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_utils import importutils
from wsme.rest import json

from glance.api.v2.model.metadef_property_type import PropertyType
from glance.common import crypt
from glance.common import exception
from glance.common import location_strategy
import glance.domain
import glance.domain.proxy
from glance.i18n import _

CONF = cfg.CONF
CONF.import_opt('image_size_cap', 'glance.common.config')
CONF.import_opt('metadata_encryption_key', 'glance.common.config')


def get_api(v1_mode=False):
    """
    When using v2_registry with v2_api or alone, it is essential that the opt
    'data_api' be set to 'glance.db.registry.api'. This requires us to
    differentiate what this method returns as the db api, i.e., we do not
    want to return 'glance.db.registry.api' for a call from the v1 api.

    Reference bug #1516706
    """
    if v1_mode:
        # prevent v1_api from talking to v2_registry.
        if CONF.data_api == 'glance.db.simple.api':
            api = importutils.import_module(CONF.data_api)
        else:
            api = importutils.import_module('glance.db.sqlalchemy.api')
    else:
        api = importutils.import_module(CONF.data_api)
    if hasattr(api, 'configure'):
        api.configure()
    return api


def unwrap(db_api):
    return db_api


# attributes common to all models
BASE_MODEL_ATTRS = set(['id', 'created_at', 'updated_at', 'deleted_at',
                        'deleted'])


IMAGE_ATTRS = BASE_MODEL_ATTRS | set(['name', 'status', 'size',
                                      'virtual_size', 'disk_format',
                                      'container_format', 'min_disk',
                                      'min_ram', 'is_public', 'locations',
                                      'checksum', 'owner', 'protected'])


class ImageRepo(object):

    def __init__(self, context, db_api):
        self.context = context
        self.db_api = db_api

    def get(self, image_id):
        try:
            db_api_image = dict(self.db_api.image_get(self.context, image_id))
            if db_api_image['deleted']:
                raise exception.ImageNotFound()
        except (exception.ImageNotFound, exception.Forbidden):
            msg = _("No image found with ID %s") % image_id
            raise exception.ImageNotFound(msg)
        tags = self.db_api.image_tag_get_all(self.context, image_id)
        image = self._format_image_from_db(db_api_image, tags)
        return ImageProxy(image, self.context, self.db_api)

    def list(self, marker=None, limit=None, sort_key=None,
             sort_dir=None, filters=None, member_status='accepted'):
        sort_key = ['created_at'] if not sort_key else sort_key
        sort_dir = ['desc'] if not sort_dir else sort_dir
        db_api_images = self.db_api.image_get_all(
            self.context, filters=filters, marker=marker, limit=limit,
            sort_key=sort_key, sort_dir=sort_dir,
            member_status=member_status, return_tag=True)
        images = []
        for db_api_image in db_api_images:
            db_image = dict(db_api_image)
            image = self._format_image_from_db(db_image, db_image['tags'])
            images.append(image)
        return images

    def _format_image_from_db(self, db_image, db_tags):
        properties = {}
        for prop in db_image.pop('properties'):
            # NOTE(markwash) db api requires us to filter deleted
            if not prop['deleted']:
                properties[prop['name']] = prop['value']
        locations = [loc for loc in db_image['locations']
                     if loc['status'] == 'active']
        if CONF.metadata_encryption_key:
            key = CONF.metadata_encryption_key
            for l in locations:
                l['url'] = crypt.urlsafe_decrypt(key, l['url'])
        return glance.domain.Image(
            image_id=db_image['id'],
            name=db_image['name'],
            status=db_image['status'],
            created_at=db_image['created_at'],
            updated_at=db_image['updated_at'],
            visibility=db_image['visibility'],
            min_disk=db_image['min_disk'],
            min_ram=db_image['min_ram'],
            protected=db_image['protected'],
            locations=location_strategy.get_ordered_locations(locations),
            checksum=db_image['checksum'],
            owner=db_image['owner'],
            disk_format=db_image['disk_format'],
            container_format=db_image['container_format'],
            size=db_image['size'],
            virtual_size=db_image['virtual_size'],
            extra_properties=properties,
            tags=db_tags
        )

    def _format_image_to_db(self, image):
        locations = image.locations
        if CONF.metadata_encryption_key:
            key = CONF.metadata_encryption_key
            ld = []
            for loc in locations:
                url = crypt.urlsafe_encrypt(key, loc['url'])
                ld.append({'url': url, 'metadata': loc['metadata'],
                           'status': loc['status'],
                           # NOTE(zhiyan): New location has no ID field.
                           'id': loc.get('id')})
            locations = ld
        return {
            'id': image.image_id,
            'name': image.name,
            'status': image.status,
            'created_at': image.created_at,
            'min_disk': image.min_disk,
            'min_ram': image.min_ram,
            'protected': image.protected,
            'locations': locations,
            'checksum': image.checksum,
            'owner': image.owner,
            'disk_format': image.disk_format,
            'container_format': image.container_format,
            'size': image.size,
            'virtual_size': image.virtual_size,
            'visibility': image.visibility,
            'properties': dict(image.extra_properties),
        }

    def add(self, image):
        image_values = self._format_image_to_db(image)
        if (image_values['size'] is not None and
                image_values['size'] > CONF.image_size_cap):
            raise exception.ImageSizeLimitExceeded
        # the updated_at value is not set in the _format_image_to_db
        # function since it is specific to image create
        image_values['updated_at'] = image.updated_at
        new_values = self.db_api.image_create(self.context, image_values)
        self.db_api.image_tag_set_all(self.context,
                                      image.image_id, image.tags)
        image.created_at = new_values['created_at']
        image.updated_at = new_values['updated_at']

    def save(self, image, from_state=None):
        image_values = self._format_image_to_db(image)
        if (image_values['size'] is not None and
                image_values['size'] > CONF.image_size_cap):
            raise exception.ImageSizeLimitExceeded
        try:
            new_values = self.db_api.image_update(self.context,
                                                  image.image_id,
                                                  image_values,
                                                  purge_props=True,
                                                  from_state=from_state)
        except (exception.ImageNotFound, exception.Forbidden):
            msg = _("No image found with ID %s") % image.image_id
            raise exception.ImageNotFound(msg)
        self.db_api.image_tag_set_all(self.context, image.image_id,
                                      image.tags)
        image.updated_at = new_values['updated_at']

    def remove(self, image):
        try:
            self.db_api.image_update(self.context, image.image_id,
                                     {'status': image.status},
                                     purge_props=True)
        except (exception.ImageNotFound, exception.Forbidden):
            msg = _("No image found with ID %s") % image.image_id
            raise exception.ImageNotFound(msg)
        # NOTE(markwash): don't update tags?
        new_values = self.db_api.image_destroy(self.context, image.image_id)
        image.updated_at = new_values['updated_at']


class ImageProxy(glance.domain.proxy.Image):

    def __init__(self, image, context, db_api):
        self.context = context
        self.db_api = db_api
        self.image = image
        super(ImageProxy, self).__init__(image)


class ImageMemberRepo(object):

    def __init__(self, context, db_api, image):
        self.context = context
        self.db_api = db_api
        self.image = image

    def _format_image_member_from_db(self, db_image_member):
        return glance.domain.ImageMembership(
            id=db_image_member['id'],
            image_id=db_image_member['image_id'],
            member_id=db_image_member['member'],
            status=db_image_member['status'],
            created_at=db_image_member['created_at'],
            updated_at=db_image_member['updated_at']
        )

    def _format_image_member_to_db(self, image_member):
        image_member = {'image_id': self.image.image_id,
                        'member': image_member.member_id,
                        'status': image_member.status,
                        'created_at': image_member.created_at}
        return image_member

    def list(self):
        db_members = self.db_api.image_member_find(
            self.context, image_id=self.image.image_id)
        image_members = []
        for db_member in db_members:
            image_members.append(self._format_image_member_from_db(db_member))
        return image_members

    def add(self, image_member):
        try:
            self.get(image_member.member_id)
        except exception.NotFound:
            pass
        else:
            msg = _('The target member %(member_id)s is already '
                    'associated with image %(image_id)s.') % {
                'member_id': image_member.member_id,
                'image_id': self.image.image_id}
            raise exception.Duplicate(msg)

        image_member_values = self._format_image_member_to_db(image_member)
        # Note(shalq): find the image member including the member marked with
        # deleted. We will use only one record to represent membership between
        # the same image and member. The record of the deleted image member
        # will be reused, if it exists, update its properties instead of
        # creating a new one.
        members = self.db_api.image_member_find(self.context,
                                                image_id=self.image.image_id,
                                                member=image_member.member_id,
                                                include_deleted=True)
        if members:
            new_values = self.db_api.image_member_update(self.context,
                                                         members[0]['id'],
                                                         image_member_values)
        else:
            new_values = self.db_api.image_member_create(self.context,
                                                         image_member_values)
        image_member.created_at = new_values['created_at']
        image_member.updated_at = new_values['updated_at']
        image_member.id = new_values['id']

    def remove(self, image_member):
        try:
            self.db_api.image_member_delete(self.context, image_member.id)
        except (exception.NotFound, exception.Forbidden):
            msg = _("The specified member %s could not be found")
            raise exception.NotFound(msg % image_member.id)

    def save(self, image_member, from_state=None):
        image_member_values = self._format_image_member_to_db(image_member)
        try:
            new_values = self.db_api.image_member_update(self.context,
                                                         image_member.id,
                                                         image_member_values)
        except (exception.NotFound, exception.Forbidden):
            raise exception.NotFound()
        image_member.updated_at = new_values['updated_at']

    def get(self, member_id):
        try:
            db_api_image_member = self.db_api.image_member_find(
                self.context,
                self.image.image_id,
                member_id)
            if not db_api_image_member:
                raise exception.NotFound()
        except (exception.NotFound, exception.Forbidden):
            raise exception.NotFound()

        image_member = self._format_image_member_from_db(
            db_api_image_member[0])
        return image_member


class TaskRepo(object):

    def __init__(self, context, db_api):
        self.context = context
        self.db_api = db_api

    def _format_task_from_db(self, db_task):
        return glance.domain.Task(
            task_id=db_task['id'],
            task_type=db_task['type'],
            status=db_task['status'],
            owner=db_task['owner'],
            expires_at=db_task['expires_at'],
            created_at=db_task['created_at'],
            updated_at=db_task['updated_at'],
            task_input=db_task['input'],
            result=db_task['result'],
            message=db_task['message'],
        )

    def _format_task_stub_from_db(self, db_task):
        return glance.domain.TaskStub(
            task_id=db_task['id'],
            task_type=db_task['type'],
            status=db_task['status'],
            owner=db_task['owner'],
            expires_at=db_task['expires_at'],
            created_at=db_task['created_at'],
            updated_at=db_task['updated_at'],
        )

    def _format_task_to_db(self, task):
        task = {'id': task.task_id,
                'type': task.type,
                'status': task.status,
                'input': task.task_input,
                'result': task.result,
                'owner': task.owner,
                'message': task.message,
                'expires_at': task.expires_at,
                'created_at': task.created_at,
                'updated_at': task.updated_at,
                }
        return task

    def get(self, task_id):
        try:
            db_api_task = self.db_api.task_get(self.context, task_id)
        except (exception.NotFound, exception.Forbidden):
            msg = _('Could not find task %s') % task_id
            raise exception.NotFound(msg)
        return self._format_task_from_db(db_api_task)

    def list(self, marker=None, limit=None, sort_key='created_at',
             sort_dir='desc', filters=None):
        db_api_tasks = self.db_api.task_get_all(self.context,
                                                filters=filters,
                                                marker=marker,
                                                limit=limit,
                                                sort_key=sort_key,
                                                sort_dir=sort_dir)
        return [self._format_task_stub_from_db(task) for task in db_api_tasks]

    def save(self, task):
        task_values = self._format_task_to_db(task)
        try:
            updated_values = self.db_api.task_update(self.context,
                                                     task.task_id,
                                                     task_values)
        except (exception.NotFound, exception.Forbidden):
            msg = _('Could not find task %s') % task.task_id
            raise exception.NotFound(msg)
        task.updated_at = updated_values['updated_at']

    def add(self, task):
        task_values = self._format_task_to_db(task)
        updated_values = self.db_api.task_create(self.context,
                                                 task_values)
        task.created_at = updated_values['created_at']
        task.updated_at = updated_values['updated_at']

    def remove(self, task):
        task_values = self._format_task_to_db(task)
        try:
            self.db_api.task_update(self.context, task.task_id, task_values)
            updated_values = self.db_api.task_delete(self.context,
                                                     task.task_id)
        except (exception.NotFound, exception.Forbidden):
            msg = _('Could not find task %s') % task.task_id
            raise exception.NotFound(msg)
        task.updated_at = updated_values['updated_at']
        task.deleted_at = updated_values['deleted_at']


class MetadefNamespaceRepo(object):

    def __init__(self, context, db_api):
        self.context = context
        self.db_api = db_api

    def _format_namespace_from_db(self, namespace_obj):
        return glance.domain.MetadefNamespace(
            namespace_id=namespace_obj['id'],
            namespace=namespace_obj['namespace'],
            display_name=namespace_obj['display_name'],
            description=namespace_obj['description'],
            owner=namespace_obj['owner'],
            visibility=namespace_obj['visibility'],
            protected=namespace_obj['protected'],
            created_at=namespace_obj['created_at'],
            updated_at=namespace_obj['updated_at']
        )

    def _format_namespace_to_db(self, namespace_obj):
        namespace = {
            'namespace': namespace_obj.namespace,
            'display_name': namespace_obj.display_name,
            'description': namespace_obj.description,
            'visibility': namespace_obj.visibility,
            'protected': namespace_obj.protected,
            'owner': namespace_obj.owner
        }
        return namespace

    def add(self, namespace):
        self.db_api.metadef_namespace_create(
            self.context,
            self._format_namespace_to_db(namespace)
        )

    def get(self, namespace):
        try:
            db_api_namespace = self.db_api.metadef_namespace_get(
                self.context, namespace)
        except (exception.NotFound, exception.Forbidden):
            msg = _('Could not find namespace %s') % namespace
            raise exception.NotFound(msg)
        return self._format_namespace_from_db(db_api_namespace)

    def list(self, marker=None, limit=None, sort_key='created_at',
             sort_dir='desc', filters=None):
        db_namespaces = self.db_api.metadef_namespace_get_all(
            self.context,
            marker=marker,
            limit=limit,
            sort_key=sort_key,
            sort_dir=sort_dir,
            filters=filters
        )
        return [self._format_namespace_from_db(namespace_obj)
                for namespace_obj in db_namespaces]

    def remove(self, namespace):
        try:
            self.db_api.metadef_namespace_delete(self.context,
                                                 namespace.namespace)
        except (exception.NotFound, exception.Forbidden):
            msg = _("The specified namespace %s could not be found")
            raise exception.NotFound(msg % namespace.namespace)

    def remove_objects(self, namespace):
        try:
            self.db_api.metadef_object_delete_namespace_content(
                self.context,
                namespace.namespace
            )
        except (exception.NotFound, exception.Forbidden):
            msg = _("The specified namespace %s could not be found")
            raise exception.NotFound(msg % namespace.namespace)

    def remove_properties(self, namespace):
        try:
            self.db_api.metadef_property_delete_namespace_content(
                self.context,
                namespace.namespace
            )
        except (exception.NotFound, exception.Forbidden):
            msg = _("The specified namespace %s could not be found")
            raise exception.NotFound(msg % namespace.namespace)

    def remove_tags(self, namespace):
        try:
            self.db_api.metadef_tag_delete_namespace_content(
                self.context,
                namespace.namespace
            )
        except (exception.NotFound, exception.Forbidden):
            msg = _("The specified namespace %s could not be found")
            raise exception.NotFound(msg % namespace.namespace)

    def object_count(self, namespace_name):
        return self.db_api.metadef_object_count(
            self.context,
            namespace_name
        )

    def property_count(self, namespace_name):
        return self.db_api.metadef_property_count(
            self.context,
            namespace_name
        )

    def save(self, namespace):
        try:
            self.db_api.metadef_namespace_update(
                self.context, namespace.namespace_id,
                self._format_namespace_to_db(namespace)
            )
        except exception.NotFound as e:
            raise exception.NotFound(explanation=e.msg)
        return namespace


class MetadefObjectRepo(object):

    def __init__(self, context, db_api):
        self.context = context
        self.db_api = db_api
        self.meta_namespace_repo = MetadefNamespaceRepo(context, db_api)

    def _format_metadef_object_from_db(self, metadata_object,
                                       namespace_entity):
        required_str = metadata_object['required']
        required_list = required_str.split(",") if required_str else []

        # Convert the persisted json schema to a dict of PropertyTypes
        property_types = {}
        json_props = metadata_object['json_schema']
        for id in json_props:
            property_types[id] = json.fromjson(PropertyType, json_props[id])

        return glance.domain.MetadefObject(
            namespace=namespace_entity,
            object_id=metadata_object['id'],
            name=metadata_object['name'],
            required=required_list,
            description=metadata_object['description'],
            properties=property_types,
            created_at=metadata_object['created_at'],
            updated_at=metadata_object['updated_at']
        )

    def _format_metadef_object_to_db(self, metadata_object):
        required_str = (",".join(metadata_object.required)
                        if metadata_object.required else None)

        # Convert the model PropertyTypes dict to a JSON string
        properties = metadata_object.properties
        db_schema = {}
        if properties:
            for k, v in properties.items():
                json_data = json.tojson(PropertyType, v)
                db_schema[k] = json_data

        db_metadata_object = {
            'name': metadata_object.name,
            'required': required_str,
            'description': metadata_object.description,
            'json_schema': db_schema
        }
        return db_metadata_object

    def add(self, metadata_object):
        self.db_api.metadef_object_create(
            self.context,
            metadata_object.namespace,
            self._format_metadef_object_to_db(metadata_object)
        )

    def get(self, namespace, object_name):
        try:
            namespace_entity = self.meta_namespace_repo.get(namespace)
            db_metadata_object = self.db_api.metadef_object_get(
                self.context,
                namespace,
                object_name)
        except (exception.NotFound, exception.Forbidden):
            msg = _('Could not find metadata object %s') % object_name
            raise exception.NotFound(msg)
        return self._format_metadef_object_from_db(db_metadata_object,
                                                   namespace_entity)

    def list(self, marker=None, limit=None, sort_key='created_at',
             sort_dir='desc', filters=None):
        namespace = filters['namespace']
        namespace_entity = self.meta_namespace_repo.get(namespace)
        db_metadata_objects = self.db_api.metadef_object_get_all(
            self.context, namespace)
        return [self._format_metadef_object_from_db(metadata_object,
                                                    namespace_entity)
                for metadata_object in db_metadata_objects]

    def remove(self, metadata_object):
        try:
            self.db_api.metadef_object_delete(
                self.context,
                metadata_object.namespace.namespace,
                metadata_object.name
            )
        except (exception.NotFound, exception.Forbidden):
            msg = _("The specified metadata object %s could not be found")
            raise exception.NotFound(msg % metadata_object.name)

    def save(self, metadata_object):
        try:
            self.db_api.metadef_object_update(
                self.context, metadata_object.namespace.namespace,
                metadata_object.object_id,
                self._format_metadef_object_to_db(metadata_object))
        except exception.NotFound as e:
            raise exception.NotFound(explanation=e.msg)
        return metadata_object


class MetadefResourceTypeRepo(object):

    def __init__(self, context, db_api):
        self.context = context
        self.db_api = db_api
        self.meta_namespace_repo = MetadefNamespaceRepo(context, db_api)

    def _format_resource_type_from_db(self, resource_type, namespace):
        return glance.domain.MetadefResourceType(
            namespace=namespace,
            name=resource_type['name'],
            prefix=resource_type['prefix'],
            properties_target=resource_type['properties_target'],
            created_at=resource_type['created_at'],
            updated_at=resource_type['updated_at']
        )

    def _format_resource_type_to_db(self, resource_type):
        db_resource_type = {
            'name': resource_type.name,
            'prefix': resource_type.prefix,
            'properties_target': resource_type.properties_target
        }
        return db_resource_type

    def add(self, resource_type):
        self.db_api.metadef_resource_type_association_create(
            self.context, resource_type.namespace,
            self._format_resource_type_to_db(resource_type)
        )

    def get(self, resource_type, namespace):
        namespace_entity = self.meta_namespace_repo.get(namespace)
        db_resource_type = (
            self.db_api.
            metadef_resource_type_association_get(
                self.context,
                namespace,
                resource_type
            )
        )
        return self._format_resource_type_from_db(db_resource_type,
                                                  namespace_entity)

    def list(self, filters=None):
        namespace = filters['namespace']
        if namespace:
            namespace_entity = self.meta_namespace_repo.get(namespace)
            db_resource_types = (
                self.db_api.
                metadef_resource_type_association_get_all_by_namespace(
                    self.context,
                    namespace
                )
            )
            return [self._format_resource_type_from_db(resource_type,
                                                       namespace_entity)
                    for resource_type in db_resource_types]
        else:
            db_resource_types = (
                self.db_api.
                metadef_resource_type_get_all(self.context)
            )
            return [glance.domain.MetadefResourceType(
                namespace=None,
                name=resource_type['name'],
                prefix=None,
                properties_target=None,
                created_at=resource_type['created_at'],
                updated_at=resource_type['updated_at']
            ) for resource_type in db_resource_types]

    def remove(self, resource_type):
        try:
            self.db_api.metadef_resource_type_association_delete(
                self.context, resource_type.namespace.namespace,
                resource_type.name)
        except (exception.NotFound, exception.Forbidden):
            msg = _("The specified resource type %s could not be found ")
            raise exception.NotFound(msg % resource_type.name)


class MetadefPropertyRepo(object):

    def __init__(self, context, db_api):
        self.context = context
        self.db_api = db_api
        self.meta_namespace_repo = MetadefNamespaceRepo(context, db_api)

    def _format_metadef_property_from_db(
            self,
            property,
            namespace_entity):
        return glance.domain.MetadefProperty(
            namespace=namespace_entity,
            property_id=property['id'],
            name=property['name'],
            schema=property['json_schema']
        )

    def _format_metadef_property_to_db(self, property):
        db_metadata_object = {
            'name': property.name,
            'json_schema': property.schema
        }
        return db_metadata_object

    def add(self, property):
        self.db_api.metadef_property_create(
            self.context,
            property.namespace,
            self._format_metadef_property_to_db(property)
        )

    def get(self, namespace, property_name):
        try:
            namespace_entity = self.meta_namespace_repo.get(namespace)
            db_property_type = self.db_api.metadef_property_get(
                self.context,
                namespace,
                property_name
            )
        except (exception.NotFound, exception.Forbidden):
            msg = _('Could not find property %s') % property_name
            raise exception.NotFound(msg)
        return self._format_metadef_property_from_db(
            db_property_type, namespace_entity)

    def list(self, marker=None, limit=None, sort_key='created_at',
             sort_dir='desc', filters=None):
        namespace = filters['namespace']
        namespace_entity = self.meta_namespace_repo.get(namespace)

        db_properties = self.db_api.metadef_property_get_all(
            self.context,
            namespace)
        return (
            [self._format_metadef_property_from_db(
                property, namespace_entity) for property in db_properties]
        )

    def remove(self, property):
        try:
            self.db_api.metadef_property_delete(
                self.context,
                property.namespace.namespace,
                property.name)
        except (exception.NotFound, exception.Forbidden):
            msg = _("The specified property %s could not be found")
            raise exception.NotFound(msg % property.name)

    def save(self, property):
        try:
            self.db_api.metadef_property_update(
                self.context,
                property.namespace.namespace,
                property.property_id,
                self._format_metadef_property_to_db(property)
            )
        except exception.NotFound as e:
            raise exception.NotFound(explanation=e.msg)
        return property


class MetadefTagRepo(object):

    def __init__(self, context, db_api):
        self.context = context
        self.db_api = db_api
        self.meta_namespace_repo = MetadefNamespaceRepo(context, db_api)

    def _format_metadef_tag_from_db(self, metadata_tag,
                                    namespace_entity):
        return glance.domain.MetadefTag(
            namespace=namespace_entity,
            tag_id=metadata_tag['id'],
            name=metadata_tag['name'],
            created_at=metadata_tag['created_at'],
            updated_at=metadata_tag['updated_at']
        )

    def _format_metadef_tag_to_db(self, metadata_tag):
        db_metadata_tag = {
            'name': metadata_tag.name
        }
        return db_metadata_tag

    def add(self, metadata_tag):
        self.db_api.metadef_tag_create(
            self.context,
            metadata_tag.namespace,
            self._format_metadef_tag_to_db(metadata_tag)
        )

    def add_tags(self, metadata_tags):
        tag_list = []
        namespace = None
        for metadata_tag in metadata_tags:
            tag_list.append(self._format_metadef_tag_to_db(metadata_tag))
            if namespace is None:
                namespace = metadata_tag.namespace

        self.db_api.metadef_tag_create_tags(self.context, namespace,
                                            tag_list)

    def get(self, namespace, name):
        try:
            namespace_entity = self.meta_namespace_repo.get(namespace)
            db_metadata_tag = self.db_api.metadef_tag_get(
                self.context,
                namespace,
                name)
        except (exception.NotFound, exception.Forbidden):
            msg = _('Could not find metadata tag %s') % name
            raise exception.NotFound(msg)
        return self._format_metadef_tag_from_db(db_metadata_tag,
                                                namespace_entity)

    def list(self, marker=None, limit=None, sort_key='created_at',
             sort_dir='desc', filters=None):
        namespace = filters['namespace']
        namespace_entity = self.meta_namespace_repo.get(namespace)
        db_metadata_tag = self.db_api.metadef_tag_get_all(
            self.context, namespace, filters, marker, limit,
            sort_key, sort_dir)
        return [self._format_metadef_tag_from_db(metadata_tag,
                                                 namespace_entity)
                for metadata_tag in db_metadata_tag]

    def remove(self, metadata_tag):
        try:
            self.db_api.metadef_tag_delete(
                self.context,
                metadata_tag.namespace.namespace,
                metadata_tag.name
            )
        except (exception.NotFound, exception.Forbidden):
            msg = _("The specified metadata tag %s could not be found")
            raise exception.NotFound(msg % metadata_tag.name)

    def save(self, metadata_tag):
        try:
            self.db_api.metadef_tag_update(
                self.context, metadata_tag.namespace.namespace,
                metadata_tag.tag_id,
                self._format_metadef_tag_to_db(metadata_tag))
        except exception.NotFound as e:
            raise exception.NotFound(explanation=e.msg)
        return metadata_tag


# File: glance-16.0.0/glance/db/utils.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance.common import exception
from glance.i18n import _


def mutate_image_dict_to_v1(image):
    """
    Replaces a v2-style image dictionary's 'visibility' member with the
    equivalent v1-style 'is_public' member.
""" visibility = image.pop('visibility') is_image_public = 'public' == visibility image['is_public'] = is_image_public return image def ensure_image_dict_v2_compliant(image): """ Accepts an image dictionary that contains a v1-style 'is_public' member and returns the equivalent v2-style image dictionary. """ if ('is_public' in image): if ('visibility' in image): msg = _("Specifying both 'visibility' and 'is_public' is not " "permiitted.") raise exception.Invalid(msg) else: image['visibility'] = ('public' if image.pop('is_public') else 'shared') return image def is_image_visible(context, image, image_member_find, status=None): """Return True if the image is visible in this context.""" # Is admin == image visible if context.is_admin: return True # No owner == image visible if image['owner'] is None: return True # Public or Community visibility == image visible if image['visibility'] in ['public', 'community']: return True # Perform tests based on whether we have an owner if context.owner is not None: if context.owner == image['owner']: return True # Figure out if this image is shared with that tenant if 'shared' == image['visibility']: members = image_member_find(context, image_id=image['id'], member=context.owner, status=status) if members: return True # Private image return False glance-16.0.0/glance/db/metadata.py0000666000175100017510000000411213245511421017021 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Copyright 2013 OpenStack Foundation # Copyright 2013 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Metadata setup commands.""" import threading from oslo_config import cfg from oslo_db import options as db_options from stevedore import driver from glance.db.sqlalchemy import api as db_api _IMPL = None _LOCK = threading.Lock() db_options.set_defaults(cfg.CONF) def get_backend(): global _IMPL if _IMPL is None: with _LOCK: if _IMPL is None: _IMPL = driver.DriverManager( "glance.database.metadata_backend", cfg.CONF.database.backend).driver return _IMPL def load_metadefs(): """Read metadefinition files and insert data into the database""" return get_backend().db_load_metadefs(engine=db_api.get_engine(), metadata_path=None, merge=False, prefer_new=False, overwrite=False) def unload_metadefs(): """Unload metadefinitions from database""" return get_backend().db_unload_metadefs(engine=db_api.get_engine()) def export_metadefs(): """Export metadefinitions from database to files""" return get_backend().db_export_metadefs(engine=db_api.get_engine(), metadata_path=None) glance-16.0.0/glance/db/migration.py0000666000175100017510000000316713245511421017243 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Copyright 2013 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Database setup and migration commands.""" import os import threading from oslo_config import cfg from oslo_db import options as db_options from stevedore import driver _IMPL = None _LOCK = threading.Lock() db_options.set_defaults(cfg.CONF) def get_backend(): global _IMPL if _IMPL is None: with _LOCK: if _IMPL is None: _IMPL = driver.DriverManager( "glance.database.migration_backend", cfg.CONF.database.backend).driver return _IMPL # Migration-related constants EXPAND_BRANCH = 'expand' CONTRACT_BRANCH = 'contract' CURRENT_RELEASE = 'queens' ALEMBIC_INIT_VERSION = 'liberty' LATEST_REVISION = 'queens_contract01' INIT_VERSION = 0 MIGRATE_REPO_PATH = os.path.join( os.path.abspath(os.path.dirname(__file__)), 'sqlalchemy', 'migrate_repo', ) glance-16.0.0/glance/notifier.py0000666000175100017510000007565313245511421016515 0ustar zuulzuul00000000000000# Copyright 2011, OpenStack Foundation # Copyright 2012, Red Hat, Inc. # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
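# NOTE: both glance.db.metadata and glance.db.migration above guard their
# lazy stevedore backend lookup with double-checked locking: a cheap check
# outside the lock for the common already-loaded case, then a second check
# inside the lock so two racing threads cannot both load the driver. A
# minimal self-contained sketch of that pattern follows; ``_load_backend``
# is a hypothetical stand-in for the real DriverManager lookup.

```python
import threading

_IMPL = None
_LOCK = threading.Lock()


def _load_backend():
    # Hypothetical stand-in for the expensive driver lookup; in Glance this
    # is a stevedore DriverManager call.
    return object()


def get_backend():
    """Load the backend at most once, even under concurrent callers."""
    global _IMPL
    if _IMPL is None:          # fast path: no lock once initialized
        with _LOCK:
            if _IMPL is None:  # re-check: another thread may have won
                _IMPL = _load_backend()
    return _IMPL
```

# Repeated calls return the same object, so the driver lookup runs once.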
import abc

import glance_store
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from oslo_utils import encodeutils
from oslo_utils import excutils
import six
import webob

from glance.common import exception
from glance.common import timeutils
from glance.domain import proxy as domain_proxy
from glance.i18n import _, _LE


notifier_opts = [
    cfg.StrOpt('default_publisher_id',
               default="image.localhost",
               help=_("""
Default publisher_id for outgoing Glance notifications.

This is the value that the notification driver will use to identify
messages for events originating from the Glance service. Typically,
this is the hostname of the instance that generated the message.

Possible values:
    * Any reasonable instance identifier, for example: image.host1

Related options:
    * None

""")),
    cfg.ListOpt('disabled_notifications',
                default=[],
                help=_("""
List of notifications to be disabled.

Specify a list of notifications that should not be emitted. A
notification can be given either as a notification type to disable a
single event notification, or as a notification group prefix to disable
all event notifications within a group.

Possible values:
    A comma-separated list of individual notification types or
    notification groups to be disabled. Currently supported groups:

    * image
    * image.member
    * task
    * metadef_namespace
    * metadef_object
    * metadef_property
    * metadef_resource_type
    * metadef_tag

    For a complete listing and description of each event refer to:
    http://docs.openstack.org/developer/glance/notifications.html

    The values must be specified as: <group_name>.<event_name>
    For example: image.create,task.success,metadef_tag

Related options:
    * None

""")),
]

CONF = cfg.CONF
CONF.register_opts(notifier_opts)

LOG = logging.getLogger(__name__)


def set_defaults(control_exchange='glance'):
    oslo_messaging.set_transport_defaults(control_exchange)


def get_transport():
    return oslo_messaging.get_notification_transport(CONF)


class Notifier(object):
    """Uses a notification strategy to send out messages about events."""

    def __init__(self):
        publisher_id = CONF.default_publisher_id
        self._transport = get_transport()
        self._notifier = oslo_messaging.Notifier(self._transport,
                                                 publisher_id=publisher_id)

    def warn(self, event_type, payload):
        self._notifier.warn({}, event_type, payload)

    def info(self, event_type, payload):
        self._notifier.info({}, event_type, payload)

    def error(self, event_type, payload):
        self._notifier.error({}, event_type, payload)


def _get_notification_group(notification):
    return notification.split('.', 1)[0]


def _is_notification_enabled(notification):
    disabled_notifications = CONF.disabled_notifications
    notification_group = _get_notification_group(notification)

    notifications = (notification, notification_group)
    for disabled_notification in disabled_notifications:
        if disabled_notification in notifications:
            return False

    return True


def _send_notification(notify, notification_type, payload):
    if _is_notification_enabled(notification_type):
        notify(notification_type, payload)


def format_image_notification(image):
    """
    Given a glance.domain.Image object, return a dictionary of relevant
    notification information. We purposely do not include 'location'
    as it may contain credentials.
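The enable/disable check used by `_send_notification` matches a notification against both its full type and its group prefix (the text before the first `.`). A self-contained sketch of that logic, with the disabled list passed in as an argument rather than read from `CONF`:

```python
def get_notification_group(notification):
    # 'image.member.create' -> 'image' (everything before the first '.')
    return notification.split('.', 1)[0]


def is_notification_enabled(notification, disabled_notifications):
    # A notification is suppressed if either its exact type or its
    # group prefix appears in the disabled list.
    candidates = (notification, get_notification_group(notification))
    return not any(d in candidates for d in disabled_notifications)
```

Disabling the `image` group suppresses `image.create` and `image.upload`, while `task.success` still goes out.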
""" return { 'id': image.image_id, 'name': image.name, 'status': image.status, 'created_at': timeutils.isotime(image.created_at), 'updated_at': timeutils.isotime(image.updated_at), 'min_disk': image.min_disk, 'min_ram': image.min_ram, 'protected': image.protected, 'checksum': image.checksum, 'owner': image.owner, 'disk_format': image.disk_format, 'container_format': image.container_format, 'size': image.size, 'virtual_size': image.virtual_size, 'is_public': image.visibility == 'public', 'visibility': image.visibility, 'properties': dict(image.extra_properties), 'tags': list(image.tags), 'deleted': False, 'deleted_at': None, } def format_image_member_notification(image_member): """Given a glance.domain.ImageMember object, return a dictionary of relevant notification information. """ return { 'image_id': image_member.image_id, 'member_id': image_member.member_id, 'status': image_member.status, 'created_at': timeutils.isotime(image_member.created_at), 'updated_at': timeutils.isotime(image_member.updated_at), 'deleted': False, 'deleted_at': None, } def format_task_notification(task): # NOTE(nikhil): input is not passed to the notifier payload as it may # contain sensitive info. 
return { 'id': task.task_id, 'type': task.type, 'status': task.status, 'result': None, 'owner': task.owner, 'message': None, 'expires_at': timeutils.isotime(task.expires_at), 'created_at': timeutils.isotime(task.created_at), 'updated_at': timeutils.isotime(task.updated_at), 'deleted': False, 'deleted_at': None, } def format_metadef_namespace_notification(metadef_namespace): return { 'namespace': metadef_namespace.namespace, 'namespace_old': metadef_namespace.namespace, 'display_name': metadef_namespace.display_name, 'protected': metadef_namespace.protected, 'visibility': metadef_namespace.visibility, 'owner': metadef_namespace.owner, 'description': metadef_namespace.description, 'created_at': timeutils.isotime(metadef_namespace.created_at), 'updated_at': timeutils.isotime(metadef_namespace.updated_at), 'deleted': False, 'deleted_at': None, } def format_metadef_object_notification(metadef_object): object_properties = metadef_object.properties or {} properties = [] for name, prop in six.iteritems(object_properties): object_property = _format_metadef_object_property(name, prop) properties.append(object_property) return { 'namespace': metadef_object.namespace, 'name': metadef_object.name, 'name_old': metadef_object.name, 'properties': properties, 'required': metadef_object.required, 'description': metadef_object.description, 'created_at': timeutils.isotime(metadef_object.created_at), 'updated_at': timeutils.isotime(metadef_object.updated_at), 'deleted': False, 'deleted_at': None, } def _format_metadef_object_property(name, metadef_property): return { 'name': name, 'type': metadef_property.type or None, 'title': metadef_property.title or None, 'description': metadef_property.description or None, 'default': metadef_property.default or None, 'minimum': metadef_property.minimum or None, 'maximum': metadef_property.maximum or None, 'enum': metadef_property.enum or None, 'pattern': metadef_property.pattern or None, 'minLength': metadef_property.minLength or None, 
'maxLength': metadef_property.maxLength or None, 'confidential': metadef_property.confidential or None, 'items': metadef_property.items or None, 'uniqueItems': metadef_property.uniqueItems or None, 'minItems': metadef_property.minItems or None, 'maxItems': metadef_property.maxItems or None, 'additionalItems': metadef_property.additionalItems or None, } def format_metadef_property_notification(metadef_property): schema = metadef_property.schema return { 'namespace': metadef_property.namespace, 'name': metadef_property.name, 'name_old': metadef_property.name, 'type': schema.get('type'), 'title': schema.get('title'), 'description': schema.get('description'), 'default': schema.get('default'), 'minimum': schema.get('minimum'), 'maximum': schema.get('maximum'), 'enum': schema.get('enum'), 'pattern': schema.get('pattern'), 'minLength': schema.get('minLength'), 'maxLength': schema.get('maxLength'), 'confidential': schema.get('confidential'), 'items': schema.get('items'), 'uniqueItems': schema.get('uniqueItems'), 'minItems': schema.get('minItems'), 'maxItems': schema.get('maxItems'), 'additionalItems': schema.get('additionalItems'), 'deleted': False, 'deleted_at': None, } def format_metadef_resource_type_notification(metadef_resource_type): return { 'namespace': metadef_resource_type.namespace, 'name': metadef_resource_type.name, 'name_old': metadef_resource_type.name, 'prefix': metadef_resource_type.prefix, 'properties_target': metadef_resource_type.properties_target, 'created_at': timeutils.isotime(metadef_resource_type.created_at), 'updated_at': timeutils.isotime(metadef_resource_type.updated_at), 'deleted': False, 'deleted_at': None, } def format_metadef_tag_notification(metadef_tag): return { 'namespace': metadef_tag.namespace, 'name': metadef_tag.name, 'name_old': metadef_tag.name, 'created_at': timeutils.isotime(metadef_tag.created_at), 'updated_at': timeutils.isotime(metadef_tag.updated_at), 'deleted': False, 'deleted_at': None, } class NotificationBase(object): def 
get_payload(self, obj): return {} def send_notification(self, notification_id, obj, extra_payload=None): payload = self.get_payload(obj) if extra_payload is not None: payload.update(extra_payload) _send_notification(self.notifier.info, notification_id, payload) @six.add_metaclass(abc.ABCMeta) class NotificationProxy(NotificationBase): def __init__(self, repo, context, notifier): self.repo = repo self.context = context self.notifier = notifier super_class = self.get_super_class() super_class.__init__(self, repo) @abc.abstractmethod def get_super_class(self): pass @six.add_metaclass(abc.ABCMeta) class NotificationRepoProxy(NotificationBase): def __init__(self, repo, context, notifier): self.repo = repo self.context = context self.notifier = notifier proxy_kwargs = {'context': self.context, 'notifier': self.notifier} proxy_class = self.get_proxy_class() super_class = self.get_super_class() super_class.__init__(self, repo, proxy_class, proxy_kwargs) @abc.abstractmethod def get_super_class(self): pass @abc.abstractmethod def get_proxy_class(self): pass @six.add_metaclass(abc.ABCMeta) class NotificationFactoryProxy(object): def __init__(self, factory, context, notifier): kwargs = {'context': context, 'notifier': notifier} proxy_class = self.get_proxy_class() super_class = self.get_super_class() super_class.__init__(self, factory, proxy_class, kwargs) @abc.abstractmethod def get_super_class(self): pass @abc.abstractmethod def get_proxy_class(self): pass class ImageProxy(NotificationProxy, domain_proxy.Image): def get_super_class(self): return domain_proxy.Image def get_payload(self, obj): return format_image_notification(obj) def _format_image_send(self, bytes_sent): return { 'bytes_sent': bytes_sent, 'image_id': self.repo.image_id, 'owner_id': self.repo.owner, 'receiver_tenant_id': self.context.tenant, 'receiver_user_id': self.context.user, } def _get_chunk_data_iterator(self, data, chunk_size=None): sent = 0 for chunk in data: yield chunk sent += len(chunk) if sent != 
(chunk_size or self.repo.size):
            notify = self.notifier.error
        else:
            notify = self.notifier.info
        try:
            _send_notification(notify, 'image.send',
                               self._format_image_send(sent))
        except Exception as err:
            msg = (_LE("An error occurred during image.send"
                       " notification: %(err)s") % {'err': err})
            LOG.error(msg)

    def get_data(self, offset=0, chunk_size=None):
        # Due to the need of evaluating subsequent proxies, this one
        # should return a generator, the call should be done before
        # generator creation
        data = self.repo.get_data(offset=offset, chunk_size=chunk_size)
        return self._get_chunk_data_iterator(data, chunk_size=chunk_size)

    def set_data(self, data, size=None):
        self.send_notification('image.prepare', self.repo)

        notify_error = self.notifier.error
        try:
            self.repo.set_data(data, size)
        except glance_store.StorageFull as e:
            msg = (_("Image storage media is full: %s") %
                   encodeutils.exception_to_unicode(e))
            _send_notification(notify_error, 'image.upload', msg)
            raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg)
        except glance_store.StorageWriteDenied as e:
            msg = (_("Insufficient permissions on image storage media: %s") %
                   encodeutils.exception_to_unicode(e))
            _send_notification(notify_error, 'image.upload', msg)
            raise webob.exc.HTTPServiceUnavailable(explanation=msg)
        except ValueError as e:
            msg = (_("Cannot save data for image %(image_id)s: %(error)s") %
                   {'image_id': self.repo.image_id,
                    'error': encodeutils.exception_to_unicode(e)})
            _send_notification(notify_error, 'image.upload', msg)
            raise webob.exc.HTTPBadRequest(
                explanation=encodeutils.exception_to_unicode(e))
        except exception.Duplicate as e:
            msg = (_("Unable to upload duplicate image data for image"
                     " %(image_id)s: %(error)s") %
                   {'image_id': self.repo.image_id,
                    'error': encodeutils.exception_to_unicode(e)})
            _send_notification(notify_error, 'image.upload', msg)
            raise webob.exc.HTTPConflict(explanation=msg)
        except exception.Forbidden as e:
            msg = (_("Not allowed to upload image data for image %(image_id)s:"
                     " %(error)s") %
{'image_id': self.repo.image_id, 'error': encodeutils.exception_to_unicode(e)}) _send_notification(notify_error, 'image.upload', msg) raise webob.exc.HTTPForbidden(explanation=msg) except exception.NotFound as e: exc_str = encodeutils.exception_to_unicode(e) msg = (_("Image %(image_id)s could not be found after upload." " The image may have been deleted during the upload:" " %(error)s") % {'image_id': self.repo.image_id, 'error': exc_str}) _send_notification(notify_error, 'image.upload', msg) raise webob.exc.HTTPNotFound(explanation=exc_str) except webob.exc.HTTPError as e: with excutils.save_and_reraise_exception(): msg = (_("Failed to upload image data for image %(image_id)s" " due to HTTP error: %(error)s") % {'image_id': self.repo.image_id, 'error': encodeutils.exception_to_unicode(e)}) _send_notification(notify_error, 'image.upload', msg) except Exception as e: with excutils.save_and_reraise_exception(): msg = (_("Failed to upload image data for image %(image_id)s " "due to internal error: %(error)s") % {'image_id': self.repo.image_id, 'error': encodeutils.exception_to_unicode(e)}) _send_notification(notify_error, 'image.upload', msg) else: self.send_notification('image.upload', self.repo) self.send_notification('image.activate', self.repo) class ImageMemberProxy(NotificationProxy, domain_proxy.ImageMember): def get_super_class(self): return domain_proxy.ImageMember class ImageFactoryProxy(NotificationFactoryProxy, domain_proxy.ImageFactory): def get_super_class(self): return domain_proxy.ImageFactory def get_proxy_class(self): return ImageProxy class ImageRepoProxy(NotificationRepoProxy, domain_proxy.Repo): def get_super_class(self): return domain_proxy.Repo def get_proxy_class(self): return ImageProxy def get_payload(self, obj): return format_image_notification(obj) def save(self, image, from_state=None): super(ImageRepoProxy, self).save(image, from_state=from_state) self.send_notification('image.update', image) def add(self, image): super(ImageRepoProxy, 
self).add(image) self.send_notification('image.create', image) def remove(self, image): super(ImageRepoProxy, self).remove(image) self.send_notification('image.delete', image, extra_payload={ 'deleted': True, 'deleted_at': timeutils.isotime() }) class ImageMemberRepoProxy(NotificationBase, domain_proxy.MemberRepo): def __init__(self, repo, image, context, notifier): self.repo = repo self.image = image self.context = context self.notifier = notifier proxy_kwargs = {'context': self.context, 'notifier': self.notifier} proxy_class = self.get_proxy_class() super_class = self.get_super_class() super_class.__init__(self, image, repo, proxy_class, proxy_kwargs) def get_super_class(self): return domain_proxy.MemberRepo def get_proxy_class(self): return ImageMemberProxy def get_payload(self, obj): return format_image_member_notification(obj) def save(self, member, from_state=None): super(ImageMemberRepoProxy, self).save(member, from_state=from_state) self.send_notification('image.member.update', member) def add(self, member): super(ImageMemberRepoProxy, self).add(member) self.send_notification('image.member.create', member) def remove(self, member): super(ImageMemberRepoProxy, self).remove(member) self.send_notification('image.member.delete', member, extra_payload={ 'deleted': True, 'deleted_at': timeutils.isotime() }) class TaskProxy(NotificationProxy, domain_proxy.Task): def get_super_class(self): return domain_proxy.Task def get_payload(self, obj): return format_task_notification(obj) def begin_processing(self): super(TaskProxy, self).begin_processing() self.send_notification('task.processing', self.repo) def succeed(self, result): super(TaskProxy, self).succeed(result) self.send_notification('task.success', self.repo) def fail(self, message): super(TaskProxy, self).fail(message) self.send_notification('task.failure', self.repo) def run(self, executor): super(TaskProxy, self).run(executor) self.send_notification('task.run', self.repo) class 
TaskFactoryProxy(NotificationFactoryProxy, domain_proxy.TaskFactory): def get_super_class(self): return domain_proxy.TaskFactory def get_proxy_class(self): return TaskProxy class TaskRepoProxy(NotificationRepoProxy, domain_proxy.TaskRepo): def get_super_class(self): return domain_proxy.TaskRepo def get_proxy_class(self): return TaskProxy def get_payload(self, obj): return format_task_notification(obj) def add(self, task): result = super(TaskRepoProxy, self).add(task) self.send_notification('task.create', task) return result def remove(self, task): result = super(TaskRepoProxy, self).remove(task) self.send_notification('task.delete', task, extra_payload={ 'deleted': True, 'deleted_at': timeutils.isotime() }) return result class TaskStubProxy(NotificationProxy, domain_proxy.TaskStub): def get_super_class(self): return domain_proxy.TaskStub class TaskStubRepoProxy(NotificationRepoProxy, domain_proxy.TaskStubRepo): def get_super_class(self): return domain_proxy.TaskStubRepo def get_proxy_class(self): return TaskStubProxy class MetadefNamespaceProxy(NotificationProxy, domain_proxy.MetadefNamespace): def get_super_class(self): return domain_proxy.MetadefNamespace class MetadefNamespaceFactoryProxy(NotificationFactoryProxy, domain_proxy.MetadefNamespaceFactory): def get_super_class(self): return domain_proxy.MetadefNamespaceFactory def get_proxy_class(self): return MetadefNamespaceProxy class MetadefNamespaceRepoProxy(NotificationRepoProxy, domain_proxy.MetadefNamespaceRepo): def get_super_class(self): return domain_proxy.MetadefNamespaceRepo def get_proxy_class(self): return MetadefNamespaceProxy def get_payload(self, obj): return format_metadef_namespace_notification(obj) def save(self, metadef_namespace): name = getattr(metadef_namespace, '_old_namespace', metadef_namespace.namespace) result = super(MetadefNamespaceRepoProxy, self).save(metadef_namespace) self.send_notification( 'metadef_namespace.update', metadef_namespace, extra_payload={ 'namespace_old': name, }) 
return result def add(self, metadef_namespace): result = super(MetadefNamespaceRepoProxy, self).add(metadef_namespace) self.send_notification('metadef_namespace.create', metadef_namespace) return result def remove(self, metadef_namespace): result = super(MetadefNamespaceRepoProxy, self).remove( metadef_namespace) self.send_notification( 'metadef_namespace.delete', metadef_namespace, extra_payload={'deleted': True, 'deleted_at': timeutils.isotime()} ) return result def remove_objects(self, metadef_namespace): result = super(MetadefNamespaceRepoProxy, self).remove_objects( metadef_namespace) self.send_notification('metadef_namespace.delete_objects', metadef_namespace) return result def remove_properties(self, metadef_namespace): result = super(MetadefNamespaceRepoProxy, self).remove_properties( metadef_namespace) self.send_notification('metadef_namespace.delete_properties', metadef_namespace) return result def remove_tags(self, metadef_namespace): result = super(MetadefNamespaceRepoProxy, self).remove_tags( metadef_namespace) self.send_notification('metadef_namespace.delete_tags', metadef_namespace) return result class MetadefObjectProxy(NotificationProxy, domain_proxy.MetadefObject): def get_super_class(self): return domain_proxy.MetadefObject class MetadefObjectFactoryProxy(NotificationFactoryProxy, domain_proxy.MetadefObjectFactory): def get_super_class(self): return domain_proxy.MetadefObjectFactory def get_proxy_class(self): return MetadefObjectProxy class MetadefObjectRepoProxy(NotificationRepoProxy, domain_proxy.MetadefObjectRepo): def get_super_class(self): return domain_proxy.MetadefObjectRepo def get_proxy_class(self): return MetadefObjectProxy def get_payload(self, obj): return format_metadef_object_notification(obj) def save(self, metadef_object): name = getattr(metadef_object, '_old_name', metadef_object.name) result = super(MetadefObjectRepoProxy, self).save(metadef_object) self.send_notification( 'metadef_object.update', metadef_object, extra_payload={ 
'namespace': metadef_object.namespace.namespace, 'name_old': name, }) return result def add(self, metadef_object): result = super(MetadefObjectRepoProxy, self).add(metadef_object) self.send_notification('metadef_object.create', metadef_object) return result def remove(self, metadef_object): result = super(MetadefObjectRepoProxy, self).remove(metadef_object) self.send_notification( 'metadef_object.delete', metadef_object, extra_payload={ 'deleted': True, 'deleted_at': timeutils.isotime(), 'namespace': metadef_object.namespace.namespace } ) return result class MetadefPropertyProxy(NotificationProxy, domain_proxy.MetadefProperty): def get_super_class(self): return domain_proxy.MetadefProperty class MetadefPropertyFactoryProxy(NotificationFactoryProxy, domain_proxy.MetadefPropertyFactory): def get_super_class(self): return domain_proxy.MetadefPropertyFactory def get_proxy_class(self): return MetadefPropertyProxy class MetadefPropertyRepoProxy(NotificationRepoProxy, domain_proxy.MetadefPropertyRepo): def get_super_class(self): return domain_proxy.MetadefPropertyRepo def get_proxy_class(self): return MetadefPropertyProxy def get_payload(self, obj): return format_metadef_property_notification(obj) def save(self, metadef_property): name = getattr(metadef_property, '_old_name', metadef_property.name) result = super(MetadefPropertyRepoProxy, self).save(metadef_property) self.send_notification( 'metadef_property.update', metadef_property, extra_payload={ 'namespace': metadef_property.namespace.namespace, 'name_old': name, }) return result def add(self, metadef_property): result = super(MetadefPropertyRepoProxy, self).add(metadef_property) self.send_notification('metadef_property.create', metadef_property) return result def remove(self, metadef_property): result = super(MetadefPropertyRepoProxy, self).remove(metadef_property) self.send_notification( 'metadef_property.delete', metadef_property, extra_payload={ 'deleted': True, 'deleted_at': timeutils.isotime(), 'namespace': 
metadef_property.namespace.namespace } ) return result class MetadefResourceTypeProxy(NotificationProxy, domain_proxy.MetadefResourceType): def get_super_class(self): return domain_proxy.MetadefResourceType class MetadefResourceTypeFactoryProxy(NotificationFactoryProxy, domain_proxy.MetadefResourceTypeFactory): def get_super_class(self): return domain_proxy.MetadefResourceTypeFactory def get_proxy_class(self): return MetadefResourceTypeProxy class MetadefResourceTypeRepoProxy(NotificationRepoProxy, domain_proxy.MetadefResourceTypeRepo): def get_super_class(self): return domain_proxy.MetadefResourceTypeRepo def get_proxy_class(self): return MetadefResourceTypeProxy def get_payload(self, obj): return format_metadef_resource_type_notification(obj) def add(self, md_resource_type): result = super(MetadefResourceTypeRepoProxy, self).add( md_resource_type) self.send_notification('metadef_resource_type.create', md_resource_type) return result def remove(self, md_resource_type): result = super(MetadefResourceTypeRepoProxy, self).remove( md_resource_type) self.send_notification( 'metadef_resource_type.delete', md_resource_type, extra_payload={ 'deleted': True, 'deleted_at': timeutils.isotime(), 'namespace': md_resource_type.namespace.namespace } ) return result class MetadefTagProxy(NotificationProxy, domain_proxy.MetadefTag): def get_super_class(self): return domain_proxy.MetadefTag class MetadefTagFactoryProxy(NotificationFactoryProxy, domain_proxy.MetadefTagFactory): def get_super_class(self): return domain_proxy.MetadefTagFactory def get_proxy_class(self): return MetadefTagProxy class MetadefTagRepoProxy(NotificationRepoProxy, domain_proxy.MetadefTagRepo): def get_super_class(self): return domain_proxy.MetadefTagRepo def get_proxy_class(self): return MetadefTagProxy def get_payload(self, obj): return format_metadef_tag_notification(obj) def save(self, metadef_tag): name = getattr(metadef_tag, '_old_name', metadef_tag.name) result = super(MetadefTagRepoProxy, 
self).save(metadef_tag)
        self.send_notification(
            'metadef_tag.update', metadef_tag,
            extra_payload={
                'namespace': metadef_tag.namespace.namespace,
                'name_old': name,
            })
        return result

    def add(self, metadef_tag):
        result = super(MetadefTagRepoProxy, self).add(metadef_tag)
        self.send_notification('metadef_tag.create', metadef_tag)
        return result

    def add_tags(self, metadef_tags):
        result = super(MetadefTagRepoProxy, self).add_tags(metadef_tags)
        for metadef_tag in metadef_tags:
            self.send_notification('metadef_tag.create', metadef_tag)
        return result

    def remove(self, metadef_tag):
        result = super(MetadefTagRepoProxy, self).remove(metadef_tag)
        self.send_notification(
            'metadef_tag.delete', metadef_tag,
            extra_payload={
                'deleted': True,
                'deleted_at': timeutils.isotime(),
                'namespace': metadef_tag.namespace.namespace
            }
        )
        return result

glance-16.0.0/glance/version.py

# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import pbr.version

version_info = pbr.version.VersionInfo('glance')

glance-16.0.0/glance/common/rpc.py

# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ RPC Controller """ import datetime import traceback from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils import oslo_utils.importutils as imp import six from webob import exc from glance.common import client from glance.common import exception from glance.common import timeutils from glance.common import wsgi from glance.i18n import _, _LE LOG = logging.getLogger(__name__) rpc_opts = [ cfg.ListOpt('allowed_rpc_exception_modules', default=['glance.common.exception', 'builtins', 'exceptions', ], help=_(""" List of allowed exception modules to handle RPC exceptions. Provide a comma separated list of modules whose exceptions are permitted to be recreated upon receiving exception data via an RPC call made to Glance. The default list includes ``glance.common.exception``, ``builtins``, and ``exceptions``. The RPC protocol permits interaction with Glance via calls across a network or within the same system. Including a list of exception namespaces with this option enables RPC to propagate the exceptions back to the users. 
Possible values: * A comma separated list of valid exception modules Related options: * None """)), ] CONF = cfg.CONF CONF.register_opts(rpc_opts) class RPCJSONSerializer(wsgi.JSONResponseSerializer): @staticmethod def _to_primitive(_type, _value): return {"_type": _type, "_value": _value} def _sanitizer(self, obj): if isinstance(obj, datetime.datetime): return self._to_primitive("datetime", obj.isoformat()) return super(RPCJSONSerializer, self)._sanitizer(obj) class RPCJSONDeserializer(wsgi.JSONRequestDeserializer): @staticmethod def _to_datetime(obj): return timeutils.normalize_time(timeutils.parse_isotime(obj)) def _sanitizer(self, obj): try: _type, _value = obj["_type"], obj["_value"] return getattr(self, "_to_" + _type)(_value) except (KeyError, AttributeError): return obj class Controller(object): """ Base RPCController. This is the base controller for RPC based APIs. Commands handled by this controller respect the following form: :: [{ 'command': 'method_name', 'kwargs': {...} }] The controller is capable of processing more than one command per request and will always return a list of results. :param bool raise_exc: Specifies whether to raise exceptions instead of "serializing" them. """ def __init__(self, raise_exc=False): self._registered = {} self.raise_exc = raise_exc def register(self, resource, filtered=None, excluded=None, refiner=None): """ Exports methods through the RPC Api. :param resource: Resource's instance to register. :param filtered: List of methods that *can* be registered. Read as "Method must be in this list". :param excluded: List of methods to exclude. :param refiner: Callable to use as filter for methods. :raises TypeError: If refiner is not callable. 
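The serializer/deserializer pair above encodes datetimes as tagged primitives (`{"_type": "datetime", "_value": ...}`) and reconstructs them by dispatching on `_type`. The round-trip can be sketched without the wsgi base classes; this simplified version uses `datetime.fromisoformat` where the real code uses `timeutils.parse_isotime`:

```python
import datetime


def to_primitive(value):
    # Tag datetimes so the receiving side knows how to rebuild them;
    # everything else passes through for plain JSON encoding.
    if isinstance(value, datetime.datetime):
        return {"_type": "datetime", "_value": value.isoformat()}
    return value


def from_primitive(obj):
    # Reverse the tagging: a {'_type': 'datetime', ...} dict becomes a
    # datetime again; anything else is returned unchanged.
    if isinstance(obj, dict) and obj.get("_type") == "datetime":
        return datetime.datetime.fromisoformat(obj["_value"])
    return obj
```

This tag-and-dispatch design keeps the wire format plain JSON while still round-tripping types JSON cannot express natively.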
""" funcs = [x for x in dir(resource) if not x.startswith("_")] if filtered: funcs = [f for f in funcs if f in filtered] if excluded: funcs = [f for f in funcs if f not in excluded] if refiner: funcs = filter(refiner, funcs) for name in funcs: meth = getattr(resource, name) if not callable(meth): continue self._registered[name] = meth def __call__(self, req, body): """ Executes the command """ if not isinstance(body, list): msg = _("Request must be a list of commands") raise exc.HTTPBadRequest(explanation=msg) def validate(cmd): if not isinstance(cmd, dict): msg = _("Bad Command: %s") % str(cmd) raise exc.HTTPBadRequest(explanation=msg) command, kwargs = cmd.get("command"), cmd.get("kwargs") if (not command or not isinstance(command, six.string_types) or (kwargs and not isinstance(kwargs, dict))): msg = _("Wrong command structure: %s") % (str(cmd)) raise exc.HTTPBadRequest(explanation=msg) method = self._registered.get(command) if not method: # Just raise 404 if the user tries to # access a private method. No need for # 403 here since logically the command # is not registered to the rpc dispatcher raise exc.HTTPNotFound(explanation=_("Command not found")) return True # If more than one command were sent then they might # be intended to be executed sequentially, that for, # lets first verify they're all valid before executing # them. commands = filter(validate, body) results = [] for cmd in commands: # kwargs is not required command, kwargs = cmd["command"], cmd.get("kwargs", {}) method = self._registered[command] try: result = method(req.context, **kwargs) except Exception as e: if self.raise_exc: raise cls, val = e.__class__, encodeutils.exception_to_unicode(e) msg = (_LE("RPC Call Error: %(val)s\n%(tb)s") % dict(val=val, tb=traceback.format_exc())) LOG.error(msg) # NOTE(flaper87): Don't propagate all exceptions # but the ones allowed by the user. 
module = cls.__module__ if module not in CONF.allowed_rpc_exception_modules: cls = exception.RPCError val = encodeutils.exception_to_unicode( exception.RPCError(cls=cls, val=val)) cls_path = "%s.%s" % (cls.__module__, cls.__name__) result = {"_error": {"cls": cls_path, "val": val}} results.append(result) return results class RPCClient(client.BaseClient): def __init__(self, *args, **kwargs): self._serializer = RPCJSONSerializer() self._deserializer = RPCJSONDeserializer() self.raise_exc = kwargs.pop("raise_exc", True) self.base_path = kwargs.pop("base_path", '/rpc') super(RPCClient, self).__init__(*args, **kwargs) @client.handle_unauthenticated def bulk_request(self, commands): """ Execute multiple commands in a single request. :param commands: List of commands to send. Commands must respect the following form :: { 'command': 'method_name', 'kwargs': method_kwargs } """ body = self._serializer.to_json(commands) response = super(RPCClient, self).do_request('POST', self.base_path, body) return self._deserializer.from_json(response.read()) def do_request(self, method, **kwargs): """ Simple do_request override. This method serializes the outgoing body and builds the command that will be sent. :param method: The remote python method to call :param kwargs: Dynamic parameters that will be passed to the remote method. """ content = self.bulk_request([{'command': method, 'kwargs': kwargs}]) # NOTE(flaper87): Return the first result if # a single command was executed. content = content[0] # NOTE(flaper87): Check if content is an error # and re-raise it if raise_exc is True. Before # checking if content contains the '_error' key, # verify if it is an instance of dict - since the # RPC call may have returned something different. 
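The `Controller`/`RPCClient` pair above moves lists of `{'command': ..., 'kwargs': ...}` dicts over a single POST and returns results in order. The registration-and-dispatch core can be sketched standalone (a simplified model without the HTTP layer, validation, or error serialization):

```python
class MiniRPCController:
    """Minimal sketch of the RPC dispatcher: register the public,
    callable attributes of a resource, then execute a list of
    {'command': ..., 'kwargs': ...} dicts and collect the results."""

    def __init__(self):
        self._registered = {}

    def register(self, resource, excluded=None):
        excluded = excluded or []
        for name in dir(resource):
            # Names starting with '_' stay private, mirroring the
            # real controller's filtering.
            if name.startswith('_') or name in excluded:
                continue
            meth = getattr(resource, name)
            if callable(meth):
                self._registered[name] = meth

    def __call__(self, body):
        results = []
        for cmd in body:
            method = self._registered[cmd['command']]
            # 'kwargs' is optional in the command dict.
            results.append(method(**cmd.get('kwargs', {})))
        return results
```

A caller builds the same command list the real `RPCClient.bulk_request` would JSON-encode, and gets back one result per command.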
        if self.raise_exc and (isinstance(content, dict) and
                               '_error' in content):
            error = content['_error']
            try:
                exc_cls = imp.import_class(error['cls'])
                raise exc_cls(error['val'])
            except ImportError:
                # NOTE(flaper87): The exception class couldn't be
                # imported, using a generic exception.
                raise exception.RPCError(**error)
        return content

    def __getattr__(self, item):
        """
        This method returns a method_proxy that will execute the rpc
        call in the registry service.
        """
        if item.startswith('_'):
            raise AttributeError(item)

        def method_proxy(**kw):
            return self.do_request(item, **kw)

        return method_proxy

glance-16.0.0/glance/common/wsgi_app.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import glance_store
from oslo_config import cfg
from oslo_log import log as logging
import osprofiler.initializer

from glance.common import config
from glance import notifier

CONF = cfg.CONF
CONF.import_group("profiler", "glance.common.wsgi")
logging.register_options(CONF)

CONFIG_FILES = ['glance-api-paste.ini',
                'glance-image-import.conf',
                'glance-api.conf']


def _get_config_files(env=None):
    if env is None:
        env = os.environ
    dirname = env.get('OS_GLANCE_CONFIG_DIR', '/etc/glance').strip()
    config_files = []
    for config_file in CONFIG_FILES:
        cfg_file = os.path.join(dirname, config_file)
        # As 'glance-image-import.conf' is an optional conf file,
        # include it only if it exists.
            if config_file == 'glance-image-import.conf' and (
                    not os.path.exists(cfg_file)):
                continue
            config_files.append(cfg_file)
    return config_files


def _setup_os_profiler():
    notifier.set_defaults()
    if CONF.profiler.enabled:
        osprofiler.initializer.init_from_conf(conf=CONF,
                                              context={},
                                              project='glance',
                                              service='api',
                                              host=CONF.bind_host)


def init_app():
    config_files = _get_config_files()
    CONF([], project='glance', default_config_files=config_files)
    logging.setup(CONF, "glance")
    glance_store.register_opts(CONF)
    glance_store.create_stores(CONF)
    glance_store.verify_default_store()
    _setup_os_profiler()
    return config.load_paste_app('glance-api')
glance-16.0.0/glance/common/client.py0000666000175100017510000005550513245511421017436 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
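The discovery logic in `_get_config_files` above can be sketched with the filesystem check injected, which makes the optional-file behavior easy to see: `glance-image-import.conf` is included only when it actually exists, while the other files are listed unconditionally.

```python
import os

# Sketch of _get_config_files: build paths under OS_GLANCE_CONFIG_DIR
# (default /etc/glance), skipping the optional glance-image-import.conf
# when it is absent. The `exists` parameter is an injection point for
# testing, not part of the real function's signature.
CONFIG_FILES = ['glance-api-paste.ini', 'glance-image-import.conf',
                'glance-api.conf']


def get_config_files(env, exists=os.path.exists):
    dirname = env.get('OS_GLANCE_CONFIG_DIR', '/etc/glance').strip()
    files = []
    for name in CONFIG_FILES:
        path = os.path.join(dirname, name)
        if name == 'glance-image-import.conf' and not exists(path):
            continue  # optional file: only include it if present on disk
        files.append(path)
    return files


without_optional = get_config_files({'OS_GLANCE_CONFIG_DIR': '/opt/glance'},
                                    exists=lambda p: False)
with_optional = get_config_files({'OS_GLANCE_CONFIG_DIR': '/opt/glance'},
                                 exists=lambda p: True)
```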
# HTTPSClientAuthConnection code comes courtesy of ActiveState website: # http://code.activestate.com/recipes/ # 577548-https-httplib-client-connection-with-certificate-v/ import collections import copy import errno import functools import os import re try: from eventlet.green import socket from eventlet.green import ssl except ImportError: import socket import ssl import osprofiler.web try: import sendfile # noqa SENDFILE_SUPPORTED = True except ImportError: SENDFILE_SUPPORTED = False from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import netutils import six from six.moves import http_client # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range import six.moves.urllib.parse as urlparse from glance.common import auth from glance.common import exception from glance.common import utils from glance.i18n import _ LOG = logging.getLogger(__name__) # common chunk size for get and put CHUNKSIZE = 65536 VERSION_REGEX = re.compile(r"/?v[0-9\.]+") def handle_unauthenticated(func): """ Wrap a function to re-authenticate and retry. """ @functools.wraps(func) def wrapped(self, *args, **kwargs): try: return func(self, *args, **kwargs) except exception.NotAuthenticated: self._authenticate(force_reauth=True) return func(self, *args, **kwargs) return wrapped def handle_redirects(func): """ Wrap the _do_request function to handle HTTP redirects. 
""" MAX_REDIRECTS = 5 @functools.wraps(func) def wrapped(self, method, url, body, headers): for i in range(MAX_REDIRECTS): try: return func(self, method, url, body, headers) except exception.RedirectException as redirect: if redirect.url is None: raise exception.InvalidRedirect() url = redirect.url raise exception.MaxRedirectsExceeded(redirects=MAX_REDIRECTS) return wrapped class HTTPSClientAuthConnection(http_client.HTTPSConnection): """ Class to make a HTTPS connection, with support for full client-based SSL Authentication :see http://code.activestate.com/recipes/ 577548-https-httplib-client-connection-with-certificate-v/ """ def __init__(self, host, port, key_file, cert_file, ca_file, timeout=None, insecure=False): http_client.HTTPSConnection.__init__(self, host, port, key_file=key_file, cert_file=cert_file) self.key_file = key_file self.cert_file = cert_file self.ca_file = ca_file self.timeout = timeout self.insecure = insecure def connect(self): """ Connect to a host on a given (SSL) port. If ca_file is pointing somewhere, use it to check Server Certificate. Redefined/copied and extended from httplib.py:1105 (Python 2.6.x). This is needed to pass cert_reqs=ssl.CERT_REQUIRED as parameter to ssl.wrap_socket(), which forces SSL to check server certificate against our client certificate. 
""" sock = socket.create_connection((self.host, self.port), self.timeout) if self._tunnel_host: self.sock = sock self._tunnel() # Check CA file unless 'insecure' is specified if self.insecure is True: self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, cert_reqs=ssl.CERT_NONE) else: self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, ca_certs=self.ca_file, cert_reqs=ssl.CERT_REQUIRED) class BaseClient(object): """A base client class""" DEFAULT_PORT = 80 DEFAULT_DOC_ROOT = None # Standard CA file locations for Debian/Ubuntu, RedHat/Fedora, # Suse, FreeBSD/OpenBSD DEFAULT_CA_FILE_PATH = ('/etc/ssl/certs/ca-certificates.crt:' '/etc/pki/tls/certs/ca-bundle.crt:' '/etc/ssl/ca-bundle.pem:' '/etc/ssl/cert.pem') OK_RESPONSE_CODES = ( http_client.OK, http_client.CREATED, http_client.ACCEPTED, http_client.NO_CONTENT, ) REDIRECT_RESPONSE_CODES = ( http_client.MOVED_PERMANENTLY, http_client.FOUND, http_client.SEE_OTHER, http_client.USE_PROXY, http_client.TEMPORARY_REDIRECT, ) def __init__(self, host, port=None, timeout=None, use_ssl=False, auth_token=None, creds=None, doc_root=None, key_file=None, cert_file=None, ca_file=None, insecure=False, configure_via_auth=True): """ Creates a new client to some service. :param host: The host where service resides :param port: The port where service resides :param timeout: Connection timeout. :param use_ssl: Should we use HTTPS? :param auth_token: The auth token to pass to the server :param creds: The credentials to pass to the auth plugin :param doc_root: Prefix for all URLs we request from host :param key_file: Optional PEM-formatted file that contains the private key. If use_ssl is True, and this param is None (the default), then an environ variable GLANCE_CLIENT_KEY_FILE is looked for. If no such environ variable is found, ClientConnectionError will be raised. :param cert_file: Optional PEM-formatted certificate chain file. 
If use_ssl is True, and this param is None (the default), then an environ variable GLANCE_CLIENT_CERT_FILE is looked for. If no such environ variable is found, ClientConnectionError will be raised. :param ca_file: Optional CA cert file to use in SSL connections If use_ssl is True, and this param is None (the default), then an environ variable GLANCE_CLIENT_CA_FILE is looked for. :param insecure: Optional. If set then the server's certificate will not be verified. :param configure_via_auth: Optional. Defaults to True. If set, the URL returned from the service catalog for the image endpoint will **override** the URL supplied to in the host parameter. """ self.host = host self.port = port or self.DEFAULT_PORT self.timeout = timeout # A value of '0' implies never timeout if timeout == 0: self.timeout = None self.use_ssl = use_ssl self.auth_token = auth_token self.creds = creds or {} self.connection = None self.configure_via_auth = configure_via_auth # doc_root can be a nullstring, which is valid, and why we # cannot simply do doc_root or self.DEFAULT_DOC_ROOT below. 
self.doc_root = (doc_root if doc_root is not None else self.DEFAULT_DOC_ROOT) self.key_file = key_file self.cert_file = cert_file self.ca_file = ca_file self.insecure = insecure self.auth_plugin = self.make_auth_plugin(self.creds, self.insecure) self.connect_kwargs = self.get_connect_kwargs() def get_connect_kwargs(self): # Both secure and insecure connections have a timeout option connect_kwargs = {'timeout': self.timeout} if self.use_ssl: if self.key_file is None: self.key_file = os.environ.get('GLANCE_CLIENT_KEY_FILE') if self.cert_file is None: self.cert_file = os.environ.get('GLANCE_CLIENT_CERT_FILE') if self.ca_file is None: self.ca_file = os.environ.get('GLANCE_CLIENT_CA_FILE') # Check that key_file/cert_file are either both set or both unset if self.cert_file is not None and self.key_file is None: msg = _("You have selected to use SSL in connecting, " "and you have supplied a cert, " "however you have failed to supply either a " "key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable") raise exception.ClientConnectionError(msg) if self.key_file is not None and self.cert_file is None: msg = _("You have selected to use SSL in connecting, " "and you have supplied a key, " "however you have failed to supply either a " "cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable") raise exception.ClientConnectionError(msg) if (self.key_file is not None and not os.path.exists(self.key_file)): msg = _("The key file you specified %s does not " "exist") % self.key_file raise exception.ClientConnectionError(msg) connect_kwargs['key_file'] = self.key_file if (self.cert_file is not None and not os.path.exists(self.cert_file)): msg = _("The cert file you specified %s does not " "exist") % self.cert_file raise exception.ClientConnectionError(msg) connect_kwargs['cert_file'] = self.cert_file if (self.ca_file is not None and not os.path.exists(self.ca_file)): msg = _("The CA file you specified %s does not " "exist") % self.ca_file raise 
exception.ClientConnectionError(msg) if self.ca_file is None: for ca in self.DEFAULT_CA_FILE_PATH.split(":"): if os.path.exists(ca): self.ca_file = ca break connect_kwargs['ca_file'] = self.ca_file connect_kwargs['insecure'] = self.insecure return connect_kwargs def configure_from_url(self, url): """ Setups the connection based on the given url. The form is: ://:port/doc_root """ LOG.debug("Configuring from URL: %s", url) parsed = urlparse.urlparse(url) self.use_ssl = parsed.scheme == 'https' self.host = parsed.hostname self.port = parsed.port or 80 self.doc_root = parsed.path.rstrip('/') # We need to ensure a version identifier is appended to the doc_root if not VERSION_REGEX.match(self.doc_root): if self.DEFAULT_DOC_ROOT: doc_root = self.DEFAULT_DOC_ROOT.lstrip('/') self.doc_root += '/' + doc_root LOG.debug("Appending doc_root %(doc_root)s to URL %(url)s", {'doc_root': doc_root, 'url': url}) # ensure connection kwargs are re-evaluated after the service catalog # publicURL is parsed for potential SSL usage self.connect_kwargs = self.get_connect_kwargs() def make_auth_plugin(self, creds, insecure): """ Returns an instantiated authentication plugin. """ strategy = creds.get('strategy', 'noauth') plugin = auth.get_plugin_from_strategy(strategy, creds, insecure, self.configure_via_auth) return plugin def get_connection_type(self): """ Returns the proper connection type """ if self.use_ssl: return HTTPSClientAuthConnection else: return http_client.HTTPConnection def _authenticate(self, force_reauth=False): """ Use the authentication plugin to authenticate and set the auth token. :param force_reauth: For re-authentication to bypass cache. 
""" auth_plugin = self.auth_plugin if not auth_plugin.is_authenticated or force_reauth: auth_plugin.authenticate() self.auth_token = auth_plugin.auth_token management_url = auth_plugin.management_url if management_url and self.configure_via_auth: self.configure_from_url(management_url) @handle_unauthenticated def do_request(self, method, action, body=None, headers=None, params=None): """ Make a request, returning an HTTP response object. :param method: HTTP verb (GET, POST, PUT, etc.) :param action: Requested path to append to self.doc_root :param body: Data to send in the body of the request :param headers: Headers to send with the request :param params: Key/value pairs to use in query string :returns: HTTP response object """ if not self.auth_token: self._authenticate() url = self._construct_url(action, params) # NOTE(ameade): We need to copy these kwargs since they can be altered # in _do_request but we need the originals if handle_unauthenticated # calls this function again. return self._do_request(method=method, url=url, body=copy.deepcopy(body), headers=copy.deepcopy(headers)) def _construct_url(self, action, params=None): """ Create a URL object we can use to pass to _do_request(). """ action = urlparse.quote(action) path = '/'.join([self.doc_root or '', action.lstrip('/')]) scheme = "https" if self.use_ssl else "http" if netutils.is_valid_ipv6(self.host): netloc = "[%s]:%d" % (self.host, self.port) else: netloc = "%s:%d" % (self.host, self.port) if isinstance(params, dict): for (key, value) in list(params.items()): if value is None: del params[key] continue if not isinstance(value, six.string_types): value = str(value) params[key] = encodeutils.safe_encode(value) query = urlparse.urlencode(params) else: query = None url = urlparse.ParseResult(scheme, netloc, path, '', query, '') log_msg = _("Constructed URL: %s") LOG.debug(log_msg, url.geturl()) return url def _encode_headers(self, headers): """ Encodes headers. 
Note: This should be used right before sending anything out. :param headers: Headers to encode :returns: Dictionary with encoded headers' names and values """ if six.PY3: to_str = str else: to_str = encodeutils.safe_encode return {to_str(h): to_str(v) for h, v in six.iteritems(headers)} @handle_redirects def _do_request(self, method, url, body, headers): """ Connects to the server and issues a request. Handles converting any returned HTTP error status codes to OpenStack/Glance exceptions and closing the server connection. Returns the result data, or raises an appropriate exception. :param method: HTTP method ("GET", "POST", "PUT", etc...) :param url: urlparse.ParsedResult object with URL information :param body: data to send (as string, filelike or iterable), or None (default) :param headers: mapping of key/value pairs to add as headers :note If the body param has a read attribute, and method is either POST or PUT, this method will automatically conduct a chunked-transfer encoding and use the body as a file object or iterable, transferring chunks of data using the connection's send() method. This allows large objects to be transferred efficiently without buffering the entire body in memory. """ if url.query: path = url.path + "?" 
+ url.query else: path = url.path try: connection_type = self.get_connection_type() headers = self._encode_headers(headers or {}) headers.update(osprofiler.web.get_trace_id_headers()) if 'x-auth-token' not in headers and self.auth_token: headers['x-auth-token'] = self.auth_token c = connection_type(url.hostname, url.port, **self.connect_kwargs) def _pushing(method): return method.lower() in ('post', 'put') def _simple(body): return body is None or isinstance(body, bytes) def _filelike(body): return hasattr(body, 'read') def _sendbody(connection, iter): connection.endheaders() for sent in iter: # iterator has done the heavy lifting pass def _chunkbody(connection, iter): connection.putheader('Transfer-Encoding', 'chunked') connection.endheaders() for chunk in iter: connection.send('%x\r\n%s\r\n' % (len(chunk), chunk)) connection.send('0\r\n\r\n') # Do a simple request or a chunked request, depending # on whether the body param is file-like or iterable and # the method is PUT or POST # if not _pushing(method) or _simple(body): # Simple request... c.request(method, path, body, headers) elif _filelike(body) or self._iterable(body): c.putrequest(method, path) use_sendfile = self._sendable(body) # According to HTTP/1.1, Content-Length and Transfer-Encoding # conflict. 
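The wire format that `_chunkbody` emits — and the reason the Content-Length header must be dropped when it is used — can be sketched independently of the connection object: each chunk is framed as a hexadecimal length, CRLF, the data, CRLF, with a zero-length chunk terminating the stream.

```python
# Sketch of HTTP/1.1 chunked transfer-encoding framing, matching the
# '%x\r\n%s\r\n' ... '0\r\n\r\n' sequence sent by _chunkbody above.
def chunk_encode(chunks):
    out = []
    for chunk in chunks:
        out.append('%x\r\n%s\r\n' % (len(chunk), chunk))
    out.append('0\r\n\r\n')  # zero-length chunk ends the body
    return ''.join(out)


wire = chunk_encode(['hello', ' world'])
```

Because the total length is only discoverable by walking the chunks, a Content-Length header would conflict with this framing, which is exactly why the header loop below filters it out when streaming.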
for header, value in headers.items(): if use_sendfile or header.lower() != 'content-length': c.putheader(header, str(value)) iter = utils.chunkreadable(body) if use_sendfile: # send actual file without copying into userspace _sendbody(c, iter) else: # otherwise iterate and chunk _chunkbody(c, iter) else: raise TypeError('Unsupported image type: %s' % body.__class__) res = c.getresponse() def _retry(res): return res.getheader('Retry-After') def read_body(res): body = res.read() if six.PY3: body = body.decode('utf-8') return body status_code = self.get_status_code(res) if status_code in self.OK_RESPONSE_CODES: return res elif status_code in self.REDIRECT_RESPONSE_CODES: raise exception.RedirectException(res.getheader('Location')) elif status_code == http_client.UNAUTHORIZED: raise exception.NotAuthenticated(read_body(res)) elif status_code == http_client.FORBIDDEN: raise exception.Forbidden(read_body(res)) elif status_code == http_client.NOT_FOUND: raise exception.NotFound(read_body(res)) elif status_code == http_client.CONFLICT: raise exception.Duplicate(read_body(res)) elif status_code == http_client.BAD_REQUEST: raise exception.Invalid(read_body(res)) elif status_code == http_client.MULTIPLE_CHOICES: raise exception.MultipleChoices(body=read_body(res)) elif status_code == http_client.REQUEST_ENTITY_TOO_LARGE: raise exception.LimitExceeded(retry=_retry(res), body=read_body(res)) elif status_code == http_client.INTERNAL_SERVER_ERROR: raise exception.ServerError() elif status_code == http_client.SERVICE_UNAVAILABLE: raise exception.ServiceUnavailable(retry=_retry(res)) else: raise exception.UnexpectedStatus(status=status_code, body=read_body(res)) except (socket.error, IOError) as e: raise exception.ClientConnectionError(e) def _seekable(self, body): # pipes are not seekable, avoids sendfile() failure on e.g. # cat /path/to/image | glance add ... 
# or where add command is launched via popen try: os.lseek(body.fileno(), 0, os.SEEK_CUR) return True except OSError as e: return (e.errno != errno.ESPIPE) def _sendable(self, body): return (SENDFILE_SUPPORTED and hasattr(body, 'fileno') and self._seekable(body) and not self.use_ssl) def _iterable(self, body): return isinstance(body, collections.Iterable) def get_status_code(self, response): """ Returns the integer status code from the response, which can be either a Webob.Response (used in testing) or httplib.Response """ if hasattr(response, 'status_int'): return response.status_int else: return response.status def _extract_params(self, actual_params, allowed_params): """ Extract a subset of keys from a dictionary. The filters key will also be extracted, and each of its values will be returned as an individual param. :param actual_params: dict of keys to filter :param allowed_params: list of keys that 'actual_params' will be reduced to :returns: subset of 'params' dict """ try: # expect 'filters' param to be a dict here result = dict(actual_params.get('filters')) except TypeError: result = {} for allowed_param in allowed_params: if allowed_param in actual_params: result[allowed_param] = actual_params[allowed_param] return result glance-16.0.0/glance/common/wsme_utils.py0000666000175100017510000000442413245511421020345 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
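The `_seekable` check above relies on a POSIX detail: `lseek()` on a pipe fails with `ESPIPE`, so probing the current offset cheaply distinguishes a regular file (where `sendfile()` is safe) from piped input such as `cat image | glance add`. A minimal sketch, exercised against a real temporary file and a real pipe:

```python
import errno
import os
import tempfile

# Sketch of _seekable: a zero-byte relative seek succeeds on regular
# files and raises OSError(ESPIPE) on pipes and sockets.
def seekable(body):
    try:
        os.lseek(body.fileno(), 0, os.SEEK_CUR)
        return True
    except OSError as e:
        return e.errno != errno.ESPIPE


with tempfile.TemporaryFile() as f:
    regular = seekable(f)          # regular file: seek succeeds

r, w = os.pipe()
pipe_end = os.fdopen(r, 'rb')
piped = seekable(pipe_end)         # pipe: lseek raises ESPIPE
pipe_end.close()
os.close(w)
```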
from datetime import datetime from wsme import types as wsme_types from glance.common import timeutils class WSMEModelTransformer(object): def to_dict(self): # Return the wsme_attributes names:values as a dict my_dict = {} for attribute in self._wsme_attributes: value = getattr(self, attribute.name) if value is not wsme_types.Unset: my_dict.update({attribute.name: value}) return my_dict @classmethod def to_wsme_model(model, db_entity, self_link=None, schema=None): # Return the wsme_attributes names:values as a dict names = [] for attribute in model._wsme_attributes: names.append(attribute.name) values = {} for name in names: value = getattr(db_entity, name, None) if value is not None: if type(value) == datetime: iso_datetime_value = timeutils.isotime(value) values.update({name: iso_datetime_value}) else: values.update({name: value}) if schema: values['schema'] = schema model_object = model(**values) # 'self' kwarg is used in wsme.types.Base.__init__(self, ..) and # conflicts during initialization. self_link is a proxy field to self. if self_link: model_object.self = self_link return model_object @classmethod def get_mandatory_attrs(cls): return [attr.name for attr in cls._wsme_attributes if attr.mandatory] def _get_value(obj): if obj is not wsme_types.Unset: return obj else: return None glance-16.0.0/glance/common/config.py0000666000175100017510000007342713245511421017430 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Routines for configuring Glance """ import logging import os from oslo_config import cfg from oslo_middleware import cors from oslo_policy import policy from paste import deploy from glance.i18n import _ from glance.version import version_info as version paste_deploy_opts = [ cfg.StrOpt('flavor', sample_default='keystone', help=_(""" Deployment flavor to use in the server application pipeline. Provide a string value representing the appropriate deployment flavor used in the server application pipleline. This is typically the partial name of a pipeline in the paste configuration file with the service name removed. For example, if your paste section name in the paste configuration file is [pipeline:glance-api-keystone], set ``flavor`` to ``keystone``. Possible values: * String value representing a partial pipeline name. Related Options: * config_file """)), cfg.StrOpt('config_file', sample_default='glance-api-paste.ini', help=_(""" Name of the paste configuration file. Provide a string value representing the name of the paste configuration file to use for configuring piplelines for server application deployments. NOTES: * Provide the name or the path relative to the glance directory for the paste configuration file and not the absolute path. * The sample paste configuration file shipped with Glance need not be edited in most cases as it comes with ready-made pipelines for all common deployment flavors. If no value is specified for this option, the ``paste.ini`` file with the prefix of the corresponding Glance service's configuration file name will be searched for in the known configuration directories. (For example, if this option is missing from or has no value set in ``glance-api.conf``, the service will look for a file named ``glance-api-paste.ini``.) If the paste configuration file is not found, the service will not start. 
Possible values: * A string value representing the name of the paste configuration file. Related Options: * flavor """)), ] image_format_opts = [ cfg.ListOpt('container_formats', default=['ami', 'ari', 'aki', 'bare', 'ovf', 'ova', 'docker'], help=_("Supported values for the 'container_format' " "image attribute"), deprecated_opts=[cfg.DeprecatedOpt('container_formats', group='DEFAULT')]), cfg.ListOpt('disk_formats', default=['ami', 'ari', 'aki', 'vhd', 'vhdx', 'vmdk', 'raw', 'qcow2', 'vdi', 'iso', 'ploop'], help=_("Supported values for the 'disk_format' " "image attribute"), deprecated_opts=[cfg.DeprecatedOpt('disk_formats', group='DEFAULT')]), ] task_opts = [ cfg.IntOpt('task_time_to_live', default=48, help=_("Time in hours for which a task lives after, either " "succeeding or failing"), deprecated_opts=[cfg.DeprecatedOpt('task_time_to_live', group='DEFAULT')]), cfg.StrOpt('task_executor', default='taskflow', help=_(""" Task executor to be used to run task scripts. Provide a string value representing the executor to use for task executions. By default, ``TaskFlow`` executor is used. ``TaskFlow`` helps make task executions easy, consistent, scalable and reliable. It also enables creation of lightweight task objects and/or functions that are combined together into flows in a declarative manner. Possible values: * taskflow Related Options: * None """)), cfg.StrOpt('work_dir', sample_default='/work_dir', help=_(""" Absolute path to the work directory to use for asynchronous task operations. The directory set here will be used to operate over images - normally before they are imported in the destination store. NOTE: When providing a value for ``work_dir``, please make sure that enough space is provided for concurrent tasks to run efficiently without running out of space. A rough estimation can be done by multiplying the number of ``max_workers`` with an average image size (e.g 500MB). 
The image size estimation should be done based on the average size in your deployment. Note that depending on the tasks running you may need to multiply this number by some factor depending on what the task does. For example, you may want to double the available size if image conversion is enabled. All this being said, remember these are just estimations and you should do them based on the worst case scenario and be prepared to act in case they were wrong. Possible values: * String value representing the absolute path to the working directory Related Options: * None """)), ] _DEPRECATE_GLANCE_V1_MSG = _('The Images (Glance) version 1 API has been ' 'DEPRECATED in the Newton release and will be ' 'removed on or after Pike release, following ' 'the standard OpenStack deprecation policy. ' 'Hence, the configuration options specific to ' 'the Images (Glance) v1 API are hereby ' 'deprecated and subject to removal. Operators ' 'are advised to deploy the Images (Glance) v2 ' 'API.') common_opts = [ cfg.BoolOpt('allow_additional_image_properties', default=True, help=_(""" Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as ``base properties``. In addition to these properties, Glance allows users to add custom properties to images. These are known as ``additional properties``. By default, this configuration option is set to ``True`` and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via ``image_property_quota`` configuration option. Possible values: * True * False Related options: * image_property_quota """)), cfg.IntOpt('image_member_quota', default=128, help=_(""" Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. 
Related options: * None """)), cfg.IntOpt('image_property_quota', default=128, help=_(""" Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. NOTE: This won't have any impact if additional properties are disabled. Please refer to ``allow_additional_image_properties``. Related options: * ``allow_additional_image_properties`` """)), cfg.IntOpt('image_tag_quota', default=128, help=_(""" Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: * None """)), cfg.IntOpt('image_location_quota', default=10, help=_(""" Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: * None """)), # TODO(abashmak): Add choices parameter to this option: # choices('glance.db.sqlalchemy.api', # 'glance.db.registry.api', # 'glance.db.simple.api') # This will require a fix to the functional tests which # set this option to a test version of the registry api module: # (glance.tests.functional.v2.registry_data_api), in order to # bypass keystone authentication for the Registry service. # All such tests are contained in: # glance/tests/functional/v2/test_images.py cfg.StrOpt('data_api', default='glance.db.sqlalchemy.api', deprecated_for_removal=True, deprecated_since="Queens", deprecated_reason=_(""" Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html """), help=_(""" Python module path of data access API. Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed. 
Possible values: * glance.db.sqlalchemy.api * glance.db.registry.api * glance.db.simple.api If this option is set to ``glance.db.sqlalchemy.api`` then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs. Setting this option to ``glance.db.registry.api`` will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability. NOTE: In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option ``enable_v2_registry`` must be set to ``True``. Finally, when this configuration option is set to ``glance.db.simple.api``, image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing. Related options: * enable_v2_api * enable_v2_registry """)), cfg.IntOpt('limit_param_default', default=25, min=1, help=_(""" The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the ``limit`` parameter in the API request. However, if a ``limit`` parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: * The value of this configuration option may not be greater than the value specified by ``api_limit_max``. * Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: * Any positive integer Related options: * api_limit_max """)), cfg.IntOpt('api_limit_max', default=1000, min=1, help=_(""" Maximum number of results that could be returned by a request. As described in the help text of ``limit_param_default``, some requests may return multiple results. 
The number of results to be returned are governed either by the ``limit`` parameter in the request or the ``limit_param_default`` configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. NOTE: Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: * Any positive integer Related options: * limit_param_default """)), cfg.BoolOpt('show_image_direct_url', default=False, help=_(""" Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property ``direct_url``. When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option ``location_strategy``. NOTES: * Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to ``False`` by default. Set this to ``True`` with EXTREME CAUTION and ONLY IF you know what you are doing! * If an operator wishes to avoid showing any image location(s) to the user, then both this option and ``show_multiple_locations`` MUST be set to ``False``. Possible values: * True * False Related options: * show_multiple_locations * location_strategy """)), # NOTE(flaper87): The policy.json file should be updated and the locaiton # related rules set to admin only once this option is finally removed. 
cfg.BoolOpt('show_multiple_locations', default=False, deprecated_for_removal=True, deprecated_reason=_('This option will be removed in the Pike ' 'release or later because the same ' 'functionality can be achieved with ' 'greater granularity by using policies. ' 'Please see the Newton ' 'release notes for more information.'), deprecated_since='Newton', help=_(""" Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt ``location_strategy``. The image locations are shown under the image property ``locations``. NOTES: * Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to ``False`` by default. Set this to ``True`` with EXTREME CAUTION and ONLY IF you know what you are doing! * If an operator wishes to avoid showing any image location(s) to the user, then both this option and ``show_image_direct_url`` MUST be set to ``False``. Possible values: * True * False Related options: * show_image_direct_url * location_strategy """)), cfg.IntOpt('image_size_cap', default=1099511627776, min=1, max=9223372036854775808, help=_(""" Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: * This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). * This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. 
Hence, this must be set according to the nature of images created and storage capacity available. Possible values: * Any positive number less than or equal to 9223372036854775808 """)), cfg.StrOpt('user_storage_quota', default='0', help=_(""" Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``, ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value ``0`` signifies no quota enforcement. Negative values are invalid and result in errors. Possible values: * A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: * None """)), # NOTE(nikhil): Even though deprecated, the configuration option # ``enable_v1_api`` is set to True by default on purpose. Having it enabled # helps the projects that haven't been able to fully move to v2 yet by # keeping the devstack setup to use glance v1 as well. We need to switch it # to False by default soon after Newton is cut so that we can identify the # projects that haven't moved to v2 yet and start having some interesting # conversations with them. Switching to False in Newton may result into # destabilizing the gate and affect the release. cfg.BoolOpt('enable_v1_api', default=True, deprecated_reason=_DEPRECATE_GLANCE_V1_MSG, deprecated_since='Newton', help=_(""" Deploy the v1 OpenStack Images API. When this option is set to ``True``, Glance service will respond to requests on registered endpoints conforming to the v1 OpenStack Images API. 
NOTES: * If this option is enabled, then ``enable_v1_registry`` must also be set to ``True`` to enable mandatory usage of Registry service with v1 API. * If this option is disabled, then the ``enable_v1_registry`` option, which is enabled by default, is also recommended to be disabled. * This option is separate from ``enable_v2_api``, both v1 and v2 OpenStack Images API can be deployed independent of each other. * If deploying only the v2 Images API, this option, which is enabled by default, should be disabled. Possible values: * True * False Related options: * enable_v1_registry * enable_v2_api """)), cfg.BoolOpt('enable_v2_api', default=True, deprecated_reason=_('The Images (Glance) version 1 API has ' 'been DEPRECATED in the Newton release. ' 'It will be removed on or after Pike ' 'release, following the standard ' 'OpenStack deprecation policy. Once we ' 'remove the Images (Glance) v1 API, only ' 'the Images (Glance) v2 API can be ' 'deployed and will be enabled by default ' 'making this option redundant.'), deprecated_since='Newton', help=_(""" Deploy the v2 OpenStack Images API. When this option is set to ``True``, Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API. NOTES: * If this option is disabled, then the ``enable_v2_registry`` option, which is enabled by default, is also recommended to be disabled. * This option is separate from ``enable_v1_api``, both v1 and v2 OpenStack Images API can be deployed independent of each other. * If deploying only the v1 Images API, this option, which is enabled by default, should be disabled. Possible values: * True * False Related options: * enable_v2_registry * enable_v1_api """)), cfg.BoolOpt('enable_v1_registry', default=True, deprecated_reason=_DEPRECATE_GLANCE_V1_MSG, deprecated_since='Newton', help=_(""" Deploy the v1 API Registry service. When this option is set to ``True``, the Registry service will be enabled in Glance for v1 API requests. 
NOTES: * Use of Registry is mandatory in v1 API, so this option must be set to ``True`` if the ``enable_v1_api`` option is enabled. * If deploying only the v2 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: * True * False Related options: * enable_v1_api """)), cfg.BoolOpt('enable_v2_registry', default=True, deprecated_for_removal=True, deprecated_since="Queens", deprecated_reason=_(""" Glance registry service is deprecated for removal. More information can be found from the spec: http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html """), help=_(""" Deploy the v2 API Registry service. When this option is set to ``True``, the Registry service will be enabled in Glance for v2 API requests. NOTES: * Use of Registry is optional in v2 API, so this option must only be enabled if both ``enable_v2_api`` is set to ``True`` and the ``data_api`` option is set to ``glance.db.registry.api``. * If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: * True * False Related options: * enable_v2_api * data_api """)), cfg.HostAddressOpt('pydev_worker_debug_host', sample_default='localhost', help=_(""" Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: * Valid hostname * Valid IP address Related options: * None """)), cfg.PortOpt('pydev_worker_debug_port', default=5678, help=_(""" Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. 
Possible values:
    * A valid port number

Related options:
    * None

""")),
    cfg.StrOpt('metadata_encryption_key',
               secret=True,
               help=_("""
AES key for encrypting store location metadata.

Provide a string value representing the AES cipher to use for
encrypting Glance store metadata.

NOTE: The AES key to use must be set to a random string of length
16, 24 or 32 bytes.

Possible values:
    * String value representing a valid AES key

Related options:
    * None

""")),
    cfg.StrOpt('digest_algorithm',
               default='sha256',
               help=_("""
Digest algorithm to use for digital signature.

Provide a string value representing the digest algorithm to use for
generating digital signatures. By default, ``sha256`` is used.

To get a list of the available algorithms supported by the version of
OpenSSL on your platform, run the command:
``openssl list-message-digest-algorithms``. Examples are 'sha1', 'sha256',
and 'sha512'.

NOTE: ``digest_algorithm`` is not related to Glance's image signing and
verification. It is only used to sign the universally unique identifier
(UUID) as a part of the certificate file and key file validation.

Possible values:
    * An OpenSSL message digest algorithm identifier

Related options:
    * None

""")),
    cfg.StrOpt('node_staging_uri',
               default='file:///tmp/staging/',
               help=_("""
The URL that provides the location where temporary data will be stored.

This option is for Glance internal use only. Glance will save the image
data uploaded by the user to the 'staging' endpoint during the image import
process. This option does not change the 'staging' API endpoint by any
means.

NOTE: It is discouraged to use the same path as [task]/work_dir

NOTE: 'file://' is the only option the api_image_import flow will support
for now.

NOTE: The staging path must be on a shared filesystem available to all
Glance API nodes.

Possible values:
    * String starting with 'file://' followed by absolute FS path

Related options:
    * [task]/work_dir
    * [DEFAULT]/enable_image_import (*deprecated*)

""")),
    cfg.BoolOpt('enable_image_import',
                default=True,
                deprecated_for_removal=True,
                deprecated_reason=_("""
This option is deprecated for removal in Rocky.

It was introduced to make sure that the API is not enabled before the
'[DEFAULT]/node_staging_uri' is defined, and it is redundant in the long
term."""),
                deprecated_since='Pike',
                help=_("""
Enables the Image Import workflow introduced in Pike.

As '[DEFAULT]/node_staging_uri' is required for the Image Import, this
option is disabled by default in Pike, enabled by default in Queens, and
will be removed in Rocky. This allows Glance to operate with
previous-version configs upon upgrade.

Setting this option to False will disable the endpoints related to the
Image Import Refactoring work.

Related options:
    * [DEFAULT]/node_staging_uri""")),
    cfg.ListOpt('enabled_import_methods',
                item_type=cfg.types.String(quotes=True),
                bounds=True,
                default=['glance-direct', 'web-download'],
                help=_("""
List of enabled Image Import Methods

Both 'glance-direct' and 'web-download' are enabled by default.
Related options: * [DEFAULT]/node_staging_uri * [DEFAULT]/enable_image_import""")), ] CONF = cfg.CONF CONF.register_opts(paste_deploy_opts, group='paste_deploy') CONF.register_opts(image_format_opts, group='image_format') CONF.register_opts(task_opts, group='task') CONF.register_opts(common_opts) policy.Enforcer(CONF) def parse_args(args=None, usage=None, default_config_files=None): CONF(args=args, project='glance', version=version.cached_version_string(), usage=usage, default_config_files=default_config_files) def parse_cache_args(args=None): config_files = cfg.find_config_files(project='glance', prog='glance-cache') parse_args(args=args, default_config_files=config_files) def _get_deployment_flavor(flavor=None): """ Retrieve the paste_deploy.flavor config item, formatted appropriately for appending to the application name. :param flavor: if specified, use this setting rather than the paste_deploy.flavor configuration setting """ if not flavor: flavor = CONF.paste_deploy.flavor return '' if not flavor else ('-' + flavor) def _get_paste_config_path(): paste_suffix = '-paste.ini' conf_suffix = '.conf' if CONF.config_file: # Assume paste config is in a paste.ini file corresponding # to the last config file path = CONF.config_file[-1].replace(conf_suffix, paste_suffix) else: path = CONF.prog + paste_suffix return CONF.find_file(os.path.basename(path)) def _get_deployment_config_file(): """ Retrieve the deployment_config_file config item, formatted as an absolute pathname. """ path = CONF.paste_deploy.config_file if not path: path = _get_paste_config_path() if not path or not (os.path.isfile(os.path.abspath(path))): msg = _("Unable to locate paste config file for %s.") % CONF.prog raise RuntimeError(msg) return os.path.abspath(path) def load_paste_app(app_name, flavor=None, conf_file=None): """ Builds and returns a WSGI app from a paste config file. 
We assume the last config file specified in the supplied ConfigOpts object is the paste config file, if conf_file is None. :param app_name: name of the application to load :param flavor: name of the variant of the application to load :param conf_file: path to the paste config file :raises RuntimeError: when config file cannot be located or application cannot be loaded from config file """ # append the deployment flavor to the application name, # in order to identify the appropriate paste pipeline app_name += _get_deployment_flavor(flavor) if not conf_file: conf_file = _get_deployment_config_file() try: logger = logging.getLogger(__name__) logger.debug("Loading %(app_name)s from %(conf_file)s", {'conf_file': conf_file, 'app_name': app_name}) app = deploy.loadapp("config:%s" % conf_file, name=app_name) # Log the options used when starting if we're in debug mode... if CONF.debug: CONF.log_opt_values(logger, logging.DEBUG) return app except (LookupError, ImportError) as e: msg = (_("Unable to load %(app_name)s from " "configuration file %(conf_file)s." "\nGot: %(e)r") % {'app_name': app_name, 'conf_file': conf_file, 'e': e}) logger.error(msg) raise RuntimeError(msg) def set_config_defaults(): """This method updates all configuration default values.""" set_cors_middleware_defaults() def set_cors_middleware_defaults(): """Update default configuration options for oslo.middleware.""" cors.set_defaults( allow_headers=['Content-MD5', 'X-Image-Meta-Checksum', 'X-Storage-Token', 'Accept-Encoding', 'X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'], expose_headers=['X-Image-Meta-Checksum', 'X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] ) glance-16.0.0/glance/common/store_utils.py0000666000175100017510000001132413245511421020523 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. 
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

import glance_store as store_api
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
import six.moves.urllib.parse as urlparse

import glance.db as db_api
from glance.i18n import _LE, _LW
from glance import scrubber

LOG = logging.getLogger(__name__)

CONF = cfg.CONF

RESTRICTED_URI_SCHEMAS = frozenset(['file', 'filesystem', 'swift+config'])


def safe_delete_from_backend(context, image_id, location):
    """
    Given a location, delete an image from the store and update the
    location status in the db.

    This function tries to handle all known exceptions which might be
    raised by the calls it makes on the store and DB modules.

    :param context: The request context
    :param image_id: The image identifier
    :param location: The image location entry
    """
    try:
        ret = store_api.delete_from_backend(location['url'], context=context)
        location['status'] = 'deleted'
        if 'id' in location:
            db_api.get_api().image_location_delete(context, image_id,
                                                   location['id'], 'deleted')
        return ret
    except store_api.NotFound:
        msg = _LW('Failed to delete image %s in store from URI') % image_id
        LOG.warn(msg)
    except store_api.StoreDeleteNotSupported as e:
        LOG.warn(encodeutils.exception_to_unicode(e))
    except store_api.UnsupportedBackend:
        exc_type = sys.exc_info()[0].__name__
        msg = (_LE('Failed to delete image %(image_id)s from store: %(exc)s')
               % dict(image_id=image_id, exc=exc_type))
        LOG.error(msg)


def schedule_delayed_delete_from_backend(context, image_id, location):
    """
    Given a location, schedule the deletion of an image location and
    update the location status in the db.

    :param context: The request context
    :param image_id: The image identifier
    :param location: The image location entry
    """
    db_queue = scrubber.get_scrub_queue()
    if not CONF.use_user_token:
        context = None
    ret = db_queue.add_location(image_id, location)
    if ret:
        location['status'] = 'pending_delete'
        if 'id' in location:
            # NOTE(zhiyan): A newly added image location entry will have no
            # 'id' field since it has not been saved to the DB.
            db_api.get_api().image_location_delete(context, image_id,
                                                   location['id'],
                                                   'pending_delete')
        else:
            db_api.get_api().image_location_add(context, image_id, location)
    return ret


def delete_image_location_from_backend(context, image_id, location):
    """
    Given a location, immediately delete, or schedule the deletion of, an
    image location, and update the location status in the db.

    :param context: The request context
    :param image_id: The image identifier
    :param location: The image location entry
    """
    deleted = False
    if CONF.delayed_delete:
        deleted = schedule_delayed_delete_from_backend(context, image_id,
                                                       location)
    if not deleted:
        # NOTE(zhiyan) If image metadata has not been saved to DB
        # such as uploading process failure then we can't use
        # location status mechanism to support image pending delete.
        safe_delete_from_backend(context, image_id, location)


def validate_external_location(uri):
    """
    Validate whether a URI of an external location is supported.

    Only non-local store types are OK, i.e. Swift, HTTP. Note the absence
    of 'file://' for security reasons, see LP bugs #942118 and #1400966.
    'swift+config://' is also absent for security reasons, see LP bug
    #1334196.

    :param uri: The URI of the external image location.
    :returns: Whether the given URI of the external image location is OK.
    """
    if not uri:
        return False

    # TODO(zhiyan): This function could be moved to glance_store.
    # TODO(gm): Use a whitelist of allowed schemes
    scheme = urlparse.urlparse(uri).scheme
    return (scheme in store_api.get_known_schemes() and
            scheme not in RESTRICTED_URI_SCHEMAS)
glance-16.0.0/glance/common/property_utils.py
# Copyright 2013 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
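As an aside, the scheme check performed by ``validate_external_location()`` in store_utils.py above can be exercised in isolation. In this sketch the set of known schemes is a hypothetical stand-in for whatever ``glance_store.get_known_schemes()`` would return in a given deployment:

```python
from urllib.parse import urlparse

# Hypothetical stand-in for glance_store.get_known_schemes(); the real
# list depends on which store drivers are configured.
KNOWN_SCHEMES = {'http', 'https', 'swift', 'file', 'swift+config'}

# Mirrors the module-level constant in store_utils.py.
RESTRICTED_URI_SCHEMAS = frozenset(['file', 'filesystem', 'swift+config'])


def is_valid_external_location(uri):
    """Sketch of validate_external_location(): non-local schemes only."""
    if not uri:
        return False
    scheme = urlparse(uri).scheme
    return scheme in KNOWN_SCHEMES and scheme not in RESTRICTED_URI_SCHEMAS

print(is_valid_external_location('http://example.com/cirros.qcow2'))  # True
print(is_valid_external_location('file:///etc/passwd'))               # False
print(is_valid_external_location(''))                                 # False
```

Even though ``file`` and ``swift+config`` may be known schemes, they are rejected here for the security reasons (LP bugs #942118, #1400966, #1334196) cited in the docstring.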
import re import sys from oslo_config import cfg from oslo_log import log as logging from oslo_policy import policy from six.moves import configparser import glance.api.policy from glance.common import exception from glance.i18n import _, _LE, _LW # SafeConfigParser was deprecated in Python 3.2 if sys.version_info >= (3, 2): CONFIG = configparser.ConfigParser() else: CONFIG = configparser.SafeConfigParser() LOG = logging.getLogger(__name__) property_opts = [ cfg.StrOpt('property_protection_file', help=_(""" The location of the property protection file. Provide a valid path to the property protection file which contains the rules for property protections and the roles/policies associated with them. A property protection file, when set, restricts the Glance image properties to be created, read, updated and/or deleted by a specific set of users that are identified by either roles or policies. If this configuration option is not set, by default, property protections won't be enforced. If a value is specified and the file is not found, the glance-api service will fail to start. More information on property protections can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html Possible values: * Empty string * Valid path to the property protection configuration file Related options: * property_protection_rule_format """)), cfg.StrOpt('property_protection_rule_format', default='roles', choices=('roles', 'policies'), help=_(""" Rule format for property protection. Provide the desired way to set property protection on Glance image properties. The two permissible values are ``roles`` and ``policies``. The default value is ``roles``. If the value is ``roles``, the property protection file must contain a comma separated list of user roles indicating permissions for each of the CRUD operations on each property being protected. 
If set to ``policies``, a policy defined in policy.json is used to express property protections for each of the CRUD operations. Examples of how property protections are enforced based on ``roles`` or ``policies`` can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html#examples Possible values: * roles * policies Related options: * property_protection_file """)), ] CONF = cfg.CONF CONF.register_opts(property_opts) # NOTE (spredzy): Due to the particularly lengthy name of the exception # and the number of occurrence it is raise in this file, a variable is # created InvalidPropProtectConf = exception.InvalidPropertyProtectionConfiguration def is_property_protection_enabled(): if CONF.property_protection_file: return True return False class PropertyRules(object): def __init__(self, policy_enforcer=None): self.rules = [] self.prop_exp_mapping = {} self.policies = [] self.policy_enforcer = policy_enforcer or glance.api.policy.Enforcer() self.prop_prot_rule_format = CONF.property_protection_rule_format self.prop_prot_rule_format = self.prop_prot_rule_format.lower() self._load_rules() def _load_rules(self): try: conf_file = CONF.find_file(CONF.property_protection_file) CONFIG.read(conf_file) except Exception as e: msg = (_LE("Couldn't find property protection file %(file)s: " "%(error)s.") % {'file': CONF.property_protection_file, 'error': e}) LOG.error(msg) raise InvalidPropProtectConf() if self.prop_prot_rule_format not in ['policies', 'roles']: msg = _LE("Invalid value '%s' for " "'property_protection_rule_format'. 
" "The permitted values are " "'roles' and 'policies'") % self.prop_prot_rule_format LOG.error(msg) raise InvalidPropProtectConf() operations = ['create', 'read', 'update', 'delete'] properties = CONFIG.sections() for property_exp in properties: property_dict = {} compiled_rule = self._compile_rule(property_exp) for operation in operations: permissions = CONFIG.get(property_exp, operation) if permissions: if self.prop_prot_rule_format == 'policies': if ',' in permissions: LOG.error( _LE("Multiple policies '%s' not allowed " "for a given operation. Policies can be " "combined in the policy file"), permissions) raise InvalidPropProtectConf() self.prop_exp_mapping[compiled_rule] = property_exp self._add_policy_rules(property_exp, operation, permissions) permissions = [permissions] else: permissions = [permission.strip() for permission in permissions.split(',')] if '@' in permissions and '!' in permissions: msg = (_LE( "Malformed property protection rule in " "[%(prop)s] %(op)s=%(perm)s: '@' and '!' " "are mutually exclusive") % dict(prop=property_exp, op=operation, perm=permissions)) LOG.error(msg) raise InvalidPropProtectConf() property_dict[operation] = permissions else: property_dict[operation] = [] LOG.warn( _LW('Property protection on operation %(operation)s' ' for rule %(rule)s is not found. No role will be' ' allowed to perform this operation.') % {'operation': operation, 'rule': property_exp}) self.rules.append((compiled_rule, property_dict)) def _compile_rule(self, rule): try: return re.compile(rule) except Exception as e: msg = (_LE("Encountered a malformed property protection rule" " %(rule)s: %(error)s.") % {'rule': rule, 'error': e}) LOG.error(msg) raise InvalidPropProtectConf() def _add_policy_rules(self, property_exp, action, rule): """Add policy rules to the policy enforcer. 
For example, if the file listed as property_protection_file has: [prop_a] create = glance_creator then the corresponding policy rule would be: "prop_a:create": "rule:glance_creator" where glance_creator is defined in policy.json. For example: "glance_creator": "role:admin or role:glance_create_user" """ rule = "rule:%s" % rule rule_name = "%s:%s" % (property_exp, action) rule_dict = policy.Rules.from_dict({ rule_name: rule }) self.policy_enforcer.add_rules(rule_dict) def _check_policy(self, property_exp, action, context): try: action = ":".join([property_exp, action]) self.policy_enforcer.enforce(context, action, {}) except exception.Forbidden: return False return True def check_property_rules(self, property_name, action, context): roles = context.roles # Include service roles to check if an action can be # performed on the property or not if context.service_roles: roles.extend(context.service_roles) if not self.rules: return True if action not in ['create', 'read', 'update', 'delete']: return False for rule_exp, rule in self.rules: if rule_exp.search(str(property_name)): break else: # no matching rules return False rule_roles = rule.get(action) if rule_roles: if '!' in rule_roles: return False elif '@' in rule_roles: return True if self.prop_prot_rule_format == 'policies': prop_exp_key = self.prop_exp_mapping[rule_exp] return self._check_policy(prop_exp_key, action, context) if set(roles).intersection(set([role.lower() for role in rule_roles])): return True return False glance-16.0.0/glance/common/trust_auth.py0000666000175100017510000001070613245511421020354 0ustar zuulzuul00000000000000# Copyright (c) 2015 Mirantis, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from keystoneauth1 import exceptions as ka_exceptions
from keystoneauth1 import loading as ka_loading
from keystoneclient.v3 import client as ks_client
from oslo_config import cfg
from oslo_log import log as logging

CONF = cfg.CONF
CONF.register_opt(cfg.IntOpt('timeout'), group='keystone_authtoken')
LOG = logging.getLogger(__name__)


class TokenRefresher(object):
    """Class responsible for token refreshing with trusts"""

    def __init__(self, user_plugin, user_project, user_roles):
        """Prepare all parameters and clients required to refresh the token"""
        # step 1: create a trust to ensure that we can always update the token
        # trustor = user who made the request
        trustor_client = self._load_client(user_plugin)
        trustor_id = trustor_client.session.get_user_id()

        # get trustee user client that impersonates the main user
        trustee_user_auth = ka_loading.load_auth_from_conf_options(
            CONF, 'keystone_authtoken')
        # save the service user client because we need a new service token
        # to refresh the trust-scoped client later
        self.trustee_user_client = self._load_client(trustee_user_auth)

        trustee_id = self.trustee_user_client.session.get_user_id()

        self.trust_id = trustor_client.trusts.create(trustor_user=trustor_id,
                                                     trustee_user=trustee_id,
                                                     impersonation=True,
                                                     role_names=user_roles,
                                                     project=user_project).id
        LOG.debug("Trust %s has been created.", self.trust_id)

        # step 2: postpone trust-scoped client initialization
        # until we need to refresh the token
        self.trustee_client = None

    def refresh_token(self):
        """Receive a new token if the user needs to update the old token

        :return: new token that can be used for authentication
""" LOG.debug("Requesting the new token with trust %s", self.trust_id) if self.trustee_client is None: self.trustee_client = self._refresh_trustee_client() try: return self.trustee_client.session.get_token() except ka_exceptions.Unauthorized: # in case of Unauthorized exceptions try to refresh client because # service user token may expired self.trustee_client = self._refresh_trustee_client() return self.trustee_client.session.get_token() def release_resources(self): """Release keystone resources required for refreshing""" try: if self.trustee_client is None: self._refresh_trustee_client().trusts.delete(self.trust_id) else: self.trustee_client.trusts.delete(self.trust_id) except ka_exceptions.Unauthorized: # service user token may expire when we are trying to delete token # so need to update client to ensure that this is not the reason # of failure self.trustee_client = self._refresh_trustee_client() self.trustee_client.trusts.delete(self.trust_id) def _refresh_trustee_client(self): # Remove project_name and project_id, since we need a trust scoped # auth object kwargs = { 'project_name': None, 'project_domain_name': None, 'project_id': None, 'trust_id': self.trust_id } trustee_auth = ka_loading.load_auth_from_conf_options( CONF, 'keystone_authtoken', **kwargs) return self._load_client(trustee_auth) @staticmethod def _load_client(plugin): # load client from auth settings and user plugin sess = ka_loading.load_session_from_conf_options( CONF, 'keystone_authtoken', auth=plugin) return ks_client.Client(session=sess) glance-16.0.0/glance/common/exception.py0000666000175100017510000003435013245511421020151 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Glance exception subclasses""" import six import six.moves.urllib.parse as urlparse from glance.i18n import _ _FATAL_EXCEPTION_FORMAT_ERRORS = False class RedirectException(Exception): def __init__(self, url): self.url = urlparse.urlparse(url) class GlanceException(Exception): """ Base Glance Exception To correctly use this class, inherit from it and define a 'message' property. That message will get printf'd with the keyword arguments provided to the constructor. """ message = _("An unknown exception occurred") def __init__(self, message=None, *args, **kwargs): if not message: message = self.message try: if kwargs: message = message % kwargs except Exception: if _FATAL_EXCEPTION_FORMAT_ERRORS: raise else: # at least get the core message out if something happened pass self.msg = message super(GlanceException, self).__init__(message) def __unicode__(self): # NOTE(flwang): By default, self.msg is an instance of Message, which # can't be converted by str(). Based on the definition of # __unicode__, it should return unicode always. 
return six.text_type(self.msg) class MissingCredentialError(GlanceException): message = _("Missing required credential: %(required)s") class BadAuthStrategy(GlanceException): message = _("Incorrect auth strategy, expected \"%(expected)s\" but " "received \"%(received)s\"") class NotFound(GlanceException): message = _("An object with the specified identifier was not found.") class BadStoreUri(GlanceException): message = _("The Store URI was malformed.") class Duplicate(GlanceException): message = _("An object with the same identifier already exists.") class Conflict(GlanceException): message = _("An object with the same identifier is currently being " "operated on.") class StorageQuotaFull(GlanceException): message = _("The size of the data %(image_size)s will exceed the limit. " "%(remaining)s bytes remaining.") class AuthBadRequest(GlanceException): message = _("Connect error/bad request to Auth service at URL %(url)s.") class AuthUrlNotFound(GlanceException): message = _("Auth service at URL %(url)s not found.") class AuthorizationFailure(GlanceException): message = _("Authorization failed.") class NotAuthenticated(GlanceException): message = _("You are not authenticated.") class UploadException(GlanceException): message = _('Image upload problem: %s') class Forbidden(GlanceException): message = _("You are not authorized to complete %(action)s action.") class ForbiddenPublicImage(Forbidden): message = _("You are not authorized to complete this action.") class ProtectedImageDelete(Forbidden): message = _("Image %(image_id)s is protected and cannot be deleted.") class ProtectedMetadefNamespaceDelete(Forbidden): message = _("Metadata definition namespace %(namespace)s is protected" " and cannot be deleted.") class ProtectedMetadefNamespacePropDelete(Forbidden): message = _("Metadata definition property %(property_name)s is protected" " and cannot be deleted.") class ProtectedMetadefObjectDelete(Forbidden): message = _("Metadata definition object %(object_name)s is 
protected" " and cannot be deleted.") class ProtectedMetadefResourceTypeAssociationDelete(Forbidden): message = _("Metadata definition resource-type-association" " %(resource_type)s is protected and cannot be deleted.") class ProtectedMetadefResourceTypeSystemDelete(Forbidden): message = _("Metadata definition resource-type %(resource_type_name)s is" " a seeded-system type and cannot be deleted.") class ProtectedMetadefTagDelete(Forbidden): message = _("Metadata definition tag %(tag_name)s is protected" " and cannot be deleted.") class Invalid(GlanceException): message = _("Data supplied was not valid.") class InvalidSortKey(Invalid): message = _("Sort key supplied was not valid.") class InvalidSortDir(Invalid): message = _("Sort direction supplied was not valid.") class InvalidPropertyProtectionConfiguration(Invalid): message = _("Invalid configuration in property protection file.") class InvalidSwiftStoreConfiguration(Invalid): message = _("Invalid configuration in glance-swift conf file.") class InvalidFilterOperatorValue(Invalid): message = _("Unable to filter using the specified operator.") class InvalidFilterRangeValue(Invalid): message = _("Unable to filter using the specified range.") class InvalidOptionValue(Invalid): message = _("Invalid value for option %(option)s: %(value)s") class ReadonlyProperty(Forbidden): message = _("Attribute '%(property)s' is read-only.") class ReservedProperty(Forbidden): message = _("Attribute '%(property)s' is reserved.") class AuthorizationRedirect(GlanceException): message = _("Redirecting to %(uri)s for authorization.") class ClientConnectionError(GlanceException): message = _("There was an error connecting to a server") class ClientConfigurationError(GlanceException): message = _("There was an error configuring the client.") class MultipleChoices(GlanceException): message = _("The request returned a 302 Multiple Choices. 
This generally " "means that you have not included a version indicator in a " "request URI.\n\nThe body of response returned:\n%(body)s") class LimitExceeded(GlanceException): message = _("The request returned a 413 Request Entity Too Large. This " "generally means that rate limiting or a quota threshold was " "breached.\n\nThe response body:\n%(body)s") def __init__(self, *args, **kwargs): self.retry_after = (int(kwargs['retry']) if kwargs.get('retry') else None) super(LimitExceeded, self).__init__(*args, **kwargs) class ServiceUnavailable(GlanceException): message = _("The request returned 503 Service Unavailable. This " "generally occurs on service overload or other transient " "outage.") def __init__(self, *args, **kwargs): self.retry_after = (int(kwargs['retry']) if kwargs.get('retry') else None) super(ServiceUnavailable, self).__init__(*args, **kwargs) class ServerError(GlanceException): message = _("The request returned 500 Internal Server Error.") class UnexpectedStatus(GlanceException): message = _("The request returned an unexpected status: %(status)s." "\n\nThe response body:\n%(body)s") class InvalidContentType(GlanceException): message = _("Invalid content type %(content_type)s") class BadRegistryConnectionConfiguration(GlanceException): message = _("Registry was not configured correctly on API server. " "Reason: %(reason)s") class BadDriverConfiguration(GlanceException): message = _("Driver %(driver_name)s could not be configured correctly. " "Reason: %(reason)s") class MaxRedirectsExceeded(GlanceException): message = _("Maximum redirects (%(redirects)s) was exceeded.") class InvalidRedirect(GlanceException): message = _("Received invalid HTTP redirect.") class NoServiceEndpoint(GlanceException): message = _("Response from Keystone does not contain a Glance endpoint.") class RegionAmbiguity(GlanceException): message = _("Multiple 'image' service matches for region %(region)s. 
This " "generally means that a region is required and you have not " "supplied one.") class WorkerCreationFailure(GlanceException): message = _("Server worker creation failed: %(reason)s.") class SchemaLoadError(GlanceException): message = _("Unable to load schema: %(reason)s") class InvalidObject(GlanceException): message = _("Provided object does not match schema " "'%(schema)s': %(reason)s") class ImageSizeLimitExceeded(GlanceException): message = _("The provided image is too large.") class FailedToGetScrubberJobs(GlanceException): message = _("Scrubber encountered an error while trying to fetch " "scrub jobs.") class ImageMemberLimitExceeded(LimitExceeded): message = _("The limit has been exceeded on the number of allowed image " "members for this image. Attempted: %(attempted)s, " "Maximum: %(maximum)s") class ImagePropertyLimitExceeded(LimitExceeded): message = _("The limit has been exceeded on the number of allowed image " "properties. Attempted: %(attempted)s, Maximum: %(maximum)s") class ImageTagLimitExceeded(LimitExceeded): message = _("The limit has been exceeded on the number of allowed image " "tags. Attempted: %(attempted)s, Maximum: %(maximum)s") class ImageLocationLimitExceeded(LimitExceeded): message = _("The limit has been exceeded on the number of allowed image " "locations. 
Attempted: %(attempted)s, Maximum: %(maximum)s") class SIGHUPInterrupt(GlanceException): message = _("System SIGHUP signal received.") class RPCError(GlanceException): message = _("%(cls)s exception was raised in the last rpc call: %(val)s") class TaskException(GlanceException): message = _("An unknown task exception occurred") class BadTaskConfiguration(GlanceException): message = _("Task was not configured properly") class ImageNotFound(NotFound): message = _("Image with the given id %(image_id)s was not found") class TaskNotFound(TaskException, NotFound): message = _("Task with the given id %(task_id)s was not found") class InvalidTaskStatus(TaskException, Invalid): message = _("Provided status of task is unsupported: %(status)s") class InvalidTaskType(TaskException, Invalid): message = _("Provided type of task is unsupported: %(type)s") class InvalidTaskStatusTransition(TaskException, Invalid): message = _("Status transition from %(cur_status)s to" " %(new_status)s is not allowed") class ImportTaskError(TaskException, Invalid): message = _("An import task exception occurred") class DuplicateLocation(Duplicate): message = _("The location %(location)s already exists") class InvalidParameterValue(Invalid): message = _("Invalid value '%(value)s' for parameter '%(param)s': " "%(extra_msg)s") class InvalidImageStatusTransition(Invalid): message = _("Image status transition from %(cur_status)s to" " %(new_status)s is not allowed") class MetadefDuplicateNamespace(Duplicate): message = _("The metadata definition namespace=%(namespace_name)s" " already exists.") class MetadefDuplicateObject(Duplicate): message = _("A metadata definition object with name=%(object_name)s" " already exists in namespace=%(namespace_name)s.") class MetadefDuplicateProperty(Duplicate): message = _("A metadata definition property with name=%(property_name)s" " already exists in namespace=%(namespace_name)s.") class MetadefDuplicateResourceType(Duplicate): message = _("A metadata definition 
resource-type with" " name=%(resource_type_name)s already exists.") class MetadefDuplicateResourceTypeAssociation(Duplicate): message = _("The metadata definition resource-type association of" " resource-type=%(resource_type_name)s to" " namespace=%(namespace_name)s" " already exists.") class MetadefDuplicateTag(Duplicate): message = _("A metadata tag with name=%(name)s" " already exists in namespace=%(namespace_name)s." " (Please note that metadata tag names are" " case insensitive).") class MetadefForbidden(Forbidden): message = _("You are not authorized to complete this action.") class MetadefIntegrityError(Forbidden): message = _("The metadata definition %(record_type)s with" " name=%(record_name)s not deleted." " Other records still refer to it.") class MetadefNamespaceNotFound(NotFound): message = _("Metadata definition namespace=%(namespace_name)s" " was not found.") class MetadefObjectNotFound(NotFound): message = _("The metadata definition object with" " name=%(object_name)s was not found in" " namespace=%(namespace_name)s.") class MetadefPropertyNotFound(NotFound): message = _("The metadata definition property with" " name=%(property_name)s was not found in" " namespace=%(namespace_name)s.") class MetadefResourceTypeNotFound(NotFound): message = _("The metadata definition resource-type with" " name=%(resource_type_name)s, was not found.") class MetadefResourceTypeAssociationNotFound(NotFound): message = _("The metadata definition resource-type association of" " resource-type=%(resource_type_name)s to" " namespace=%(namespace_name)s," " was not found.") class MetadefTagNotFound(NotFound): message = _("The metadata definition tag with" " name=%(name)s was not found in" " namespace=%(namespace_name)s.") class InvalidDataMigrationScript(GlanceException): message = _("Invalid data migration script '%(script)s'. 
A valid data " "migration script must implement functions 'has_migrations' " "and 'migrate'.") glance-16.0.0/glance/common/wsgi.py # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2010 OpenStack Foundation # Copyright 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Utility methods for working with WSGI servers """ from __future__ import print_function import errno import functools import os import signal import sys import time from eventlet.green import socket from eventlet.green import ssl import eventlet.greenio import eventlet.wsgi import glance_store from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import strutils from osprofiler import opts as profiler_opts import routes.middleware import six import webob.dec import webob.exc from webob import multidict from glance.common import config from glance.common import exception from glance.common import utils from glance import i18n from glance.i18n import _, _LE, _LI, _LW bind_opts = [ cfg.HostAddressOpt('bind_host', default='0.0.0.0', help=_(""" IP address to bind the glance servers to. Provide an IP address to bind the glance server to. The default value is ``0.0.0.0``. 
Edit this option to enable the server to listen on one particular IP address on the network card. This facilitates selection of a particular network interface for the server. Possible values: * A valid IPv4 address * A valid IPv6 address Related options: * None """)), cfg.PortOpt('bind_port', help=_(""" Port number on which the server will listen. Provide a valid port number to bind the server's socket to. This port is then set to identify processes and forward network messages that arrive at the server. The default bind_port value for the API server is 9292 and for the registry server is 9191. Possible values: * A valid port number (0 to 65535) Related options: * None """)), ] socket_opts = [ cfg.IntOpt('backlog', default=4096, min=1, help=_(""" Set the number of incoming connection requests. Provide a positive integer value to limit the number of requests in the backlog queue. The default queue size is 4096. An incoming connection to a TCP listener socket is queued before a connection can be established with the server. Setting the backlog for a TCP socket ensures a limited queue size for incoming traffic. Possible values: * Positive integer Related options: * None """)), cfg.IntOpt('tcp_keepidle', default=600, min=1, help=_(""" Set the wait time before a connection recheck. Provide a positive integer value representing time in seconds which is set as the idle wait time before a TCP keep alive packet can be sent to the host. The default value is 600 seconds. Setting ``tcp_keepidle`` helps verify at regular intervals that a connection is intact and prevents frequent TCP connection reestablishment. Possible values: * Positive integer value representing time in seconds Related options: * None """)), cfg.StrOpt('ca_file', sample_default='/etc/ssl/cafile', help=_(""" Absolute path to the CA file. Provide a string value representing a valid absolute path to the Certificate Authority file to use for client authentication. 
A CA file typically contains necessary trusted certificates to use for the client authentication. This is essential to ensure that a secure connection is established to the server via the internet. Possible values: * Valid absolute path to the CA file Related options: * None """)), cfg.StrOpt('cert_file', sample_default='/etc/ssl/certs', help=_(""" Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file which is required to start the API service securely. A certificate file typically is a public key container and includes the server's public key, server name, server information and the signature which was a result of the verification process using the CA certificate. This is required for a secure connection establishment. Possible values: * Valid absolute path to the certificate file Related options: * None """)), cfg.StrOpt('key_file', sample_default='/etc/ssl/key/key-file.pem', help=_(""" Absolute path to a private key file. Provide a string value representing a valid absolute path to a private key file which is required to establish the client-server connection. Possible values: * Absolute path to the private key file Related options: * None """)), ] eventlet_opts = [ cfg.IntOpt('workers', min=0, help=_(""" Number of Glance worker processes to start. Provide a non-negative integer value to set the number of child process workers to service requests. By default, the number of CPUs available is set as the value for ``workers`` limited to 8. For example if the processor count is 6, 6 workers will be used, if the processor count is 24 only 8 workers will be used. The limit will only apply to the default value, if 24 workers is configured, 24 is used. Each worker process is made to listen on the port set in the configuration file and contains a greenthread pool of size 1000. NOTE: Setting the number of workers to zero, triggers the creation of a single API process with a greenthread pool of size 1000. 
Possible values: * 0 * Positive integer value (typically equal to the number of CPUs) Related options: * None """)), cfg.IntOpt('max_header_line', default=16384, min=0, help=_(""" Maximum line size of message headers. Provide an integer value representing a length to limit the size of message headers. The default value is 16384. NOTE: ``max_header_line`` may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). However, it is to be kept in mind that larger values for ``max_header_line`` would flood the logs. Setting ``max_header_line`` to 0 sets no limit for the line size of message headers. Possible values: * 0 * Positive integer Related options: * None """)), cfg.BoolOpt('http_keepalive', default=True, help=_(""" Set keep alive option for HTTP over TCP. Provide a boolean value to determine sending of keep alive packets. If set to ``False``, the server returns the header "Connection: close". If set to ``True``, the server returns a "Connection: Keep-Alive" in its responses. This enables retention of the same TCP connection for HTTP conversations instead of opening a new one with each new request. This option must be set to ``False`` if the client socket connection needs to be closed explicitly after the response is received and read successfully by the client. Possible values: * True * False Related options: * None """)), cfg.IntOpt('client_socket_timeout', default=900, min=0, help=_(""" Timeout for client connections' socket operations. Provide a valid integer value representing time in seconds to set the period of wait before an incoming connection can be closed. The default value is 900 seconds. The value zero implies wait forever. 
Possible values: * Zero * Positive integer Related options: * None """)), ] wsgi_opts = [ cfg.StrOpt('secure_proxy_ssl_header', deprecated_for_removal=True, deprecated_reason=_('Use the http_proxy_to_wsgi middleware ' 'instead.'), help=_('The HTTP header used to determine the scheme for the ' 'original request, even if it was removed by an SSL ' 'terminating proxy. Typical value is ' '"HTTP_X_FORWARDED_PROTO".')), ] LOG = logging.getLogger(__name__) CONF = cfg.CONF CONF.register_opts(bind_opts) CONF.register_opts(socket_opts) CONF.register_opts(eventlet_opts) CONF.register_opts(wsgi_opts) profiler_opts.set_defaults(CONF) ASYNC_EVENTLET_THREAD_POOL_LIST = [] # Detect if we're running under the uwsgi server try: import uwsgi LOG.debug('Detected running under uwsgi') except ImportError: LOG.debug('Detected not running under uwsgi') uwsgi = None def get_num_workers(): """Return the configured number of workers.""" if CONF.workers is None: # None implies the number of CPUs limited to 8 # See Launchpad bug #1748916 and the config help text workers = processutils.get_worker_count() return workers if workers < 8 else 8 return CONF.workers def get_bind_addr(default_port=None): """Return the host and port to bind to.""" return (CONF.bind_host, CONF.bind_port or default_port) def ssl_wrap_socket(sock): """ Wrap an existing socket in SSL :param sock: non-SSL socket to wrap :returns: An SSL wrapped socket """ utils.validate_key_cert(CONF.key_file, CONF.cert_file) ssl_kwargs = { 'server_side': True, 'certfile': CONF.cert_file, 'keyfile': CONF.key_file, 'cert_reqs': ssl.CERT_NONE, } if CONF.ca_file: ssl_kwargs['ca_certs'] = CONF.ca_file ssl_kwargs['cert_reqs'] = ssl.CERT_REQUIRED return ssl.wrap_socket(sock, **ssl_kwargs) def get_socket(default_port): """ Bind socket to bind ip:port in conf note: Mostly comes from Swift with a few small changes... 
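The worker default implemented in ``get_num_workers()`` above (CPU count capped at 8, with an explicitly configured value honored as-is) can be checked standalone. ``capped_workers`` is an illustrative helper with ``CONF`` and ``processutils.get_worker_count()`` replaced by plain arguments:

```python
def capped_workers(configured, cpu_count):
    """Mirror get_num_workers(): None means "use the CPU count, capped
    at 8"; any explicitly configured value is used verbatim."""
    if configured is None:
        return cpu_count if cpu_count < 8 else 8
    return configured


print(capped_workers(None, 6))   # 6
print(capped_workers(None, 24))  # 8
print(capped_workers(24, 4))     # 24
```

Note that setting ``workers = 0`` is passed through unchanged, which is what triggers the single-process, single-pool mode described in the help text.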
:param default_port: port to bind to if none is specified in conf :returns: a socket object as returned from socket.listen or ssl.wrap_socket if conf specifies cert_file """ bind_addr = get_bind_addr(default_port) # TODO(jaypipes): eventlet's greened socket module does not actually # support IPv6 in getaddrinfo(). We need to get around this in the # future or monitor upstream for a fix address_family = [ addr[0] for addr in socket.getaddrinfo(bind_addr[0], bind_addr[1], socket.AF_UNSPEC, socket.SOCK_STREAM) if addr[0] in (socket.AF_INET, socket.AF_INET6) ][0] use_ssl = CONF.key_file or CONF.cert_file if use_ssl and (not CONF.key_file or not CONF.cert_file): raise RuntimeError(_("When running server in SSL mode, you must " "specify both a cert_file and key_file " "option value in your configuration file")) sock = utils.get_test_suite_socket() retry_until = time.time() + 30 while not sock and time.time() < retry_until: try: sock = eventlet.listen(bind_addr, backlog=CONF.backlog, family=address_family) except socket.error as err: if err.args[0] != errno.EADDRINUSE: raise eventlet.sleep(0.1) if not sock: raise RuntimeError(_("Could not bind to %(host)s:%(port)s after" " trying for 30 seconds") % {'host': bind_addr[0], 'port': bind_addr[1]}) return sock def set_eventlet_hub(): try: eventlet.hubs.use_hub('poll') except Exception: try: eventlet.hubs.use_hub('selects') except Exception: msg = _("Neither the eventlet 'poll' nor 'selects' hub is " "available on this platform") raise exception.WorkerCreationFailure( reason=msg) def initialize_glance_store(): """Initialize glance store.""" glance_store.register_opts(CONF) glance_store.create_stores(CONF) glance_store.verify_default_store() def get_asynchronous_eventlet_pool(size=1000): """Return eventlet pool to caller. Also store pools created in global list, to wait on it after getting signal for graceful shutdown. 
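The bind loop in ``get_socket()`` (retry only on ``EADDRINUSE``, give up once the deadline passes) is easiest to see without eventlet. ``bind_with_retry`` below is a plain-``socket`` sketch of the same pattern, not Glance code:

```python
import errno
import socket
import time


def bind_with_retry(addr, timeout=5.0, interval=0.1):
    """Retry bind() while the address is in use, like get_socket()."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind(addr)
            sock.listen(128)
            return sock
        except OSError as err:
            sock.close()
            # Only "address in use" is worth retrying; anything else
            # (bad address, permissions) is propagated immediately.
            if err.errno != errno.EADDRINUSE:
                raise
            time.sleep(interval)
    raise RuntimeError("Could not bind to %s:%s within the deadline" % addr)


listener = bind_with_retry(("127.0.0.1", 0))  # port 0: pick any free port
listener.close()
```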
:param size: eventlet pool size :returns: eventlet pool """ global ASYNC_EVENTLET_THREAD_POOL_LIST pool = eventlet.GreenPool(size=size) # Add pool to global ASYNC_EVENTLET_THREAD_POOL_LIST ASYNC_EVENTLET_THREAD_POOL_LIST.append(pool) return pool class Server(object): """Server class to manage multiple WSGI sockets and applications. This class requires initialize_glance_store set to True if glance store needs to be initialized. """ def __init__(self, threads=1000, initialize_glance_store=False): os.umask(0o27) # ensure files are created with the correct privileges self._logger = logging.getLogger("eventlet.wsgi.server") self.threads = threads self.children = set() self.stale_children = set() self.running = True # NOTE(abhishek): Allows us to only re-initialize glance_store when # the API's configuration reloads. self.initialize_glance_store = initialize_glance_store self.pgid = os.getpid() try: # NOTE(flaper87): Make sure this process # runs in its own process group. os.setpgid(self.pgid, self.pgid) except OSError: # NOTE(flaper87): When running glance-control, # (glance's functional tests, for example) # setpgid fails with EPERM as glance-control # creates a fresh session, of which the newly # launched service becomes the leader (session # leaders may not change process groups) # # Running glance-(api|registry) is safe and # shouldn't raise any error here. self.pgid = 0 def hup(self, *args): """ Reloads configuration files with zero down time """ signal.signal(signal.SIGHUP, signal.SIG_IGN) raise exception.SIGHUPInterrupt def kill_children(self, *args): """Kills the entire process group.""" signal.signal(signal.SIGTERM, signal.SIG_IGN) signal.signal(signal.SIGINT, signal.SIG_IGN) signal.signal(signal.SIGCHLD, signal.SIG_IGN) self.running = False os.killpg(self.pgid, signal.SIGTERM) def start(self, application, default_port): """ Run a WSGI server with the given application. 
:param application: The application to be run in the WSGI server :param default_port: Port to bind to if none is specified in conf """ self.application = application self.default_port = default_port self.configure() self.start_wsgi() def start_wsgi(self): workers = get_num_workers() if workers == 0: # Useful for profiling, test, debug etc. self.pool = self.create_pool() self.pool.spawn_n(self._single_run, self.application, self.sock) return else: LOG.info(_LI("Starting %d workers"), workers) signal.signal(signal.SIGTERM, self.kill_children) signal.signal(signal.SIGINT, self.kill_children) signal.signal(signal.SIGHUP, self.hup) while len(self.children) < workers: self.run_child() def create_pool(self): return get_asynchronous_eventlet_pool(size=self.threads) def _remove_children(self, pid): if pid in self.children: self.children.remove(pid) LOG.info(_LI('Removed dead child %s'), pid) elif pid in self.stale_children: self.stale_children.remove(pid) LOG.info(_LI('Removed stale child %s'), pid) else: LOG.warn(_LW('Unrecognised child %s') % pid) def _verify_and_respawn_children(self, pid, status): if len(self.stale_children) == 0: LOG.debug('No stale children') if os.WIFEXITED(status) and os.WEXITSTATUS(status) != 0: LOG.error(_LE('Not respawning child %d, cannot ' 'recover from termination') % pid) if not self.children and not self.stale_children: LOG.info( _LI('All workers have terminated. Exiting')) self.running = False else: if len(self.children) < get_num_workers(): self.run_child() def wait_on_children(self): while self.running: try: pid, status = os.wait() if os.WIFEXITED(status) or os.WIFSIGNALED(status): self._remove_children(pid) self._verify_and_respawn_children(pid, status) except OSError as err: if err.errno not in (errno.EINTR, errno.ECHILD): raise except KeyboardInterrupt: LOG.info(_LI('Caught keyboard interrupt. 
Exiting.')) break except exception.SIGHUPInterrupt: self.reload() continue eventlet.greenio.shutdown_safe(self.sock) self.sock.close() LOG.debug('Exited') def configure(self, old_conf=None, has_changed=None): """ Apply configuration settings :param old_conf: Cached old configuration settings (if any) :param has changed: callable to determine if a parameter has changed """ eventlet.wsgi.MAX_HEADER_LINE = CONF.max_header_line self.client_socket_timeout = CONF.client_socket_timeout or None self.configure_socket(old_conf, has_changed) if self.initialize_glance_store: initialize_glance_store() def reload(self): """ Reload and re-apply configuration settings Existing child processes are sent a SIGHUP signal and will exit after completing existing requests. New child processes, which will have the updated configuration, are spawned. This allows preventing interruption to the service. """ def _has_changed(old, new, param): old = old.get(param) new = getattr(new, param) return (new != old) old_conf = utils.stash_conf_values() has_changed = functools.partial(_has_changed, old_conf, CONF) CONF.reload_config_files() os.killpg(self.pgid, signal.SIGHUP) self.stale_children = self.children self.children = set() # Ensure any logging config changes are picked up logging.setup(CONF, 'glance') config.set_config_defaults() self.configure(old_conf, has_changed) self.start_wsgi() def wait(self): """Wait until all servers have completed running.""" try: if self.children: self.wait_on_children() else: self.pool.waitall() except KeyboardInterrupt: pass def run_child(self): def child_hup(*args): """Shuts down child processes, existing requests are handled.""" signal.signal(signal.SIGHUP, signal.SIG_IGN) eventlet.wsgi.is_accepting = False self.sock.close() pid = os.fork() if pid == 0: signal.signal(signal.SIGHUP, child_hup) signal.signal(signal.SIGTERM, signal.SIG_DFL) # ignore the interrupt signal to avoid a race whereby # a child worker receives the signal before the parent # and is 
respawned unnecessarily as a result signal.signal(signal.SIGINT, signal.SIG_IGN) # The child has no need to stash the unwrapped # socket, and the reference prevents a clean # exit on sighup self._sock = None self.run_server() LOG.info(_LI('Child %d exiting normally'), os.getpid()) # self.pool.waitall() is now called in wsgi's server so # it's safe to exit here sys.exit(0) else: LOG.info(_LI('Started child %s'), pid) self.children.add(pid) def run_server(self): """Run a WSGI server.""" if cfg.CONF.pydev_worker_debug_host: utils.setup_remote_pydev_debug(cfg.CONF.pydev_worker_debug_host, cfg.CONF.pydev_worker_debug_port) eventlet.wsgi.HttpProtocol.default_request_version = "HTTP/1.0" self.pool = self.create_pool() try: eventlet.wsgi.server(self.sock, self.application, log=self._logger, custom_pool=self.pool, debug=False, keepalive=CONF.http_keepalive, socket_timeout=self.client_socket_timeout) except socket.error as err: if err[0] != errno.EINVAL: raise # waiting on async pools if ASYNC_EVENTLET_THREAD_POOL_LIST: for pool in ASYNC_EVENTLET_THREAD_POOL_LIST: pool.waitall() def _single_run(self, application, sock): """Start a WSGI server in a new green thread.""" LOG.info(_LI("Starting single process server")) eventlet.wsgi.server(sock, application, custom_pool=self.pool, log=self._logger, debug=False, keepalive=CONF.http_keepalive, socket_timeout=self.client_socket_timeout) def configure_socket(self, old_conf=None, has_changed=None): """ Ensure a socket exists and is appropriately configured. This function is called on start up, and can also be called in the event of a configuration reload. When called for the first time a new socket is created. If reloading and either bind_host or bind port have been changed the existing socket must be closed and a new socket opened (laws of physics). In all other cases (bind_host/bind_port have not changed) the existing socket is reused. 
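The reuse/wrap/unwrap rules spelled out in the ``configure_socket`` docstring reduce to three booleans. ``socket_actions`` below is a pure-function sketch of that decision table, with plain dicts standing in for oslo.config objects:

```python
def socket_actions(old_conf, conf, has_changed):
    """Return (new_sock, wrap_sock, unwrap_sock) per the rules above."""
    # A fresh socket is needed on first start, or when the bind
    # address/port changed on reload.
    new_sock = (old_conf is None or has_changed('bind_host')
                or has_changed('bind_port'))
    use_ssl = bool(conf.get('cert_file')) and bool(conf.get('key_file'))
    old_use_ssl = (old_conf is not None
                   and bool(old_conf.get('cert_file'))
                   and bool(old_conf.get('key_file')))
    wrap_sock = use_ssl and (not old_use_ssl or new_sock)
    unwrap_sock = (not use_ssl) and old_use_ssl
    return new_sock, wrap_sock, unwrap_sock


ssl_conf = {'cert_file': 'c.pem', 'key_file': 'k.pem'}
print(socket_actions(None, ssl_conf, lambda opt: False))      # (True, True, False)
print(socket_actions(ssl_conf, {}, lambda opt: False))        # (False, False, True)
print(socket_actions(ssl_conf, ssl_conf, lambda opt: False))  # (False, False, False)
```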
:param old_conf: Cached old configuration settings (if any) :param has changed: callable to determine if a parameter has changed """ # Do we need a fresh socket? new_sock = (old_conf is None or ( has_changed('bind_host') or has_changed('bind_port'))) # Will we be using https? use_ssl = not (not CONF.cert_file or not CONF.key_file) # Were we using https before? old_use_ssl = (old_conf is not None and not ( not old_conf.get('key_file') or not old_conf.get('cert_file'))) # Do we now need to perform an SSL wrap on the socket? wrap_sock = use_ssl is True and (old_use_ssl is False or new_sock) # Do we now need to perform an SSL unwrap on the socket? unwrap_sock = use_ssl is False and old_use_ssl is True if new_sock: self._sock = None if old_conf is not None: self.sock.close() _sock = get_socket(self.default_port) _sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) # sockets can hang around forever without keepalive _sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) self._sock = _sock if wrap_sock: self.sock = ssl_wrap_socket(self._sock) if unwrap_sock: self.sock = self._sock if new_sock and not use_ssl: self.sock = self._sock # Pick up newly deployed certs if old_conf is not None and use_ssl is True and old_use_ssl is True: if has_changed('cert_file') or has_changed('key_file'): utils.validate_key_cert(CONF.key_file, CONF.cert_file) if has_changed('cert_file'): self.sock.certfile = CONF.cert_file if has_changed('key_file'): self.sock.keyfile = CONF.key_file if new_sock or (old_conf is not None and has_changed('tcp_keepidle')): # This option isn't available in the OS X version of eventlet if hasattr(socket, 'TCP_KEEPIDLE'): self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, CONF.tcp_keepidle) if old_conf is not None and has_changed('backlog'): self.sock.listen(CONF.backlog) class Middleware(object): """ Base WSGI middleware wrapper. These classes require an application to be initialized that will be called next. 
By default the middleware will simply call its wrapped app, or you can override __call__ to customize its behavior. """ def __init__(self, application): self.application = application @classmethod def factory(cls, global_conf, **local_conf): def filter(app): return cls(app) return filter def process_request(self, req): """ Called on each request. If this returns None, the next application down the stack will be executed. If it returns a response then that response will be returned and execution will stop here. """ return None def process_response(self, response): """Do whatever you'd like to the response.""" return response @webob.dec.wsgify def __call__(self, req): response = self.process_request(req) if response: return response response = req.get_response(self.application) response.request = req try: return self.process_response(response) except webob.exc.HTTPException as e: return e class Debug(Middleware): """ Helper class that can be inserted into any WSGI application chain to get information about the request and response. """ @webob.dec.wsgify def __call__(self, req): print(("*" * 40) + " REQUEST ENVIRON") for key, value in req.environ.items(): print(key, "=", value) print('') resp = req.get_response(self.application) print(("*" * 40) + " RESPONSE HEADERS") for (key, value) in six.iteritems(resp.headers): print(key, "=", value) print('') resp.app_iter = self.print_generator(resp.app_iter) return resp @staticmethod def print_generator(app_iter): """ Iterator that prints the contents of a wrapper string iterator when iterated. """ print(("*" * 40) + " BODY") for part in app_iter: sys.stdout.write(part) sys.stdout.flush() yield part print() class APIMapper(routes.Mapper): """ Handle route matching when url is '' because routes.Mapper returns an error in this case. 
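The request/response hook protocol described in the Middleware docstring, stripped of webob for illustration (plain dicts stand in for request/response objects; class names here are hypothetical):

```python
class SimpleMiddleware:
    """Sketch of the Middleware contract: process_request may
    short-circuit; process_response post-processes the app's output."""

    def __init__(self, application):
        self.application = application

    def process_request(self, req):
        return None  # None means "continue down the stack"

    def process_response(self, response):
        return response

    def __call__(self, req):
        early = self.process_request(req)
        if early:
            return early  # short-circuit: wrapped app never runs
        return self.process_response(self.application(req))


class RequireToken(SimpleMiddleware):
    def process_request(self, req):
        if not req.get("token"):
            return {"status": 401}


app = RequireToken(lambda req: {"status": 200})
print(app({"token": "t"}))  # {'status': 200}
print(app({}))              # {'status': 401}
```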
""" def routematch(self, url=None, environ=None): if url is "": result = self._match("", environ) return result[0], result[1] return routes.Mapper.routematch(self, url, environ) class RejectMethodController(object): def reject(self, req, allowed_methods, *args, **kwargs): LOG.debug("The method %s is not allowed for this resource", req.environ['REQUEST_METHOD']) raise webob.exc.HTTPMethodNotAllowed( headers=[('Allow', allowed_methods)]) class Router(object): """ WSGI middleware that maps incoming requests to WSGI apps. """ def __init__(self, mapper): """ Create a router for the given routes.Mapper. Each route in `mapper` must specify a 'controller', which is a WSGI app to call. You'll probably want to specify an 'action' as well and have your controller be a wsgi.Controller, who will route the request to the action method. Examples: mapper = routes.Mapper() sc = ServerController() # Explicit mapping of one route to a controller+action mapper.connect(None, "/svrlist", controller=sc, action="list") # Actions are all implicitly defined mapper.resource("server", "servers", controller=sc) # Pointing to an arbitrary WSGI app. You can specify the # {path_info:.*} parameter so the target app can be handed just that # section of the URL. mapper.connect(None, "/v1.0/{path_info:.*}", controller=BlogApp()) """ mapper.redirect("", "/") self.map = mapper self._router = routes.middleware.RoutesMiddleware(self._dispatch, self.map) @classmethod def factory(cls, global_conf, **local_conf): return cls(APIMapper()) @webob.dec.wsgify def __call__(self, req): """ Route the incoming request to a controller based on self.map. If no match, return either a 404(Not Found) or 501(Not Implemented). """ return self._router @staticmethod @webob.dec.wsgify def _dispatch(req): """ Called by self._router after matching the incoming request to a route and putting the information into req.environ. Either returns 404, 501, or the routed WSGI app's response. 
""" match = req.environ['wsgiorg.routing_args'][1] if not match: implemented_http_methods = ['GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'PATCH'] if req.environ['REQUEST_METHOD'] not in implemented_http_methods: return webob.exc.HTTPNotImplemented() else: return webob.exc.HTTPNotFound() app = match['controller'] return app class _UWSGIChunkFile(object): def read(self, length=None): position = 0 if length == 0: return b"" if length and length < 0: length = None response = [] while True: data = uwsgi.chunked_read() # Return everything if we reached the end of the file if not data: break response.append(data) # Return the data if we've reached the length if length is not None: position += len(data) if position >= length: break return b''.join(response) class Request(webob.Request): """Add some OpenStack API-specific logic to the base webob.Request.""" def __init__(self, environ, *args, **kwargs): if CONF.secure_proxy_ssl_header: scheme = environ.get(CONF.secure_proxy_ssl_header) if scheme: environ['wsgi.url_scheme'] = scheme super(Request, self).__init__(environ, *args, **kwargs) @property def body_file(self): if uwsgi: if self.headers.get('transfer-encoding', '').lower() == 'chunked': return _UWSGIChunkFile() return super(Request, self).body_file @body_file.setter def body_file(self, value): # NOTE(cdent): If you have a property setter in a superclass, it will # not be inherited. webob.Request.body_file.fset(self, value) @property def params(self): """Override params property of webob.request.BaseRequest. Added an 'encoded_params' attribute in case of PY2 to avoid encoding values in next subsequent calls to the params property. 
""" if six.PY2: encoded_params = getattr(self, 'encoded_params', None) if encoded_params is None: params = super(Request, self).params params_dict = multidict.MultiDict() for key, value in params.items(): params_dict.add(key, encodeutils.safe_encode(value)) setattr(self, 'encoded_params', multidict.NestedMultiDict(params_dict)) return self.encoded_params return super(Request, self).params def best_match_content_type(self): """Determine the requested response content-type.""" supported = ('application/json',) bm = self.accept.best_match(supported) return bm or 'application/json' def get_content_type(self, allowed_content_types): """Determine content type of the request body.""" if "Content-Type" not in self.headers: raise exception.InvalidContentType(content_type=None) content_type = self.content_type if content_type not in allowed_content_types: raise exception.InvalidContentType(content_type=content_type) else: return content_type def best_match_language(self): """Determines best available locale from the Accept-Language header. :returns: the best language match or None if the 'Accept-Language' header was not available in the request. """ if not self.accept_language: return None langs = i18n.get_available_languages('glance') return self.accept_language.best_match(langs) def get_range_from_request(self, image_size): """Return the `Range` in a request.""" range_str = self.headers.get('Range') if range_str is not None: # NOTE(dharinic): We do not support multi range requests. if ',' in range_str: msg = ("Requests with multiple ranges are not supported in " "Glance. You may make multiple single-range requests " "instead.") raise webob.exc.HTTPBadRequest(explanation=msg) range_ = webob.byterange.Range.parse(range_str) if range_ is None: msg = ("Invalid Range header.") raise webob.exc.HTTPRequestRangeNotSatisfiable(msg) # NOTE(dharinic): Ensure that a range like bytes=4- for an image # size of 3 is invalidated as per rfc7233. 
if range_.start >= image_size: msg = ("Invalid start position in Range header. " "Start position MUST be in the inclusive range [0, %s]." % (image_size - 1)) raise webob.exc.HTTPRequestRangeNotSatisfiable(msg) return range_ # NOTE(dharinic): For backward compatibility reasons, we maintain # support for 'Content-Range' in requests even though it's not # correct to use it in requests. c_range_str = self.headers.get('Content-Range') if c_range_str is not None: content_range = webob.byterange.ContentRange.parse(c_range_str) # NOTE(dharinic): Ensure that a content range like 1-4/* for an # image size of 3 is invalidated. if content_range is None: msg = ("Invalid Content-Range header.") raise webob.exc.HTTPRequestRangeNotSatisfiable(msg) if (content_range.length is None and content_range.stop > image_size): msg = ("Invalid stop position in Content-Range header. " "The stop position MUST be in the inclusive range " "[0, %s]." % (image_size - 1)) raise webob.exc.HTTPRequestRangeNotSatisfiable(msg) if content_range.start >= image_size: msg = ("Invalid start position in Content-Range header. " "Start position MUST be in the inclusive range [0, %s]." % (image_size - 1)) raise webob.exc.HTTPRequestRangeNotSatisfiable(msg) return content_range class JSONRequestDeserializer(object): valid_transfer_encoding = frozenset(['chunked', 'compress', 'deflate', 'gzip', 'identity']) httpverb_may_have_body = frozenset({'POST', 'PUT', 'PATCH'}) @classmethod def is_valid_encoding(cls, request): request_encoding = request.headers.get('transfer-encoding', '').lower() return request_encoding in cls.valid_transfer_encoding @classmethod def is_valid_method(cls, request): return request.method.upper() in cls.httpverb_may_have_body def has_body(self, request): """ Returns whether a Webob.Request object will possess an entity body.
:param request: Webob.Request object """ if self.is_valid_encoding(request) and self.is_valid_method(request): request.is_body_readable = True return True if request.content_length is not None and request.content_length > 0: return True return False @staticmethod def _sanitizer(obj): """Sanitizer method that will be passed to jsonutils.loads.""" return obj def from_json(self, datastring): try: jsondata = jsonutils.loads(datastring, object_hook=self._sanitizer) if not isinstance(jsondata, (dict, list)): msg = _('Unexpected body type. Expected list/dict.') raise webob.exc.HTTPBadRequest(explanation=msg) return jsondata except ValueError: msg = _('Malformed JSON in request body.') raise webob.exc.HTTPBadRequest(explanation=msg) def default(self, request): if self.has_body(request): return {'body': self.from_json(request.body)} else: return {} class JSONResponseSerializer(object): def _sanitizer(self, obj): """Sanitizer method that will be passed to jsonutils.dumps.""" if hasattr(obj, "to_dict"): return obj.to_dict() if isinstance(obj, multidict.MultiDict): return obj.mixed() return jsonutils.to_primitive(obj) def to_json(self, data): return jsonutils.dump_as_bytes(data, default=self._sanitizer) def default(self, response, result): response.content_type = 'application/json' body = self.to_json(result) body = encodeutils.to_utf8(body) response.body = body def translate_exception(req, e): """Translates all translatable elements of the given exception.""" # The RequestClass attribute in the webob.dec.wsgify decorator # does not guarantee that the request object will be a particular # type; this check is therefore necessary. 
if not hasattr(req, "best_match_language"): return e locale = req.best_match_language() if isinstance(e, webob.exc.HTTPError): e.explanation = i18n.translate(e.explanation, locale) e.detail = i18n.translate(e.detail, locale) if getattr(e, 'body_template', None): e.body_template = i18n.translate(e.body_template, locale) return e class Resource(object): """ WSGI app that handles (de)serialization and controller dispatch. Reads routing information supplied by RoutesMiddleware and calls the requested action method upon its deserializer, controller, and serializer. Those three objects may implement any of the basic controller action methods (create, update, show, index, delete) along with any that may be specified in the api router. A 'default' method may also be implemented to be used in place of any non-implemented actions. Deserializer methods must accept a request argument and return a dictionary. Controller methods must accept a request argument. Additionally, they must also accept keyword arguments that represent the keys returned by the Deserializer. They may raise a webob.exc exception or return a dict, which will be serialized by requested content type. 
""" def __init__(self, controller, deserializer=None, serializer=None): """ :param controller: object that implement methods created by routes lib :param deserializer: object that supports webob request deserialization through controller-like actions :param serializer: object that supports webob response serialization through controller-like actions """ self.controller = controller self.serializer = serializer or JSONResponseSerializer() self.deserializer = deserializer or JSONRequestDeserializer() @webob.dec.wsgify(RequestClass=Request) def __call__(self, request): """WSGI method that controls (de)serialization and method dispatch.""" action_args = self.get_action_args(request.environ) action = action_args.pop('action', None) body_reject = strutils.bool_from_string( action_args.pop('body_reject', None)) try: if body_reject and self.deserializer.has_body(request): msg = _('A body is not expected with this request.') raise webob.exc.HTTPBadRequest(explanation=msg) deserialized_request = self.dispatch(self.deserializer, action, request) action_args.update(deserialized_request) action_result = self.dispatch(self.controller, action, request, **action_args) except webob.exc.WSGIHTTPException as e: exc_info = sys.exc_info() e = translate_exception(request, e) six.reraise(type(e), e, exc_info[2]) except UnicodeDecodeError: msg = _("Error decoding your request. 
Either the URL or the " "request body contained characters that could not be " "decoded by Glance") raise webob.exc.HTTPBadRequest(explanation=msg) except Exception as e: LOG.exception(_LE("Caught error: %s"), encodeutils.exception_to_unicode(e)) response = webob.exc.HTTPInternalServerError() return response # We cannot serialize an Exception, so return the action_result if isinstance(action_result, Exception): return action_result try: response = webob.Response(request=request) self.dispatch(self.serializer, action, response, action_result) # encode all headers in response to utf-8 to prevent unicode errors for name, value in list(response.headers.items()): if six.PY2 and isinstance(value, six.text_type): response.headers[name] = encodeutils.safe_encode(value) return response except webob.exc.WSGIHTTPException as e: return translate_exception(request, e) except webob.exc.HTTPException as e: return e # return unserializable result (typically a webob exc) except Exception: return action_result def dispatch(self, obj, action, *args, **kwargs): """Find action-specific method on self and call it.""" try: method = getattr(obj, action) except AttributeError: method = getattr(obj, 'default') return method(*args, **kwargs) def get_action_args(self, request_environment): """Parse dictionary created by routes library.""" try: args = request_environment['wsgiorg.routing_args'][1].copy() except Exception: return {} try: del args['controller'] except KeyError: pass try: del args['format'] except KeyError: pass return args glance-16.0.0/glance/common/auth.py0000666000175100017510000002344713245511421017121 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ This auth module is intended to allow OpenStack client-tools to select from a variety of authentication strategies, including NoAuth (the default), and Keystone (an identity management system). > auth_plugin = AuthPlugin(creds) > auth_plugin.authenticate() > auth_plugin.auth_token abcdefg > auth_plugin.management_url http://service_endpoint/ """ import httplib2 from keystoneclient import service_catalog as ks_service_catalog from oslo_serialization import jsonutils from six.moves import http_client as http # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range import six.moves.urllib.parse as urlparse from glance.common import exception from glance.i18n import _ class BaseStrategy(object): def __init__(self): self.auth_token = None # TODO(sirp): Should expose selecting public/internal/admin URL. 
self.management_url = None def authenticate(self): raise NotImplementedError @property def is_authenticated(self): raise NotImplementedError @property def strategy(self): raise NotImplementedError class NoAuthStrategy(BaseStrategy): def authenticate(self): pass @property def is_authenticated(self): return True @property def strategy(self): return 'noauth' class KeystoneStrategy(BaseStrategy): MAX_REDIRECTS = 10 def __init__(self, creds, insecure=False, configure_via_auth=True): self.creds = creds self.insecure = insecure self.configure_via_auth = configure_via_auth super(KeystoneStrategy, self).__init__() def check_auth_params(self): # Ensure that supplied credential parameters are as required for required in ('username', 'password', 'auth_url', 'strategy'): if self.creds.get(required) is None: raise exception.MissingCredentialError(required=required) if self.creds['strategy'] != 'keystone': raise exception.BadAuthStrategy(expected='keystone', received=self.creds['strategy']) # For v2.0 also check tenant is present if self.creds['auth_url'].rstrip('/').endswith('v2.0'): if self.creds.get("tenant") is None: raise exception.MissingCredentialError(required='tenant') def authenticate(self): """Authenticate with the Keystone service. There are a few scenarios to consider here: 1. Which version of Keystone are we using? v1 which uses headers to pass the credentials, or v2 which uses a JSON encoded request body? 2. Keystone may respond back with a redirection using a 305 status code. 3. We may attempt a v1 auth when v2 is what's called for. In this case, we rewrite the url to contain /v2.0/ and retry using the v2 protocol. """ def _authenticate(auth_url): # If OS_AUTH_URL is missing a trailing slash add one if not auth_url.endswith('/'): auth_url += '/' token_url = urlparse.urljoin(auth_url, "tokens") # 1. 
Check Keystone version is_v2 = auth_url.rstrip('/').endswith('v2.0') if is_v2: self._v2_auth(token_url) else: self._v1_auth(token_url) self.check_auth_params() auth_url = self.creds['auth_url'] for redirect_iter in range(self.MAX_REDIRECTS): try: _authenticate(auth_url) except exception.AuthorizationRedirect as e: # 2. Keystone may redirect us auth_url = e.url except exception.AuthorizationFailure: # 3. In some configurations nova makes redirection to # v2.0 keystone endpoint. Also, new location does not # contain real endpoint, only hostname and port. if 'v2.0' not in auth_url: auth_url = urlparse.urljoin(auth_url, 'v2.0/') else: # If we successfully auth'd, then memorize the correct auth_url # for future use. self.creds['auth_url'] = auth_url break else: # Guard against a redirection loop raise exception.MaxRedirectsExceeded(redirects=self.MAX_REDIRECTS) def _v1_auth(self, token_url): creds = self.creds headers = { 'X-Auth-User': creds['username'], 'X-Auth-Key': creds['password'] } tenant = creds.get('tenant') if tenant: headers['X-Auth-Tenant'] = tenant resp, resp_body = self._do_request(token_url, 'GET', headers=headers) def _management_url(self, resp): for url_header in ('x-image-management-url', 'x-server-management-url', 'x-glance'): try: return resp[url_header] except KeyError as e: not_found = e raise not_found if resp.status in (http.OK, http.NO_CONTENT): try: if self.configure_via_auth: self.management_url = _management_url(self, resp) self.auth_token = resp['x-auth-token'] except KeyError: raise exception.AuthorizationFailure() elif resp.status == http.USE_PROXY: raise exception.AuthorizationRedirect(uri=resp['location']) elif resp.status == http.BAD_REQUEST: raise exception.AuthBadRequest(url=token_url) elif resp.status == http.UNAUTHORIZED: raise exception.NotAuthenticated() elif resp.status == http.NOT_FOUND: raise exception.AuthUrlNotFound(url=token_url) else: raise Exception(_('Unexpected response: %s') % resp.status) def _v2_auth(self, token_url): 
creds = self.creds creds = { "auth": { "tenantName": creds['tenant'], "passwordCredentials": { "username": creds['username'], "password": creds['password'] } } } headers = {'Content-Type': 'application/json'} req_body = jsonutils.dumps(creds) resp, resp_body = self._do_request( token_url, 'POST', headers=headers, body=req_body) if resp.status == http.OK: resp_auth = jsonutils.loads(resp_body)['access'] creds_region = self.creds.get('region') if self.configure_via_auth: endpoint = get_endpoint(resp_auth['serviceCatalog'], endpoint_region=creds_region) self.management_url = endpoint self.auth_token = resp_auth['token']['id'] elif resp.status == http.USE_PROXY: raise exception.RedirectException(resp['location']) elif resp.status == http.BAD_REQUEST: raise exception.AuthBadRequest(url=token_url) elif resp.status == http.UNAUTHORIZED: raise exception.NotAuthenticated() elif resp.status == http.NOT_FOUND: raise exception.AuthUrlNotFound(url=token_url) else: raise Exception(_('Unexpected response: %s') % resp.status) @property def is_authenticated(self): return self.auth_token is not None @property def strategy(self): return 'keystone' def _do_request(self, url, method, headers=None, body=None): headers = headers or {} conn = httplib2.Http() conn.force_exception_to_status_code = True conn.disable_ssl_certificate_validation = self.insecure headers['User-Agent'] = 'glance-client' resp, resp_body = conn.request(url, method, headers=headers, body=body) return resp, resp_body def get_plugin_from_strategy(strategy, creds=None, insecure=False, configure_via_auth=True): if strategy == 'noauth': return NoAuthStrategy() elif strategy == 'keystone': return KeystoneStrategy(creds, insecure, configure_via_auth=configure_via_auth) else: raise Exception(_("Unknown auth strategy '%s'") % strategy) def get_endpoint(service_catalog, service_type='image', endpoint_region=None, endpoint_type='publicURL'): """ Select an endpoint from the service catalog We search the full service catalog for 
services matching both type and region. If the client supplied no region then any 'image' endpoint is considered a match. There must be one -- and only one -- successful match in the catalog, otherwise we will raise an exception. """ endpoints = ks_service_catalog.ServiceCatalogV2( {'serviceCatalog': service_catalog} ).get_urls(service_type=service_type, region_name=endpoint_region, endpoint_type=endpoint_type) if endpoints is None: raise exception.NoServiceEndpoint() elif len(endpoints) == 1: return endpoints[0] else: raise exception.RegionAmbiguity(region=endpoint_region) glance-16.0.0/glance/common/location_strategy/0000775000175100017510000000000013245511661021332 5ustar zuulzuul00000000000000glance-16.0.0/glance/common/location_strategy/__init__.py0000666000175100017510000001123713245511421023443 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from oslo_config import cfg from oslo_log import log as logging import stevedore from glance.i18n import _, _LE location_strategy_opts = [ cfg.StrOpt('location_strategy', default='location_order', choices=('location_order', 'store_type'), help=_(""" Strategy to determine the preference order of image locations. This configuration option indicates the strategy to determine the order in which an image's locations must be accessed to serve the image's data. 
Glance then retrieves the image data from the first responsive active location it finds in this list. This option takes one of two possible values ``location_order`` and ``store_type``. The default value is ``location_order``, which suggests that image data be served by using locations in the order they are stored in Glance. The ``store_type`` value sets the image location preference based on the order in which the storage backends are listed as a comma separated list for the configuration option ``store_type_preference``. Possible values: * location_order * store_type Related options: * store_type_preference """)), ] CONF = cfg.CONF CONF.register_opts(location_strategy_opts) LOG = logging.getLogger(__name__) def _load_strategies(): """Load all strategy modules.""" modules = {} namespace = "glance.common.image_location_strategy.modules" ex = stevedore.extension.ExtensionManager(namespace) for module_name in ex.names(): try: mgr = stevedore.driver.DriverManager( namespace=namespace, name=module_name, invoke_on_load=False) # Obtain module name strategy_name = str(mgr.driver.get_strategy_name()) if strategy_name in modules: msg = (_('%(strategy)s is registered as a module twice. ' '%(module)s is not being used.') % {'strategy': strategy_name, 'module': module_name}) LOG.warn(msg) else: # Initialize strategy module mgr.driver.init() modules[strategy_name] = mgr.driver except Exception as e: LOG.error(_LE("Failed to load location strategy module " "%(module)s: %(e)s") % {'module': module_name, 'e': e}) return modules _available_strategies = _load_strategies() # TODO(kadachi): Not used but don't remove this until glance_store # development/migration stage. def verify_location_strategy(conf=None, strategies=_available_strategies): """Validate user configured 'location_strategy' option value.""" if not conf: conf = CONF.location_strategy if conf not in strategies: msg = (_('Invalid location_strategy option: %(name)s. 
' 'The valid strategy option(s) is(are): %(strategies)s') % {'name': conf, 'strategies': ", ".join(strategies.keys())}) LOG.error(msg) raise RuntimeError(msg) def get_ordered_locations(locations, **kwargs): """ Order image location list by configured strategy. :param locations: The original image location list. :param kwargs: Strategy-specific arguments for under layer strategy module. :returns: The image location list with strategy-specific order. """ if not locations: return [] strategy_module = _available_strategies[CONF.location_strategy] return strategy_module.get_ordered_locations(copy.deepcopy(locations), **kwargs) def choose_best_location(locations, **kwargs): """ Choose best location from image location list by configured strategy. :param locations: The original image location list. :param kwargs: Strategy-specific arguments for under layer strategy module. :returns: The best location from image location list. """ locations = get_ordered_locations(locations, **kwargs) if locations: return locations[0] else: return None glance-16.0.0/glance/common/location_strategy/store_type.py0000666000175100017510000001052713245511421024102 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
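The location_strategy package above defines a small pluggable contract: each strategy module exposes get_strategy_name(), init(), and get_ordered_locations(locations, **kwargs), is discovered via stevedore under the 'glance.common.image_location_strategy.modules' namespace, and choose_best_location() simply takes the first entry of the ordered list. The following is a minimal, self-contained sketch of that contract; the ReverseOrderStrategy class, the plain-dict registry, and this standalone choose_best_location() are invented for illustration and do not exist in Glance, which loads real strategy modules through stevedore and oslo.config instead.

```python
# Toy strategy module implementing the same three-function contract that
# glance.common.location_strategy expects from its plugins.
class ReverseOrderStrategy(object):
    """Hypothetical strategy: serve locations in reverse storage order."""

    @staticmethod
    def get_strategy_name():
        return 'reverse_order'

    @staticmethod
    def init():
        # A real strategy module would build its lookup tables here.
        pass

    @staticmethod
    def get_ordered_locations(locations, **kwargs):
        return list(reversed(locations))


# Stand-in for the stevedore-populated _available_strategies mapping.
_strategies = {ReverseOrderStrategy.get_strategy_name(): ReverseOrderStrategy}


def choose_best_location(locations, strategy='reverse_order'):
    """Mirror of Glance's choose_best_location(): order, then take the head."""
    if not locations:
        return None
    module = _strategies[strategy]
    return module.get_ordered_locations(list(locations))[0]


if __name__ == '__main__':
    locs = [{'url': 'file:///a'}, {'url': 'rbd://b'}, {'url': 'http://c'}]
    print(choose_best_location(locs))
```

As in the real code, ordering and selection are separated so that a strategy only has to know how to sort; the caller decides whether it wants the whole ordered list or just the best single location.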
"""Storage preference based location strategy module""" from oslo_config import cfg import six import six.moves.urllib.parse as urlparse from glance.i18n import _ store_type_opts = [ cfg.ListOpt('store_type_preference', default=[], help=_(""" Preference order of storage backends. Provide a comma separated list of store names in the order in which images should be retrieved from storage backends. These store names must be registered with the ``stores`` configuration option. NOTE: The ``store_type_preference`` configuration option is applied only if ``store_type`` is chosen as a value for the ``location_strategy`` configuration option. An empty list will not change the location order. Possible values: * Empty list * Comma separated list of registered store names. Legal values are: * file * http * rbd * swift * sheepdog * cinder * vmware Related options: * location_strategy * stores """)) ] CONF = cfg.CONF CONF.register_opts(store_type_opts, group='store_type_location_strategy') _STORE_TO_SCHEME_MAP = {} def get_strategy_name(): """Return strategy module name.""" return 'store_type' def init(): """Initialize strategy module.""" # NOTE(zhiyan): We have a plan to do a reusable glance client library for # all clients like Nova and Cinder in near period, it would be able to # contains common code to provide uniform image service interface for them, # just like Brick in Cinder, this code can be moved to there and shared # between Glance and client both side. So this implementation as far as # possible to prevent make relationships with Glance(server)-specific code, # for example: using functions within store module to validate # 'store_type_preference' option. 
mapping = {'file': ['file', 'filesystem'], 'http': ['http', 'https'], 'rbd': ['rbd'], 'swift': ['swift', 'swift+https', 'swift+http'], 'sheepdog': ['sheepdog'], 'cinder': ['cinder'], 'vmware': ['vsphere']} _STORE_TO_SCHEME_MAP.clear() _STORE_TO_SCHEME_MAP.update(mapping) def get_ordered_locations(locations, uri_key='url', **kwargs): """ Order image location list. :param locations: The original image location list. :param uri_key: The key name for location URI in image location dictionary. :returns: The image location list with preferred store type order. """ def _foreach_store_type_preference(): store_types = CONF.store_type_location_strategy.store_type_preference for preferred_store in store_types: preferred_store = str(preferred_store).strip() if not preferred_store: continue yield preferred_store if not locations: return locations preferences = {} others = [] for preferred_store in _foreach_store_type_preference(): preferences[preferred_store] = [] for location in locations: uri = location.get(uri_key) if not uri: continue pieces = urlparse.urlparse(uri.strip()) store_name = None for store, schemes in six.iteritems(_STORE_TO_SCHEME_MAP): if pieces.scheme.strip() in schemes: store_name = store break if store_name in preferences: preferences[store_name].append(location) else: others.append(location) ret = [] # NOTE(zhiyan): Iterate over the preference list a second time, rather # than using an ordered dict, since py26 does not support an # ordereddict container. for preferred_store in _foreach_store_type_preference(): ret.extend(preferences[preferred_store]) ret.extend(others) return ret glance-16.0.0/glance/common/location_strategy/location_order.py0000666000175100017510000000207013245511421024702 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Image location order based location strategy module""" def get_strategy_name(): """Return strategy module name.""" return 'location_order' def init(): """Initialize strategy module.""" pass def get_ordered_locations(locations, **kwargs): """ Order image location list. :param locations: The original image location list. :returns: The image location list with original natural order. """ return locations glance-16.0.0/glance/common/swift_store_utils.py0000666000175100017510000001163113245511421021740 0ustar zuulzuul00000000000000# Copyright 2014 Rackspace # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys from oslo_config import cfg from oslo_log import log as logging from six.moves import configparser from glance.common import exception from glance.i18n import _, _LE swift_opts = [ cfg.StrOpt('default_swift_reference', default="ref1", help=_(""" Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. 
The default reference value for this configuration option is 'ref1'. This configuration option dereferences the parameters and facilitates image storage in the Swift storage backend every time a new image is added. Possible values: * A valid string value Related options: * None """)), cfg.StrOpt('swift_store_auth_address', deprecated_reason=(""" The option auth_address in the Swift back-end configuration file is used instead. """), help=_('The address where the Swift authentication service ' 'is listening.')), cfg.StrOpt('swift_store_user', secret=True, deprecated_reason=(""" The option 'user' in the Swift back-end configuration file is set instead. """), help=_('The user to authenticate against the Swift ' 'authentication service.')), cfg.StrOpt('swift_store_key', secret=True, deprecated_reason=(""" The option 'key' in the Swift back-end configuration file is used to set the authentication key instead. """), help=_('Auth key for the user authenticating against the ' 'Swift authentication service.')), cfg.StrOpt('swift_store_config_file', secret=True, help=_(""" File containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended when using the Swift storage backend for image storage as it helps avoid storage of credentials in the database.
Possible values: * None * String value representing a valid configuration file path Related options: * None """)), ] # SafeConfigParser was deprecated in Python 3.2 if sys.version_info >= (3, 2): CONFIG = configparser.ConfigParser() else: CONFIG = configparser.SafeConfigParser() LOG = logging.getLogger(__name__) CONF = cfg.CONF CONF.register_opts(swift_opts) def is_multiple_swift_store_accounts_enabled(): if CONF.swift_store_config_file is None: return False return True class SwiftParams(object): def __init__(self): if is_multiple_swift_store_accounts_enabled(): self.params = self._load_config() else: self.params = self._form_default_params() def _form_default_params(self): default = {} if (CONF.swift_store_user and CONF.swift_store_key and CONF.swift_store_auth_address): default['user'] = CONF.swift_store_user default['key'] = CONF.swift_store_key default['auth_address'] = CONF.swift_store_auth_address return {CONF.default_swift_reference: default} return {} def _load_config(self): try: conf_file = CONF.find_file(CONF.swift_store_config_file) CONFIG.read(conf_file) except Exception as e: msg = (_LE("swift config file %(conf_file)s:%(exc)s not found") % {'conf_file': CONF.swift_store_config_file, 'exc': e}) LOG.error(msg) raise exception.InvalidSwiftStoreConfiguration() account_params = {} account_references = CONFIG.sections() for ref in account_references: reference = {} try: reference['auth_address'] = CONFIG.get(ref, 'auth_address') reference['user'] = CONFIG.get(ref, 'user') reference['key'] = CONFIG.get(ref, 'key') account_params[ref] = reference except (ValueError, SyntaxError, configparser.NoOptionError) as e: LOG.exception(_LE("Invalid format of swift store config " "cfg")) return account_params glance-16.0.0/glance/common/__init__.py0000666000175100017510000000000013245511421017673 0ustar zuulzuul00000000000000glance-16.0.0/glance/common/utils.py0000666000175100017510000006371413245511421017321 0ustar zuulzuul00000000000000# Copyright 2010 United States 
Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2014 SoftLayer Technologies, Inc. # Copyright 2015 Mirantis, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ System-level utilities and helper functions. """ import errno try: from eventlet import sleep except ImportError: from time import sleep from eventlet.green import socket import functools import os import re import uuid from OpenSSL import crypto from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import netutils from oslo_utils import strutils import six from six.moves import urllib from webob import exc from glance.common import exception from glance.common import timeutils from glance.i18n import _, _LE, _LW CONF = cfg.CONF LOG = logging.getLogger(__name__) # Whitelist of v1 API headers of form x-image-meta-xxx IMAGE_META_HEADERS = ['x-image-meta-location', 'x-image-meta-size', 'x-image-meta-is_public', 'x-image-meta-disk_format', 'x-image-meta-container_format', 'x-image-meta-name', 'x-image-meta-status', 'x-image-meta-copy_from', 'x-image-meta-uri', 'x-image-meta-checksum', 'x-image-meta-created_at', 'x-image-meta-updated_at', 'x-image-meta-deleted_at', 'x-image-meta-min_ram', 'x-image-meta-min_disk', 'x-image-meta-owner', 'x-image-meta-store', 'x-image-meta-id', 'x-image-meta-protected', 'x-image-meta-deleted', 
'x-image-meta-virtual_size'] GLANCE_TEST_SOCKET_FD_STR = 'GLANCE_TEST_SOCKET_FD' def chunkreadable(iter, chunk_size=65536): """ Wrap a readable iterator with a reader yielding chunks of a preferred size, otherwise leave iterator unchanged. :param iter: an iter which may also be readable :param chunk_size: maximum size of chunk """ return chunkiter(iter, chunk_size) if hasattr(iter, 'read') else iter def chunkiter(fp, chunk_size=65536): """ Return an iterator to a file-like obj which yields fixed size chunks :param fp: a file-like object :param chunk_size: maximum size of chunk """ while True: chunk = fp.read(chunk_size) if chunk: yield chunk else: break def cooperative_iter(iter): """ Return an iterator which schedules after each iteration. This can prevent eventlet thread starvation. :param iter: an iterator to wrap """ try: for chunk in iter: sleep(0) yield chunk except Exception as err: with excutils.save_and_reraise_exception(): msg = _LE("Error: cooperative_iter exception %s") % err LOG.error(msg) def cooperative_read(fd): """ Wrap a file descriptor's read with a partial function which schedules after each read. This can prevent eventlet thread starvation. :param fd: a file descriptor to wrap """ def readfn(*args): result = fd.read(*args) sleep(0) return result return readfn MAX_COOP_READER_BUFFER_SIZE = 134217728 # 128M seems like a sane buffer limit CONF.import_group('import_filtering_opts', 'glance.async.flows._internal_plugins') def validate_import_uri(uri): """Validate requested uri for Image Import web-download. 
:param uri: target uri to be validated """ if not uri: return False parsed_uri = urllib.parse.urlparse(uri) scheme = parsed_uri.scheme host = parsed_uri.hostname port = parsed_uri.port wl_schemes = CONF.import_filtering_opts.allowed_schemes bl_schemes = CONF.import_filtering_opts.disallowed_schemes wl_hosts = CONF.import_filtering_opts.allowed_hosts bl_hosts = CONF.import_filtering_opts.disallowed_hosts wl_ports = CONF.import_filtering_opts.allowed_ports bl_ports = CONF.import_filtering_opts.disallowed_ports # NOTE(jokke): Checking if both allowed and disallowed are defined and # logging it to inform that only the allowed list will be obeyed. if wl_schemes and bl_schemes: bl_schemes = [] LOG.debug("Both allowed and disallowed schemes have been configured. " "Will only process allowed list.") if wl_hosts and bl_hosts: bl_hosts = [] LOG.debug("Both allowed and disallowed hosts have been configured. " "Will only process allowed list.") if wl_ports and bl_ports: bl_ports = [] LOG.debug("Both allowed and disallowed ports have been configured. " "Will only process allowed list.") if not scheme or ((wl_schemes and scheme not in wl_schemes) or parsed_uri.scheme in bl_schemes): return False if not host or ((wl_hosts and host not in wl_hosts) or host in bl_hosts): return False if port and ((wl_ports and port not in wl_ports) or port in bl_ports): return False return True class CooperativeReader(object): """ An eventlet thread friendly class for reading in image data. When accessing data either through the iterator or the read method we perform a sleep to allow a co-operative yield. When there is more than one image being uploaded/downloaded this prevents eventlet thread starvation, i.e. allows all threads to be scheduled periodically rather than having the same thread be continuously active. 
""" def __init__(self, fd): """ :param fd: Underlying image file object """ self.fd = fd self.iterator = None # NOTE(markwash): if the underlying supports read(), overwrite the # default iterator-based implementation with cooperative_read which # is more straightforward if hasattr(fd, 'read'): self.read = cooperative_read(fd) else: self.iterator = None self.buffer = b'' self.position = 0 def read(self, length=None): """Return the requested amount of bytes, fetching the next chunk of the underlying iterator when needed. This is replaced with cooperative_read in __init__ if the underlying fd already supports read(). """ if length is None: if len(self.buffer) - self.position > 0: # if no length specified but some data exists in buffer, # return that data and clear the buffer result = self.buffer[self.position:] self.buffer = b'' self.position = 0 return str(result) else: # otherwise read the next chunk from the underlying iterator # and return it as a whole. Reset the buffer, as subsequent # calls may specify the length try: if self.iterator is None: self.iterator = self.__iter__() return next(self.iterator) except StopIteration: return '' finally: self.buffer = b'' self.position = 0 else: result = bytearray() while len(result) < length: if self.position < len(self.buffer): to_read = length - len(result) chunk = self.buffer[self.position:self.position + to_read] result.extend(chunk) # This check is here to prevent potential OOM issues if # this code is called with unreasonably high values of read # size. Currently it is only called from the HTTP clients # of Glance backend stores, which use httplib for data # streaming, which has readsize hardcoded to 8K, so this # check should never fire. Regardless it still worths to # make the check, as the code may be reused somewhere else. 
if len(result) >= MAX_COOP_READER_BUFFER_SIZE: raise exception.LimitExceeded() self.position += len(chunk) else: try: if self.iterator is None: self.iterator = self.__iter__() self.buffer = next(self.iterator) self.position = 0 except StopIteration: self.buffer = b'' self.position = 0 return bytes(result) return bytes(result) def __iter__(self): return cooperative_iter(self.fd.__iter__()) class LimitingReader(object): """ Reader designed to fail when reading image data past the configured allowable amount. """ def __init__(self, data, limit): """ :param data: Underlying image data object :param limit: maximum number of bytes the reader should allow """ self.data = data self.limit = limit self.bytes_read = 0 def __iter__(self): for chunk in self.data: self.bytes_read += len(chunk) if self.bytes_read > self.limit: raise exception.ImageSizeLimitExceeded() else: yield chunk def read(self, i): result = self.data.read(i) self.bytes_read += len(result) if self.bytes_read > self.limit: raise exception.ImageSizeLimitExceeded() return result def image_meta_to_http_headers(image_meta): """ Returns a set of image metadata into a dict of HTTP headers that can be fed to either a Webob Request object or an httplib.HTTP(S)Connection object :param image_meta: Mapping of image metadata """ headers = {} for k, v in image_meta.items(): if v is not None: if k == 'properties': for pk, pv in v.items(): if pv is not None: headers["x-image-meta-property-%s" % pk.lower()] = six.text_type(pv) else: headers["x-image-meta-%s" % k.lower()] = six.text_type(v) return headers def get_image_meta_from_headers(response): """ Processes HTTP headers from a supplied response that match the x-image-meta and x-image-meta-property and returns a mapping of image metadata and properties :param response: Response to process """ result = {} properties = {} if hasattr(response, 'getheaders'): # httplib.HTTPResponse headers = response.getheaders() else: # webob.Response headers = response.headers.items() for 
key, value in headers: key = str(key.lower()) if key.startswith('x-image-meta-property-'): field_name = key[len('x-image-meta-property-'):].replace('-', '_') properties[field_name] = value or None elif key.startswith('x-image-meta-'): field_name = key[len('x-image-meta-'):].replace('-', '_') if 'x-image-meta-' + field_name not in IMAGE_META_HEADERS: msg = _("Bad header: %(header_name)s") % {'header_name': key} raise exc.HTTPBadRequest(msg, content_type="text/plain") result[field_name] = value or None result['properties'] = properties for key, nullable in [('size', False), ('min_disk', False), ('min_ram', False), ('virtual_size', True)]: if key in result: try: result[key] = int(result[key]) except ValueError: if nullable and result[key] == str(None): result[key] = None else: extra = (_("Cannot convert image %(key)s '%(value)s' " "to an integer.") % {'key': key, 'value': result[key]}) raise exception.InvalidParameterValue(value=result[key], param=key, extra_msg=extra) if result[key] is not None and result[key] < 0: extra = _('Cannot be a negative value.') raise exception.InvalidParameterValue(value=result[key], param=key, extra_msg=extra) for key in ('is_public', 'deleted', 'protected'): if key in result: result[key] = strutils.bool_from_string(result[key]) return result def create_mashup_dict(image_meta): """ Returns a dictionary-like mashup of the image core properties and the image custom properties from given image metadata. 
:param image_meta: metadata of image with core and custom properties """ d = {} for key, value in six.iteritems(image_meta): if isinstance(value, dict): for subkey, subvalue in six.iteritems( create_mashup_dict(value)): if subkey not in image_meta: d[subkey] = subvalue else: d[key] = value return d def safe_mkdirs(path): try: os.makedirs(path) except OSError as e: if e.errno != errno.EEXIST: raise def mutating(func): """Decorator to enforce read-only logic""" @functools.wraps(func) def wrapped(self, req, *args, **kwargs): if req.context.read_only: msg = "Read-only access" LOG.debug(msg) raise exc.HTTPForbidden(msg, request=req, content_type="text/plain") return func(self, req, *args, **kwargs) return wrapped def setup_remote_pydev_debug(host, port): error_msg = _LE('Error setting up the debug environment. Verify that the' ' option pydev_worker_debug_host is pointing to a valid ' 'hostname or IP on which a pydev server is listening on' ' the port indicated by pydev_worker_debug_port.') try: try: from pydev import pydevd except ImportError: import pydevd pydevd.settrace(host, port=port, stdoutToServer=True, stderrToServer=True) return True except Exception: with excutils.save_and_reraise_exception(): LOG.exception(error_msg) def validate_key_cert(key_file, cert_file): try: error_key_name = "private key" error_filename = key_file with open(key_file, 'r') as keyfile: key_str = keyfile.read() key = crypto.load_privatekey(crypto.FILETYPE_PEM, key_str) error_key_name = "certificate" error_filename = cert_file with open(cert_file, 'r') as certfile: cert_str = certfile.read() cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_str) except IOError as ioe: raise RuntimeError(_("There is a problem with your %(error_key_name)s " "%(error_filename)s. Please verify it." 
" Error: %(ioe)s") % {'error_key_name': error_key_name, 'error_filename': error_filename, 'ioe': ioe}) except crypto.Error as ce: raise RuntimeError(_("There is a problem with your %(error_key_name)s " "%(error_filename)s. Please verify it. OpenSSL" " error: %(ce)s") % {'error_key_name': error_key_name, 'error_filename': error_filename, 'ce': ce}) try: data = str(uuid.uuid4()) # On Python 3, explicitly encode to UTF-8 to call crypto.sign() which # requires bytes. Otherwise, it raises a deprecation warning (and # will raise an error later). data = encodeutils.to_utf8(data) digest = CONF.digest_algorithm if digest == 'sha1': LOG.warn( _LW('The FIPS (FEDERAL INFORMATION PROCESSING STANDARDS)' ' state that the SHA-1 is not suitable for' ' general-purpose digital signature applications (as' ' specified in FIPS 186-3) that require 112 bits of' ' security. The default value is sha1 in Kilo for a' ' smooth upgrade process, and it will be updated' ' with sha256 in next release(L).')) out = crypto.sign(key, data, digest) crypto.verify(cert, out, data, digest) except crypto.Error as ce: raise RuntimeError(_("There is a problem with your key pair. " "Please verify that cert %(cert_file)s and " "key %(key_file)s belong together. 
OpenSSL " "error %(ce)s") % {'cert_file': cert_file, 'key_file': key_file, 'ce': ce}) def get_test_suite_socket(): global GLANCE_TEST_SOCKET_FD_STR if GLANCE_TEST_SOCKET_FD_STR in os.environ: fd = int(os.environ[GLANCE_TEST_SOCKET_FD_STR]) sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM) if six.PY2: sock = socket.SocketType(_sock=sock) sock.listen(CONF.backlog) del os.environ[GLANCE_TEST_SOCKET_FD_STR] os.close(fd) return sock return None def is_valid_hostname(hostname): """Verify whether a hostname (not an FQDN) is valid.""" return re.match('^[a-zA-Z0-9-]+$', hostname) is not None def is_valid_fqdn(fqdn): """Verify whether a host is a valid FQDN.""" return re.match('^[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$', fqdn) is not None def parse_valid_host_port(host_port): """ Given a "host:port" string, attempts to parse it as intelligently as possible to determine if it is valid. This includes IPv6 [host]:port form, IPv4 ip:port form, and hostname:port or fqdn:port form. Invalid inputs will raise a ValueError, while valid inputs will return a (host, port) tuple where the port will always be of type int. """ try: try: host, port = netutils.parse_host_port(host_port) except Exception: raise ValueError(_('Host and port "%s" is not valid.') % host_port) if not netutils.is_valid_port(port): raise ValueError(_('Port "%s" is not valid.') % port) # First check for valid IPv6 and IPv4 addresses, then a generic # hostname. Failing those, if the host includes a period, then this # should pass a very generic FQDN check. The FQDN check for letters at # the tail end will weed out any hilariously absurd IPv4 addresses. if not (netutils.is_valid_ipv6(host) or netutils.is_valid_ipv4(host) or is_valid_hostname(host) or is_valid_fqdn(host)): raise ValueError(_('Host "%s" is not valid.') % host) except Exception as ex: raise ValueError(_('%s ' 'Please specify a host:port pair, where host is an ' 'IPv4 address, IPv6 address, hostname, or FQDN. 
If ' 'using an IPv6 address, enclose it in brackets ' 'separately from the port (i.e., ' '"[fe80::a:b:c]:9876").') % ex) return (host, int(port)) try: REGEX_4BYTE_UNICODE = re.compile(u'[\U00010000-\U0010ffff]') except re.error: # UCS-2 build case REGEX_4BYTE_UNICODE = re.compile(u'[\uD800-\uDBFF][\uDC00-\uDFFF]') def no_4byte_params(f): """ Checks that no 4 byte unicode characters are allowed in dicts' keys/values and string's parameters """ def wrapper(*args, **kwargs): def _is_match(some_str): return (isinstance(some_str, six.text_type) and REGEX_4BYTE_UNICODE.findall(some_str) != []) def _check_dict(data_dict): # a dict of dicts has to be checked recursively for key, value in six.iteritems(data_dict): if isinstance(value, dict): _check_dict(value) else: if _is_match(key): msg = _("Property names can't contain 4 byte unicode.") raise exception.Invalid(msg) if _is_match(value): msg = (_("%s can't contain 4 byte unicode characters.") % key.title()) raise exception.Invalid(msg) for data_dict in [arg for arg in args if isinstance(arg, dict)]: _check_dict(data_dict) # now check args for str values for arg in args: if _is_match(arg): msg = _("Param values can't contain 4 byte unicode.") raise exception.Invalid(msg) # check kwargs as well, as params are passed as kwargs via # registry calls _check_dict(kwargs) return f(*args, **kwargs) return wrapper def stash_conf_values(): """ Make a copy of some of the current global CONF's settings. Allows determining if any of these values have changed when the config is reloaded. """ conf = { 'bind_host': CONF.bind_host, 'bind_port': CONF.bind_port, 'tcp_keepidle': CONF.tcp_keepidle, 'backlog': CONF.backlog, 'key_file': CONF.key_file, 'cert_file': CONF.cert_file } return conf def split_filter_op(expression): """Split operator from threshold in an expression. Designed for use on a comparative-filtering query field. When no operator is found, default to an equality comparison. 
:param expression: the expression to parse :returns: a tuple (operator, threshold) parsed from expression """ left, sep, right = expression.partition(':') if sep: # If the expression is a date of the format ISO 8601 like # CCYY-MM-DDThh:mm:ss+hh:mm and has no operator, it should # not be partitioned, and a default operator of eq should be # assumed. try: timeutils.parse_isotime(expression) op = 'eq' threshold = expression except ValueError: op = left threshold = right else: op = 'eq' # default operator threshold = left # NOTE stevelle decoding escaped values may be needed later return op, threshold def validate_quotes(value): """Validate filter values Validation opening/closing quotes in the expression. """ open_quotes = True for i in range(len(value)): if value[i] == '"': if i and value[i - 1] == '\\': continue if open_quotes: if i and value[i - 1] != ',': msg = _("Invalid filter value %s. There is no comma " "before opening quotation mark.") % value raise exception.InvalidParameterValue(message=msg) else: if i + 1 != len(value) and value[i + 1] != ",": msg = _("Invalid filter value %s. There is no comma " "after closing quotation mark.") % value raise exception.InvalidParameterValue(message=msg) open_quotes = not open_quotes if not open_quotes: msg = _("Invalid filter value %s. The quote is not closed.") % value raise exception.InvalidParameterValue(message=msg) def split_filter_value_for_quotes(value): """Split filter values Split values by commas and quotes for 'in' operator, according api-wg. """ validate_quotes(value) tmp = re.compile(r''' "( # if found a double-quote [^\"\\]* # take characters either non-quotes or backslashes (?:\\. # take backslashes and character after it [^\"\\]*)* # take characters either non-quotes or backslashes ) # before double-quote ",? # a double-quote with comma maybe | ([^,]+),? 
# if not found double-quote take any non-comma # characters with comma maybe | , # if we have only comma take empty string ''', re.VERBOSE) return [val[0] or val[1] for val in re.findall(tmp, value)] def evaluate_filter_op(value, operator, threshold): """Evaluate a comparison operator. Designed for use on a comparative-filtering query field. :param value: evaluated against the operator, as left side of expression :param operator: any supported filter operation :param threshold: to compare value against, as right side of expression :raises InvalidFilterOperatorValue: if an unknown operator is provided :returns: boolean result of applied comparison """ if operator == 'gt': return value > threshold elif operator == 'gte': return value >= threshold elif operator == 'lt': return value < threshold elif operator == 'lte': return value <= threshold elif operator == 'neq': return value != threshold elif operator == 'eq': return value == threshold msg = _("Unable to filter on an unknown operator.") raise exception.InvalidFilterOperatorValue(msg) glance-16.0.0/glance/common/crypt.py0000666000175100017510000000647213245511421017316 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
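The comparative-filter helpers above (`split_filter_op` and `evaluate_filter_op` in glance/common/utils.py) can be exercised outside the Glance tree. The sketch below is a simplified reimplementation, not the Glance code itself: it dispatches through the stdlib `operator` module, and any prefix that is not a recognized operator falls back to an equality comparison, which also covers the ISO 8601 timestamp case handled explicitly upstream.

```python
# Simplified, standalone sketch of Glance's comparative filtering:
# parse "op:threshold" expressions and evaluate them. Unknown prefixes
# (including ISO 8601 timestamps containing ':') fall back to equality.
import operator

_OPS = {
    'gt': operator.gt, 'gte': operator.ge,
    'lt': operator.lt, 'lte': operator.le,
    'neq': operator.ne, 'eq': operator.eq,
}

def split_filter_op(expression):
    """Split 'op:threshold'; default to an equality comparison."""
    left, sep, right = expression.partition(':')
    if sep and left in _OPS:
        return left, right
    return 'eq', expression

def evaluate_filter_op(value, op, threshold):
    """Apply a supported comparison operator, else raise."""
    try:
        return _OPS[op](value, threshold)
    except KeyError:
        raise ValueError('Unable to filter on an unknown operator.')

op, threshold = split_filter_op('gte:10')
print(op, threshold)                               # gte 10
print(evaluate_filter_op(15, op, int(threshold)))  # True
print(split_filter_op('2017-01-01T10:00:00'))      # ('eq', '2017-01-01T10:00:00')
```

Using a dispatch table instead of the upstream `if`/`elif` chain keeps the supported operator set in one place; the behavior for the six operators is the same.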
""" Routines for URL-safe encrypting/decrypting """ import base64 import os import random from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives.ciphers import algorithms from cryptography.hazmat.primitives.ciphers import Cipher from cryptography.hazmat.primitives.ciphers import modes from oslo_utils import encodeutils import six # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range def urlsafe_encrypt(key, plaintext, blocksize=16): """ Encrypts plaintext. Resulting ciphertext will contain URL-safe characters. If plaintext is Unicode, encode it to UTF-8 before encryption. :param key: AES secret key :param plaintext: Input text to be encrypted :param blocksize: Non-zero integer multiple of AES blocksize in bytes (16) :returns: Resulting ciphertext """ def pad(text): """ Pads text to be encrypted """ pad_length = (blocksize - len(text) % blocksize) # NOTE(rosmaita): I know this looks stupid, but we can't just # use os.urandom() to get the bytes because we use char(0) as # a delimiter pad = b''.join(six.int2byte(random.SystemRandom().randint(1, 0xFF)) for i in range(pad_length - 1)) # We use chr(0) as a delimiter between text and padding return text + b'\0' + pad plaintext = encodeutils.to_utf8(plaintext) key = encodeutils.to_utf8(key) # random initial 16 bytes for CBC init_vector = os.urandom(16) backend = default_backend() cypher = Cipher(algorithms.AES(key), modes.CBC(init_vector), backend=backend) encryptor = cypher.encryptor() padded = encryptor.update( pad(six.binary_type(plaintext))) + encryptor.finalize() encoded = base64.urlsafe_b64encode(init_vector + padded) if six.PY3: encoded = encoded.decode('ascii') return encoded def urlsafe_decrypt(key, ciphertext): """ Decrypts URL-safe base64 encoded ciphertext. On Python 3, the result is decoded from UTF-8. 
:param key: AES secret key :param ciphertext: The encrypted text to decrypt :returns: Resulting plaintext """ # Cast from unicode ciphertext = encodeutils.to_utf8(ciphertext) key = encodeutils.to_utf8(key) ciphertext = base64.urlsafe_b64decode(ciphertext) backend = default_backend() cypher = Cipher(algorithms.AES(key), modes.CBC(ciphertext[:16]), backend=backend) decryptor = cypher.decryptor() padded = decryptor.update(ciphertext[16:]) + decryptor.finalize() text = padded[:padded.rfind(b'\0')] if six.PY3: text = text.decode('utf-8') return text glance-16.0.0/glance/common/scripts/0000775000175100017510000000000013245511661017267 5ustar zuulzuul00000000000000glance-16.0.0/glance/common/scripts/image_import/0000775000175100017510000000000013245511661021743 5ustar zuulzuul00000000000000glance-16.0.0/glance/common/scripts/image_import/main.py0000666000175100017510000001425113245511421023240 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
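The `pad` helper inside `urlsafe_encrypt` above pads the plaintext to the AES block size with random non-NUL bytes behind a single `b'\0'` delimiter, which is what lets `urlsafe_decrypt` strip the padding with `rfind(b'\0')`. Below is a minimal stdlib-only sketch of just that pad/unpad scheme; the AES-CBC and base64 steps are omitted, and the pad bytes come from `os.urandom` here rather than `random.SystemRandom` as upstream.

```python
# Sketch of the NUL-delimited padding scheme used by crypt.py above.
# Pad bytes are forced into the range 1..255 so the single b'\0'
# delimiter stays unambiguous when the padding is stripped.
import os

def pad(text, blocksize=16):
    pad_length = blocksize - len(text) % blocksize  # always 1..blocksize
    filler = bytes(b % 255 + 1 for b in os.urandom(pad_length - 1))
    return text + b'\0' + filler

def unpad(padded):
    # The last NUL in the buffer is always the delimiter, because the
    # filler bytes are never NUL.
    return padded[:padded.rfind(b'\0')]

data = b'swift+https://user:key@auth/v2'
padded = pad(data)
assert len(padded) % 16 == 0
assert unpad(padded) == data
# A text that is already a block multiple still gains a full block:
assert len(pad(b'x' * 16)) == 32
```

Note that a plaintext whose length is an exact block multiple is padded with a whole extra block, so the ciphertext length never collides with the delimiter.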
__all__ = [ 'run', ] from oslo_concurrency import lockutils from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils import six from glance.api.v2 import images as v2_api from glance.common import exception from glance.common.scripts import utils as script_utils from glance.common import store_utils from glance.i18n import _, _LE, _LI, _LW LOG = logging.getLogger(__name__) def run(t_id, context, task_repo, image_repo, image_factory): LOG.info(_LI('Task %(task_id)s beginning import ' 'execution.'), {'task_id': t_id}) _execute(t_id, task_repo, image_repo, image_factory) # NOTE(nikhil): This lock prevents more than N number of threads to be spawn # simultaneously. The number N represents the number of threads in the # executor pool. The value is set to 10 in the eventlet executor. @lockutils.synchronized("glance_import") def _execute(t_id, task_repo, image_repo, image_factory): task = script_utils.get_task(task_repo, t_id) if task is None: # NOTE: This happens if task is not found in the database. In # such cases, there is no way to update the task status so, # it's ignored here. return try: task_input = script_utils.unpack_task_input(task) uri = script_utils.validate_location_uri(task_input.get('import_from')) image_id = import_image(image_repo, image_factory, task_input, t_id, uri) task.succeed({'image_id': image_id}) except Exception as e: # Note: The message string contains Error in it to indicate # in the task.message that it's a error message for the user. 
# TODO(nikhil): need to bring back save_and_reraise_exception when # necessary err_msg = ("Error: " + six.text_type(type(e)) + ': ' + encodeutils.exception_to_unicode(e)) log_msg = _LE(err_msg + ("Task ID %s" % task.task_id)) # noqa LOG.exception(log_msg) task.fail(_LE(err_msg)) # noqa finally: task_repo.save(task) def import_image(image_repo, image_factory, task_input, task_id, uri): original_image = create_image(image_repo, image_factory, task_input.get('image_properties'), task_id) # NOTE: set image status to saving just before setting data original_image.status = 'saving' image_repo.save(original_image) image_id = original_image.image_id # NOTE: Retrieving image from the database because the Image object # returned from create_image method does not have appropriate factories # wrapped around it. new_image = image_repo.get(image_id) set_image_data(new_image, uri, task_id) try: # NOTE: Check if the Image is not deleted after setting the data # before saving the active image. Here if image status is # saving, then new_image is saved as it contains updated location, # size, virtual_size and checksum information and the status of # new_image is already set to active in set_image_data() call. 
image = image_repo.get(image_id) if image.status == 'saving': image_repo.save(new_image) return image_id else: msg = _("The Image %(image_id)s object being created by this task " "%(task_id)s, is no longer in valid status for further " "processing.") % {"image_id": image_id, "task_id": task_id} raise exception.Conflict(msg) except (exception.Conflict, exception.NotFound, exception.NotAuthenticated): with excutils.save_and_reraise_exception(): if new_image.locations: for location in new_image.locations: store_utils.delete_image_location_from_backend( new_image.context, image_id, location) def create_image(image_repo, image_factory, image_properties, task_id): properties = {} # NOTE: get the base properties for key in v2_api.get_base_properties(): try: properties[key] = image_properties.pop(key) except KeyError: LOG.debug("Task ID %(task_id)s: Ignoring property %(k)s for " "setting base properties while creating " "Image.", {'task_id': task_id, 'k': key}) # NOTE: get the rest of the properties and pass them as # extra_properties for Image to be created with them. 
properties['extra_properties'] = image_properties script_utils.set_base_image_properties(properties=properties) image = image_factory.new_image(**properties) image_repo.add(image) return image def set_image_data(image, uri, task_id): data_iter = None try: LOG.info(_LI("Task %(task_id)s: Got image data uri %(data_uri)s to be " "imported"), {"data_uri": uri, "task_id": task_id}) data_iter = script_utils.get_image_data_iter(uri) image.set_data(data_iter) except Exception as e: with excutils.save_and_reraise_exception(): LOG.warn(_LW("Task %(task_id)s failed with exception %(error)s") % {"error": encodeutils.exception_to_unicode(e), "task_id": task_id}) LOG.info(_LI("Task %(task_id)s: Could not import image file" " %(image_data)s"), {"image_data": uri, "task_id": task_id}) finally: if hasattr(data_iter, 'close'): data_iter.close() glance-16.0.0/glance/common/scripts/image_import/__init__.py0000666000175100017510000000000013245511421024036 0ustar zuulzuul00000000000000glance-16.0.0/glance/common/scripts/api_image_import/0000775000175100017510000000000013245511661022574 5ustar zuulzuul00000000000000glance-16.0.0/glance/common/scripts/api_image_import/main.py0000666000175100017510000001243613245511421024074 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
__all__ = [
    'run',
]

from oslo_concurrency import lockutils
from oslo_log import log as logging
from oslo_utils import encodeutils
from oslo_utils import excutils
import six

from glance.api.v2 import images as v2_api
from glance.common import exception
from glance.common.scripts import utils as script_utils
from glance.common import store_utils
from glance.i18n import _

LOG = logging.getLogger(__name__)


def run(t_id, context, task_repo, image_repo, image_factory):
    LOG.info('Task %(task_id)s beginning image import '
             'execution.', {'task_id': t_id})
    _execute(t_id, task_repo, image_repo, image_factory)


# NOTE(nikhil): This lock prevents more than N threads from being spawned
# simultaneously, where N is the number of threads in the executor pool.
# The value is set to 10 in the eventlet executor.
@lockutils.synchronized("glance_image_import")
def _execute(t_id, task_repo, image_repo, image_factory):
    task = script_utils.get_task(task_repo, t_id)
    if task is None:
        # NOTE: This happens if the task is not found in the database. In
        # that case there is no way to update the task status, so the
        # situation is ignored here.
        return

    try:
        task_input = script_utils.unpack_task_input(task)
        image_id = task_input.get('image_id')
        task.succeed({'image_id': image_id})
    except Exception as e:
        # NOTE: The message string contains "Error" to indicate in
        # task.message that it is an error message for the user.
        # TODO(nikhil): need to bring back save_and_reraise_exception when
        # necessary
        err_msg = ("Error: " + six.text_type(type(e)) + ': ' +
                   encodeutils.exception_to_unicode(e))
        log_msg = err_msg + ("Task ID %s" % task.task_id)
        LOG.exception(log_msg)

        task.fail(_(err_msg))  # noqa
    finally:
        task_repo.save(task)


def import_image(image_repo, image_factory, task_input, task_id, uri):
    original_image = v2_api.create_image(image_repo, image_factory,
                                         task_input.get('image_properties'),
                                         task_id)
    # NOTE: set image status to saving just before setting data
    original_image.status = 'saving'
    image_repo.save(original_image)
    image_id = original_image.image_id

    # NOTE: Retrieving the image from the database because the Image object
    # returned by the create_image method does not have the appropriate
    # factories wrapped around it.
    new_image = image_repo.get(image_id)
    set_image_data(new_image, uri, task_id)

    try:
        # NOTE: Check that the image was not deleted after setting the data
        # and before saving the active image. If the image status is still
        # 'saving', new_image is saved, as it contains the updated location,
        # size, virtual_size and checksum information, and the status of
        # new_image was already set to active in the set_image_data() call.
        image = image_repo.get(image_id)
        if image.status == 'saving':
            image_repo.save(new_image)
            return image_id
        else:
            msg = _("The Image %(image_id)s object being created by this task "
                    "%(task_id)s, is no longer in valid status for further "
                    "processing.") % {"image_id": image_id,
                                      "task_id": task_id}
            raise exception.Conflict(msg)
    except (exception.Conflict, exception.NotFound,
            exception.NotAuthenticated):
        with excutils.save_and_reraise_exception():
            if new_image.locations:
                for location in new_image.locations:
                    store_utils.delete_image_location_from_backend(
                        new_image.context,
                        image_id,
                        location)


def set_image_data(image, uri, task_id):
    data_iter = None
    try:
        LOG.info("Task %(task_id)s: Got image data uri %(data_uri)s to be "
                 "imported", {"data_uri": uri, "task_id": task_id})
        data_iter = script_utils.get_image_data_iter(uri)
        image.set_data(data_iter)
    except Exception as e:
        with excutils.save_and_reraise_exception():
            LOG.warn("Task %(task_id)s failed with exception %(error)s" %
                     {"error": encodeutils.exception_to_unicode(e),
                      "task_id": task_id})
            LOG.info("Task %(task_id)s: Could not import image file"
                     " %(image_data)s", {"image_data": uri,
                                         "task_id": task_id})
    finally:
        if hasattr(data_iter, 'close'):
            data_iter.close()


# ==== glance/common/scripts/api_image_import/__init__.py (empty) ====

# ==== glance/common/scripts/__init__.py ====
# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log as logging

from glance.common.scripts.api_image_import import main as api_image_import
from glance.common.scripts.image_import import main as image_import
from glance.i18n import _LE, _LI

LOG = logging.getLogger(__name__)


def run_task(task_id, task_type, context,
             task_repo=None, image_repo=None, image_factory=None):
    # TODO(nikhil): if task_repo is None get new task repo
    # TODO(nikhil): if image_repo is None get new image repo
    # TODO(nikhil): if image_factory is None get new image factory
    LOG.info(_LI("Loading known task scripts for task_id %(task_id)s "
                 "of type %(task_type)s"), {'task_id': task_id,
                                            'task_type': task_type})
    if task_type == 'import':
        image_import.run(task_id, context, task_repo,
                         image_repo, image_factory)
    elif task_type == 'api_image_import':
        api_image_import.run(task_id, context, task_repo,
                             image_repo, image_factory)
    else:
        msg = _LE("This task type %(task_type)s is not supported by the "
                  "current deployment of Glance. Please refer the "
                  "documentation provided by OpenStack or your operator "
                  "for more information.") % {'task_type': task_type}
        LOG.error(msg)
        task = task_repo.get(task_id)
        task.fail(msg)
        if task_repo:
            task_repo.save(task)
        else:
            LOG.error(_LE("Failed to save task %(task_id)s in DB as "
                          "task_repo is %(task_repo)s"),
                      {"task_id": task_id, "task_repo": task_repo})


# ==== glance/common/scripts/utils.py ====
# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

__all__ = [
    'get_task',
    'unpack_task_input',
    'set_base_image_properties',
    'validate_location_uri',
    'get_image_data_iter',
]

from oslo_log import log as logging
from six.moves import urllib

from glance.common import exception
from glance.i18n import _, _LE

LOG = logging.getLogger(__name__)


def get_task(task_repo, task_id):
    """Gets a TaskProxy object.

    :param task_repo: TaskRepo object used to perform DB operations
    :param task_id: ID of the Task
    """
    task = None
    try:
        task = task_repo.get(task_id)
    except exception.NotFound:
        msg = _LE('Task not found for task_id %s') % task_id
        LOG.exception(msg)

    return task


def unpack_task_input(task):
    """Verifies and returns valid task input dictionary.

    :param task: Task domain object
    """
    task_type = task.type
    task_input = task.task_input

    if task_type == 'api_image_import':
        if not task_input:
            msg = _("Input to api_image_import task is empty.")
            raise exception.Invalid(msg)
        if 'image_id' not in task_input:
            msg = _("Missing required 'image_id' field")
            raise exception.Invalid(msg)
    else:
        for key in ["import_from", "import_from_format", "image_properties"]:
            if key not in task_input:
                msg = (_("Input does not contain '%(key)s' field")
                       % {"key": key})
                raise exception.Invalid(msg)

    return task_input


def set_base_image_properties(properties=None):
    """Sets optional base properties for creating Image.

    :param properties: Input dict to set some base properties
    """
    if isinstance(properties, dict) and len(properties) == 0:
        # TODO(nikhil): We can make these properties configurable while
        # implementing the pipeline logic for the scripts. The properties
        # below are placeholders to show that the scripts work in a
        # 'devstack' environment.
        properties['disk_format'] = 'qcow2'
        properties['container_format'] = 'bare'


def validate_location_uri(location):
    """Validate location uri into acceptable format.

    :param location: Location uri to be validated
    """
    if not location:
        raise exception.BadStoreUri(_('Invalid location: %s') % location)

    elif location.startswith(('http://', 'https://')):
        return location

    # NOTE: file type uri is being avoided for security reasons,
    # see LP bug #942118 #1400966.
    elif location.startswith(("file:///", "filesystem:///")):
        msg = _("File based imports are not allowed. Please use a non-local "
                "source of image data.")
        # NOTE: raise BadStoreUri and let the encompassing block save the
        # error msg in the task.message.
        raise exception.BadStoreUri(msg)

    else:
        # TODO(nikhil): add other supported uris
        supported = ['http', ]
        msg = _("The given uri is not valid. Please specify a "
                "valid uri from the following list of supported uri "
                "%(supported)s") % {'supported': supported}
        raise urllib.error.URLError(msg)


def get_image_data_iter(uri):
    """Returns iterable object either for local file or uri

    :param uri: uri (remote or local) to the datasource we want to iterate

    Validation/sanitization of the uri is expected to happen before we get
    here.
    """
    # NOTE(flaper87): This is safe because the input uri is already
    # verified before the task is created.
    if uri.startswith("file://"):
        uri = uri.split("file://")[-1]
        # NOTE(flaper87): The caller of this function expects to have
        # an iterable object. File objects in Python are iterable, therefore
        # we are returning it as is.
        # The file descriptor will eventually be cleaned up by the garbage
        # collector once its ref-count drops to 0, that is, when there
        # are no references left pointing to this file.
        #
        # We're not using StringIO or other tools to avoid reading everything
        # into memory. Some images may be quite heavy.
        return open(uri, "r")

    return urllib.request.urlopen(uri)


# ==== glance/common/timeutils.py ====
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Time related utilities and helper functions.
"""

import datetime

import iso8601
from monotonic import monotonic as now  # noqa
from oslo_utils import encodeutils

# ISO 8601 extended time format with microseconds
_ISO8601_TIME_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f'
_ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S'
PERFECT_TIME_FORMAT = _ISO8601_TIME_FORMAT_SUBSECOND


def isotime(at=None, subsecond=False):
    """Stringify time in ISO 8601 format."""
    if not at:
        at = utcnow()
    st = at.strftime(_ISO8601_TIME_FORMAT
                     if not subsecond
                     else _ISO8601_TIME_FORMAT_SUBSECOND)
    tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC'
    # Need to handle either iso8601 or python UTC format
    st += ('Z' if tz in ['UTC', 'UTC+00:00'] else tz)
    return st


def parse_isotime(timestr):
    """Parse time from ISO 8601 format."""
    try:
        return iso8601.parse_date(timestr)
    except iso8601.ParseError as e:
        raise ValueError(encodeutils.exception_to_unicode(e))
    except TypeError as e:
        raise ValueError(encodeutils.exception_to_unicode(e))


def utcnow(with_timezone=False):
    """Overridable version of utils.utcnow that can return a TZ-aware
    datetime.
    """
    if utcnow.override_time:
        try:
            return utcnow.override_time.pop(0)
        except AttributeError:
            return utcnow.override_time
    if with_timezone:
        return datetime.datetime.now(tz=iso8601.iso8601.UTC)
    return datetime.datetime.utcnow()


def normalize_time(timestamp):
    """Normalize time in arbitrary timezone to UTC naive object."""
    offset = timestamp.utcoffset()
    if offset is None:
        return timestamp
    return timestamp.replace(tzinfo=None) - offset


def iso8601_from_timestamp(timestamp, microsecond=False):
    """Returns an iso8601 formatted date from timestamp."""
    return isotime(datetime.datetime.utcfromtimestamp(timestamp),
                   microsecond)


utcnow.override_time = None


def delta_seconds(before, after):
    """Return the difference between two timing objects.

    Compute the difference in seconds between two date, time, or
    datetime objects (as a float, to microsecond resolution).
    """
    delta = after - before
    return datetime.timedelta.total_seconds(delta)


# ==== glance/gateway.py ====
# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
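The `isotime()` helper in `timeutils.py` above builds an ISO 8601 string with `strftime` and then appends `'Z'` for UTC (or the timezone name otherwise). A stdlib-only sketch of that formatting path follows; `isotime_sketch` is an illustrative name, not part of Glance, and the real helper additionally handles tz-aware datetimes via the `iso8601` library:

```python
import datetime

# Formats copied from timeutils above.
ISO8601_FMT = '%Y-%m-%dT%H:%M:%S'           # _ISO8601_TIME_FORMAT
ISO8601_FMT_SUB = '%Y-%m-%dT%H:%M:%S.%f'    # _ISO8601_TIME_FORMAT_SUBSECOND


def isotime_sketch(at, subsecond=False):
    # Naive datetimes are treated as UTC and receive a 'Z' suffix,
    # mirroring the UTC branch of timeutils.isotime().
    st = at.strftime(ISO8601_FMT_SUB if subsecond else ISO8601_FMT)
    return st + 'Z'


stamp = datetime.datetime(2018, 2, 28, 12, 0, 5, 123456)
print(isotime_sketch(stamp))                  # 2018-02-28T12:00:05Z
print(isotime_sketch(stamp, subsecond=True))  # 2018-02-28T12:00:05.123456Z
```

Note that the subsecond format is also exported as `PERFECT_TIME_FORMAT`, which callers use when they need lossless round-tripping through `parse_isotime()`.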
import glance_store

from glance.api import authorization
from glance.api import policy
from glance.api import property_protections
from glance.common import property_utils
from glance.common import store_utils
import glance.db
import glance.domain
import glance.location
import glance.notifier
import glance.quota


class Gateway(object):
    def __init__(self, db_api=None, store_api=None, notifier=None,
                 policy_enforcer=None):
        self.db_api = db_api or glance.db.get_api()
        self.store_api = store_api or glance_store
        self.store_utils = store_utils
        self.notifier = notifier or glance.notifier.Notifier()
        self.policy = policy_enforcer or policy.Enforcer()

    def get_image_factory(self, context):
        image_factory = glance.domain.ImageFactory()
        store_image_factory = glance.location.ImageFactoryProxy(
            image_factory, context, self.store_api, self.store_utils)
        quota_image_factory = glance.quota.ImageFactoryProxy(
            store_image_factory, context, self.db_api, self.store_utils)
        policy_image_factory = policy.ImageFactoryProxy(
            quota_image_factory, context, self.policy)
        notifier_image_factory = glance.notifier.ImageFactoryProxy(
            policy_image_factory, context, self.notifier)
        if property_utils.is_property_protection_enabled():
            property_rules = property_utils.PropertyRules(self.policy)
            pif = property_protections.ProtectedImageFactoryProxy(
                notifier_image_factory, context, property_rules)
            authorized_image_factory = authorization.ImageFactoryProxy(
                pif, context)
        else:
            authorized_image_factory = authorization.ImageFactoryProxy(
                notifier_image_factory, context)
        return authorized_image_factory

    def get_image_member_factory(self, context):
        image_factory = glance.domain.ImageMemberFactory()
        quota_image_factory = glance.quota.ImageMemberFactoryProxy(
            image_factory, context, self.db_api, self.store_utils)
        policy_member_factory = policy.ImageMemberFactoryProxy(
            quota_image_factory, context, self.policy)
        authorized_image_factory = authorization.ImageMemberFactoryProxy(
            policy_member_factory, context)
        return authorized_image_factory

    def get_repo(self, context):
        image_repo = glance.db.ImageRepo(context, self.db_api)
        store_image_repo = glance.location.ImageRepoProxy(
            image_repo, context, self.store_api, self.store_utils)
        quota_image_repo = glance.quota.ImageRepoProxy(
            store_image_repo, context, self.db_api, self.store_utils)
        policy_image_repo = policy.ImageRepoProxy(
            quota_image_repo, context, self.policy)
        notifier_image_repo = glance.notifier.ImageRepoProxy(
            policy_image_repo, context, self.notifier)
        if property_utils.is_property_protection_enabled():
            property_rules = property_utils.PropertyRules(self.policy)
            pir = property_protections.ProtectedImageRepoProxy(
                notifier_image_repo, context, property_rules)
            authorized_image_repo = authorization.ImageRepoProxy(
                pir, context)
        else:
            authorized_image_repo = authorization.ImageRepoProxy(
                notifier_image_repo, context)
        return authorized_image_repo

    def get_member_repo(self, image, context):
        image_member_repo = glance.db.ImageMemberRepo(
            context, self.db_api, image)
        store_image_repo = glance.location.ImageMemberRepoProxy(
            image_member_repo, image, context, self.store_api)
        policy_member_repo = policy.ImageMemberRepoProxy(
            store_image_repo, image, context, self.policy)
        notifier_member_repo = glance.notifier.ImageMemberRepoProxy(
            policy_member_repo, image, context, self.notifier)
        authorized_member_repo = authorization.ImageMemberRepoProxy(
            notifier_member_repo, image, context)
        return authorized_member_repo

    def get_task_factory(self, context):
        task_factory = glance.domain.TaskFactory()
        policy_task_factory = policy.TaskFactoryProxy(
            task_factory, context, self.policy)
        notifier_task_factory = glance.notifier.TaskFactoryProxy(
            policy_task_factory, context, self.notifier)
        authorized_task_factory = authorization.TaskFactoryProxy(
            notifier_task_factory, context)
        return authorized_task_factory

    def get_task_repo(self, context):
        task_repo = glance.db.TaskRepo(context, self.db_api)
        policy_task_repo = policy.TaskRepoProxy(
            task_repo, context, self.policy)
        notifier_task_repo = glance.notifier.TaskRepoProxy(
            policy_task_repo, context, self.notifier)
        authorized_task_repo = authorization.TaskRepoProxy(
            notifier_task_repo, context)
        return authorized_task_repo

    def get_task_stub_repo(self, context):
        task_stub_repo = glance.db.TaskRepo(context, self.db_api)
        policy_task_stub_repo = policy.TaskStubRepoProxy(
            task_stub_repo, context, self.policy)
        notifier_task_stub_repo = glance.notifier.TaskStubRepoProxy(
            policy_task_stub_repo, context, self.notifier)
        authorized_task_stub_repo = authorization.TaskStubRepoProxy(
            notifier_task_stub_repo, context)
        return authorized_task_stub_repo

    def get_task_executor_factory(self, context):
        task_repo = self.get_task_repo(context)
        image_repo = self.get_repo(context)
        image_factory = self.get_image_factory(context)
        return glance.domain.TaskExecutorFactory(task_repo,
                                                 image_repo,
                                                 image_factory)

    def get_metadef_namespace_factory(self, context):
        ns_factory = glance.domain.MetadefNamespaceFactory()
        policy_ns_factory = policy.MetadefNamespaceFactoryProxy(
            ns_factory, context, self.policy)
        notifier_ns_factory = glance.notifier.MetadefNamespaceFactoryProxy(
            policy_ns_factory, context, self.notifier)
        authorized_ns_factory = authorization.MetadefNamespaceFactoryProxy(
            notifier_ns_factory, context)
        return authorized_ns_factory

    def get_metadef_namespace_repo(self, context):
        ns_repo = glance.db.MetadefNamespaceRepo(context, self.db_api)
        policy_ns_repo = policy.MetadefNamespaceRepoProxy(
            ns_repo, context, self.policy)
        notifier_ns_repo = glance.notifier.MetadefNamespaceRepoProxy(
            policy_ns_repo, context, self.notifier)
        authorized_ns_repo = authorization.MetadefNamespaceRepoProxy(
            notifier_ns_repo, context)
        return authorized_ns_repo

    def get_metadef_object_factory(self, context):
        object_factory = glance.domain.MetadefObjectFactory()
        policy_object_factory = policy.MetadefObjectFactoryProxy(
            object_factory, context, self.policy)
        notifier_object_factory = glance.notifier.MetadefObjectFactoryProxy(
            policy_object_factory, context, self.notifier)
        authorized_object_factory = authorization.MetadefObjectFactoryProxy(
            notifier_object_factory, context)
        return authorized_object_factory

    def get_metadef_object_repo(self, context):
        object_repo = glance.db.MetadefObjectRepo(context, self.db_api)
        policy_object_repo = policy.MetadefObjectRepoProxy(
            object_repo, context, self.policy)
        notifier_object_repo = glance.notifier.MetadefObjectRepoProxy(
            policy_object_repo, context, self.notifier)
        authorized_object_repo = authorization.MetadefObjectRepoProxy(
            notifier_object_repo, context)
        return authorized_object_repo

    def get_metadef_resource_type_factory(self, context):
        resource_type_factory = glance.domain.MetadefResourceTypeFactory()
        policy_resource_type_factory = policy.MetadefResourceTypeFactoryProxy(
            resource_type_factory, context, self.policy)
        notifier_resource_type_factory = (
            glance.notifier.MetadefResourceTypeFactoryProxy(
                policy_resource_type_factory, context, self.notifier)
        )
        authorized_resource_type_factory = (
            authorization.MetadefResourceTypeFactoryProxy(
                notifier_resource_type_factory, context)
        )
        return authorized_resource_type_factory

    def get_metadef_resource_type_repo(self, context):
        resource_type_repo = glance.db.MetadefResourceTypeRepo(
            context, self.db_api)
        policy_object_repo = policy.MetadefResourceTypeRepoProxy(
            resource_type_repo, context, self.policy)
        notifier_object_repo = glance.notifier.MetadefResourceTypeRepoProxy(
            policy_object_repo, context, self.notifier)
        authorized_object_repo = authorization.MetadefResourceTypeRepoProxy(
            notifier_object_repo, context)
        return authorized_object_repo

    def get_metadef_property_factory(self, context):
        prop_factory = glance.domain.MetadefPropertyFactory()
        policy_prop_factory = policy.MetadefPropertyFactoryProxy(
            prop_factory, context, self.policy)
        notifier_prop_factory = glance.notifier.MetadefPropertyFactoryProxy(
            policy_prop_factory, context, self.notifier)
        authorized_prop_factory = authorization.MetadefPropertyFactoryProxy(
            notifier_prop_factory, context)
        return authorized_prop_factory

    def get_metadef_property_repo(self, context):
        prop_repo = glance.db.MetadefPropertyRepo(context, self.db_api)
        policy_prop_repo = policy.MetadefPropertyRepoProxy(
            prop_repo, context, self.policy)
        notifier_prop_repo = glance.notifier.MetadefPropertyRepoProxy(
            policy_prop_repo, context, self.notifier)
        authorized_prop_repo = authorization.MetadefPropertyRepoProxy(
            notifier_prop_repo, context)
        return authorized_prop_repo

    def get_metadef_tag_factory(self, context):
        tag_factory = glance.domain.MetadefTagFactory()
        policy_tag_factory = policy.MetadefTagFactoryProxy(
            tag_factory, context, self.policy)
        notifier_tag_factory = glance.notifier.MetadefTagFactoryProxy(
            policy_tag_factory, context, self.notifier)
        authorized_tag_factory = authorization.MetadefTagFactoryProxy(
            notifier_tag_factory, context)
        return authorized_tag_factory

    def get_metadef_tag_repo(self, context):
        tag_repo = glance.db.MetadefTagRepo(context, self.db_api)
        policy_tag_repo = policy.MetadefTagRepoProxy(
            tag_repo, context, self.policy)
        notifier_tag_repo = glance.notifier.MetadefTagRepoProxy(
            policy_tag_repo, context, self.notifier)
        authorized_tag_repo = authorization.MetadefTagRepoProxy(
            notifier_tag_repo, context)
        return authorized_tag_repo


# ==== glance/hacking/__init__.py (empty) ====

# ==== glance/hacking/checks.py ====
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import re

"""
Guidelines for writing new hacking checks

- Use only for Glance-specific tests. OpenStack general tests
  should be submitted to the common 'hacking' module.
- Pick numbers in the range G3xx. Find the current test with
  the highest allocated number and then pick the next value.
  If nova has an N3xx code for that test, use the same number.
- Keep the test method code in the source file ordered based
  on the G3xx value.
- List the new rule in the top level HACKING.rst file
- Add test cases for each new rule to glance/tests/test_hacking.py
"""

asse_trueinst_re = re.compile(
    r"(.)*assertTrue\(isinstance\((\w|\.|\'|\"|\[|\])+, "
    "(\w|\.|\'|\"|\[|\])+\)\)")
asse_equal_type_re = re.compile(
    r"(.)*assertEqual\(type\((\w|\.|\'|\"|\[|\])+\), "
    "(\w|\.|\'|\"|\[|\])+\)")
asse_equal_end_with_none_re = re.compile(
    r"(.)*assertEqual\((\w|\.|\'|\"|\[|\])+, None\)")
asse_equal_start_with_none_re = re.compile(
    r"(.)*assertEqual\(None, (\w|\.|\'|\"|\[|\])+\)")
unicode_func_re = re.compile(r"(\s|\W|^)unicode\(")
dict_constructor_with_list_copy_re = re.compile(r".*\bdict\((\[)?(\(|\[)")


def assert_true_instance(logical_line):
    """Check for assertTrue(isinstance(a, b)) sentences

    G316
    """
    if asse_trueinst_re.match(logical_line):
        yield (0, "G316: assertTrue(isinstance(a, b)) sentences not allowed")


def assert_equal_type(logical_line):
    """Check for assertEqual(type(A), B) sentences

    G317
    """
    if asse_equal_type_re.match(logical_line):
        yield (0, "G317: assertEqual(type(A), B) sentences not allowed")


def assert_equal_none(logical_line):
    """Check for assertEqual(A, None) or assertEqual(None, A) sentences

    G318
    """
    res = (asse_equal_start_with_none_re.match(logical_line) or
           asse_equal_end_with_none_re.match(logical_line))
    if res:
        yield (0, "G318: assertEqual(A, None) or assertEqual(None, A) "
               "sentences not allowed")


def no_translate_debug_logs(logical_line, filename):
    dirs = [
        "glance/api",
        "glance/cmd",
        "glance/common",
        "glance/db",
        "glance/domain",
        "glance/image_cache",
        "glance/quota",
        "glance/registry",
        "glance/store",
        "glance/tests",
    ]

    if max([name in filename for name in dirs]):
        if logical_line.startswith("LOG.debug(_("):
            yield(0, "G319: Don't translate debug level logs")


def no_direct_use_of_unicode_function(logical_line):
    """Check for use of unicode() builtin

    G320
    """
    if unicode_func_re.match(logical_line):
        yield(0, "G320: Use six.text_type() instead of unicode()")


def check_no_contextlib_nested(logical_line):
    msg = ("G327: contextlib.nested is deprecated since Python 2.7. See "
           "https://docs.python.org/2/library/contextlib.html#contextlib."
           "nested for more information.")

    if ("with contextlib.nested(" in logical_line or
            "with nested(" in logical_line):
        yield(0, msg)


def dict_constructor_with_list_copy(logical_line):
    msg = ("G328: Must use a dict comprehension instead of a dict constructor "
           "with a sequence of key-value pairs.")
    if dict_constructor_with_list_copy_re.match(logical_line):
        yield (0, msg)


def check_python3_xrange(logical_line):
    if re.search(r"\bxrange\s*\(", logical_line):
        yield(0, "G329: Do not use xrange. Use range, or six.moves.range for "
              "large loops.")


def check_python3_no_iteritems(logical_line):
    msg = ("G330: Use six.iteritems() or dict.items() instead of "
           "dict.iteritems().")
    if re.search(r".*\.iteritems\(\)", logical_line):
        yield(0, msg)


def check_python3_no_iterkeys(logical_line):
    msg = ("G331: Use six.iterkeys() or dict.keys() instead of "
           "dict.iterkeys().")
    if re.search(r".*\.iterkeys\(\)", logical_line):
        yield(0, msg)


def check_python3_no_itervalues(logical_line):
    msg = ("G332: Use six.itervalues() or dict.values instead of "
           "dict.itervalues().")
    if re.search(r".*\.itervalues\(\)", logical_line):
        yield(0, msg)


def factory(register):
    register(assert_true_instance)
    register(assert_equal_type)
    register(assert_equal_none)
    register(no_translate_debug_logs)
    register(no_direct_use_of_unicode_function)
    register(check_no_contextlib_nested)
    register(dict_constructor_with_list_copy)
    register(check_python3_xrange)
    register(check_python3_no_iteritems)
    register(check_python3_no_iterkeys)
    register(check_python3_no_itervalues)


# ==== glance/opts.py ====
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
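Each hacking check in `checks.py` above is a generator that receives one logical line and yields `(offset, message)` tuples for violations. A toy driver follows; the regex and message are copied from `checks.py`, while `run_check` is an illustrative helper (the real entry point is flake8, which feeds logical lines to the registered check functions):

```python
import re


def check_python3_xrange(logical_line):
    # Copied verbatim from glance/hacking/checks.py.
    if re.search(r"\bxrange\s*\(", logical_line):
        yield(0, "G329: Do not use xrange. Use range, or six.moves.range for "
              "large loops.")


def run_check(check, line):
    """Collect the (offset, message) tuples a check yields for one line."""
    return list(check(line))


print(run_check(check_python3_xrange, "for i in xrange(10):"))  # one G329 hit
print(run_check(check_python3_xrange, "for i in range(10):"))   # []
```

The `\b` word boundary keeps `range(` from being flagged, and `\s*` tolerates whitespace before the opening parenthesis, as in `xrange (10)`.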
__all__ = [
    'list_api_opts',
    'list_registry_opts',
    'list_scrubber_opts',
    'list_cache_opts',
    'list_manage_opts',
    'list_image_import_opts',
]

import copy
import itertools

from osprofiler import opts as profiler

import glance.api.middleware.context
import glance.api.versions
import glance.async.flows._internal_plugins
import glance.async.flows.api_image_import
import glance.async.flows.convert
from glance.async.flows.plugins import plugin_opts
import glance.async.taskflow_executor
import glance.common.config
import glance.common.location_strategy
import glance.common.location_strategy.store_type
import glance.common.property_utils
import glance.common.rpc
import glance.common.wsgi
import glance.image_cache
import glance.image_cache.drivers.sqlite
import glance.notifier
import glance.registry
import glance.registry.client
import glance.registry.client.v1.api
import glance.scrubber


_api_opts = [
    (None, list(itertools.chain(
        glance.api.middleware.context.context_opts,
        glance.api.versions.versions_opts,
        glance.common.config.common_opts,
        glance.common.location_strategy.location_strategy_opts,
        glance.common.property_utils.property_opts,
        glance.common.rpc.rpc_opts,
        glance.common.wsgi.bind_opts,
        glance.common.wsgi.eventlet_opts,
        glance.common.wsgi.socket_opts,
        glance.common.wsgi.wsgi_opts,
        glance.image_cache.drivers.sqlite.sqlite_opts,
        glance.image_cache.image_cache_opts,
        glance.notifier.notifier_opts,
        glance.registry.registry_addr_opts,
        glance.registry.client.registry_client_ctx_opts,
        glance.registry.client.registry_client_opts,
        glance.registry.client.v1.api.registry_client_ctx_opts,
        glance.scrubber.scrubber_opts))),
    ('image_format', glance.common.config.image_format_opts),
    ('task', glance.common.config.task_opts),
    ('taskflow_executor', list(itertools.chain(
        glance.async.taskflow_executor.taskflow_executor_opts,
        glance.async.flows.convert.convert_task_opts))),
    ('store_type_location_strategy',
     glance.common.location_strategy.store_type.store_type_opts),
    profiler.list_opts()[0],
    ('paste_deploy', glance.common.config.paste_deploy_opts)
]
_registry_opts = [
    (None, list(itertools.chain(
        glance.api.middleware.context.context_opts,
        glance.common.config.common_opts,
        glance.common.wsgi.bind_opts,
        glance.common.wsgi.socket_opts,
        glance.common.wsgi.wsgi_opts,
        glance.common.wsgi.eventlet_opts))),
    profiler.list_opts()[0],
    ('paste_deploy', glance.common.config.paste_deploy_opts)
]
_scrubber_opts = [
    (None, list(itertools.chain(
        glance.common.config.common_opts,
        glance.scrubber.scrubber_opts,
        glance.scrubber.scrubber_cmd_opts,
        glance.scrubber.scrubber_cmd_cli_opts))),
]
_cache_opts = [
    (None, list(itertools.chain(
        glance.common.config.common_opts,
        glance.image_cache.drivers.sqlite.sqlite_opts,
        glance.image_cache.image_cache_opts,
        glance.registry.registry_addr_opts,
        glance.registry.client.registry_client_opts,
        glance.registry.client.registry_client_ctx_opts))),
]
_manage_opts = [
    (None, [])
]
_image_import_opts = [
    ('image_import_opts',
     glance.async.flows.api_image_import.api_import_opts),
    ('import_filtering_opts',
     glance.async.flows._internal_plugins.import_filtering_opts),
]


def list_api_opts():
    """Return a list of oslo_config options available in Glance API service.

    Each element of the list is a tuple. The first element is the name of the
    group under which the list of elements in the second element will be
    registered. A group name of None corresponds to the [DEFAULT] group in
    config files.

    This function is also discoverable via the 'glance.api' entry point
    under the 'oslo_config.opts' namespace.

    The purpose of this is to allow tools like the Oslo sample config file
    generator to discover the options exposed to users by Glance.

    :returns: a list of (group_name, opts) tuples
    """
    return [(g, copy.deepcopy(o)) for g, o in _api_opts]


def list_registry_opts():
    """Return a list of oslo_config options available in Glance Registry
    service.
    """
    return [(g, copy.deepcopy(o)) for g, o in _registry_opts]


def list_scrubber_opts():
    """Return a list of oslo_config options available in Glance Scrubber
    service.
    """
    return [(g, copy.deepcopy(o)) for g, o in _scrubber_opts]


def list_cache_opts():
    """Return a list of oslo_config options available in Glance Cache
    service.
    """
    return [(g, copy.deepcopy(o)) for g, o in _cache_opts]


def list_manage_opts():
    """Return a list of oslo_config options available in Glance manage."""
    return [(g, copy.deepcopy(o)) for g, o in _manage_opts]


def list_image_import_opts():
    """Return a list of oslo_config options available for Image Import"""
    opts = copy.deepcopy(_image_import_opts)
    opts.extend(plugin_opts.get_plugin_opts())
    return [(g, copy.deepcopy(o)) for g, o in opts]


# ==== glance/schema.py ====
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import jsonschema from oslo_utils import encodeutils import six from glance.common import exception from glance.i18n import _ class Schema(object): def __init__(self, name, properties=None, links=None, required=None, definitions=None): self.name = name if properties is None: properties = {} self.properties = properties self.links = links self.required = required self.definitions = definitions def validate(self, obj): try: jsonschema.validate(obj, self.raw()) except jsonschema.ValidationError as e: reason = encodeutils.exception_to_unicode(e) raise exception.InvalidObject(schema=self.name, reason=reason) def filter(self, obj): filtered = {} for key, value in six.iteritems(obj): if self._filter_func(self.properties, key): filtered[key] = value # NOTE(flaper87): This exists to allow for v1, null properties, # to be used with the V2 API. During Kilo, it was allowed for the # later to return None values without considering that V1 allowed # for custom properties to be None, which is something V2 doesn't # allow for. This small hack here will set V1 custom `None` pro- # perties to an empty string so that they will be updated along # with the image (if an update happens). # # We could skip the properties that are `None` but that would bring # back the behavior we moved away from. Note that we can't consider # doing a schema migration because we don't know which properties # are "custom" and which came from `schema-image` if those custom # properties were created with v1. 
if key not in self.properties and value is None: filtered[key] = '' return filtered @staticmethod def _filter_func(properties, key): return key in properties def merge_properties(self, properties): # Ensure custom props aren't attempting to override base props original_keys = set(self.properties.keys()) new_keys = set(properties.keys()) intersecting_keys = original_keys.intersection(new_keys) conflicting_keys = [k for k in intersecting_keys if self.properties[k] != properties[k]] if conflicting_keys: props = ', '.join(conflicting_keys) reason = _("custom properties (%(props)s) conflict " "with base properties") raise exception.SchemaLoadError(reason=reason % {'props': props}) self.properties.update(properties) def raw(self): raw = { 'name': self.name, 'properties': self.properties, 'additionalProperties': False, } if self.definitions: raw['definitions'] = self.definitions if self.required: raw['required'] = self.required if self.links: raw['links'] = self.links return raw def minimal(self): minimal = { 'name': self.name, 'properties': self.properties } if self.definitions: minimal['definitions'] = self.definitions if self.required: minimal['required'] = self.required return minimal class PermissiveSchema(Schema): @staticmethod def _filter_func(properties, key): return True def raw(self): raw = super(PermissiveSchema, self).raw() raw['additionalProperties'] = {'type': 'string'} return raw def minimal(self): minimal = super(PermissiveSchema, self).raw() return minimal class CollectionSchema(object): def __init__(self, name, item_schema): self.name = name self.item_schema = item_schema def raw(self): definitions = None if self.item_schema.definitions: definitions = self.item_schema.definitions self.item_schema.definitions = None raw = { 'name': self.name, 'properties': { self.name: { 'type': 'array', 'items': self.item_schema.raw(), }, 'first': {'type': 'string'}, 'next': {'type': 'string'}, 'schema': {'type': 'string'}, }, 'links': [ {'rel': 'first', 'href': 
'{first}'}, {'rel': 'next', 'href': '{next}'}, {'rel': 'describedby', 'href': '{schema}'}, ], } if definitions: raw['definitions'] = definitions self.item_schema.definitions = definitions return raw def minimal(self): definitions = None if self.item_schema.definitions: definitions = self.item_schema.definitions self.item_schema.definitions = None minimal = { 'name': self.name, 'properties': { self.name: { 'type': 'array', 'items': self.item_schema.minimal(), }, 'schema': {'type': 'string'}, }, 'links': [ {'rel': 'describedby', 'href': '{schema}'}, ], } if definitions: minimal['definitions'] = definitions self.item_schema.definitions = definitions return minimal class DictCollectionSchema(Schema): def __init__(self, name, item_schema): self.name = name self.item_schema = item_schema def raw(self): definitions = None if self.item_schema.definitions: definitions = self.item_schema.definitions self.item_schema.definitions = None raw = { 'name': self.name, 'properties': { self.name: { 'type': 'object', 'additionalProperties': self.item_schema.raw(), }, 'first': {'type': 'string'}, 'next': {'type': 'string'}, 'schema': {'type': 'string'}, }, 'links': [ {'rel': 'first', 'href': '{first}'}, {'rel': 'next', 'href': '{next}'}, {'rel': 'describedby', 'href': '{schema}'}, ], } if definitions: raw['definitions'] = definitions self.item_schema.definitions = definitions return raw def minimal(self): definitions = None if self.item_schema.definitions: definitions = self.item_schema.definitions self.item_schema.definitions = None minimal = { 'name': self.name, 'properties': { self.name: { 'type': 'object', 'additionalProperties': self.item_schema.minimal(), }, 'schema': {'type': 'string'}, }, 'links': [ {'rel': 'describedby', 'href': '{schema}'}, ], } if definitions: minimal['definitions'] = definitions self.item_schema.definitions = definitions return minimal glance-16.0.0/glance/quota/0000775000175100017510000000000013245511661015441 5ustar 
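`Schema.merge_properties` above rejects custom properties that attempt to redefine a base property with a different definition, while allowing purely additive merges. A standalone sketch of that conflict check (a plain `ValueError` stands in for `exception.SchemaLoadError`, and the property shapes are illustrative):

```python
def merge_properties(base, custom):
    # Mirrors Schema.merge_properties: custom properties may add new keys
    # but must not redefine an existing base property differently.
    conflicting = [k for k in set(base) & set(custom)
                   if base[k] != custom[k]]
    if conflicting:
        raise ValueError('custom properties (%s) conflict with base '
                         'properties' % ', '.join(sorted(conflicting)))
    merged = dict(base)
    merged.update(custom)
    return merged

base = {'name': {'type': 'string'}}
merged = merge_properties(base, {'os_distro': {'type': 'string'}})
print(sorted(merged))  # ['name', 'os_distro']
```

Re-specifying a key with the identical definition is allowed; only a *different* definition for the same key is treated as a conflict.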
zuulzuul00000000000000glance-16.0.0/glance/quota/__init__.py0000666000175100017510000003363713245511421017562 0ustar zuulzuul00000000000000# Copyright 2013, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import glance_store as store from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils import glance.api.common import glance.common.exception as exception from glance.common import utils import glance.domain import glance.domain.proxy from glance.i18n import _, _LI LOG = logging.getLogger(__name__) CONF = cfg.CONF CONF.import_opt('image_member_quota', 'glance.common.config') CONF.import_opt('image_property_quota', 'glance.common.config') CONF.import_opt('image_tag_quota', 'glance.common.config') def _enforce_image_tag_quota(tags): if CONF.image_tag_quota < 0: # If value is negative, allow unlimited number of tags return if not tags: return if len(tags) > CONF.image_tag_quota: raise exception.ImageTagLimitExceeded(attempted=len(tags), maximum=CONF.image_tag_quota) def _calc_required_size(context, image, locations): required_size = None if image.size: required_size = image.size * len(locations) else: for location in locations: size_from_backend = None try: size_from_backend = store.get_size_from_backend( location['url'], context=context) except (store.UnknownScheme, store.NotFound): pass except store.BadStoreUri: raise exception.BadStoreUri if size_from_backend: required_size = 
size_from_backend * len(locations) break return required_size def _enforce_image_location_quota(image, locations, is_setter=False): if CONF.image_location_quota < 0: # If value is negative, allow unlimited number of locations return attempted = len(image.locations) + len(locations) attempted = attempted if not is_setter else len(locations) maximum = CONF.image_location_quota if attempted > maximum: raise exception.ImageLocationLimitExceeded(attempted=attempted, maximum=maximum) class ImageRepoProxy(glance.domain.proxy.Repo): def __init__(self, image_repo, context, db_api, store_utils): self.image_repo = image_repo self.db_api = db_api proxy_kwargs = {'context': context, 'db_api': db_api, 'store_utils': store_utils} super(ImageRepoProxy, self).__init__(image_repo, item_proxy_class=ImageProxy, item_proxy_kwargs=proxy_kwargs) def _enforce_image_property_quota(self, attempted): if CONF.image_property_quota < 0: # If value is negative, allow unlimited number of properties return maximum = CONF.image_property_quota if attempted > maximum: kwargs = {'attempted': attempted, 'maximum': maximum} exc = exception.ImagePropertyLimitExceeded(**kwargs) LOG.debug(encodeutils.exception_to_unicode(exc)) raise exc def save(self, image, from_state=None): if image.added_new_properties(): self._enforce_image_property_quota(len(image.extra_properties)) return super(ImageRepoProxy, self).save(image, from_state=from_state) def add(self, image): self._enforce_image_property_quota(len(image.extra_properties)) return super(ImageRepoProxy, self).add(image) class ImageFactoryProxy(glance.domain.proxy.ImageFactory): def __init__(self, factory, context, db_api, store_utils): proxy_kwargs = {'context': context, 'db_api': db_api, 'store_utils': store_utils} super(ImageFactoryProxy, self).__init__(factory, proxy_class=ImageProxy, proxy_kwargs=proxy_kwargs) def new_image(self, **kwargs): tags = kwargs.pop('tags', set([])) _enforce_image_tag_quota(tags) return super(ImageFactoryProxy, 
self).new_image(tags=tags, **kwargs) class QuotaImageTagsProxy(object): def __init__(self, orig_set): if orig_set is None: orig_set = set([]) self.tags = orig_set def add(self, item): self.tags.add(item) _enforce_image_tag_quota(self.tags) def __cast__(self, *args, **kwargs): return self.tags.__cast__(*args, **kwargs) def __contains__(self, *args, **kwargs): return self.tags.__contains__(*args, **kwargs) def __eq__(self, other): return self.tags == other def __ne__(self, other): return not self.__eq__(other) def __iter__(self, *args, **kwargs): return self.tags.__iter__(*args, **kwargs) def __len__(self, *args, **kwargs): return self.tags.__len__(*args, **kwargs) def __getattr__(self, name): return getattr(self.tags, name) class ImageMemberFactoryProxy(glance.domain.proxy.ImageMembershipFactory): def __init__(self, member_factory, context, db_api, store_utils): self.db_api = db_api self.context = context proxy_kwargs = {'context': context, 'db_api': db_api, 'store_utils': store_utils} super(ImageMemberFactoryProxy, self).__init__( member_factory, proxy_class=ImageMemberProxy, proxy_kwargs=proxy_kwargs) def _enforce_image_member_quota(self, image): if CONF.image_member_quota < 0: # If value is negative, allow unlimited number of members return current_member_count = self.db_api.image_member_count(self.context, image.image_id) attempted = current_member_count + 1 maximum = CONF.image_member_quota if attempted > maximum: raise exception.ImageMemberLimitExceeded(attempted=attempted, maximum=maximum) def new_image_member(self, image, member_id): self._enforce_image_member_quota(image) return super(ImageMemberFactoryProxy, self).new_image_member(image, member_id) class QuotaImageLocationsProxy(object): def __init__(self, image, context, db_api): self.image = image self.context = context self.db_api = db_api self.locations = image.locations def __cast__(self, *args, **kwargs): return self.locations.__cast__(*args, **kwargs) def __contains__(self, *args, **kwargs): return 
self.locations.__contains__(*args, **kwargs) def __delitem__(self, *args, **kwargs): return self.locations.__delitem__(*args, **kwargs) def __delslice__(self, *args, **kwargs): return self.locations.__delslice__(*args, **kwargs) def __eq__(self, other): return self.locations == other def __ne__(self, other): return not self.__eq__(other) def __getitem__(self, *args, **kwargs): return self.locations.__getitem__(*args, **kwargs) def __iadd__(self, other): if not hasattr(other, '__iter__'): raise TypeError() self._check_user_storage_quota(other) return self.locations.__iadd__(other) def __iter__(self, *args, **kwargs): return self.locations.__iter__(*args, **kwargs) def __len__(self, *args, **kwargs): return self.locations.__len__(*args, **kwargs) def __setitem__(self, key, value): return self.locations.__setitem__(key, value) def count(self, *args, **kwargs): return self.locations.count(*args, **kwargs) def index(self, *args, **kwargs): return self.locations.index(*args, **kwargs) def pop(self, *args, **kwargs): return self.locations.pop(*args, **kwargs) def remove(self, *args, **kwargs): return self.locations.remove(*args, **kwargs) def reverse(self, *args, **kwargs): return self.locations.reverse(*args, **kwargs) def _check_user_storage_quota(self, locations): required_size = _calc_required_size(self.context, self.image, locations) glance.api.common.check_quota(self.context, required_size, self.db_api) _enforce_image_location_quota(self.image, locations) def __copy__(self): return type(self)(self.image, self.context, self.db_api) def __deepcopy__(self, memo): # NOTE(zhiyan): Only copy location entries, others can be reused. 
self.image.locations = copy.deepcopy(self.locations, memo) return type(self)(self.image, self.context, self.db_api) def append(self, object): self._check_user_storage_quota([object]) return self.locations.append(object) def insert(self, index, object): self._check_user_storage_quota([object]) return self.locations.insert(index, object) def extend(self, iter): self._check_user_storage_quota(iter) return self.locations.extend(iter) class ImageProxy(glance.domain.proxy.Image): def __init__(self, image, context, db_api, store_utils): self.image = image self.context = context self.db_api = db_api self.store_utils = store_utils super(ImageProxy, self).__init__(image) self.orig_props = set(image.extra_properties.keys()) def set_data(self, data, size=None): remaining = glance.api.common.check_quota( self.context, size, self.db_api, image_id=self.image.image_id) if remaining is not None: # NOTE(jbresnah) we are trying to enforce a quota, put a limit # reader on the data data = utils.LimitingReader(data, remaining) try: self.image.set_data(data, size=size) except exception.ImageSizeLimitExceeded: raise exception.StorageQuotaFull(image_size=size, remaining=remaining) # NOTE(jbresnah) If two uploads happen at the same time and neither # properly sets the size attribute[1] then there is a race condition # that will allow for the quota to be broken[2]. Thus we must recheck # the quota after the upload and thus after we know the size. # # Also, when an upload doesn't set the size properly then the call to # check_quota above returns None and so utils.LimitingReader is not # used above. Hence the store (e.g. filesystem store) may have to # download the entire file before knowing the actual file size. Here # also we need to check for the quota again after the image has been # downloaded to the store. # # [1] For e.g. when using chunked transfers the 'Content-Length' # header is not set. # [2] For e.g.: # - Upload 1 does not exceed quota but upload 2 exceeds quota. 
# Both uploads are to different locations # - Upload 2 completes before upload 1 and writes image.size. # - Immediately, upload 1 completes and (over)writes image.size # with the smaller size. # - Now, to glance, image has not exceeded quota but, in # reality, the quota has been exceeded. try: glance.api.common.check_quota( self.context, self.image.size, self.db_api, image_id=self.image.image_id) except exception.StorageQuotaFull: with excutils.save_and_reraise_exception(): LOG.info(_LI('Cleaning up %s after exceeding the quota.'), self.image.image_id) self.store_utils.safe_delete_from_backend( self.context, self.image.image_id, self.image.locations[0]) @property def tags(self): return QuotaImageTagsProxy(self.image.tags) @tags.setter def tags(self, value): _enforce_image_tag_quota(value) self.image.tags = value @property def locations(self): return QuotaImageLocationsProxy(self.image, self.context, self.db_api) @locations.setter def locations(self, value): _enforce_image_location_quota(self.image, value, is_setter=True) if not isinstance(value, (list, QuotaImageLocationsProxy)): raise exception.Invalid(_('Invalid locations: %s') % value) required_size = _calc_required_size(self.context, self.image, value) glance.api.common.check_quota( self.context, required_size, self.db_api, image_id=self.image.image_id) self.image.locations = value def added_new_properties(self): current_props = set(self.image.extra_properties.keys()) return bool(current_props.difference(self.orig_props)) class ImageMemberProxy(glance.domain.proxy.ImageMember): def __init__(self, image_member, context, db_api, store_utils): self.image_member = image_member self.context = context self.db_api = db_api self.store_utils = store_utils super(ImageMemberProxy, self).__init__(image_member) glance-16.0.0/glance/cmd/0000775000175100017510000000000013245511661015053 5ustar zuulzuul00000000000000glance-16.0.0/glance/cmd/registry.py0000666000175100017510000000545213245511421017277 0ustar 
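The race-condition note above explains why `ImageProxy.set_data` both wraps the incoming stream in a limiting reader and re-checks the quota after the upload, once the real size is known. A minimal sketch of the limiting-reader idea (a simplified stand-in for `glance.common.utils.LimitingReader`, with `IOError` in place of `exception.ImageSizeLimitExceeded`):

```python
import io

class LimitingReader(object):
    # Wraps a readable stream and raises once more than `limit` bytes
    # have been read, so an upload cannot silently blow past the quota.
    def __init__(self, data, limit):
        self.data = data
        self.limit = limit
        self.bytes_read = 0

    def read(self, length=None):
        res = self.data.read() if length is None else self.data.read(length)
        self.bytes_read += len(res)
        if self.bytes_read > self.limit:
            raise IOError('quota exceeded: read %d of %d allowed bytes'
                          % (self.bytes_read, self.limit))
        return res

reader = LimitingReader(io.BytesIO(b'x' * 10), limit=8)
try:
    while reader.read(4):
        pass
except IOError as e:
    print('rejected:', e)  # rejected: quota exceeded: read 10 of 8 allowed bytes
```

This guards the streaming path; the post-upload `check_quota` call in `set_data` still matters for the case where no size is known up front and for the concurrent-upload race described in the comment.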
zuulzuul00000000000000#!/usr/bin/env python # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Reference implementation server for Glance Registry """ import os import sys import eventlet from oslo_utils import encodeutils # Monkey patch socket and time # NOTE(jokke): As per the eventlet commit # b756447bab51046dfc6f1e0e299cc997ab343701 there's circular import happening # which can be solved making sure the hubs are properly and fully imported # before calling monkey_patch(). This is solved in eventlet 0.22.0 but we # need to address it before that is widely used around. eventlet.hubs.get_hub() eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=True) # If ../glance/__init__.py exists, add ../ to Python search path, so that # it will override what happens to be installed in /usr/(local/)lib/python... 
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')): sys.path.insert(0, possible_topdir) from oslo_config import cfg from oslo_log import log as logging import osprofiler.initializer from glance.common import config from glance.common import wsgi from glance import notifier CONF = cfg.CONF CONF.import_group("profiler", "glance.common.wsgi") logging.register_options(CONF) def main(): try: config.parse_args() config.set_config_defaults() wsgi.set_eventlet_hub() logging.setup(CONF, 'glance') notifier.set_defaults() if CONF.profiler.enabled: osprofiler.initializer.init_from_conf( conf=CONF, context={}, project="glance", service="registry", host=CONF.bind_host ) server = wsgi.Server() server.start(config.load_paste_app('glance-registry'), default_port=9191) server.wait() except RuntimeError as e: sys.exit("ERROR: %s" % encodeutils.exception_to_unicode(e)) if __name__ == '__main__': main() glance-16.0.0/glance/cmd/control.py0000666000175100017510000003306713245511421017112 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """ Helper script for starting/stopping/reloading Glance server programs. 
Thanks for some of the code, Swifties ;) """ from __future__ import print_function from __future__ import with_statement import argparse import fcntl import os import resource import signal import subprocess import sys import tempfile import time # If ../glance/__init__.py exists, add ../ to Python search path, so that # it will override what happens to be installed in /usr/(local/)lib/python... possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')): sys.path.insert(0, possible_topdir) from oslo_config import cfg from oslo_utils import units # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range from glance.common import config from glance.i18n import _ CONF = cfg.CONF ALL_COMMANDS = ['start', 'status', 'stop', 'shutdown', 'restart', 'reload', 'force-reload'] ALL_SERVERS = ['api', 'registry', 'scrubber'] RELOAD_SERVERS = ['glance-api', 'glance-registry'] GRACEFUL_SHUTDOWN_SERVERS = ['glance-api', 'glance-registry', 'glance-scrubber'] MAX_DESCRIPTORS = 32768 MAX_MEMORY = 2 * units.Gi # 2 GB USAGE = """%(prog)s [options] [CONFPATH] Where is one of: all, {0} And command is one of: {1} And CONFPATH is the optional configuration file to use.""".format( ', '.join(ALL_SERVERS), ', '.join(ALL_COMMANDS)) exitcode = 0 def gated_by(predicate): def wrap(f): def wrapped_f(*args): if predicate: return f(*args) else: return None return wrapped_f return wrap def pid_files(server, pid_file): pid_files = [] if pid_file: if os.path.exists(os.path.abspath(pid_file)): pid_files = [os.path.abspath(pid_file)] else: if os.path.exists('/var/run/glance/%s.pid' % server): pid_files = ['/var/run/glance/%s.pid' % server] for pid_file in pid_files: pid = int(open(pid_file).read().strip()) yield pid_file, pid def do_start(verb, pid_file, server, args): if verb != 'Respawn' and pid_file == CONF.pid_file: for pid_file, pid in 
pid_files(server, pid_file): if os.path.exists('/proc/%s' % pid): print(_("%(serv)s appears to already be running: %(pid)s") % {'serv': server, 'pid': pid_file}) return else: print(_("Removing stale pid file %s") % pid_file) os.unlink(pid_file) try: resource.setrlimit(resource.RLIMIT_NOFILE, (MAX_DESCRIPTORS, MAX_DESCRIPTORS)) resource.setrlimit(resource.RLIMIT_DATA, (MAX_MEMORY, MAX_MEMORY)) except ValueError: print(_('Unable to increase file descriptor limit. ' 'Running as non-root?')) os.environ['PYTHON_EGG_CACHE'] = '/tmp' def write_pid_file(pid_file, pid): with open(pid_file, 'w') as fp: fp.write('%d\n' % pid) def redirect_to_null(fds): with open(os.devnull, 'r+b') as nullfile: for desc in fds: # close fds try: os.dup2(nullfile.fileno(), desc) except OSError: pass def redirect_to_syslog(fds, server): log_cmd = 'logger' log_cmd_params = '-t "%s[%d]"' % (server, os.getpid()) process = subprocess.Popen([log_cmd, log_cmd_params], stdin=subprocess.PIPE) for desc in fds: # pipe to logger command try: os.dup2(process.stdin.fileno(), desc) except OSError: pass def redirect_stdio(server, capture_output): input = [sys.stdin.fileno()] output = [sys.stdout.fileno(), sys.stderr.fileno()] redirect_to_null(input) if capture_output: redirect_to_syslog(output, server) else: redirect_to_null(output) @gated_by(CONF.capture_output) def close_stdio_on_exec(): fds = [sys.stdin.fileno(), sys.stdout.fileno(), sys.stderr.fileno()] for desc in fds: # set close on exec flag fcntl.fcntl(desc, fcntl.F_SETFD, fcntl.FD_CLOEXEC) def launch(pid_file, conf_file=None, capture_output=False, await_time=0): args = [server] if conf_file: args += ['--config-file', conf_file] msg = (_('%(verb)sing %(serv)s with %(conf)s') % {'verb': verb, 'serv': server, 'conf': conf_file}) else: msg = (_('%(verb)sing %(serv)s') % {'verb': verb, 'serv': server}) print(msg) close_stdio_on_exec() pid = os.fork() if pid == 0: os.setsid() redirect_stdio(server, capture_output) try: os.execlp('%s' % server, *args) except 
OSError as e: msg = (_('unable to launch %(serv)s. Got error: %(e)s') % {'serv': server, 'e': e}) sys.exit(msg) sys.exit(0) else: write_pid_file(pid_file, pid) await_child(pid, await_time) return pid @gated_by(CONF.await_child) def await_child(pid, await_time): bail_time = time.time() + await_time while time.time() < bail_time: reported_pid, status = os.waitpid(pid, os.WNOHANG) if reported_pid == pid: global exitcode exitcode = os.WEXITSTATUS(status) break time.sleep(0.05) conf_file = None if args and os.path.exists(args[0]): conf_file = os.path.abspath(os.path.expanduser(args[0])) return launch(pid_file, conf_file, CONF.capture_output, CONF.await_child) def do_check_status(pid_file, server): if os.path.exists(pid_file): with open(pid_file, 'r') as pidfile: pid = pidfile.read().strip() print(_("%(serv)s (pid %(pid)s) is running...") % {'serv': server, 'pid': pid}) else: print(_("%s is stopped") % server) def get_pid_file(server, pid_file): pid_file = (os.path.abspath(pid_file) if pid_file else '/var/run/glance/%s.pid' % server) dir, file = os.path.split(pid_file) if not os.path.exists(dir): try: os.makedirs(dir) except OSError: pass if not os.access(dir, os.W_OK): fallback = os.path.join(tempfile.mkdtemp(), '%s.pid' % server) msg = (_('Unable to create pid file %(pid)s. 
Running as non-root?\n' 'Falling back to a temp file, you can stop %(service)s ' 'service using:\n' ' %(file)s %(server)s stop --pid-file %(fb)s') % {'pid': pid_file, 'service': server, 'file': __file__, 'server': server, 'fb': fallback}) print(msg) pid_file = fallback return pid_file def do_reload(pid_file, server): if server not in RELOAD_SERVERS: msg = (_('Reload of %(serv)s not supported') % {'serv': server}) sys.exit(msg) pid = None if os.path.exists(pid_file): with open(pid_file, 'r') as pidfile: pid = int(pidfile.read().strip()) else: msg = (_('Server %(serv)s is stopped') % {'serv': server}) sys.exit(msg) sig = signal.SIGHUP try: print(_('Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)') % {'serv': server, 'pid': pid, 'sig': sig}) os.kill(pid, sig) except OSError: print(_("Process %d not running") % pid) def do_stop(server, args, graceful=False): if graceful and server in GRACEFUL_SHUTDOWN_SERVERS: sig = signal.SIGHUP else: sig = signal.SIGTERM did_anything = False pfiles = pid_files(server, CONF.pid_file) for pid_file, pid in pfiles: did_anything = True try: os.unlink(pid_file) except OSError: pass try: print(_('Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)') % {'serv': server, 'pid': pid, 'sig': sig}) os.kill(pid, sig) except OSError: print(_("Process %d not running") % pid) for pid_file, pid in pfiles: for _junk in range(150): # 15 seconds if not os.path.exists('/proc/%s' % pid): break time.sleep(0.1) else: print(_('Waited 15 seconds for pid %(pid)s (%(file)s) to die;' ' giving up') % {'pid': pid, 'file': pid_file}) if not did_anything: print(_('%s is already stopped') % server) def add_command_parsers(subparsers): cmd_parser = argparse.ArgumentParser(add_help=False) cmd_subparsers = cmd_parser.add_subparsers(dest='command') for cmd in ALL_COMMANDS: parser = cmd_subparsers.add_parser(cmd) parser.add_argument('args', nargs=argparse.REMAINDER) for server in ALL_SERVERS: full_name = 'glance-' + server parser = subparsers.add_parser(server, 
parents=[cmd_parser]) parser.set_defaults(servers=[full_name]) parser = subparsers.add_parser(full_name, parents=[cmd_parser]) parser.set_defaults(servers=[full_name]) parser = subparsers.add_parser('all', parents=[cmd_parser]) parser.set_defaults(servers=['glance-' + s for s in ALL_SERVERS]) def main(): global exitcode opts = [ cfg.SubCommandOpt('server', title='Server types', help='Available server types', handler=add_command_parsers), cfg.StrOpt('pid-file', metavar='PATH', help='File to use as pid file. Default: ' '/var/run/glance/$server.pid.'), cfg.IntOpt('await-child', metavar='DELAY', default=0, help='Period to wait for service death ' 'in order to report exit code ' '(default is to not wait at all).'), cfg.BoolOpt('capture-output', default=False, help='Capture stdout/err in syslog ' 'instead of discarding it.'), cfg.BoolOpt('respawn', default=False, help='Restart service on unexpected death.'), ] CONF.register_cli_opts(opts) config.parse_args(usage=USAGE) @gated_by(CONF.await_child) @gated_by(CONF.respawn) def mutually_exclusive(): sys.stderr.write('--await-child and --respawn are mutually exclusive') sys.exit(1) mutually_exclusive() @gated_by(CONF.respawn) def anticipate_respawn(children): while children: pid, status = os.wait() if pid in children: (pid_file, server, args) = children.pop(pid) running = os.path.exists(pid_file) one_second_ago = time.time() - 1 bouncing = (running and os.path.getmtime(pid_file) >= one_second_ago) if running and not bouncing: args = (pid_file, server, args) new_pid = do_start('Respawn', *args) children[new_pid] = args else: rsn = 'bouncing' if bouncing else 'deliberately stopped' print(_('Suppressed respawn as %(serv)s was %(rsn)s.') % {'serv': server, 'rsn': rsn}) if CONF.server.command == 'start': children = {} for server in CONF.server.servers: pid_file = get_pid_file(server, CONF.pid_file) args = (pid_file, server, CONF.server.args) pid = do_start('Start', *args) children[pid] = args anticipate_respawn(children) if 
CONF.server.command == 'status': for server in CONF.server.servers: pid_file = get_pid_file(server, CONF.pid_file) do_check_status(pid_file, server) if CONF.server.command == 'stop': for server in CONF.server.servers: do_stop(server, CONF.server.args) if CONF.server.command == 'shutdown': for server in CONF.server.servers: do_stop(server, CONF.server.args, graceful=True) if CONF.server.command == 'restart': for server in CONF.server.servers: do_stop(server, CONF.server.args) for server in CONF.server.servers: pid_file = get_pid_file(server, CONF.pid_file) do_start('Restart', pid_file, server, CONF.server.args) if CONF.server.command in ('reload', 'force-reload'): for server in CONF.server.servers: pid_file = get_pid_file(server, CONF.pid_file) do_reload(pid_file, server) sys.exit(exitcode) glance-16.0.0/glance/cmd/replicator.py0000666000175100017510000006731513245511421017601 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2012 Michael Still and Canonical Inc # Copyright 2014 SoftLayer Technologies, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
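The pid-file lifecycle that `glance-control` implements above (write pid on start, read it back for status, signal and unlink on stop) can be sketched standalone. This is a simplified illustration, not the actual `control.py` code; the `SIGHUP`-for-graceful convention mirrors `do_stop`:

```python
import os
import signal
import tempfile

def write_pid_file(pid_file, pid):
    # Same on-disk format control.py uses: the pid followed by a newline.
    with open(pid_file, 'w') as fp:
        fp.write('%d\n' % pid)

def read_pid(pid_file):
    with open(pid_file) as fp:
        return int(fp.read().strip())

def stop(pid_file, graceful=False):
    # control.py sends SIGHUP for a graceful shutdown of servers that
    # support it, SIGTERM otherwise, then removes the pid file.
    sig = signal.SIGHUP if graceful else signal.SIGTERM
    pid = read_pid(pid_file)
    os.kill(pid, sig)
    os.unlink(pid_file)

pid_file = os.path.join(tempfile.mkdtemp(), 'glance-api.pid')
write_pid_file(pid_file, os.getpid())
print(read_pid(pid_file) == os.getpid())  # True
```

Note that the real `do_stop` also polls `/proc/<pid>` for up to 15 seconds to confirm the process actually exited before giving up.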
from __future__ import print_function import os import sys from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import uuidutils import six from six.moves import http_client as http import six.moves.urllib.parse as urlparse from webob import exc from glance.common import config from glance.common import exception from glance.common import utils from glance.i18n import _, _LE, _LI, _LW LOG = logging.getLogger(__name__) # NOTE: positional arguments will be parsed before until # this bug is corrected https://bugs.launchpad.net/oslo.config/+bug/1392428 cli_opts = [ cfg.IntOpt('chunksize', short='c', default=65536, help="Amount of data to transfer per HTTP write."), cfg.StrOpt('dontreplicate', short='D', default=('created_at date deleted_at location updated_at'), help="List of fields to not replicate."), cfg.BoolOpt('metaonly', short='m', default=False, help="Only replicate metadata, not images."), cfg.StrOpt('token', short='t', default='', help=("Pass in your authentication token if you have " "one. If you use this option the same token is " "used for both the source and the target.")), cfg.StrOpt('mastertoken', short='M', default='', deprecated_since='Pike', deprecated_reason='use sourcetoken instead', help=("Pass in your authentication token if you have " "one. This is the token used for the source system.")), cfg.StrOpt('slavetoken', short='S', default='', deprecated_since='Pike', deprecated_reason='use targettoken instead', help=("Pass in your authentication token if you have " "one. 
This is the token used for the target system.")), cfg.StrOpt('command', positional=True, help="Command to be given to replicator"), cfg.MultiStrOpt('args', positional=True, help="Arguments for the command"), ] CONF = cfg.CONF CONF.register_cli_opts(cli_opts) # TODO(stevelle) Remove deprecated opts some time after Queens CONF.register_opt( cfg.StrOpt('sourcetoken', default='', deprecated_opts=[cfg.DeprecatedOpt('mastertoken')], help=("Pass in your authentication token if you have " "one. This is the token used for the source."))) CONF.register_opt( cfg.StrOpt('targettoken', default='', deprecated_opts=[cfg.DeprecatedOpt('slavetoken')], help=("Pass in your authentication token if you have " "one. This is the token used for the target."))) logging.register_options(CONF) CONF.set_default(name='use_stderr', default=True) # If ../glance/__init__.py exists, add ../ to Python search path, so that # it will override what happens to be installed in /usr/(local/)lib/python... possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')): sys.path.insert(0, possible_topdir) COMMANDS = """Commands: help Output help for one of the commands below compare What is missing from the target glance? dump Dump the contents of a glance instance to local disk. livecopy Load the contents of one glance instance into another. load Load the contents of a local directory into glance. size Determine the size of a glance instance if dumped to disk. """ IMAGE_ALREADY_PRESENT_MESSAGE = _('The image %s is already present on ' 'the target, but our check for it did ' 'not find it. This indicates that we ' 'do not have permissions to see all ' 'the images on the target server.') class ImageService(object): def __init__(self, conn, auth_token): """Initialize the ImageService. 
conn: a http_client.HTTPConnection to the glance server auth_token: authentication token to pass in the x-auth-token header """ self.auth_token = auth_token self.conn = conn def _http_request(self, method, url, headers, body, ignore_result_body=False): """Perform an HTTP request against the server. method: the HTTP method to use url: the URL to request (not including server portion) headers: headers for the request body: body to send with the request ignore_result_body: the body of the result will be ignored Returns: a http_client response object """ if self.auth_token: headers.setdefault('x-auth-token', self.auth_token) LOG.debug('Request: %(method)s http://%(server)s:%(port)s' '%(url)s with headers %(headers)s', {'method': method, 'server': self.conn.host, 'port': self.conn.port, 'url': url, 'headers': repr(headers)}) self.conn.request(method, url, body, headers) response = self.conn.getresponse() headers = self._header_list_to_dict(response.getheaders()) code = response.status code_description = http.responses[code] LOG.debug('Response: %(code)s %(status)s %(headers)s', {'code': code, 'status': code_description, 'headers': repr(headers)}) if code == http.BAD_REQUEST: raise exc.HTTPBadRequest( explanation=response.read()) if code == http.INTERNAL_SERVER_ERROR: raise exc.HTTPInternalServerError( explanation=response.read()) if code == http.UNAUTHORIZED: raise exc.HTTPUnauthorized( explanation=response.read()) if code == http.FORBIDDEN: raise exc.HTTPForbidden( explanation=response.read()) if code == http.CONFLICT: raise exc.HTTPConflict( explanation=response.read()) if ignore_result_body: # NOTE: because we are pipelining requests through a single HTTP # connection, http_client requires that we read the response body # before we can make another request. If the caller knows they # don't care about the body, they can ask us to do that for them. response.read() return response def get_images(self): """Return a detailed list of images. 
Yields a series of images as dicts containing metadata. """ params = {'is_public': None} while True: url = '/v1/images/detail' query = urlparse.urlencode(params) if query: url += '?%s' % query response = self._http_request('GET', url, {}, '') result = jsonutils.loads(response.read()) if not result or 'images' not in result or not result['images']: return for image in result.get('images', []): params['marker'] = image['id'] yield image def get_image(self, image_uuid): """Fetch image data from glance. image_uuid: the id of an image Returns: a http_client Response object where the body is the image. """ url = '/v1/images/%s' % image_uuid return self._http_request('GET', url, {}, '') @staticmethod def _header_list_to_dict(headers): """Expand a list of headers into a dictionary. headers: a list of [(key, value), (key, value), (key, value)] Returns: a dictionary representation of the list """ d = {} for (header, value) in headers: if header.startswith('x-image-meta-property-'): prop = header.replace('x-image-meta-property-', '') d.setdefault('properties', {}) d['properties'][prop] = value else: d[header.replace('x-image-meta-', '')] = value return d def get_image_meta(self, image_uuid): """Return the metadata for a single image. image_uuid: the id of an image Returns: image metadata as a dictionary """ url = '/v1/images/%s' % image_uuid response = self._http_request('HEAD', url, {}, '', ignore_result_body=True) return self._header_list_to_dict(response.getheaders()) @staticmethod def _dict_to_headers(d): """Convert a dictionary into one suitable for a HTTP request. d: a dictionary Returns: the same dictionary, with x-image-meta added to every key """ h = {} for key in d: if key == 'properties': for subkey in d[key]: if d[key][subkey] is None: h['x-image-meta-property-%s' % subkey] = '' else: h['x-image-meta-property-%s' % subkey] = d[key][subkey] else: h['x-image-meta-%s' % key] = d[key] return h def add_image(self, image_meta, image_data): """Upload an image. 
image_meta: image metadata as a dictionary image_data: image data as a object with a read() method Returns: a tuple of (http response headers, http response body) """ url = '/v1/images' headers = self._dict_to_headers(image_meta) headers['Content-Type'] = 'application/octet-stream' headers['Content-Length'] = int(image_meta['size']) response = self._http_request('POST', url, headers, image_data) headers = self._header_list_to_dict(response.getheaders()) LOG.debug('Image post done') body = response.read() return headers, body def add_image_meta(self, image_meta): """Update image metadata. image_meta: image metadata as a dictionary Returns: a tuple of (http response headers, http response body) """ url = '/v1/images/%s' % image_meta['id'] headers = self._dict_to_headers(image_meta) headers['Content-Type'] = 'application/octet-stream' response = self._http_request('PUT', url, headers, '') headers = self._header_list_to_dict(response.getheaders()) LOG.debug('Image post done') body = response.read() return headers, body def get_image_service(): """Get a copy of the image service. This is done like this to make it easier to mock out ImageService. """ return ImageService def _human_readable_size(num, suffix='B'): for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']: if abs(num) < 1024.0: return "%3.1f %s%s" % (num, unit, suffix) num /= 1024.0 return "%.1f %s%s" % (num, 'Yi', suffix) def replication_size(options, args): """%(prog)s size Determine the size of a glance instance if dumped to disk. server:port: the location of the glance instance. 
""" # Make sure server info is provided if args is None or len(args) < 1: raise TypeError(_("Too few arguments.")) server, port = utils.parse_valid_host_port(args.pop()) total_size = 0 count = 0 imageservice = get_image_service() client = imageservice(http.HTTPConnection(server, port), options.targettoken) for image in client.get_images(): LOG.debug('Considering image: %(image)s', {'image': image}) if image['status'] == 'active': total_size += int(image['size']) count += 1 print(_('Total size is %(size)d bytes (%(human_size)s) across ' '%(img_count)d images') % {'size': total_size, 'human_size': _human_readable_size(total_size), 'img_count': count}) def replication_dump(options, args): """%(prog)s dump Dump the contents of a glance instance to local disk. server:port: the location of the glance instance. path: a directory on disk to contain the data. """ # Make sure server and path are provided if len(args) < 2: raise TypeError(_("Too few arguments.")) path = args.pop() server, port = utils.parse_valid_host_port(args.pop()) imageservice = get_image_service() client = imageservice(http.HTTPConnection(server, port), options.sourcetoken) for image in client.get_images(): LOG.debug('Considering: %(image_id)s (%(image_name)s) ' '(%(image_size)d bytes)', {'image_id': image['id'], 'image_name': image.get('name', '--unnamed--'), 'image_size': image['size']}) data_path = os.path.join(path, image['id']) data_filename = data_path + '.img' if not os.path.exists(data_path): LOG.info(_LI('Storing: %(image_id)s (%(image_name)s)' ' (%(image_size)d bytes) in %(data_filename)s'), {'image_id': image['id'], 'image_name': image.get('name', '--unnamed--'), 'image_size': image['size'], 'data_filename': data_filename}) # Dump glance information if six.PY3: f = open(data_path, 'w', encoding='utf-8') else: f = open(data_path, 'w') with f: f.write(jsonutils.dumps(image)) if image['status'] == 'active' and not options.metaonly: # Now fetch the image. 
The metadata returned in headers here # is the same as that which we got from the detailed images # request earlier, so we can ignore it here. Note that we also # only dump active images. LOG.debug('Image %s is active', image['id']) image_response = client.get_image(image['id']) with open(data_filename, 'wb') as f: while True: chunk = image_response.read(options.chunksize) if not chunk: break f.write(chunk) def _dict_diff(a, b): """A one way dictionary diff. a: a dictionary b: a dictionary Returns: True if the dictionaries are different """ # Only things the source has which the target lacks matter if set(a.keys()) - set(b.keys()): LOG.debug('metadata diff -- source has extra keys: %(keys)s', {'keys': ' '.join(set(a.keys()) - set(b.keys()))}) return True for key in a: if str(a[key]) != str(b[key]): LOG.debug('metadata diff -- value differs for key ' '%(key)s: source "%(source_value)s" vs ' 'target "%(target_value)s"', {'key': key, 'source_value': a[key], 'target_value': b[key]}) return True return False def replication_load(options, args): """%(prog)s load Load the contents of a local directory into glance. server:port: the location of the glance instance. path: a directory on disk containing the data. 
""" # Make sure server and path are provided if len(args) < 2: raise TypeError(_("Too few arguments.")) path = args.pop() server, port = utils.parse_valid_host_port(args.pop()) imageservice = get_image_service() client = imageservice(http.HTTPConnection(server, port), options.targettoken) updated = [] for ent in os.listdir(path): if uuidutils.is_uuid_like(ent): image_uuid = ent LOG.info(_LI('Considering: %s'), image_uuid) meta_file_name = os.path.join(path, image_uuid) with open(meta_file_name) as meta_file: meta = jsonutils.loads(meta_file.read()) # Remove keys which don't make sense for replication for key in options.dontreplicate.split(' '): if key in meta: LOG.debug('Stripping %(header)s from saved ' 'metadata', {'header': key}) del meta[key] if _image_present(client, image_uuid): # NOTE(mikal): Perhaps we just need to update the metadata? # Note that we don't attempt to change an image file once it # has been uploaded. LOG.debug('Image %s already present', image_uuid) headers = client.get_image_meta(image_uuid) for key in options.dontreplicate.split(' '): if key in headers: LOG.debug('Stripping %(header)s from target ' 'metadata', {'header': key}) del headers[key] if _dict_diff(meta, headers): LOG.info(_LI('Image %s metadata has changed'), image_uuid) headers, body = client.add_image_meta(meta) _check_upload_response_headers(headers, body) updated.append(meta['id']) else: if not os.path.exists(os.path.join(path, image_uuid + '.img')): LOG.debug('%s dump is missing image data, skipping', image_uuid) continue # Upload the image itself with open(os.path.join(path, image_uuid + '.img')) as img_file: try: headers, body = client.add_image(meta, img_file) _check_upload_response_headers(headers, body) updated.append(meta['id']) except exc.HTTPConflict: LOG.error(_LE(IMAGE_ALREADY_PRESENT_MESSAGE) % image_uuid) # noqa return updated def replication_livecopy(options, args): """%(prog)s livecopy Load the contents of one glance instance into another. 
fromserver:port: the location of the source glance instance. toserver:port: the location of the target glance instance. """ # Make sure from-server and to-server are provided if len(args) < 2: raise TypeError(_("Too few arguments.")) imageservice = get_image_service() target_server, target_port = utils.parse_valid_host_port(args.pop()) target_conn = http.HTTPConnection(target_server, target_port) target_client = imageservice(target_conn, options.targettoken) source_server, source_port = utils.parse_valid_host_port(args.pop()) source_conn = http.HTTPConnection(source_server, source_port) source_client = imageservice(source_conn, options.sourcetoken) updated = [] for image in source_client.get_images(): LOG.debug('Considering %(id)s', {'id': image['id']}) for key in options.dontreplicate.split(' '): if key in image: LOG.debug('Stripping %(header)s from source metadata', {'header': key}) del image[key] if _image_present(target_client, image['id']): # NOTE(mikal): Perhaps we just need to update the metadata? # Note that we don't attempt to change an image file once it # has been uploaded. 
headers = target_client.get_image_meta(image['id']) if headers['status'] == 'active': for key in options.dontreplicate.split(' '): if key in image: LOG.debug('Stripping %(header)s from source ' 'metadata', {'header': key}) del image[key] if key in headers: LOG.debug('Stripping %(header)s from target ' 'metadata', {'header': key}) del headers[key] if _dict_diff(image, headers): LOG.info(_LI('Image %(image_id)s (%(image_name)s) ' 'metadata has changed'), {'image_id': image['id'], 'image_name': image.get('name', '--unnamed--')}) headers, body = target_client.add_image_meta(image) _check_upload_response_headers(headers, body) updated.append(image['id']) elif image['status'] == 'active': LOG.info(_LI('Image %(image_id)s (%(image_name)s) ' '(%(image_size)d bytes) ' 'is being synced'), {'image_id': image['id'], 'image_name': image.get('name', '--unnamed--'), 'image_size': image['size']}) if not options.metaonly: image_response = source_client.get_image(image['id']) try: headers, body = target_client.add_image(image, image_response) _check_upload_response_headers(headers, body) updated.append(image['id']) except exc.HTTPConflict: LOG.error(_LE(IMAGE_ALREADY_PRESENT_MESSAGE) % image['id']) # noqa return updated def replication_compare(options, args): """%(prog)s compare Compare the contents of fromserver with those of toserver. fromserver:port: the location of the source glance instance. toserver:port: the location of the target glance instance. 
""" # Make sure from-server and to-server are provided if len(args) < 2: raise TypeError(_("Too few arguments.")) imageservice = get_image_service() target_server, target_port = utils.parse_valid_host_port(args.pop()) target_conn = http.HTTPConnection(target_server, target_port) target_client = imageservice(target_conn, options.targettoken) source_server, source_port = utils.parse_valid_host_port(args.pop()) source_conn = http.HTTPConnection(source_server, source_port) source_client = imageservice(source_conn, options.sourcetoken) differences = {} for image in source_client.get_images(): if _image_present(target_client, image['id']): headers = target_client.get_image_meta(image['id']) for key in options.dontreplicate.split(' '): if key in image: LOG.debug('Stripping %(header)s from source metadata', {'header': key}) del image[key] if key in headers: LOG.debug('Stripping %(header)s from target metadata', {'header': key}) del headers[key] for key in image: if image[key] != headers.get(key): LOG.warn(_LW('%(image_id)s: field %(key)s differs ' '(source is %(source_value)s, destination ' 'is %(target_value)s)') % {'image_id': image['id'], 'key': key, 'source_value': image[key], 'target_value': headers.get(key, 'undefined')}) differences[image['id']] = 'diff' else: LOG.debug('%(image_id)s is identical', {'image_id': image['id']}) elif image['status'] == 'active': LOG.warn(_LW('Image %(image_id)s ("%(image_name)s") ' 'entirely missing from the destination') % {'image_id': image['id'], 'image_name': image.get('name', '--unnamed')}) differences[image['id']] = 'missing' return differences def _check_upload_response_headers(headers, body): """Check that the headers of an upload are reasonable. 
headers: the headers from the upload body: the body from the upload """ if 'status' not in headers: try: d = jsonutils.loads(body) if 'image' in d and 'status' in d['image']: return except Exception: raise exception.UploadException(body) def _image_present(client, image_uuid): """Check if an image is present in glance. client: the ImageService image_uuid: the image uuid to check Returns: True if the image is present """ headers = client.get_image_meta(image_uuid) return 'status' in headers def print_help(options, args): """Print help specific to a command. options: the parsed command line options args: the command line """ if not args: print(COMMANDS) else: command_name = args.pop() command = lookup_command(command_name) print(command.__doc__ % {'prog': os.path.basename(sys.argv[0])}) def lookup_command(command_name): """Lookup a command. command_name: the command name Returns: a method which implements that command """ BASE_COMMANDS = {'help': print_help} REPLICATION_COMMANDS = {'compare': replication_compare, 'dump': replication_dump, 'livecopy': replication_livecopy, 'load': replication_load, 'size': replication_size} commands = {} for command_set in (BASE_COMMANDS, REPLICATION_COMMANDS): commands.update(command_set) try: command = commands[command_name] except KeyError: if command_name: sys.exit(_("Unknown command: %s") % command_name) else: command = commands['help'] return command def main(): """The main function.""" try: config.parse_args() except RuntimeError as e: sys.exit("ERROR: %s" % encodeutils.exception_to_unicode(e)) except SystemExit as e: sys.exit("Please specify one command") # Setup logging logging.setup(CONF, 'glance') if CONF.token: CONF.sourcetoken = CONF.token CONF.targettoken = CONF.token command = lookup_command(CONF.command) try: command(CONF, CONF.args) except TypeError as e: LOG.error(_LE(command.__doc__) % {'prog': command.__name__}) # noqa sys.exit("ERROR: %s" % encodeutils.exception_to_unicode(e)) except ValueError as e: 
        LOG.error(_LE(command.__doc__) % {'prog': command.__name__})  # noqa
        sys.exit("ERROR: %s" % encodeutils.exception_to_unicode(e))


if __name__ == '__main__':
    main()

glance-16.0.0/glance/cmd/cache_manage.py

#!/usr/bin/env python

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
A simple cache management utility for Glance.
"""
from __future__ import print_function

import argparse
import collections
import datetime
import functools
import os
import sys
import time

from oslo_utils import encodeutils
import prettytable
from six.moves import input

# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')): sys.path.insert(0, possible_topdir) from glance.common import exception import glance.image_cache.client from glance.version import version_info as version SUCCESS = 0 FAILURE = 1 def catch_error(action): """Decorator to provide sensible default error handling for actions.""" def wrap(func): @functools.wraps(func) def wrapper(*args, **kwargs): try: ret = func(*args, **kwargs) return SUCCESS if ret is None else ret except exception.NotFound: options = args[0] print("Cache management middleware not enabled on host %s" % options.host) return FAILURE except exception.Forbidden: print("Not authorized to make this request.") return FAILURE except Exception as e: options = args[0] if options.debug: raise print("Failed to %s. Got error:" % action) pieces = encodeutils.exception_to_unicode(e).split('\n') for piece in pieces: print(piece) return FAILURE return wrapper return wrap @catch_error('show cached images') def list_cached(args): """%(prog)s list-cached [options] List all images currently cached. """ client = get_client(args) images = client.get_cached_images() if not images: print("No cached images.") return SUCCESS print("Found %d cached images..." 
% len(images)) pretty_table = prettytable.PrettyTable(("ID", "Last Accessed (UTC)", "Last Modified (UTC)", "Size", "Hits")) pretty_table.align['Size'] = "r" pretty_table.align['Hits'] = "r" for image in images: last_accessed = image['last_accessed'] if last_accessed == 0: last_accessed = "N/A" else: last_accessed = datetime.datetime.utcfromtimestamp( last_accessed).isoformat() pretty_table.add_row(( image['image_id'], last_accessed, datetime.datetime.utcfromtimestamp( image['last_modified']).isoformat(), image['size'], image['hits'])) print(pretty_table.get_string()) return SUCCESS @catch_error('show queued images') def list_queued(args): """%(prog)s list-queued [options] List all images currently queued for caching. """ client = get_client(args) images = client.get_queued_images() if not images: print("No queued images.") return SUCCESS print("Found %d queued images..." % len(images)) pretty_table = prettytable.PrettyTable(("ID",)) for image in images: pretty_table.add_row((image,)) print(pretty_table.get_string()) @catch_error('queue the specified image for caching') def queue_image(args): """%(prog)s queue-image [options] Queues an image for caching. """ if len(args.command) == 2: image_id = args.command[1] else: print("Please specify one and only ID of the image you wish to ") print("queue from the cache as the first argument") return FAILURE if (not args.force and not user_confirm("Queue image %(image_id)s for caching?" % {'image_id': image_id}, default=False)): return SUCCESS client = get_client(args) client.queue_image_for_caching(image_id) if args.verbose: print("Queued image %(image_id)s for caching" % {'image_id': image_id}) return SUCCESS @catch_error('delete the specified cached image') def delete_cached_image(args): """%(prog)s delete-cached-image [options] Deletes an image from the cache. 
""" if len(args.command) == 2: image_id = args.command[1] else: print("Please specify one and only ID of the image you wish to ") print("delete from the cache as the first argument") return FAILURE if (not args.force and not user_confirm("Delete cached image %(image_id)s?" % {'image_id': image_id}, default=False)): return SUCCESS client = get_client(args) client.delete_cached_image(image_id) if args.verbose: print("Deleted cached image %(image_id)s" % {'image_id': image_id}) return SUCCESS @catch_error('Delete all cached images') def delete_all_cached_images(args): """%(prog)s delete-all-cached-images [options] Remove all images from the cache. """ if (not args.force and not user_confirm("Delete all cached images?", default=False)): return SUCCESS client = get_client(args) num_deleted = client.delete_all_cached_images() if args.verbose: print("Deleted %(num_deleted)s cached images" % {'num_deleted': num_deleted}) return SUCCESS @catch_error('delete the specified queued image') def delete_queued_image(args): """%(prog)s delete-queued-image [options] Deletes an image from the cache. """ if len(args.command) == 2: image_id = args.command[1] else: print("Please specify one and only ID of the image you wish to ") print("delete from the cache as the first argument") return FAILURE if (not args.force and not user_confirm("Delete queued image %(image_id)s?" % {'image_id': image_id}, default=False)): return SUCCESS client = get_client(args) client.delete_queued_image(image_id) if args.verbose: print("Deleted queued image %(image_id)s" % {'image_id': image_id}) return SUCCESS @catch_error('Delete all queued images') def delete_all_queued_images(args): """%(prog)s delete-all-queued-images [options] Remove all images from the cache queue. 
""" if (not args.force and not user_confirm("Delete all queued images?", default=False)): return SUCCESS client = get_client(args) num_deleted = client.delete_all_queued_images() if args.verbose: print("Deleted %(num_deleted)s queued images" % {'num_deleted': num_deleted}) return SUCCESS def get_client(options): """Return a new client object to a Glance server. specified by the --host and --port options supplied to the CLI """ return glance.image_cache.client.get_client( host=options.host, port=options.port, username=options.os_username, password=options.os_password, tenant=options.os_tenant_name, auth_url=options.os_auth_url, auth_strategy=options.os_auth_strategy, auth_token=options.os_auth_token, region=options.os_region_name, insecure=options.insecure) def env(*vars, **kwargs): """Search for the first defined of possibly many env vars. Returns the first environment variable defined in vars, or returns the default defined in kwargs. """ for v in vars: value = os.environ.get(v) if value: return value return kwargs.get('default', '') def print_help(args): """ Print help specific to a command """ command = lookup_command(args.command[1]) print(command.__doc__ % {'prog': os.path.basename(sys.argv[0])}) def parse_args(parser): """Set up the CLI and config-file options that may be parsed and program commands. 
:param parser: The option parser """ parser.add_argument('command', default='help', nargs='*', help='The command to execute') parser.add_argument('-v', '--verbose', default=False, action="store_true", help="Print more verbose output.") parser.add_argument('-d', '--debug', default=False, action="store_true", help="Print debugging output.") parser.add_argument('-H', '--host', metavar="ADDRESS", default="0.0.0.0", help="Address of Glance API host.") parser.add_argument('-p', '--port', dest="port", metavar="PORT", type=int, default=9292, help="Port the Glance API host listens on.") parser.add_argument('-k', '--insecure', dest="insecure", default=False, action="store_true", help='Explicitly allow glance to perform "insecure" ' "SSL (https) requests. The server's certificate " "will not be verified against any certificate " "authorities. This option should be used with " "caution.") parser.add_argument('-f', '--force', dest="force", default=False, action="store_true", help="Prevent select actions from requesting " "user confirmation.") parser.add_argument('--os-auth-token', dest='os_auth_token', default=env('OS_AUTH_TOKEN'), help='Defaults to env[OS_AUTH_TOKEN].') parser.add_argument('-A', '--os_auth_token', '--auth_token', dest='os_auth_token', help=argparse.SUPPRESS) parser.add_argument('--os-username', dest='os_username', default=env('OS_USERNAME'), help='Defaults to env[OS_USERNAME].') parser.add_argument('-I', '--os_username', dest='os_username', help=argparse.SUPPRESS) parser.add_argument('--os-password', dest='os_password', default=env('OS_PASSWORD'), help='Defaults to env[OS_PASSWORD].') parser.add_argument('-K', '--os_password', dest='os_password', help=argparse.SUPPRESS) parser.add_argument('--os-region-name', dest='os_region_name', default=env('OS_REGION_NAME'), help='Defaults to env[OS_REGION_NAME].') parser.add_argument('-R', '--os_region_name', dest='os_region_name', help=argparse.SUPPRESS) parser.add_argument('--os-tenant-id', dest='os_tenant_id', 
default=env('OS_TENANT_ID'), help='Defaults to env[OS_TENANT_ID].') parser.add_argument('--os_tenant_id', dest='os_tenant_id', help=argparse.SUPPRESS) parser.add_argument('--os-tenant-name', dest='os_tenant_name', default=env('OS_TENANT_NAME'), help='Defaults to env[OS_TENANT_NAME].') parser.add_argument('-T', '--os_tenant_name', dest='os_tenant_name', help=argparse.SUPPRESS) parser.add_argument('--os-auth-url', default=env('OS_AUTH_URL'), help='Defaults to env[OS_AUTH_URL].') parser.add_argument('-N', '--os_auth_url', dest='os_auth_url', help=argparse.SUPPRESS) parser.add_argument('-S', '--os_auth_strategy', dest="os_auth_strategy", metavar="STRATEGY", help="Authentication strategy (keystone or noauth).") version_string = version.cached_version_string() parser.add_argument('--version', action='version', version=version_string) return parser.parse_args() CACHE_COMMANDS = collections.OrderedDict() CACHE_COMMANDS['help'] = ( print_help, 'Output help for one of the commands below') CACHE_COMMANDS['list-cached'] = ( list_cached, 'List all images currently cached') CACHE_COMMANDS['list-queued'] = ( list_queued, 'List all images currently queued for caching') CACHE_COMMANDS['queue-image'] = ( queue_image, 'Queue an image for caching') CACHE_COMMANDS['delete-cached-image'] = ( delete_cached_image, 'Purges an image from the cache') CACHE_COMMANDS['delete-all-cached-images'] = ( delete_all_cached_images, 'Removes all images from the cache') CACHE_COMMANDS['delete-queued-image'] = ( delete_queued_image, 'Deletes an image from the cache queue') CACHE_COMMANDS['delete-all-queued-images'] = ( delete_all_queued_images, 'Deletes all images from the cache queue') def _format_command_help(): """Formats the help string for subcommands.""" help_msg = "Commands:\n\n" for command, info in CACHE_COMMANDS.items(): if command == 'help': command = 'help ' help_msg += " %-28s%s\n\n" % (command, info[1]) return help_msg def lookup_command(command_name): try: command = 
        CACHE_COMMANDS[command_name]
        return command[0]
    except KeyError:
        print('\nError: "%s" is not a valid command.\n' % command_name)
        print(_format_command_help())
        sys.exit("Unknown command: %(cmd_name)s"
                 % {'cmd_name': command_name})


def user_confirm(prompt, default=False):
    """Yes/No question dialog with user.

    :param prompt: question/statement to present to user (string)
    :param default: boolean value to return if empty string is received as
                    response to prompt
    """
    if default:
        prompt_default = "[Y/n]"
    else:
        prompt_default = "[y/N]"

    answer = input("%s %s " % (prompt, prompt_default))

    if answer == "":
        return default
    else:
        return answer.lower() in ("yes", "y")


def main():
    parser = argparse.ArgumentParser(
        description=_format_command_help(),
        formatter_class=argparse.RawDescriptionHelpFormatter)
    args = parse_args(parser)

    if args.command[0] == 'help' and len(args.command) == 1:
        parser.print_help()
        return

    # Look up the command to run
    command = lookup_command(args.command[0])

    try:
        start_time = time.time()
        result = command(args)
        end_time = time.time()
        if args.verbose:
            print("Completed in %-0.4f sec." % (end_time - start_time))
        sys.exit(result)
    except (RuntimeError, NotImplementedError) as e:
        sys.exit("ERROR: %s" % e)


if __name__ == '__main__':
    main()

glance-16.0.0/glance/cmd/api.py

#!/usr/bin/env python

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Glance API Server
"""

import os
import sys

import eventlet
from oslo_utils import encodeutils

# Monkey patch socket, time, select, threads
# NOTE(jokke): As per the eventlet commit
# b756447bab51046dfc6f1e0e299cc997ab343701 there's circular import happening
# which can be solved making sure the hubs are properly and fully imported
# before calling monkey_patch(). This is solved in eventlet 0.22.0 but we
# need to address it before that is widely used around.
eventlet.hubs.get_hub()
eventlet.patcher.monkey_patch(all=False, socket=True, time=True, select=True,
                              thread=True, os=True)

# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')): sys.path.insert(0, possible_topdir) import glance_store from oslo_config import cfg from oslo_log import log as logging import osprofiler.initializer from glance.common import config from glance.common import exception from glance.common import wsgi from glance import notifier CONF = cfg.CONF CONF.import_group("profiler", "glance.common.wsgi") logging.register_options(CONF) KNOWN_EXCEPTIONS = (RuntimeError, exception.WorkerCreationFailure, glance_store.exceptions.BadStoreConfiguration, ValueError) def fail(e): global KNOWN_EXCEPTIONS return_code = KNOWN_EXCEPTIONS.index(type(e)) + 1 sys.stderr.write("ERROR: %s\n" % encodeutils.exception_to_unicode(e)) sys.exit(return_code) def main(): try: config.parse_args() config.set_config_defaults() wsgi.set_eventlet_hub() logging.setup(CONF, 'glance') notifier.set_defaults() if CONF.profiler.enabled: osprofiler.initializer.init_from_conf( conf=CONF, context={}, project="glance", service="api", host=CONF.bind_host ) server = wsgi.Server(initialize_glance_store=True) server.start(config.load_paste_app('glance-api'), default_port=9292) server.wait() except KNOWN_EXCEPTIONS as e: fail(e) if __name__ == '__main__': main() glance-16.0.0/glance/cmd/scrubber.py0000666000175100017510000000477613245511421017246 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Glance Scrub Service
"""

import os
import sys

# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                   os.pardir,
                                   os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')):
    sys.path.insert(0, possible_topdir)

import eventlet
import glance_store
from oslo_config import cfg
from oslo_log import log as logging

from glance.common import config
from glance import scrubber

# NOTE(jokke): As per the eventlet commit
# b756447bab51046dfc6f1e0e299cc997ab343701 there's circular import happening
# which can be solved making sure the hubs are properly and fully imported
# before calling monkey_patch(). This is solved in eventlet 0.22.0 but we
# need to address it before that is widely used around.
eventlet.hubs.get_hub()
eventlet.patcher.monkey_patch(all=False, socket=True, time=True,
                              select=True, thread=True, os=True)

CONF = cfg.CONF
logging.register_options(CONF)
CONF.set_default(name='use_stderr', default=True)


def main():
    CONF.register_cli_opts(scrubber.scrubber_cmd_cli_opts)
    CONF.register_opts(scrubber.scrubber_cmd_opts)
    try:
        config.parse_args()
        logging.setup(CONF, 'glance')

        glance_store.register_opts(config.CONF)
        glance_store.create_stores(config.CONF)
        glance_store.verify_default_store()

        app = scrubber.Scrubber(glance_store)

        if CONF.daemon:
            server = scrubber.Daemon(CONF.wakeup_time)
            server.start(app)
            server.wait()
        else:
            app.run()
    except RuntimeError as e:
        sys.exit("ERROR: %s" % e)


if __name__ == '__main__':
    main()


# ===== glance-16.0.0/glance/cmd/cache_cleaner.py =====

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Glance Image Cache Invalid Cache Entry and Stalled Image cleaner

This is meant to be run as a periodic task from cron.

If something goes wrong while we're caching an image (for example the fetch
times out, or an exception is raised), we create an 'invalid' entry. These
entries are left around for debugging purposes. However, after some period of
time, we want to clean these up.

Also, if an incomplete image hangs around past the image_cache_stall_time
period, we automatically sweep it up.
"""

import os
import sys

from oslo_log import log as logging

# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                   os.pardir,
                                   os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')):
    sys.path.insert(0, possible_topdir)

from glance.common import config
from glance.image_cache import cleaner

CONF = config.CONF
logging.register_options(CONF)
CONF.set_default(name='use_stderr', default=True)


def main():
    try:
        config.parse_cache_args()
        logging.setup(CONF, 'glance')

        app = cleaner.Cleaner()
        app.run()
    except RuntimeError as e:
        sys.exit("ERROR: %s" % e)


# ===== glance-16.0.0/glance/cmd/__init__.py =====

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance import i18n

i18n.enable_lazy()


# ===== glance-16.0.0/glance/cmd/cache_pruner.py =====

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Glance Image Cache Pruner

This is meant to be run as a periodic task, perhaps every half-hour.
"""

import os
import sys

from oslo_log import log as logging

# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                   os.pardir,
                                   os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')):
    sys.path.insert(0, possible_topdir)

from glance.common import config
from glance.image_cache import pruner

CONF = config.CONF
logging.register_options(CONF)
CONF.set_default(name='use_stderr', default=True)


def main():
    try:
        config.parse_cache_args()
        logging.setup(CONF, 'glance')

        app = pruner.Pruner()
        app.run()
    except RuntimeError as e:
        sys.exit("ERROR: %s" % e)


# ===== glance-16.0.0/glance/cmd/manage.py =====

#!/usr/bin/env python

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Glance Management Utility
"""

from __future__ import print_function

# FIXME(sirp): When we have glance-admin we can consider merging this into it
# Perhaps for consistency with Nova, we would then rename glance-admin ->
# glance-manage (or the other way around)

import os
import sys
import time

# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                   os.pardir,
                                   os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')):
    sys.path.insert(0, possible_topdir)

from alembic import command as alembic_command
from oslo_config import cfg
from oslo_db import exception as db_exc
from oslo_log import log as logging
from oslo_utils import encodeutils
import six

from glance.common import config
from glance.common import exception
from glance import context
from glance.db import migration as db_migration
from glance.db.sqlalchemy import alembic_migrations
from glance.db.sqlalchemy.alembic_migrations import data_migrations
from glance.db.sqlalchemy import api as db_api
from glance.db.sqlalchemy import metadata
from glance.i18n import _

CONF = cfg.CONF
USE_TRIGGERS = True


# Decorators for actions
def args(*args, **kwargs):
    def _decorator(func):
        func.__dict__.setdefault('args', []).insert(0, (args, kwargs))
        return func
    return _decorator


class DbCommands(object):
    """Class for managing the db"""

    def __init__(self):
        pass

    def version(self):
        """Print database's current migration level"""
        current_heads = alembic_migrations.get_current_alembic_heads()
        if current_heads:
            # Migrations are managed by alembic
            for head in current_heads:
                print(head)
        else:
            # Migrations are managed by legacy versioning scheme
            print(_('Database is either not under migration control or under '
                    'legacy migration control, please run '
                    '"glance-manage db sync" to place the database under '
                    'alembic migration control.'))

    def check(self):
        """Report any pending database upgrades.

        An exit code of 3 indicates db expand is needed, see stdout output.
        An exit code of 4 indicates db migrate is needed, see stdout output.
        An exit code of 5 indicates db contract is needed, see stdout output.
        """
        engine = db_api.get_engine()
        self._validate_engine(engine)
        curr_heads = alembic_migrations.get_current_alembic_heads()
        expand_heads = alembic_migrations.get_alembic_branch_head(
            db_migration.EXPAND_BRANCH)
        contract_heads = alembic_migrations.get_alembic_branch_head(
            db_migration.CONTRACT_BRANCH)

        if (contract_heads in curr_heads):
            print(_('Database is up to date. No upgrades needed.'))
            sys.exit()
        elif ((not expand_heads) or (expand_heads not in curr_heads)):
            print(_('Your database is not up to date. '
                    'Your first step is to run `glance-manage db expand`.'))
            sys.exit(3)
        elif data_migrations.has_pending_migrations(db_api.get_engine()):
            print(_('Your database is not up to date. '
                    'Your next step is to run `glance-manage db migrate`.'))
            sys.exit(4)
        elif ((not contract_heads) or (contract_heads not in curr_heads)):
            print(_('Your database is not up to date. '
                    'Your next step is to run `glance-manage db contract`.'))
            sys.exit(5)

    @args('--version', metavar='<version>', help='Database version')
    def upgrade(self, version='heads'):
        """Upgrade the database's migration level"""
        self._sync(version)

    @args('--version', metavar='<version>', help='Database version')
    def version_control(self, version=db_migration.ALEMBIC_INIT_VERSION):
        """Place a database under migration control"""
        if version is None:
            version = db_migration.ALEMBIC_INIT_VERSION
        a_config = alembic_migrations.get_alembic_config()
        alembic_command.stamp(a_config, version)
        print(_("Placed database under migration control at "
                "revision:"), version)

    @args('--version', metavar='<version>', help='Database version')
    def sync(self, version=None):
        """Perform a complete (offline) database migration"""
        global USE_TRIGGERS

        # This flag lets us bypass trigger setup & teardown for non-rolling
        # upgrades. We set this as a global variable immediately before handing
        # off to sqlalchemy-migrate, because we can't pass arguments directly
        # to migrations that depend on it.
        USE_TRIGGERS = False

        curr_heads = alembic_migrations.get_current_alembic_heads()
        contract = alembic_migrations.get_alembic_branch_head(
            db_migration.CONTRACT_BRANCH)

        if (contract in curr_heads):
            print(_('Database is up to date. No migrations needed.'))
            sys.exit()

        try:
            # NOTE(abhishekk): db_sync should not be used for online
            # migrations.
            self.expand(online_migration=False)
            self.migrate(online_migration=False)
            self.contract(online_migration=False)
            print(_('Database is synced successfully.'))
        except exception.GlanceException as e:
            sys.exit(_('Failed to sync database: ERROR: %s') % e)

    def _sync(self, version):
        """Place an existing database under migration control and upgrade it.

        """
        alembic_migrations.place_database_under_alembic_control()

        a_config = alembic_migrations.get_alembic_config()
        alembic_command.upgrade(a_config, version)
        heads = alembic_migrations.get_current_alembic_heads()
        if heads is None:
            raise exception.GlanceException("Database sync failed")
        revs = ", ".join(heads)
        if version == 'heads':
            print(_("Upgraded database, current revision(s):"), revs)
        else:
            print(_('Upgraded database to: %(v)s, current revision(s): %(r)s')
                  % {'v': version, 'r': revs})

    def _validate_engine(self, engine):
        """Check engine is valid or not.

        MySql is only supported for online upgrade.
        Adding sqlite as engine to support existing functional test cases.

        :param engine: database engine name
        """
        if engine.engine.name not in ['mysql', 'sqlite']:
            sys.exit(_('Rolling upgrades are currently supported only for '
                       'MySQL and Sqlite'))

    def expand(self, online_migration=True):
        """Run the expansion phase of a database migration."""
        if online_migration:
            self._validate_engine(db_api.get_engine())

        curr_heads = alembic_migrations.get_current_alembic_heads()
        expand_head = alembic_migrations.get_alembic_branch_head(
            db_migration.EXPAND_BRANCH)
        contract_head = alembic_migrations.get_alembic_branch_head(
            db_migration.CONTRACT_BRANCH)

        if not expand_head:
            sys.exit(_('Database expansion failed. Couldn\'t find head '
                       'revision of expand branch.'))
        elif (contract_head in curr_heads):
            print(_('Database is up to date. No migrations needed.'))
            sys.exit()

        if expand_head not in curr_heads:
            self._sync(version=expand_head)

            curr_heads = alembic_migrations.get_current_alembic_heads()
            if expand_head not in curr_heads:
                sys.exit(_('Database expansion failed. Database expansion '
                           'should have brought the database version up to '
                           '"%(e_rev)s" revision. But, current revisions are'
                           ': %(curr_revs)s ') % {'e_rev': expand_head,
                                                  'curr_revs': curr_heads})
        else:
            print(_('Database expansion is up to date. No expansion needed.'))

    def contract(self, online_migration=True):
        """Run the contraction phase of a database migration."""
        if online_migration:
            self._validate_engine(db_api.get_engine())

        curr_heads = alembic_migrations.get_current_alembic_heads()
        contract_head = alembic_migrations.get_alembic_branch_head(
            db_migration.CONTRACT_BRANCH)

        if not contract_head:
            sys.exit(_('Database contraction failed. Couldn\'t find head '
                       'revision of contract branch.'))
        elif (contract_head in curr_heads):
            print(_('Database is up to date. No migrations needed.'))
            sys.exit()

        expand_head = alembic_migrations.get_alembic_branch_head(
            db_migration.EXPAND_BRANCH)
        if expand_head not in curr_heads:
            sys.exit(_('Database contraction did not run. Database '
                       'contraction cannot be run before database expansion. '
                       'Run database expansion first using '
                       '"glance-manage db expand"'))

        if data_migrations.has_pending_migrations(db_api.get_engine()):
            sys.exit(_('Database contraction did not run. Database '
                       'contraction cannot be run before data migration is '
                       'complete. Run data migration using "glance-manage db '
                       'migrate".'))

        self._sync(version=contract_head)

        curr_heads = alembic_migrations.get_current_alembic_heads()
        if contract_head not in curr_heads:
            sys.exit(_('Database contraction failed. Database contraction '
                       'should have brought the database version up to '
                       '"%(e_rev)s" revision. But, current revisions are: '
                       '%(curr_revs)s ') % {'e_rev': contract_head,
                                            'curr_revs': curr_heads})

    def migrate(self, online_migration=True):
        """Run the data migration phase of a database migration."""
        if online_migration:
            self._validate_engine(db_api.get_engine())

        curr_heads = alembic_migrations.get_current_alembic_heads()
        contract_head = alembic_migrations.get_alembic_branch_head(
            db_migration.CONTRACT_BRANCH)

        if (contract_head in curr_heads):
            print(_('Database is up to date. No migrations needed.'))
            sys.exit()

        expand_head = alembic_migrations.get_alembic_branch_head(
            db_migration.EXPAND_BRANCH)
        if expand_head not in curr_heads:
            sys.exit(_('Data migration did not run. Data migration cannot be '
                       'run before database expansion. Run database '
                       'expansion first using "glance-manage db expand"'))

        if data_migrations.has_pending_migrations(db_api.get_engine()):
            rows_migrated = data_migrations.migrate(db_api.get_engine())
            print(_('Migrated %s rows') % rows_migrated)
        else:
            print(_('Database migration is up to date. No migration needed.'))

    @args('--path', metavar='<path>', help='Path to the directory or file '
                                           'where json metadata is stored')
    @args('--merge', action='store_true',
          help='Merge files with data that is in the database. By default it '
               'prefers existing data over new. This logic can be changed by '
               'combining --merge option with one of these two options: '
               '--prefer_new or --overwrite.')
    @args('--prefer_new', action='store_true',
          help='Prefer new metadata over existing. Existing metadata '
               'might be overwritten. Needs to be combined with --merge '
               'option.')
    @args('--overwrite', action='store_true',
          help='Drop and rewrite metadata. Needs to be combined with --merge '
               'option')
    def load_metadefs(self, path=None, merge=False, prefer_new=False,
                      overwrite=False):
        """Load metadefinition json files to database"""
        metadata.db_load_metadefs(db_api.get_engine(), path, merge,
                                  prefer_new, overwrite)

    def unload_metadefs(self):
        """Unload metadefinitions from database"""
        metadata.db_unload_metadefs(db_api.get_engine())

    @args('--path', metavar='<path>', help='Path to the directory where '
                                           'json metadata files should be '
                                           'saved.')
    def export_metadefs(self, path=None):
        """Export metadefinitions data from database to files"""
        metadata.db_export_metadefs(db_api.get_engine(), path)

    @args('--age_in_days', type=int,
          help='Purge deleted rows older than age in days')
    @args('--max_rows', type=int,
          help='Limit number of records to delete')
    def purge(self, age_in_days=30, max_rows=100):
        """Purge deleted rows older than a given age from glance tables."""
        try:
            age_in_days = int(age_in_days)
        except ValueError:
            sys.exit(_("Invalid int value for age_in_days: "
                       "%(age_in_days)s") % {'age_in_days': age_in_days})
        try:
            max_rows = int(max_rows)
        except ValueError:
            sys.exit(_("Invalid int value for max_rows: "
                       "%(max_rows)s") % {'max_rows': max_rows})
        if age_in_days < 0:
            sys.exit(_("Must supply a non-negative value for age."))
        if age_in_days >= (int(time.time()) / 86400):
            sys.exit(_("Maximal age is count of days since epoch."))
        if max_rows < 1:
            sys.exit(_("Minimal rows limit is 1."))
        ctx = context.get_admin_context(show_deleted=True)
        try:
            db_api.purge_deleted_rows(ctx, age_in_days, max_rows)
        except exception.Invalid as exc:
            sys.exit(exc.msg)
        except db_exc.DBReferenceError:
            sys.exit(_("Purge command failed, check glance-manage"
                       " logs for more details."))


class DbLegacyCommands(object):
    """Class for managing the db using legacy commands"""

    def __init__(self, command_object):
        self.command_object = command_object

    def version(self):
        self.command_object.version()

    def upgrade(self, version='heads'):
        self.command_object.upgrade(CONF.command.version)

    def version_control(self, version=db_migration.ALEMBIC_INIT_VERSION):
        self.command_object.version_control(CONF.command.version)

    def sync(self, version=None):
        self.command_object.sync(CONF.command.version)

    def expand(self):
        self.command_object.expand()

    def contract(self):
        self.command_object.contract()

    def migrate(self):
        self.command_object.migrate()

    def check(self):
        self.command_object.check()

    def load_metadefs(self, path=None, merge=False, prefer_new=False,
                      overwrite=False):
        self.command_object.load_metadefs(CONF.command.path,
                                          CONF.command.merge,
                                          CONF.command.prefer_new,
                                          CONF.command.overwrite)

    def unload_metadefs(self):
        self.command_object.unload_metadefs()

    def export_metadefs(self, path=None):
        self.command_object.export_metadefs(CONF.command.path)


def add_legacy_command_parsers(command_object, subparsers):
    legacy_command_object = DbLegacyCommands(command_object)

    parser = subparsers.add_parser('db_version')
    parser.set_defaults(action_fn=legacy_command_object.version)
    parser.set_defaults(action='db_version')

    parser = subparsers.add_parser('db_upgrade')
    parser.set_defaults(action_fn=legacy_command_object.upgrade)
    parser.add_argument('version', nargs='?')
    parser.set_defaults(action='db_upgrade')

    parser = subparsers.add_parser('db_version_control')
    parser.set_defaults(action_fn=legacy_command_object.version_control)
    parser.add_argument('version', nargs='?')
    parser.set_defaults(action='db_version_control')

    parser = subparsers.add_parser('db_sync')
    parser.set_defaults(action_fn=legacy_command_object.sync)
    parser.add_argument('version', nargs='?')
    parser.set_defaults(action='db_sync')

    parser = subparsers.add_parser('db_expand')
    parser.set_defaults(action_fn=legacy_command_object.expand)
    parser.set_defaults(action='db_expand')

    parser = subparsers.add_parser('db_contract')
    parser.set_defaults(action_fn=legacy_command_object.contract)
    parser.set_defaults(action='db_contract')

    parser = subparsers.add_parser('db_migrate')
    parser.set_defaults(action_fn=legacy_command_object.migrate)
    parser.set_defaults(action='db_migrate')

    parser = subparsers.add_parser('db_check')
    parser.set_defaults(action_fn=legacy_command_object.check)
    parser.set_defaults(action='db_check')

    parser = subparsers.add_parser('db_load_metadefs')
    parser.set_defaults(action_fn=legacy_command_object.load_metadefs)
    parser.add_argument('path', nargs='?')
    parser.add_argument('merge', nargs='?')
    parser.add_argument('prefer_new', nargs='?')
    parser.add_argument('overwrite', nargs='?')
    parser.set_defaults(action='db_load_metadefs')

    parser = subparsers.add_parser('db_unload_metadefs')
    parser.set_defaults(action_fn=legacy_command_object.unload_metadefs)
    parser.set_defaults(action='db_unload_metadefs')

    parser = subparsers.add_parser('db_export_metadefs')
    parser.set_defaults(action_fn=legacy_command_object.export_metadefs)
    parser.add_argument('path', nargs='?')
    parser.set_defaults(action='db_export_metadefs')


def add_command_parsers(subparsers):
    command_object = DbCommands()

    parser = subparsers.add_parser('db')
    parser.set_defaults(command_object=command_object)

    category_subparsers = parser.add_subparsers(dest='action')

    for (action, action_fn) in methods_of(command_object):
        parser = category_subparsers.add_parser(action)

        action_kwargs = []
        for args, kwargs in getattr(action_fn, 'args', []):
            # FIXME(basha): hack to assume dest is the arg name without
            # the leading hyphens if no dest is supplied
            kwargs.setdefault('dest', args[0][2:])
            if kwargs['dest'].startswith('action_kwarg_'):
                action_kwargs.append(
                    kwargs['dest'][len('action_kwarg_'):])
            else:
                action_kwargs.append(kwargs['dest'])
                kwargs['dest'] = 'action_kwarg_' + kwargs['dest']

            parser.add_argument(*args, **kwargs)

        parser.set_defaults(action_fn=action_fn)
        parser.set_defaults(action_kwargs=action_kwargs)

        parser.add_argument('action_args', nargs='*')

    add_legacy_command_parsers(command_object, subparsers)


command_opt = cfg.SubCommandOpt('command',
                                title='Commands',
                                help='Available commands',
                                handler=add_command_parsers)

CATEGORIES = {
    'db': DbCommands,
}


def methods_of(obj):
    """Get all callable methods of an object that don't start with underscore

    returns a list of tuples of the form (method_name, method)
    """
    result = []
    for i in dir(obj):
        if callable(getattr(obj, i)) and not i.startswith('_'):
            result.append((i, getattr(obj, i)))
    return result


def main():
    CONF.register_cli_opt(command_opt)
    if len(sys.argv) < 2:
        script_name = sys.argv[0]
        print("%s category action [<args>]" % script_name)
        print(_("Available categories:"))
        for category in CATEGORIES:
            print(_("\t%s") % category)
        sys.exit(2)

    try:
        logging.register_options(CONF)
        CONF.set_default(name='use_stderr', default=True)
        cfg_files = cfg.find_config_files(project='glance',
                                          prog='glance-registry')
        cfg_files.extend(cfg.find_config_files(project='glance',
                                               prog='glance-api'))
        cfg_files.extend(cfg.find_config_files(project='glance',
                                               prog='glance-manage'))
        config.parse_args(default_config_files=cfg_files)
        config.set_config_defaults()
        logging.setup(CONF, 'glance')
    except RuntimeError as e:
        sys.exit("ERROR: %s" % e)

    try:
        if CONF.command.action.startswith('db'):
            return CONF.command.action_fn()
        else:
            func_kwargs = {}
            for k in CONF.command.action_kwargs:
                v = getattr(CONF.command, 'action_kwarg_' + k)
                if v is None:
                    continue
                if isinstance(v, six.string_types):
                    v = encodeutils.safe_decode(v)
                func_kwargs[k] = v
            func_args = [encodeutils.safe_decode(arg)
                         for arg in CONF.command.action_args]
            return CONF.command.action_fn(*func_args, **func_kwargs)
    except exception.GlanceException as e:
        sys.exit("ERROR: %s" % encodeutils.exception_to_unicode(e))


if __name__ == '__main__':
    main()


# ===== glance-16.0.0/glance/cmd/cache_prefetcher.py =====

#!/usr/bin/env python

# Copyright 2011-2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Glance Image Cache Pre-fetcher

This is meant to be run from the command line after queueing images to be
prefetched.
"""

import os
import sys

# If ../glance/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
                                   os.pardir,
                                   os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')):
    sys.path.insert(0, possible_topdir)

import glance_store
from oslo_log import log as logging

from glance.common import config
from glance.image_cache import prefetcher

CONF = config.CONF
logging.register_options(CONF)
CONF.set_default(name='use_stderr', default=True)


def main():
    try:
        config.parse_cache_args()
        logging.setup(CONF, 'glance')

        glance_store.register_opts(config.CONF)
        glance_store.create_stores(config.CONF)
        glance_store.verify_default_store()

        app = prefetcher.Prefetcher()
        app.run()
    except RuntimeError as e:
        sys.exit("ERROR: %s" % e)


if __name__ == '__main__':
    main()


# ===== glance-16.0.0/glance/i18n.py =====

# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_i18n import *  # noqa

_translators = TranslatorFactory(domain='glance')

# The primary translation function using the well-known name "_"
_ = _translators.primary


# i18n log translation functions are deprecated. While removing the invocations
# requires a lot of reviewing effort, we decide to make it as no-op functions.
def _LI(msg):
    return msg


def _LW(msg):
    return msg


def _LE(msg):
    return msg


def _LC(msg):
    return msg


# ===== glance-16.0.0/glance/scrubber.py =====

# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import calendar
import time

import eventlet
from glance_store import exceptions as store_exceptions
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils

from glance.common import crypt
from glance.common import exception
from glance.common import timeutils
from glance import context
import glance.db as db_api
from glance.i18n import _, _LC, _LE, _LI, _LW

LOG = logging.getLogger(__name__)

scrubber_opts = [
    cfg.IntOpt('scrub_time', default=0, min=0,
               help=_("""
The amount of time, in seconds, to delay image scrubbing.

When delayed delete is turned on, an image is put into ``pending_delete``
state upon deletion until the scrubber deletes its image data. Typically, soon
after the image is put into ``pending_delete`` state, it is available for
scrubbing. However, scrubbing can be delayed until a later point using this
configuration option. This option denotes the time period an image spends in
``pending_delete`` state before it is available for scrubbing.

It is important to realize that this has storage implications. The larger the
``scrub_time``, the longer the time to reclaim backend storage from deleted
images.

Possible values:
    * Any non-negative integer

Related options:
    * ``delayed_delete``

""")),
    cfg.IntOpt('scrub_pool_size', default=1, min=1,
               help=_("""
The size of thread pool to be used for scrubbing images.

When there are a large number of images to scrub, it is beneficial to scrub
images in parallel so that the scrub queue stays in control and the backend
storage is reclaimed in a timely fashion. This configuration option denotes
the maximum number of images to be scrubbed in parallel. The default value is
one, which signifies serial scrubbing. Any value above one indicates parallel
scrubbing.

Possible values:
    * Any non-zero positive integer

Related options:
    * ``delayed_delete``

""")),
    cfg.BoolOpt('delayed_delete', default=False,
                help=_("""
Turn on/off delayed delete.

Typically when an image is deleted, the ``glance-api`` service puts the image
into ``deleted`` state and deletes its data at the same time. Delayed delete
is a feature in Glance that delays the actual deletion of image data until a
later point in time (as determined by the configuration option
``scrub_time``). When delayed delete is turned on, the ``glance-api`` service
puts the image into ``pending_delete`` state upon deletion and leaves the
image data in the storage backend for the image scrubber to delete at a later
time. The image scrubber will move the image into ``deleted`` state upon
successful deletion of image data.

NOTE: When delayed delete is turned on, image scrubber MUST be running as a
periodic task to prevent the backend storage from filling up with undesired
usage.

Possible values:
    * True
    * False

Related options:
    * ``scrub_time``
    * ``wakeup_time``
    * ``scrub_pool_size``

""")),
]

scrubber_cmd_opts = [
    cfg.IntOpt('wakeup_time', default=300, min=0,
               help=_("""
Time interval, in seconds, between scrubber runs in daemon mode.

Scrubber can be run either as a cron job or daemon. When run as a daemon, this
configuration time specifies the time period between two runs. When the
scrubber wakes up, it fetches and scrubs all ``pending_delete`` images that
are available for scrubbing after taking ``scrub_time`` into consideration.

If the wakeup time is set to a large number, there may be a large number of
images to be scrubbed for each run. Also, this impacts how quickly the backend
storage is reclaimed.

Possible values:
    * Any non-negative integer

Related options:
    * ``daemon``
    * ``delayed_delete``

"""))
]

scrubber_cmd_cli_opts = [
    cfg.BoolOpt('daemon',
                short='D',
                default=False,
                help=_("""
Run scrubber as a daemon.

This boolean configuration option indicates whether scrubber should run as a
long-running process that wakes up at regular intervals to scrub images. The
wake up interval can be specified using the configuration option
``wakeup_time``.

If this configuration option is set to ``False``, which is the default value,
scrubber runs once to scrub images and exits. In this case, if the operator
wishes to implement continuous scrubbing of images, scrubber needs to be
scheduled as a cron job.

Possible values:
    * True
    * False

Related options:
    * ``wakeup_time``

"""))
]

CONF = cfg.CONF
CONF.register_opts(scrubber_opts)
CONF.import_opt('metadata_encryption_key', 'glance.common.config')


REASONABLE_DB_PAGE_SIZE = 1000


class ScrubDBQueue(object):
    """Database-based image scrub queue class."""
    def __init__(self):
        self.scrub_time = CONF.scrub_time
        self.metadata_encryption_key = CONF.metadata_encryption_key
        self.admin_context = context.get_admin_context(show_deleted=True)

    def add_location(self, image_id, location):
        """Adding image location to scrub queue.

        :param image_id: The opaque image identifier
        :param location: The opaque image location

        :returns: A boolean value to indicate success or not
        """
        loc_id = location.get('id')
        if loc_id:
            db_api.get_api().image_location_delete(self.admin_context,
                                                   image_id, loc_id,
                                                   'pending_delete')
            return True
        else:
            return False

    def _get_images_page(self, marker):
        filters = {'deleted': True,
                   'status': 'pending_delete'}
        return db_api.get_api().image_get_all(self.admin_context,
                                              filters=filters,
                                              marker=marker,
                                              limit=REASONABLE_DB_PAGE_SIZE)

    def _get_all_images(self):
        """Generator to fetch all appropriate images, paging as needed."""
        marker = None
        while True:
            images = self._get_images_page(marker)
            if len(images) == 0:
                break
            marker = images[-1]['id']
            for image in images:
                yield image

    def get_all_locations(self):
        """Returns a list of image id and location tuple from scrub queue.

        :returns: a list of image id, location id and uri tuple from
                  scrub queue
        """
        ret = []

        for image in self._get_all_images():
            deleted_at = image.get('deleted_at')
            if not deleted_at:
                continue

            # NOTE: Strip off microseconds which may occur after the last '.,'
            # Example: 2012-07-07T19:14:34.974216
            deleted_at = timeutils.isotime(deleted_at)
            date_str = deleted_at.rsplit('.', 1)[0].rsplit(',', 1)[0]
            delete_time = calendar.timegm(time.strptime(date_str,
                                                        "%Y-%m-%dT%H:%M:%SZ"))

            if delete_time + self.scrub_time > time.time():
                continue

            for loc in image['locations']:
                if loc['status'] != 'pending_delete':
                    continue

                if self.metadata_encryption_key:
                    uri = crypt.urlsafe_decrypt(self.metadata_encryption_key,
                                                loc['url'])
                else:
                    uri = loc['url']

                ret.append((image['id'], loc['id'], uri))

        return ret

    def has_image(self, image_id):
        """Returns whether the queue contains an image or not.

        :param image_id: The opaque image identifier

        :returns: a boolean value to inform including or not
        """
        try:
            image = db_api.get_api().image_get(self.admin_context, image_id)
            return image['status'] == 'pending_delete'
        except exception.NotFound:
            return False


_db_queue = None


def get_scrub_queue():
    global _db_queue
    if not _db_queue:
        _db_queue = ScrubDBQueue()
    return _db_queue


class Daemon(object):
    def __init__(self, wakeup_time=300, threads=100):
        LOG.info(_LI("Starting Daemon: wakeup_time=%(wakeup_time)s "
                     "threads=%(threads)s"),
                 {'wakeup_time': wakeup_time, 'threads': threads})
        self.wakeup_time = wakeup_time
        self.event = eventlet.event.Event()
        # This pool is used for periodic instantiation of scrubber
        self.daemon_pool = eventlet.greenpool.GreenPool(threads)

    def start(self, application):
        self._run(application)

    def wait(self):
        try:
            self.event.wait()
        except KeyboardInterrupt:
            msg = _LI("Daemon Shutdown on KeyboardInterrupt")
            LOG.info(msg)

    def _run(self, application):
        LOG.debug("Running application")
        self.daemon_pool.spawn_n(application.run, self.event)
        eventlet.spawn_after(self.wakeup_time, self._run, application)
LOG.debug("Next run scheduled in %s seconds", self.wakeup_time) class Scrubber(object): def __init__(self, store_api): LOG.info(_LI("Initializing scrubber")) self.store_api = store_api self.admin_context = context.get_admin_context(show_deleted=True) self.db_queue = get_scrub_queue() self.pool = eventlet.greenpool.GreenPool(CONF.scrub_pool_size) def _get_delete_jobs(self): try: records = self.db_queue.get_all_locations() except Exception as err: # Note(dharinic): spawn_n, in Daemon mode will log the # exception raised. Otherwise, exit 1 will occur. msg = (_LC("Can not get scrub jobs from queue: %s") % encodeutils.exception_to_unicode(err)) LOG.critical(msg) raise exception.FailedToGetScrubberJobs() delete_jobs = {} for image_id, loc_id, loc_uri in records: if image_id not in delete_jobs: delete_jobs[image_id] = [] delete_jobs[image_id].append((image_id, loc_id, loc_uri)) return delete_jobs def run(self, event=None): delete_jobs = self._get_delete_jobs() if delete_jobs: list(self.pool.starmap(self._scrub_image, delete_jobs.items())) def _scrub_image(self, image_id, delete_jobs): if len(delete_jobs) == 0: return LOG.info(_LI("Scrubbing image %(id)s from %(count)d locations."), {'id': image_id, 'count': len(delete_jobs)}) success = True for img_id, loc_id, uri in delete_jobs: try: self._delete_image_location_from_backend(img_id, loc_id, uri) except Exception: success = False if success: image = db_api.get_api().image_get(self.admin_context, image_id) if image['status'] == 'pending_delete': db_api.get_api().image_update(self.admin_context, image_id, {'status': 'deleted'}) LOG.info(_LI("Image %s has been scrubbed successfully"), image_id) else: LOG.warn(_LW("One or more image locations couldn't be scrubbed " "from backend. 
Leaving image '%s' in 'pending_delete'" " status") % image_id) def _delete_image_location_from_backend(self, image_id, loc_id, uri): try: LOG.debug("Scrubbing image %s from a location.", image_id) try: self.store_api.delete_from_backend(uri, self.admin_context) except store_exceptions.NotFound: LOG.info(_LI("Image location for image '%s' not found in " "backend; Marking image location deleted in " "db."), image_id) if loc_id != '-': db_api.get_api().image_location_delete(self.admin_context, image_id, int(loc_id), 'deleted') LOG.info(_LI("Image %s is scrubbed from a location."), image_id) except Exception as e: LOG.error(_LE("Unable to scrub image %(id)s from a location. " "Reason: %(exc)s ") % {'id': image_id, 'exc': encodeutils.exception_to_unicode(e)}) raise glance-16.0.0/glance/registry/0000775000175100017510000000000013245511661016160 5ustar zuulzuul00000000000000glance-16.0.0/glance/registry/api/0000775000175100017510000000000013245511661016731 5ustar zuulzuul00000000000000glance-16.0.0/glance/registry/api/v2/0000775000175100017510000000000013245511661017260 5ustar zuulzuul00000000000000glance-16.0.0/glance/registry/api/v2/rpc.py0000666000175100017510000000312513245511421020413 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
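The scrub-eligibility test in `ScrubDBQueue.get_all_locations` above (strip sub-second precision from `deleted_at`, parse the UTC timestamp, and skip the image until `deleted_at + scrub_time` has passed) can be sketched as a standalone helper. This is an illustrative sketch, not part of the module; `scrub_eligible` is a hypothetical name, and the `Z`-restoring step is an assumption needed to make bare `strptime` accept both timestamp shapes shown in the source comment.

```python
import calendar
import time


def scrub_eligible(deleted_at_isotime, scrub_time, now=None):
    """Return True once deleted_at + scrub_time has elapsed (UTC)."""
    now = time.time() if now is None else now
    # Strip microseconds that may follow the last '.' or ','
    # (e.g. 2012-07-07T19:14:34.974216), as get_all_locations does.
    date_str = deleted_at_isotime.rsplit('.', 1)[0].rsplit(',', 1)[0]
    # Restore the trailing 'Z' if it was dropped along with the fraction.
    if not date_str.endswith('Z'):
        date_str += 'Z'
    delete_time = calendar.timegm(time.strptime(date_str,
                                                "%Y-%m-%dT%H:%M:%SZ"))
    return delete_time + scrub_time <= now
```

With `scrub_time=0` (the default) an image is eligible as soon as the scrubber sees it; a larger `scrub_time` keeps it in `pending_delete` correspondingly longer.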
"""
RPC Controller
"""

from oslo_config import cfg

from glance.common import rpc
from glance.common import wsgi
import glance.db
from glance.i18n import _

CONF = cfg.CONF


class Controller(rpc.Controller):

    def __init__(self, raise_exc=False):
        super(Controller, self).__init__(raise_exc)

        # NOTE(flaper87): Avoid using registry's db
        # driver for the registry service. It would
        # end up in an infinite loop.
        if CONF.data_api == "glance.db.registry.api":
            msg = _("Registry service can't use %s") % CONF.data_api
            raise RuntimeError(msg)

        # NOTE(flaper87): Register the
        # db_api as a resource to expose.
        db_api = glance.db.get_api()
        self.register(glance.db.unwrap(db_api))


def create_resource():
    """RPC resource factory method."""
    deserializer = rpc.RPCJSONDeserializer()
    serializer = rpc.RPCJSONSerializer()
    return wsgi.Resource(Controller(), deserializer, serializer)
glance-16.0.0/glance/registry/api/v2/__init__.py

# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
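The scrubber's `_get_all_images` generator earlier in this tarball pages through `pending_delete` rows by passing the last id of each page back as the `marker` for the next query. A minimal self-contained sketch of that marker-based pagination loop, with a hypothetical in-memory `fetch_page` standing in for `image_get_all`:

```python
PAGE_SIZE = 3  # small page size for illustration; Glance uses 1000


def fetch_page(rows, marker, limit):
    """Hypothetical stand-in for image_get_all(): return up to `limit`
    rows that come after the row whose id equals `marker`."""
    start = 0
    if marker is not None:
        start = [r['id'] for r in rows].index(marker) + 1
    return rows[start:start + limit]


def iter_all(rows):
    """Same loop shape as ScrubDBQueue._get_all_images: keep requesting
    pages until an empty page comes back."""
    marker = None
    while True:
        page = fetch_page(rows, marker, PAGE_SIZE)
        if len(page) == 0:
            break
        marker = page[-1]['id']
        for row in page:
            yield row
```

Because the marker is an id rather than an offset, the walk stays stable even if earlier rows are deleted between pages, which is exactly the situation the scrubber creates.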
from glance.common import wsgi
from glance.registry.api.v2 import rpc


def init(mapper):
    rpc_resource = rpc.create_resource()

    mapper.connect("/rpc", controller=rpc_resource,
                   conditions=dict(method=["POST"]),
                   action="__call__")


class API(wsgi.Router):
    """WSGI entry point for all Registry requests."""

    def __init__(self, mapper):
        mapper = mapper or wsgi.APIMapper()
        init(mapper)
        super(API, self).__init__(mapper)
glance-16.0.0/glance/registry/api/__init__.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import debtcollector
from oslo_config import cfg

from glance.common import wsgi
from glance.registry.api import v1
from glance.registry.api import v2

CONF = cfg.CONF
CONF.import_opt('enable_v1_registry', 'glance.common.config')
CONF.import_opt('enable_v2_registry', 'glance.common.config')


class API(wsgi.Router):
    """WSGI entry point for all Registry requests."""

    def __init__(self, mapper):
        mapper = mapper or wsgi.APIMapper()

        if CONF.enable_v1_registry:
            v1.init(mapper)
        if CONF.enable_v2_registry:
            debtcollector.deprecate("Glance Registry service has been "
                                    "deprecated for removal.")
            v2.init(mapper)

        super(API, self).__init__(mapper)
glance-16.0.0/glance/registry/api/v1/members.py

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
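The registry v2 exposes the entire DB API through the single `POST /rpc` endpoint wired up above: `rpc.Controller.register()` indexes the public methods of the db_api object so they can be dispatched by name. `glance.common.rpc` itself is not shown in this tarball chunk, so the following is only a simplified sketch of that registration/dispatch idea, with a hypothetical `RPCDispatcher` class standing in for the real controller:

```python
class RPCDispatcher(object):
    """Expose the public callables of registered objects under one
    entry point, dispatched by command name."""

    def __init__(self):
        self._commands = {}

    def register(self, resource):
        # Index every public callable attribute by name, skipping
        # underscore-prefixed (private) attributes.
        for name in dir(resource):
            if name.startswith('_'):
                continue
            attr = getattr(resource, name)
            if callable(attr):
                self._commands[name] = attr

    def call(self, command, **kwargs):
        if command not in self._commands:
            raise KeyError("unknown command: %s" % command)
        return self._commands[command](**kwargs)
```

The real controller additionally deserializes a JSON body of `[{"command": ..., "kwargs": ...}]` and serializes the results; the dispatch-by-name table is the core of the pattern.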
from oslo_log import log as logging
from oslo_utils import encodeutils
import webob.exc

from glance.common import exception
from glance.common import utils
from glance.common import wsgi
import glance.db
from glance.i18n import _, _LI, _LW

LOG = logging.getLogger(__name__)


class Controller(object):

    def _check_can_access_image_members(self, context):
        if context.owner is None and not context.is_admin:
            raise webob.exc.HTTPUnauthorized(_("No authenticated user"))

    def __init__(self):
        self.db_api = glance.db.get_api()

    def is_image_sharable(self, context, image):
        """Return True if the image can be shared to others in this
        context.
        """
        # Is admin == image sharable
        if context.is_admin:
            return True

        # Only allow sharing if we have an owner
        if context.owner is None:
            return False

        # If we own the image, we can share it
        if context.owner == image['owner']:
            return True

        members = self.db_api.image_member_find(context,
                                                image_id=image['id'],
                                                member=context.owner)
        if members:
            return members[0]['can_share']

        return False

    def index(self, req, image_id):
        """Get the members of an image."""
        try:
            self.db_api.image_get(req.context, image_id, v1_mode=True)
        except exception.NotFound:
            msg = _("Image %(id)s not found") % {'id': image_id}
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound(msg)
        except exception.Forbidden:
            # If it's private and doesn't belong to them, don't let on
            # that it exists
            msg = _LW("Access denied to image %(id)s but returning"
                      " 'not found'") % {'id': image_id}
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound()

        members = self.db_api.image_member_find(req.context,
                                                image_id=image_id)
        LOG.debug("Returning member list for image %(id)s", {'id': image_id})
        return dict(members=make_member_list(members,
                                             member_id='member',
                                             can_share='can_share'))

    @utils.mutating
    def update_all(self, req, image_id, body):
        """Replace the members of the image with those specified in the
        body.  The body is a dict with the following format::

            {'memberships': [
                {'member_id': <MEMBER_ID>,
                 ['can_share': [True|False]]}, ...
            ]}
        """
        self._check_can_access_image_members(req.context)

        # Make sure the image exists
        try:
            image = self.db_api.image_get(req.context, image_id, v1_mode=True)
        except exception.NotFound:
            msg = _("Image %(id)s not found") % {'id': image_id}
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound(msg)
        except exception.Forbidden:
            # If it's private and doesn't belong to them, don't let on
            # that it exists
            msg = _LW("Access denied to image %(id)s but returning"
                      " 'not found'") % {'id': image_id}
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound()

        # Can they manipulate the membership?
        if not self.is_image_sharable(req.context, image):
            msg = (_LW("User lacks permission to share image %(id)s") %
                   {'id': image_id})
            LOG.warn(msg)
            msg = _("No permission to share that image")
            raise webob.exc.HTTPForbidden(msg)

        # Get the membership list
        try:
            memb_list = body['memberships']
        except Exception as e:
            # Malformed entity...
            msg = _LW("Invalid membership association specified for "
                      "image %(id)s") % {'id': image_id}
            LOG.warn(msg)
            msg = (_("Invalid membership association: %s") %
                   encodeutils.exception_to_unicode(e))
            raise webob.exc.HTTPBadRequest(explanation=msg)

        add = []
        existing = {}
        # Walk through the incoming memberships
        for memb in memb_list:
            try:
                datum = dict(image_id=image['id'],
                             member=memb['member_id'],
                             can_share=None)
            except Exception as e:
                # Malformed entity...
                msg = _LW("Invalid membership association specified for "
                          "image %(id)s") % {'id': image_id}
                LOG.warn(msg)
                msg = (_("Invalid membership association: %s") %
                       encodeutils.exception_to_unicode(e))
                raise webob.exc.HTTPBadRequest(explanation=msg)

            # Figure out what can_share should be
            if 'can_share' in memb:
                datum['can_share'] = bool(memb['can_share'])

            # Try to find the corresponding membership
            members = self.db_api.image_member_find(
                req.context,
                image_id=datum['image_id'],
                member=datum['member'],
                include_deleted=True)
            try:
                member = members[0]
            except IndexError:
                # Default can_share
                datum['can_share'] = bool(datum['can_share'])
                add.append(datum)
            else:
                # Are we overriding can_share?
                if datum['can_share'] is None:
                    datum['can_share'] = members[0]['can_share']

                existing[member['id']] = {
                    'values': datum,
                    'membership': member,
                }

        # We now have a filtered list of memberships to add and
        # memberships to modify.  Let's start by walking through all
        # the existing image memberships...
        existing_members = self.db_api.image_member_find(
            req.context, image_id=image['id'], include_deleted=True)

        for member in existing_members:
            if member['id'] in existing:
                # Just update the membership in place
                update = existing[member['id']]['values']
                self.db_api.image_member_update(req.context,
                                                member['id'],
                                                update)
            else:
                if not member['deleted']:
                    # Outdated one; needs to be deleted
                    self.db_api.image_member_delete(req.context, member['id'])

        # Now add the non-existent ones
        for memb in add:
            self.db_api.image_member_create(req.context, memb)

        # Make an appropriate result
        LOG.info(_LI("Successfully updated memberships for image %(id)s"),
                 {'id': image_id})
        return webob.exc.HTTPNoContent()

    @utils.mutating
    def update(self, req, image_id, id, body=None):
        """Add a membership to the image, or update an existing one.
        If a body is present, it is a dict with the following format::

            {'member': {'can_share': [True|False]}}

        If `can_share` is provided, the member's ability to share is set
        accordingly.  If it is not provided, existing memberships remain
        unchanged and new memberships default to False.
        """
        self._check_can_access_image_members(req.context)

        # Make sure the image exists
        try:
            image = self.db_api.image_get(req.context, image_id, v1_mode=True)
        except exception.NotFound:
            msg = _("Image %(id)s not found") % {'id': image_id}
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound(msg)
        except exception.Forbidden:
            # If it's private and doesn't belong to them, don't let on
            # that it exists
            msg = _LW("Access denied to image %(id)s but returning"
                      " 'not found'") % {'id': image_id}
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound()

        # Can they manipulate the membership?
        if not self.is_image_sharable(req.context, image):
            msg = (_LW("User lacks permission to share image %(id)s") %
                   {'id': image_id})
            LOG.warn(msg)
            msg = _("No permission to share that image")
            raise webob.exc.HTTPForbidden(msg)

        # Determine the applicable can_share value
        can_share = None
        if body:
            try:
                can_share = bool(body['member']['can_share'])
            except Exception as e:
                # Malformed entity...
                msg = _LW("Invalid membership association specified for "
                          "image %(id)s") % {'id': image_id}
                LOG.warn(msg)
                msg = (_("Invalid membership association: %s") %
                       encodeutils.exception_to_unicode(e))
                raise webob.exc.HTTPBadRequest(explanation=msg)

        # Look up an existing membership...
        members = self.db_api.image_member_find(req.context,
                                                image_id=image_id,
                                                member=id,
                                                include_deleted=True)
        if members:
            if can_share is not None:
                values = dict(can_share=can_share)
                self.db_api.image_member_update(req.context,
                                                members[0]['id'],
                                                values)
        else:
            values = dict(image_id=image['id'], member=id,
                          can_share=bool(can_share))
            self.db_api.image_member_create(req.context, values)

        LOG.info(_LI("Successfully updated a membership for image %(id)s"),
                 {'id': image_id})
        return webob.exc.HTTPNoContent()

    @utils.mutating
    def delete(self, req, image_id, id):
        """Remove a membership from the image."""
        self._check_can_access_image_members(req.context)

        # Make sure the image exists
        try:
            image = self.db_api.image_get(req.context, image_id, v1_mode=True)
        except exception.NotFound:
            msg = _("Image %(id)s not found") % {'id': image_id}
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound(msg)
        except exception.Forbidden:
            # If it's private and doesn't belong to them, don't let on
            # that it exists
            msg = _LW("Access denied to image %(id)s but returning"
                      " 'not found'") % {'id': image_id}
            LOG.warn(msg)
            raise webob.exc.HTTPNotFound()

        # Can they manipulate the membership?
        if not self.is_image_sharable(req.context, image):
            msg = (_LW("User lacks permission to share image %(id)s") %
                   {'id': image_id})
            LOG.warn(msg)
            msg = _("No permission to share that image")
            raise webob.exc.HTTPForbidden(msg)

        # Look up an existing membership
        members = self.db_api.image_member_find(req.context,
                                                image_id=image_id,
                                                member=id)
        if members:
            self.db_api.image_member_delete(req.context, members[0]['id'])
        else:
            LOG.debug("%(id)s is not a member of image %(image_id)s",
                      {'id': id, 'image_id': image_id})
            msg = _("Membership could not be found.")
            raise webob.exc.HTTPNotFound(explanation=msg)

        # Make an appropriate result
        LOG.info(_LI("Successfully deleted a membership from image %(id)s"),
                 {'id': image_id})
        return webob.exc.HTTPNoContent()

    def default(self, req, *args, **kwargs):
        """This will cover the missing 'show' and 'create' actions."""
        LOG.debug("The method %s is not allowed for this resource",
                  req.environ['REQUEST_METHOD'])
        raise webob.exc.HTTPMethodNotAllowed(
            headers=[('Allow', 'PUT, DELETE')])

    def index_shared_images(self, req, id):
        """Retrieve images shared with the given member."""
        try:
            members = self.db_api.image_member_find(req.context, member=id)
        except exception.NotFound:
            msg = _LW("Member %(id)s not found") % {'id': id}
            LOG.warn(msg)
            msg = _("Membership could not be found.")
            raise webob.exc.HTTPBadRequest(explanation=msg)

        LOG.debug("Returning list of images shared with member %(id)s",
                  {'id': id})
        return dict(shared_images=make_member_list(members,
                                                   image_id='image_id',
                                                   can_share='can_share'))


def make_member_list(members, **attr_map):
    """Create a dict representation of a list of members which we can use
    to serialize the members list.  Keyword arguments map the names of
    optional attributes to include to the database attribute.
    """
    def _fetch_memb(memb, attr_map):
        return {k: memb[v] for k, v in attr_map.items() if v in memb.keys()}

    # Return the list of members with the given attribute mapping
    return [_fetch_memb(memb, attr_map) for memb in members]


def create_resource():
    """Image members resource factory method."""
    deserializer = wsgi.JSONRequestDeserializer()
    serializer = wsgi.JSONResponseSerializer()
    return wsgi.Resource(Controller(), deserializer, serializer)
glance-16.0.0/glance/registry/api/v1/images.py

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
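The `make_member_list` helper defined above is a compact attribute-mapping pattern: keyword arguments map output keys to database attribute names, and attributes missing from a record are simply skipped. A standalone copy with example input, so the shape of its output is concrete:

```python
def make_member_list(members, **attr_map):
    """Serialize member records, renaming keys via attr_map and skipping
    attributes a record does not carry (same logic as the helper above)."""
    def _fetch_memb(memb, attr_map):
        return {k: memb[v] for k, v in attr_map.items() if v in memb}

    return [_fetch_memb(memb, attr_map) for memb in members]


members = [{'member': 'tenant-a', 'can_share': True, 'deleted': False},
           {'member': 'tenant-b'}]  # hypothetical DB rows
# Only the mapped keys survive; 'deleted' is dropped, and the missing
# 'can_share' on the second row is silently omitted.
serialized = make_member_list(members,
                              member_id='member',
                              can_share='can_share')
```

This is why the `index` and `index_shared_images` responses expose `member_id`/`image_id` while the database rows use `member`/`image_id` internally.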
"""
Reference implementation registry server WSGI controller
"""

from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
from oslo_utils import strutils
from oslo_utils import uuidutils
from webob import exc

from glance.common import exception
from glance.common import timeutils
from glance.common import utils
from glance.common import wsgi
import glance.db
from glance.i18n import _, _LE, _LI, _LW

LOG = logging.getLogger(__name__)

CONF = cfg.CONF

DISPLAY_FIELDS_IN_INDEX = ['id', 'name', 'size',
                           'disk_format', 'container_format',
                           'checksum']

SUPPORTED_FILTERS = ['name', 'status', 'container_format', 'disk_format',
                     'min_ram', 'min_disk', 'size_min', 'size_max',
                     'changes-since', 'protected']

SUPPORTED_SORT_KEYS = ('name', 'status', 'container_format', 'disk_format',
                       'size', 'id', 'created_at', 'updated_at')

SUPPORTED_SORT_DIRS = ('asc', 'desc')

SUPPORTED_PARAMS = ('limit', 'marker', 'sort_key', 'sort_dir')


def _normalize_image_location_for_db(image_data):
    """
    This function takes the legacy locations field and the newly added
    location_data field from the image_data values dictionary which flows
    over the wire between the registry and API servers and converts it
    into the location_data format only which is then consumable by the
    Image object.

    :param image_data: a dict of values representing information in the image
    :returns: a new image data dict
    """
    if 'locations' not in image_data and 'location_data' not in image_data:
        image_data['locations'] = None
        return image_data

    locations = image_data.pop('locations', [])
    location_data = image_data.pop('location_data', [])

    location_data_dict = {}
    for l in locations:
        location_data_dict[l] = {}
    for l in location_data:
        location_data_dict[l['url']] = {'metadata': l['metadata'],
                                        'status': l['status'],
                                        # NOTE(zhiyan): New location has
                                        # no ID.
                                        'id': l['id'] if 'id' in l else None}

    # NOTE(jbresnah) preserve original order.  Tests assume original order,
    # should that be defined functionality
    ordered_keys = locations[:]
    for ld in location_data:
        if ld['url'] not in ordered_keys:
            ordered_keys.append(ld['url'])

    location_data = []
    for loc in ordered_keys:
        data = location_data_dict[loc]
        if data:
            location_data.append({'url': loc,
                                  'metadata': data['metadata'],
                                  'status': data['status'],
                                  'id': data['id']})
        else:
            location_data.append({'url': loc,
                                  'metadata': {},
                                  'status': 'active',
                                  'id': None})

    image_data['locations'] = location_data
    return image_data


class Controller(object):

    def __init__(self):
        self.db_api = glance.db.get_api()

    def _get_images(self, context, filters, **params):
        """Get images, wrapping in exception if necessary."""
        # NOTE(markwash): for backwards compatibility, is_public=True for
        # admins actually means "treat me as if I'm not an admin and show me
        # all my images"
        if context.is_admin and params.get('is_public') is True:
            params['admin_as_user'] = True
            del params['is_public']
        try:
            return self.db_api.image_get_all(context, filters=filters,
                                             v1_mode=True, **params)
        except exception.ImageNotFound:
            LOG.warn(_LW("Invalid marker. Image %(id)s could not be "
                         "found.") % {'id': params.get('marker')})
            msg = _("Invalid marker. Image could not be found.")
            raise exc.HTTPBadRequest(explanation=msg)
        except exception.Forbidden:
            LOG.warn(_LW("Access denied to image %(id)s but returning "
                         "'not found'") % {'id': params.get('marker')})
            msg = _("Invalid marker. Image could not be found.")
            raise exc.HTTPBadRequest(explanation=msg)
        except Exception:
            LOG.exception(_LE("Unable to get images"))
            raise

    def index(self, req):
        """Return a basic filtered list of public, non-deleted images

        :param req: the Request object coming from the wsgi layer
        :returns: a mapping of the following form

        .. code-block:: python

            dict(images=[image_list])

        Where image_list is a sequence of mappings::

            {
                'id': <ID>,
                'name': <NAME>,
                'size': <SIZE>,
                'disk_format': <DISK_FORMAT>,
                'container_format': <CONTAINER_FORMAT>,
                'checksum': <CHECKSUM>
            }
        """
        params = self._get_query_params(req)

        images = self._get_images(req.context, **params)

        results = []
        for image in images:
            result = {}
            for field in DISPLAY_FIELDS_IN_INDEX:
                result[field] = image[field]
            results.append(result)

        LOG.debug("Returning image list")
        return dict(images=results)

    def detail(self, req):
        """Return a filtered list of public, non-deleted images in detail

        :param req: the Request object coming from the wsgi layer
        :returns: a mapping of the following form::

            {'images':
                [{
                    'id': <ID>,
                    'name': <NAME>,
                    'size': <SIZE>,
                    'disk_format': <DISK_FORMAT>,
                    'container_format': <CONTAINER_FORMAT>,
                    'checksum': <CHECKSUM>,
                    'min_disk': <MIN_DISK>,
                    'min_ram': <MIN_RAM>,
                    'store': <STORE>,
                    'status': <STATUS>,
                    'created_at': <TIMESTAMP>,
                    'updated_at': <TIMESTAMP>,
                    'deleted_at': <TIMESTAMP>|<NONE>,
                    'properties': {'distro': 'Ubuntu 10.04 LTS', {...}}
                }, {...}]
            }
        """
        params = self._get_query_params(req)

        images = self._get_images(req.context, **params)
        image_dicts = [make_image_dict(i) for i in images]
        LOG.debug("Returning detailed image list")
        return dict(images=image_dicts)

    def _get_query_params(self, req):
        """Extract necessary query parameters from http request.

        :param req: the Request object coming from the wsgi layer
        :returns: dictionary of filters to apply to list of images
        """
        params = {
            'filters': self._get_filters(req),
            'limit': self._get_limit(req),
            'sort_key': [self._get_sort_key(req)],
            'sort_dir': [self._get_sort_dir(req)],
            'marker': self._get_marker(req),
        }

        if req.context.is_admin:
            # Only admin gets to look for non-public images
            params['is_public'] = self._get_is_public(req)

        # need to copy items because params is modified in the loop body
        items = list(params.items())
        for key, value in items:
            if value is None:
                del params[key]

        # Fix for LP Bug #1132294
        # Ensure all shared images are returned in v1
        params['member_status'] = 'all'
        return params

    def _get_filters(self, req):
        """Return a dictionary of query param filters from the request

        :param req: the Request object coming from the wsgi layer
        :returns: a dict of key/value filters
        """
        filters = {}
        properties = {}

        for param in req.params:
            if param in SUPPORTED_FILTERS:
                filters[param] = req.params.get(param)
            if param.startswith('property-'):
                _param = param[9:]
                properties[_param] = req.params.get(param)

        if 'changes-since' in filters:
            isotime = filters['changes-since']
            try:
                filters['changes-since'] = timeutils.parse_isotime(isotime)
            except ValueError:
                raise exc.HTTPBadRequest(_("Unrecognized changes-since "
                                           "value"))

        if 'protected' in filters:
            value = self._get_bool(filters['protected'])
            if value is None:
                raise exc.HTTPBadRequest(_("protected must be True, or "
                                           "False"))

            filters['protected'] = value

        # only allow admins to filter on 'deleted'
        if req.context.is_admin:
            deleted_filter = self._parse_deleted_filter(req)
            if deleted_filter is not None:
                filters['deleted'] = deleted_filter
            elif 'changes-since' not in filters:
                filters['deleted'] = False
        elif 'changes-since' not in filters:
            filters['deleted'] = False

        if properties:
            filters['properties'] = properties

        return filters

    def _get_limit(self, req):
        """Parse a limit query param into something usable."""
        try:
            limit = int(req.params.get('limit', CONF.limit_param_default))
        except ValueError:
            raise exc.HTTPBadRequest(_("limit param must be an integer"))

        if limit < 0:
            raise exc.HTTPBadRequest(_("limit param must be positive"))

        return min(CONF.api_limit_max, limit)

    def _get_marker(self, req):
        """Parse a marker query param into something usable."""
        marker = req.params.get('marker')

        if marker and not uuidutils.is_uuid_like(marker):
            msg = _('Invalid marker format')
            raise exc.HTTPBadRequest(explanation=msg)

        return marker

    def _get_sort_key(self, req):
        """Parse a sort key query param from the request object."""
        sort_key = req.params.get('sort_key', 'created_at')
        if sort_key is not None and sort_key not in SUPPORTED_SORT_KEYS:
            _keys = ', '.join(SUPPORTED_SORT_KEYS)
            msg = _("Unsupported sort_key. Acceptable values: %s") % (_keys,)
            raise exc.HTTPBadRequest(explanation=msg)
        return sort_key

    def _get_sort_dir(self, req):
        """Parse a sort direction query param from the request object."""
        sort_dir = req.params.get('sort_dir', 'desc')
        if sort_dir is not None and sort_dir not in SUPPORTED_SORT_DIRS:
            _keys = ', '.join(SUPPORTED_SORT_DIRS)
            msg = _("Unsupported sort_dir. Acceptable values: %s") % (_keys,)
            raise exc.HTTPBadRequest(explanation=msg)
        return sort_dir

    def _get_bool(self, value):
        value = value.lower()
        if value == 'true' or value == '1':
            return True
        elif value == 'false' or value == '0':
            return False

        return None

    def _get_is_public(self, req):
        """Parse is_public into something usable."""
        is_public = req.params.get('is_public')

        if is_public is None:
            # NOTE(vish): This preserves the default value of showing only
            #             public images.
            return True
        elif is_public.lower() == 'none':
            return None

        value = self._get_bool(is_public)
        if value is None:
            raise exc.HTTPBadRequest(_("is_public must be None, True, or "
                                       "False"))

        return value

    def _parse_deleted_filter(self, req):
        """Parse deleted into something usable."""
        deleted = req.params.get('deleted')
        if deleted is None:
            return None
        return strutils.bool_from_string(deleted)

    def show(self, req, id):
        """Return data about the given image id."""
        try:
            image = self.db_api.image_get(req.context, id, v1_mode=True)
            LOG.debug("Successfully retrieved image %(id)s", {'id': id})
        except exception.ImageNotFound:
            LOG.info(_LI("Image %(id)s not found"), {'id': id})
            raise exc.HTTPNotFound()
        except exception.Forbidden:
            # If it's private and doesn't belong to them, don't let on
            # that it exists
            LOG.info(_LI("Access denied to image %(id)s but returning"
                         " 'not found'"), {'id': id})
            raise exc.HTTPNotFound()
        except Exception:
            LOG.exception(_LE("Unable to show image %s") % id)
            raise

        return dict(image=make_image_dict(image))

    @utils.mutating
    def delete(self, req, id):
        """Deletes an existing image with the registry.

        :param req: wsgi Request object
        :param id: The opaque internal identifier for the image

        :returns: 200 if delete was successful, a fault if not. On
            success, the body contains the deleted image information as a
            mapping.
        """
        try:
            deleted_image = self.db_api.image_destroy(req.context, id)
            LOG.info(_LI("Successfully deleted image %(id)s"), {'id': id})
            return dict(image=make_image_dict(deleted_image))
        except exception.ForbiddenPublicImage:
            LOG.info(_LI("Delete denied for public image %(id)s"),
                     {'id': id})
            raise exc.HTTPForbidden()
        except exception.Forbidden:
            # If it's private and doesn't belong to them, don't let on
            # that it exists
            LOG.info(_LI("Access denied to image %(id)s but returning"
                         " 'not found'"), {'id': id})
            return exc.HTTPNotFound()
        except exception.ImageNotFound:
            LOG.info(_LI("Image %(id)s not found"), {'id': id})
            return exc.HTTPNotFound()
        except Exception:
            LOG.exception(_LE("Unable to delete image %s") % id)
            raise

    @utils.mutating
    def create(self, req, body):
        """Registers a new image with the registry.

        :param req: wsgi Request object
        :param body: Dictionary of information about the image

        :returns: The newly-created image information as a mapping,
            which will include the newly-created image's internal id
            in the 'id' field
        """
        image_data = body['image']

        # Ensure the image has a status set
        image_data.setdefault('status', 'active')

        # Set up the image owner
        if not req.context.is_admin or 'owner' not in image_data:
            image_data['owner'] = req.context.owner

        image_id = image_data.get('id')
        if image_id and not uuidutils.is_uuid_like(image_id):
            LOG.info(_LI("Rejecting image creation request for invalid image "
                         "id '%(bad_id)s'"), {'bad_id': image_id})
            msg = _("Invalid image id format")
            return exc.HTTPBadRequest(explanation=msg)

        if 'location' in image_data:
            image_data['locations'] = [image_data.pop('location')]

        try:
            image_data = _normalize_image_location_for_db(image_data)
            image_data = self.db_api.image_create(req.context, image_data,
                                                  v1_mode=True)
            image_data = dict(image=make_image_dict(image_data))
            LOG.info(_LI("Successfully created image %(id)s"),
                     {'id': image_data['image']['id']})
            return image_data
        except exception.Duplicate:
            msg = _("Image with identifier %s already exists!") % image_id
            LOG.warn(msg)
            return exc.HTTPConflict(msg)
        except exception.Invalid as e:
            msg = (_("Failed to add image metadata. "
                     "Got error: %s") % encodeutils.exception_to_unicode(e))
            LOG.error(msg)
            return exc.HTTPBadRequest(msg)
        except Exception:
            LOG.exception(_LE("Unable to create image %s"), image_id)
            raise

    @utils.mutating
    def update(self, req, id, body):
        """Updates an existing image with the registry.

        :param req: wsgi Request object
        :param body: Dictionary of information about the image
        :param id: The opaque internal identifier for the image

        :returns: Returns the updated image information as a mapping
        """
        image_data = body['image']
        from_state = body.get('from_state')

        # Prohibit modification of 'owner'
        if not req.context.is_admin and 'owner' in image_data:
            del image_data['owner']

        if 'location' in image_data:
            image_data['locations'] = [image_data.pop('location')]

        purge_props = req.headers.get("X-Glance-Registry-Purge-Props",
                                      "false")
        try:
            # These fields hold sensitive data, which should not be printed
            # in the logs.
            sensitive_fields = ['locations', 'location_data']
            LOG.debug("Updating image %(id)s with metadata: %(image_data)r",
                      {'id': id,
                       'image_data': {k: v for k, v in image_data.items()
                                      if k not in sensitive_fields}})
            image_data = _normalize_image_location_for_db(image_data)
            if purge_props == "true":
                purge_props = True
            else:
                purge_props = False

            updated_image = self.db_api.image_update(req.context, id,
                                                     image_data,
                                                     purge_props=purge_props,
                                                     from_state=from_state,
                                                     v1_mode=True)

            LOG.info(_LI("Updating metadata for image %(id)s"), {'id': id})
            return dict(image=make_image_dict(updated_image))
        except exception.Invalid as e:
            msg = (_("Failed to update image metadata.
" "Got error: %s") % encodeutils.exception_to_unicode(e)) LOG.error(msg) return exc.HTTPBadRequest(msg) except exception.ImageNotFound: LOG.info(_LI("Image %(id)s not found"), {'id': id}) raise exc.HTTPNotFound(body='Image not found', request=req, content_type='text/plain') except exception.ForbiddenPublicImage: LOG.info(_LI("Update denied for public image %(id)s"), {'id': id}) raise exc.HTTPForbidden() except exception.Forbidden: # If it's private and doesn't belong to them, don't let on # that it exists LOG.info(_LI("Access denied to image %(id)s but returning" " 'not found'"), {'id': id}) raise exc.HTTPNotFound(body='Image not found', request=req, content_type='text/plain') except exception.Conflict as e: LOG.info(encodeutils.exception_to_unicode(e)) raise exc.HTTPConflict(body='Image operation conflicts', request=req, content_type='text/plain') except Exception: LOG.exception(_LE("Unable to update image %s") % id) raise def _limit_locations(image): locations = image.pop('locations', []) image['location_data'] = locations image['location'] = None for loc in locations: if loc['status'] == 'active': image['location'] = loc['url'] break def make_image_dict(image): """Create a dict representation of an image which we can use to serialize the image. """ def _fetch_attrs(d, attrs): return {a: d[a] for a in attrs if a in d.keys()} # TODO(sirp): should this be a dict, or a list of dicts? 
# A plain dict is more convenient, but list of dicts would provide # access to created_at, etc properties = {p['name']: p['value'] for p in image['properties'] if not p['deleted']} image_dict = _fetch_attrs(image, glance.db.IMAGE_ATTRS) image_dict['properties'] = properties _limit_locations(image_dict) return image_dict def create_resource(): """Images resource factory method.""" deserializer = wsgi.JSONRequestDeserializer() serializer = wsgi.JSONResponseSerializer() return wsgi.Resource(Controller(), deserializer, serializer) glance-16.0.0/glance/registry/api/v1/__init__.py0000666000175100017510000000662613245511421021376 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
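The `_limit_locations()`/`make_image_dict()` pair above flattens a DB image row into the v1 wire format: property rows collapse into a name/value mapping, and the locations list is folded into `location_data` plus a single legacy `location` URL. A standalone sketch of that flattening, using a hand-built sample record and a small stand-in for `glance.db.IMAGE_ATTRS` (both invented here for illustration, not Glance's actual data):

```python
# Stand-in for glance.db.IMAGE_ATTRS; the real set is larger.
IMAGE_ATTRS = ['id', 'name', 'status', 'locations']


def limit_locations(image):
    # Move the locations list into 'location_data' and expose the first
    # *active* location's URL as the legacy single 'location' field.
    locations = image.pop('locations', [])
    image['location_data'] = locations
    image['location'] = None
    for loc in locations:
        if loc['status'] == 'active':
            image['location'] = loc['url']
            break


def make_image_dict(image):
    # Collapse property rows into a name -> value mapping, skipping
    # soft-deleted rows, then keep only the whitelisted attributes.
    properties = {p['name']: p['value']
                  for p in image['properties'] if not p['deleted']}
    image_dict = {a: image[a] for a in IMAGE_ATTRS if a in image}
    image_dict['properties'] = properties
    limit_locations(image_dict)
    return image_dict


# Invented sample record mimicking the shape of a DB image row.
record = {
    'id': 'abc', 'name': 'demo', 'status': 'active',
    'locations': [
        {'url': 'file:///old', 'status': 'deleted'},
        {'url': 'file:///img', 'status': 'active'},
    ],
    'properties': [
        {'name': 'arch', 'value': 'x86_64', 'deleted': False},
        {'name': 'tmp', 'value': 'x', 'deleted': True},
    ],
}
out = make_image_dict(record)
```

Note how the deleted property row and the non-active location are filtered out, while every location (regardless of status) is still carried in `location_data`.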
from glance.common import wsgi
from glance.registry.api.v1 import images
from glance.registry.api.v1 import members


def init(mapper):
    images_resource = images.create_resource()

    mapper.connect("/",
                   controller=images_resource,
                   action="index")
    mapper.connect("/images",
                   controller=images_resource,
                   action="index",
                   conditions={'method': ['GET']})
    mapper.connect("/images",
                   controller=images_resource,
                   action="create",
                   conditions={'method': ['POST']})
    mapper.connect("/images/detail",
                   controller=images_resource,
                   action="detail",
                   conditions={'method': ['GET']})
    mapper.connect("/images/{id}",
                   controller=images_resource,
                   action="show",
                   conditions=dict(method=["GET"]))
    mapper.connect("/images/{id}",
                   controller=images_resource,
                   action="update",
                   conditions=dict(method=["PUT"]))
    mapper.connect("/images/{id}",
                   controller=images_resource,
                   action="delete",
                   conditions=dict(method=["DELETE"]))

    members_resource = members.create_resource()

    mapper.connect("/images/{image_id}/members",
                   controller=members_resource,
                   action="index",
                   conditions={'method': ['GET']})
    mapper.connect("/images/{image_id}/members",
                   controller=members_resource,
                   action="create",
                   conditions={'method': ['POST']})
    mapper.connect("/images/{image_id}/members",
                   controller=members_resource,
                   action="update_all",
                   conditions=dict(method=["PUT"]))
    mapper.connect("/images/{image_id}/members/{id}",
                   controller=members_resource,
                   action="show",
                   conditions={'method': ['GET']})
    mapper.connect("/images/{image_id}/members/{id}",
                   controller=members_resource,
                   action="update",
                   conditions={'method': ['PUT']})
    mapper.connect("/images/{image_id}/members/{id}",
                   controller=members_resource,
                   action="delete",
                   conditions={'method': ['DELETE']})
    mapper.connect("/shared-images/{id}",
                   controller=members_resource,
                   action="index_shared_images")


class API(wsgi.Router):
    """WSGI entry point for all Registry requests."""

    def __init__(self, mapper):
        mapper = mapper or wsgi.APIMapper()
        init(mapper)
        super(API, self).__init__(mapper)
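The `init()` function above wires every registry v1 route to a controller action. As a plain-data summary (illustrative only; this table just mirrors the `mapper.connect()` calls and does not touch Routes or WSGI):

```python
# (HTTP method, path template) -> controller action, as registered above.
V1_ROUTES = {
    ('GET', '/'): 'index',
    ('GET', '/images'): 'index',
    ('POST', '/images'): 'create',
    ('GET', '/images/detail'): 'detail',
    ('GET', '/images/{id}'): 'show',
    ('PUT', '/images/{id}'): 'update',
    ('DELETE', '/images/{id}'): 'delete',
    ('GET', '/images/{image_id}/members'): 'index',
    ('POST', '/images/{image_id}/members'): 'create',
    ('PUT', '/images/{image_id}/members'): 'update_all',
    ('GET', '/images/{image_id}/members/{id}'): 'show',
    ('PUT', '/images/{image_id}/members/{id}'): 'update',
    ('DELETE', '/images/{image_id}/members/{id}'): 'delete',
    ('GET', '/shared-images/{id}'): 'index_shared_images',
}


def route(method, path_template):
    # Return the action name for a (method, template) pair, or None.
    return V1_ROUTES.get((method, path_template))
```

The first seven entries dispatch to the images resource, the rest to the members resource.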
glance-16.0.0/glance/registry/__init__.py

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Registry API
"""

from oslo_config import cfg

from glance.i18n import _

registry_addr_opts = [
    cfg.HostAddressOpt('registry_host',
                       default='0.0.0.0',
                       deprecated_for_removal=True,
                       deprecated_since="Queens",
                       deprecated_reason=_("""
Glance registry service is deprecated for removal.

More information can be found from the spec:
http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
"""),
                       help=_("""
Address the registry server is hosted on.

Possible values:
    * A valid IP or hostname

Related options:
    * None

""")),
    cfg.PortOpt('registry_port',
                default=9191,
                deprecated_for_removal=True,
                deprecated_since="Queens",
                deprecated_reason=_("""
Glance registry service is deprecated for removal.

More information can be found from the spec:
http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
"""),
                help=_("""
Port the registry server is listening on.
Possible values:
    * A valid port number

Related options:
    * None

""")),
]

CONF = cfg.CONF
CONF.register_opts(registry_addr_opts)


glance-16.0.0/glance/registry/client/v2/client.py

# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Simple client class to speak with any RESTful service that implements
the Glance Registry API
"""

from glance.common import rpc


class RegistryClient(rpc.RPCClient):
    """Registry's V2 Client."""

    DEFAULT_PORT = 9191


glance-16.0.0/glance/registry/client/v2/api.py

# Copyright 2013 Red Hat, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Registry's Client V2
"""

import os

from oslo_config import cfg
from oslo_log import log as logging

from glance.common import exception
from glance.i18n import _
from glance.registry.client.v2 import client

LOG = logging.getLogger(__name__)

CONF = cfg.CONF
_registry_client = 'glance.registry.client'
CONF.import_opt('registry_client_protocol', _registry_client)
CONF.import_opt('registry_client_key_file', _registry_client)
CONF.import_opt('registry_client_cert_file', _registry_client)
CONF.import_opt('registry_client_ca_file', _registry_client)
CONF.import_opt('registry_client_insecure', _registry_client)
CONF.import_opt('registry_client_timeout', _registry_client)
CONF.import_opt('use_user_token', _registry_client)
CONF.import_opt('admin_user', _registry_client)
CONF.import_opt('admin_password', _registry_client)
CONF.import_opt('admin_tenant_name', _registry_client)
CONF.import_opt('auth_url', _registry_client)
CONF.import_opt('auth_strategy', _registry_client)
CONF.import_opt('auth_region', _registry_client)

_CLIENT_CREDS = None
_CLIENT_HOST = None
_CLIENT_PORT = None
_CLIENT_KWARGS = {}


def configure_registry_client():
    """
    Sets up a registry client for use in registry lookups
    """
    global _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
    try:
        host, port = CONF.registry_host, CONF.registry_port
    except cfg.ConfigFileValueError:
        msg = _("Configuration option was not valid")
        LOG.error(msg)
        raise exception.BadRegistryConnectionConfiguration(msg)
    except IndexError:
        msg = _("Could not find required configuration option")
        LOG.error(msg)
        raise exception.BadRegistryConnectionConfiguration(msg)

    _CLIENT_HOST = host
    _CLIENT_PORT = port
    _CLIENT_KWARGS = {
        'use_ssl': CONF.registry_client_protocol.lower() == 'https',
        'key_file': CONF.registry_client_key_file,
        'cert_file': CONF.registry_client_cert_file,
        'ca_file': CONF.registry_client_ca_file,
        'insecure': CONF.registry_client_insecure,
        'timeout': CONF.registry_client_timeout,
    }

    if not CONF.use_user_token:
        configure_registry_admin_creds()


def configure_registry_admin_creds():
    global _CLIENT_CREDS

    if CONF.auth_url or os.getenv('OS_AUTH_URL'):
        strategy = 'keystone'
    else:
        strategy = CONF.auth_strategy

    _CLIENT_CREDS = {
        'user': CONF.admin_user,
        'password': CONF.admin_password,
        'username': CONF.admin_user,
        'tenant': CONF.admin_tenant_name,
        'auth_url': os.getenv('OS_AUTH_URL') or CONF.auth_url,
        'strategy': strategy,
        'region': CONF.auth_region,
    }


def get_registry_client(cxt):
    global _CLIENT_CREDS, _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
    kwargs = _CLIENT_KWARGS.copy()
    if CONF.use_user_token:
        kwargs['auth_token'] = cxt.auth_token
    if _CLIENT_CREDS:
        kwargs['creds'] = _CLIENT_CREDS
    return client.RegistryClient(_CLIENT_HOST, _CLIENT_PORT, **kwargs)


glance-16.0.0/glance/registry/client/v2/__init__.py  (empty file)

glance-16.0.0/glance/registry/client/__init__.py

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
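The `get_registry_client()` function above merges the module-level connection kwargs with per-request authentication: the caller's token when `use_user_token` is in effect, and cached admin credentials when they have been configured. A pure-dict sketch of that precedence (the CONF and context objects are faked with plain values here, so this illustrates the merging only, not Glance's actual client wiring):

```python
def build_client_kwargs(base_kwargs, use_user_token, auth_token, creds):
    # Copy so the shared module-level kwargs dict is never mutated.
    kwargs = base_kwargs.copy()
    if use_user_token:
        # Forward the caller's own token with the request.
        kwargs['auth_token'] = auth_token
    if creds:
        # Otherwise-configured admin credentials ride along instead.
        kwargs['creds'] = creds
    return kwargs


base = {'use_ssl': False, 'timeout': 600}
user_call = build_client_kwargs(base, True, 'user-token', None)
admin_call = build_client_kwargs(base, False, None, {'user': 'admin'})
```

Copying the base dict matters because, as in the real code, the connection settings are computed once at configure time and reused for every request.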
from oslo_config import cfg

from glance.i18n import _

registry_client_opts = [
    cfg.StrOpt('registry_client_protocol',
               default='http',
               choices=('http', 'https'),
               deprecated_for_removal=True,
               deprecated_since="Queens",
               deprecated_reason=_("""
Glance registry service is deprecated for removal.

More information can be found from the spec:
http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
"""),
               help=_("""
Protocol to use for communication with the registry server.

Provide a string value representing the protocol to use for communication
with the registry server. By default, this option is set to ``http`` and the
connection is not secure.

This option can be set to ``https`` to establish a secure connection to the
registry server. In this case, provide a key to use for the SSL connection
using the ``registry_client_key_file`` option. Also include the CA file and
cert file using the options ``registry_client_ca_file`` and
``registry_client_cert_file`` respectively.

Possible values:
    * http
    * https

Related options:
    * registry_client_key_file
    * registry_client_cert_file
    * registry_client_ca_file

""")),
    cfg.StrOpt('registry_client_key_file',
               sample_default='/etc/ssl/key/key-file.pem',
               deprecated_for_removal=True,
               deprecated_since="Queens",
               deprecated_reason=_("""
Glance registry service is deprecated for removal.

More information can be found from the spec:
http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
"""),
               help=_("""
Absolute path to the private key file.

Provide a string value representing a valid absolute path to the private key
file to use for establishing a secure connection to the registry server.

NOTE: This option must be set if ``registry_client_protocol`` is set to
``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE environment variable may
be set to a filepath of the key file.

Possible values:
    * String value representing a valid absolute path to the key file.

Related options:
    * registry_client_protocol

""")),
    cfg.StrOpt('registry_client_cert_file',
               sample_default='/etc/ssl/certs/file.crt',
               deprecated_for_removal=True,
               deprecated_since="Queens",
               deprecated_reason=_("""
Glance registry service is deprecated for removal.

More information can be found from the spec:
http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
"""),
               help=_("""
Absolute path to the certificate file.

Provide a string value representing a valid absolute path to the certificate
file to use for establishing a secure connection to the registry server.

NOTE: This option must be set if ``registry_client_protocol`` is set to
``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE environment variable
may be set to a filepath of the certificate file.

Possible values:
    * String value representing a valid absolute path to the certificate file.

Related options:
    * registry_client_protocol

""")),
    cfg.StrOpt('registry_client_ca_file',
               sample_default='/etc/ssl/cafile/file.ca',
               deprecated_for_removal=True,
               deprecated_since="Queens",
               deprecated_reason=_("""
Glance registry service is deprecated for removal.

More information can be found from the spec:
http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
"""),
               help=_("""
Absolute path to the Certificate Authority file.

Provide a string value representing a valid absolute path to the certificate
authority file to use for establishing a secure connection to the registry
server.

NOTE: This option must be set if ``registry_client_protocol`` is set to
``https``. Alternatively, the GLANCE_CLIENT_CA_FILE environment variable may
be set to a filepath of the CA file. This option is ignored if the
``registry_client_insecure`` option is set to ``True``.

Possible values:
    * String value representing a valid absolute path to the CA file.

Related options:
    * registry_client_protocol
    * registry_client_insecure

""")),
    cfg.BoolOpt('registry_client_insecure',
                default=False,
                deprecated_for_removal=True,
                deprecated_since="Queens",
                deprecated_reason=_("""
Glance registry service is deprecated for removal.

More information can be found from the spec:
http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
"""),
                help=_("""
Set verification of the registry server certificate.

Provide a boolean value to determine whether or not to validate SSL
connections to the registry server. By default, this option is set to
``False`` and the SSL connections are validated.

If set to ``True``, the connection to the registry server is not validated
via a certifying authority and the ``registry_client_ca_file`` option is
ignored. This is the registry's equivalent of specifying --insecure on the
command line using glanceclient for the API.

Possible values:
    * True
    * False

Related options:
    * registry_client_protocol
    * registry_client_ca_file

""")),
    cfg.IntOpt('registry_client_timeout',
               default=600,
               min=0,
               deprecated_for_removal=True,
               deprecated_since="Queens",
               deprecated_reason=_("""
Glance registry service is deprecated for removal.

More information can be found from the spec:
http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
"""),
               help=_("""
Timeout value for registry requests.

Provide an integer value representing the period of time in seconds that the
API server will wait for a registry request to complete. The default value is
600 seconds.

A value of 0 implies that a request will never timeout.

Possible values:
    * Zero
    * Positive integer

Related options:
    * None

""")),
]

_DEPRECATE_USE_USER_TOKEN_MSG = ('This option was considered harmful and '
                                 'has been deprecated in M release. It will '
                                 'be removed in O release. For more '
                                 'information read OSSN-0060. '
                                 'Related functionality with uploading big '
                                 'images has been implemented with Keystone '
                                 'trusts support.')

registry_client_ctx_opts = [
    cfg.BoolOpt('use_user_token', default=True,
                deprecated_for_removal=True,
                deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
                help=_('Whether to pass through the user token when '
                       'making requests to the registry. To prevent '
                       'failures with token expiration during big '
                       'files upload, it is recommended to set this '
                       'parameter to False. '
                       'If "use_user_token" is not in effect, then '
                       'admin credentials can be specified.')),
    cfg.StrOpt('admin_user', secret=True,
               deprecated_for_removal=True,
               deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
               help=_('The administrators user name. '
                      'If "use_user_token" is not in effect, then '
                      'admin credentials can be specified.')),
    cfg.StrOpt('admin_password', secret=True,
               deprecated_for_removal=True,
               deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
               help=_('The administrators password. '
                      'If "use_user_token" is not in effect, then '
                      'admin credentials can be specified.')),
    cfg.StrOpt('admin_tenant_name', secret=True,
               deprecated_for_removal=True,
               deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
               help=_('The tenant name of the administrative user. '
                      'If "use_user_token" is not in effect, then '
                      'admin tenant name can be specified.')),
    cfg.StrOpt('auth_url',
               deprecated_for_removal=True,
               deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
               help=_('The URL to the keystone service. '
                      'If "use_user_token" is not in effect and '
                      'using keystone auth, then URL of keystone '
                      'can be specified.')),
    cfg.StrOpt('auth_strategy', default='noauth',
               deprecated_for_removal=True,
               deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
               help=_('The strategy to use for authentication. '
                      'If "use_user_token" is not in effect, then '
                      'auth strategy can be specified.')),
    cfg.StrOpt('auth_region',
               deprecated_for_removal=True,
               deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
               help=_('The region for the authentication service. '
                      'If "use_user_token" is not in effect and '
                      'using keystone auth, then region name can '
                      'be specified.')),
]

CONF = cfg.CONF
CONF.register_opts(registry_client_opts)
CONF.register_opts(registry_client_ctx_opts)


glance-16.0.0/glance/registry/client/v1/client.py

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Simple client class to speak with any RESTful service that implements
the Glance Registry API
"""

from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import excutils
import six

from glance.common.client import BaseClient
from glance.common import crypt
from glance.common import exception
from glance.i18n import _LE
from glance.registry.api.v1 import images

LOG = logging.getLogger(__name__)


class RegistryClient(BaseClient):
    """A client for the Registry image metadata service."""

    DEFAULT_PORT = 9191

    def __init__(self, host=None, port=None, metadata_encryption_key=None,
                 identity_headers=None, **kwargs):
        """
        :param metadata_encryption_key: Key used to encrypt 'location'
                                        metadata
        """
        self.metadata_encryption_key = metadata_encryption_key
        # NOTE (dprince): by default base client overwrites host and port
        # settings when using keystone.
        # configure_via_auth=False disables this behaviour
        # to ensure we still send requests to the Registry API
        self.identity_headers = identity_headers
        # store available passed request id for do_request call
        self._passed_request_id = kwargs.pop('request_id', None)
        BaseClient.__init__(self, host, port, configure_via_auth=False,
                            **kwargs)

    def decrypt_metadata(self, image_metadata):
        if self.metadata_encryption_key:
            if image_metadata.get('location'):
                location = crypt.urlsafe_decrypt(self.metadata_encryption_key,
                                                 image_metadata['location'])
                image_metadata['location'] = location

            if image_metadata.get('location_data'):
                ld = []
                for loc in image_metadata['location_data']:
                    url = crypt.urlsafe_decrypt(self.metadata_encryption_key,
                                                loc['url'])
                    ld.append({'id': loc['id'], 'url': url,
                               'metadata': loc['metadata'],
                               'status': loc['status']})
                image_metadata['location_data'] = ld
        return image_metadata

    def encrypt_metadata(self, image_metadata):
        if self.metadata_encryption_key:
            location_url = image_metadata.get('location')
            if location_url:
                location = crypt.urlsafe_encrypt(self.metadata_encryption_key,
                                                 location_url,
                                                 64)
                image_metadata['location'] = location

            if image_metadata.get('location_data'):
                ld = []
                for loc in image_metadata['location_data']:
                    if loc['url'] == location_url:
                        url = location
                    else:
                        url = crypt.urlsafe_encrypt(
                            self.metadata_encryption_key, loc['url'], 64)
                    ld.append({'url': url, 'metadata': loc['metadata'],
                               'status': loc['status'],
                               # NOTE(zhiyan): New location has no ID field.
                               'id': loc.get('id')})
                image_metadata['location_data'] = ld
        return image_metadata

    def get_images(self, **kwargs):
        """
        Returns a list of image id/name mappings from Registry

        :param filters: dict of keys & expected values to filter results
        :param marker: image id after which to start page
        :param limit: max number of images to return
        :param sort_key: results will be ordered by this image attribute
        :param sort_dir: direction in which to order results (asc, desc)
        """
        params = self._extract_params(kwargs, images.SUPPORTED_PARAMS)
        res = self.do_request("GET", "/images", params=params)
        image_list = jsonutils.loads(res.read())['images']
        for image in image_list:
            image = self.decrypt_metadata(image)
        return image_list

    def do_request(self, method, action, **kwargs):
        try:
            kwargs['headers'] = kwargs.get('headers', {})
            kwargs['headers'].update(self.identity_headers or {})
            if self._passed_request_id:
                request_id = self._passed_request_id
                if six.PY3 and isinstance(request_id, bytes):
                    request_id = request_id.decode('utf-8')
                kwargs['headers']['X-Openstack-Request-ID'] = request_id
            res = super(RegistryClient, self).do_request(method,
                                                         action,
                                                         **kwargs)
            status = res.status
            request_id = res.getheader('x-openstack-request-id')
            if six.PY3 and isinstance(request_id, bytes):
                request_id = request_id.decode('utf-8')
            LOG.debug("Registry request %(method)s %(action)s HTTP %(status)s"
                      " request id %(request_id)s",
                      {'method': method, 'action': action,
                       'status': status, 'request_id': request_id})
        # a 404 condition is not fatal, we shouldn't log at a fatal
        # level for it.
        except exception.NotFound:
            raise
        # The following exception logging should only really be used
        # in extreme and unexpected cases.
        except Exception as exc:
            with excutils.save_and_reraise_exception():
                exc_name = exc.__class__.__name__
                LOG.exception(_LE("Registry client request %(method)s "
                                  "%(action)s raised %(exc_name)s"),
                              {'method': method, 'action': action,
                               'exc_name': exc_name})
        return res

    def get_images_detailed(self, **kwargs):
        """
        Returns a list of detailed image data mappings from Registry

        :param filters: dict of keys & expected values to filter results
        :param marker: image id after which to start page
        :param limit: max number of images to return
        :param sort_key: results will be ordered by this image attribute
        :param sort_dir: direction in which to order results (asc, desc)
        """
        params = self._extract_params(kwargs, images.SUPPORTED_PARAMS)
        res = self.do_request("GET", "/images/detail", params=params)
        image_list = jsonutils.loads(res.read())['images']
        for image in image_list:
            image = self.decrypt_metadata(image)
        return image_list

    def get_image(self, image_id):
        """Returns a mapping of image metadata from Registry."""
        res = self.do_request("GET", "/images/%s" % image_id)
        data = jsonutils.loads(res.read())['image']
        return self.decrypt_metadata(data)

    def add_image(self, image_metadata):
        """
        Tells registry about an image's metadata
        """
        headers = {
            'Content-Type': 'application/json',
        }

        if 'image' not in image_metadata:
            image_metadata = dict(image=image_metadata)

        encrypted_metadata = self.encrypt_metadata(image_metadata['image'])
        image_metadata['image'] = encrypted_metadata
        body = jsonutils.dump_as_bytes(image_metadata)

        res = self.do_request("POST", "/images", body=body, headers=headers)
        # Registry returns a JSONified dict(image=image_info)
        data = jsonutils.loads(res.read())
        image = data['image']
        return self.decrypt_metadata(image)

    def update_image(self, image_id, image_metadata, purge_props=False,
                     from_state=None):
        """
        Updates Registry's information about an image
        """
        if 'image' not in image_metadata:
            image_metadata = dict(image=image_metadata)

        encrypted_metadata = self.encrypt_metadata(image_metadata['image'])
        image_metadata['image'] = encrypted_metadata
        image_metadata['from_state'] = from_state
        body = jsonutils.dump_as_bytes(image_metadata)

        headers = {
            'Content-Type': 'application/json',
        }

        if purge_props:
            headers["X-Glance-Registry-Purge-Props"] = "true"

        res = self.do_request("PUT", "/images/%s" % image_id, body=body,
                              headers=headers)
        data = jsonutils.loads(res.read())
        image = data['image']
        return self.decrypt_metadata(image)

    def delete_image(self, image_id):
        """
        Deletes Registry's information about an image
        """
        res = self.do_request("DELETE", "/images/%s" % image_id)
        data = jsonutils.loads(res.read())
        image = data['image']
        return image

    def get_image_members(self, image_id):
        """Return a list of membership associations from Registry."""
        res = self.do_request("GET", "/images/%s/members" % image_id)
        data = jsonutils.loads(res.read())['members']
        return data

    def get_member_images(self, member_id):
        """Return a list of membership associations from Registry."""
        res = self.do_request("GET", "/shared-images/%s" % member_id)
        data = jsonutils.loads(res.read())['shared_images']
        return data

    def replace_members(self, image_id, member_data):
        """Replace registry's information about image membership."""
        if isinstance(member_data, (list, tuple)):
            member_data = dict(memberships=list(member_data))
        elif (isinstance(member_data, dict) and
              'memberships' not in member_data):
            member_data = dict(memberships=[member_data])

        body = jsonutils.dump_as_bytes(member_data)

        headers = {'Content-Type': 'application/json', }

        res = self.do_request("PUT", "/images/%s/members" % image_id,
                              body=body, headers=headers)
        return self.get_status_code(res) == 204

    def add_member(self, image_id, member_id, can_share=None):
        """Add to registry's information about image membership."""
        body = None
        headers = {}
        # Build up a body if can_share is specified
        if can_share is not None:
            body = jsonutils.dump_as_bytes(
                dict(member=dict(can_share=can_share)))
            headers['Content-Type'] = 'application/json'

        url = "/images/%s/members/%s" % (image_id, member_id)
        res = self.do_request("PUT", url, body=body, headers=headers)
        return self.get_status_code(res) == 204

    def delete_member(self, image_id, member_id):
        """Delete registry's information about image membership."""
        res = self.do_request("DELETE", "/images/%s/members/%s" %
                              (image_id, member_id))
        return self.get_status_code(res) == 204


glance-16.0.0/glance/registry/client/v1/api.py

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Registry's Client API
"""

import os

from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils

from glance.common import exception
from glance.i18n import _
from glance.registry.client.v1 import client

LOG = logging.getLogger(__name__)

registry_client_ctx_opts = [
    cfg.BoolOpt('send_identity_headers', default=False,
                help=_("""
Send headers received from identity when making requests to registry.

Typically, Glance registry can be deployed in multiple flavors, which may or
may not include authentication. For example, ``trusted-auth`` is a flavor
that does not require the registry service to authenticate the requests it
receives. However, the registry service may still need a user context to be
populated to serve the requests.
This can be achieved by the caller (the Glance API usually)
passing through the headers it received from authenticating with
identity for the same request. The typical headers sent are
``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``, ``X-Identity-Status``
and ``X-Service-Catalog``.

Provide a boolean value to determine whether to send the identity
headers to provide tenant and user information along with the
requests to registry service. By default, this option is set to
``False``, which means that user and tenant information is not
available readily. It must be obtained by authenticating. Hence,
if this is set to ``False``, ``flavor`` must be set to value that
either includes authentication or authenticated user context.

Possible values:
    * True
    * False

Related options:
    * flavor

""")),
]

CONF = cfg.CONF
CONF.register_opts(registry_client_ctx_opts)

_registry_client = 'glance.registry.client'
CONF.import_opt('registry_client_protocol', _registry_client)
CONF.import_opt('registry_client_key_file', _registry_client)
CONF.import_opt('registry_client_cert_file', _registry_client)
CONF.import_opt('registry_client_ca_file', _registry_client)
CONF.import_opt('registry_client_insecure', _registry_client)
CONF.import_opt('registry_client_timeout', _registry_client)
CONF.import_opt('use_user_token', _registry_client)
CONF.import_opt('admin_user', _registry_client)
CONF.import_opt('admin_password', _registry_client)
CONF.import_opt('admin_tenant_name', _registry_client)
CONF.import_opt('auth_url', _registry_client)
CONF.import_opt('auth_strategy', _registry_client)
CONF.import_opt('auth_region', _registry_client)
CONF.import_opt('metadata_encryption_key', 'glance.common.config')

_CLIENT_CREDS = None
_CLIENT_HOST = None
_CLIENT_PORT = None
_CLIENT_KWARGS = {}
# AES key used to encrypt 'location' metadata
_METADATA_ENCRYPTION_KEY = None


def configure_registry_client():
    """
    Sets up a registry client for use in registry lookups
    """
    global _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT, \
        _METADATA_ENCRYPTION_KEY
    try:
        host, port = CONF.registry_host, CONF.registry_port
    except cfg.ConfigFileValueError:
        msg = _("Configuration option was not valid")
        LOG.error(msg)
        raise exception.BadRegistryConnectionConfiguration(reason=msg)
    except IndexError:
        msg = _("Could not find required configuration option")
        LOG.error(msg)
        raise exception.BadRegistryConnectionConfiguration(reason=msg)

    _CLIENT_HOST = host
    _CLIENT_PORT = port
    _METADATA_ENCRYPTION_KEY = CONF.metadata_encryption_key
    _CLIENT_KWARGS = {
        'use_ssl': CONF.registry_client_protocol.lower() == 'https',
        'key_file': CONF.registry_client_key_file,
        'cert_file': CONF.registry_client_cert_file,
        'ca_file': CONF.registry_client_ca_file,
        'insecure': CONF.registry_client_insecure,
        'timeout': CONF.registry_client_timeout,
    }

    if not CONF.use_user_token:
        configure_registry_admin_creds()


def configure_registry_admin_creds():
    global _CLIENT_CREDS

    if CONF.auth_url or os.getenv('OS_AUTH_URL'):
        strategy = 'keystone'
    else:
        strategy = CONF.auth_strategy

    _CLIENT_CREDS = {
        'user': CONF.admin_user,
        'password': CONF.admin_password,
        'username': CONF.admin_user,
        'tenant': CONF.admin_tenant_name,
        'auth_url': os.getenv('OS_AUTH_URL') or CONF.auth_url,
        'strategy': strategy,
        'region': CONF.auth_region,
    }


def get_registry_client(cxt):
    global _CLIENT_CREDS, _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
    global _METADATA_ENCRYPTION_KEY
    kwargs = _CLIENT_KWARGS.copy()
    if CONF.use_user_token:
        kwargs['auth_token'] = cxt.auth_token
    if _CLIENT_CREDS:
        kwargs['creds'] = _CLIENT_CREDS

    if CONF.send_identity_headers:
        identity_headers = {
            'X-User-Id': cxt.user or '',
            'X-Tenant-Id': cxt.tenant or '',
            'X-Roles': ','.join(cxt.roles),
            'X-Identity-Status': 'Confirmed',
            'X-Service-Catalog': jsonutils.dumps(cxt.service_catalog),
        }
        kwargs['identity_headers'] = identity_headers
    kwargs['request_id'] = cxt.request_id
    return client.RegistryClient(_CLIENT_HOST, _CLIENT_PORT,
                                 _METADATA_ENCRYPTION_KEY, **kwargs)


def get_images_list(context, **kwargs):
    c = get_registry_client(context)
    return c.get_images(**kwargs)


def get_images_detail(context, **kwargs):
    c = get_registry_client(context)
    return c.get_images_detailed(**kwargs)


def get_image_metadata(context, image_id):
    c = get_registry_client(context)
    return c.get_image(image_id)


def add_image_metadata(context, image_meta):
    LOG.debug("Adding image metadata...")
    c = get_registry_client(context)
    return c.add_image(image_meta)


def update_image_metadata(context, image_id, image_meta,
                          purge_props=False, from_state=None):
    LOG.debug("Updating image metadata for image %s...", image_id)
    c = get_registry_client(context)
    return c.update_image(image_id, image_meta, purge_props=purge_props,
                          from_state=from_state)


def delete_image_metadata(context, image_id):
    LOG.debug("Deleting image metadata for image %s...", image_id)
    c = get_registry_client(context)
    return c.delete_image(image_id)


def get_image_members(context, image_id):
    c = get_registry_client(context)
    return c.get_image_members(image_id)


def get_member_images(context, member_id):
    c = get_registry_client(context)
    return c.get_member_images(member_id)


def replace_members(context, image_id, member_data):
    c = get_registry_client(context)
    return c.replace_members(image_id, member_data)


def add_member(context, image_id, member_id, can_share=None):
    c = get_registry_client(context)
    return c.add_member(image_id, member_id, can_share=can_share)


def delete_member(context, image_id, member_id):
    c = get_registry_client(context)
    return c.delete_member(image_id, member_id)
glance-16.0.0/glance/registry/client/v1/__init__.py0000666000175100017510000000000013245511421022057 0ustar zuulzuul00000000000000
glance-16.0.0/glance/__init__.py0000666000175100017510000000000013245511421016403 0ustar zuulzuul00000000000000
glance-16.0.0/glance/locale/0000775000175100017510000000000013245511661015547 5ustar zuulzuul00000000000000
glance-16.0.0/glance/locale/ja/0000775000175100017510000000000013245511661016141 5ustar 
zuulzuul00000000000000glance-16.0.0/glance/locale/ja/LC_MESSAGES/0000775000175100017510000000000013245511661017726 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/ja/LC_MESSAGES/glance.po0000666000175100017510000025151713245511426021533 0ustar zuulzuul00000000000000
# Translations template for glance.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the glance project.
#
# Translators:
# Tomoyuki KATO , 2013
# Andreas Jaeger , 2016. #zanata
# Shu Muto , 2018. #zanata
msgid ""
msgstr ""
"Project-Id-Version: glance VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2018-02-09 19:32+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-02-09 10:00+0000\n"
"Last-Translator: Shu Muto \n"
"Language: ja\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.9.6\n"
"Language-Team: Japanese\n"

#, python-format
msgid "\t%s"
msgstr "\t%s"

#, python-format
msgid "%(cls)s exception was raised in the last rpc call: %(val)s"
msgstr "最後の RPC 呼び出しで %(cls)s 例外が発生しました: %(val)s"

#, python-format
msgid "%(m_id)s not found in the member list of the image %(i_id)s."
msgstr "イメージ %(i_id)s のメンバーリストで %(m_id)s が見つかりません。"

#, python-format
msgid "%(serv)s (pid %(pid)s) is running..."
msgstr "%(serv)s (pid %(pid)s) が実行中..."

#, python-format
msgid "%(serv)s appears to already be running: %(pid)s"
msgstr "%(serv)s は既に実行されている可能性があります: %(pid)s"

#, python-format
msgid ""
"%(strategy)s is registered as a module twice. %(module)s is not being used."
msgstr ""
"%(strategy)s はモジュールとして 2 回登録されています。%(module)s は使用されて"
"いません。"

#, python-format
msgid ""
"%(task_id)s of %(task_type)s not configured properly. 
Could not load the " "filesystem store" msgstr "" "%(task_type)s ã® %(task_id)s ãŒæ­£ã—ã設定ã•れã¦ã„ã¾ã›ã‚“。ファイルシステムスト" "アをロードã§ãã¾ã›ã‚“ã§ã—ãŸã€‚" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "%(task_type)s ã® %(task_id)s ãŒé©åˆ‡ã«è¨­å®šã•れã¦ã„ã¾ã›ã‚“。作業ディレクトリー " "%(work_dir)s ãŒã‚りã¾ã›ã‚“" #, python-format msgid "%(verb)sing %(serv)s" msgstr "%(serv)s ã® %(verb)s 中" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "%(conf)s を使用ã—㦠%(serv)s ã‚’ %(verb)s 中" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s host:port ã®ãƒšã‚¢ã‚’指定ã—ã¦ãã ã•ã„。host 㯠IPv4 アドレスã€IPv6 アドレスã€" "ホストåã€ã¾ãŸã¯ FQDN ã§ã™ã€‚IPv6 アドレスを使用ã™ã‚‹å ´åˆã¯ã€ã‚¢ãƒ‰ãƒ¬ã‚¹ã‚’大括弧ã§" "囲んã§ãƒãƒ¼ãƒˆã¨åŒºåˆ¥ã—ã¦ãã ã•ã„ (例ãˆã°ã€\"[fe80::a:b:c]:9876\")。" #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s ã« 4 ãƒã‚¤ãƒˆã® Unicode 文字ãŒå«ã¾ã‚Œã¦ã„ã¦ã¯ãªã‚Šã¾ã›ã‚“。" #, python-format msgid "%s is already stopped" msgstr "%s ã¯æ—¢ã«åœæ­¢ã—ã¦ã„ã¾ã™" #, python-format msgid "%s is stopped" msgstr "%s ã¯åœæ­¢ã—ã¦ã„ã¾ã™" msgid "'node_staging_uri' is not set correctly. Could not load staging store." msgstr "" "'node_staging_uri' ãŒæ­£ã—ã設定ã•れã¦ã„ã¾ã›ã‚“。ステージングストアをロードã§ã" "ã¾ã›ã‚“ã§ã—ãŸã€‚" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "keystone èªè¨¼æˆ¦ç•¥ãŒæœ‰åйãªå ´åˆã¯ã€--os_auth_url オプションã¾ãŸã¯OS_AUTH_URL ç’°" "境変数ãŒå¿…è¦ã§ã™\n" msgid "A body is not expected with this request." msgstr "ã“ã®è¦æ±‚ã§ã¯æœ¬æ–‡ã¯äºˆæœŸã•れã¾ã›ã‚“。" #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." 
msgstr "" "name=%(object_name)s ã®ãƒ¡ã‚¿ãƒ‡ãƒ¼ã‚¿å®šç¾©ã‚ªãƒ–ジェクトã¯ã€namespace=" "%(namespace_name)s ã«æ—¢ã«å­˜åœ¨ã—ã¾ã™ã€‚" #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "name=%(property_name)s ã®ãƒ¡ã‚¿ãƒ‡ãƒ¼ã‚¿å®šç¾©ãƒ—ロパティーã¯ã€namespace=" "%(namespace_name)s ã«æ—¢ã«å­˜åœ¨ã—ã¾ã™ã€‚" #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." msgstr "" "name=%(resource_type_name)s ã®ãƒ¡ã‚¿ãƒ‡ãƒ¼ã‚¿å®šç¾©ãƒªã‚½ãƒ¼ã‚¹ã‚¿ã‚¤ãƒ—ã¯ã€æ—¢ã«å­˜åœ¨ã—ã¾" "ã™ã€‚" #, python-format msgid "" "A metadata tag with name=%(name)s already exists in namespace=" "%(namespace_name)s. (Please note that metadata tag names are case " "insensitive)." msgstr "" "name=%(name)s ã®ãƒ¡ã‚¿ãƒ‡ãƒ¼ã‚¿ã‚¿ã‚°ã¯æ—¢ã« namespace=%(namespace_name)s ã«å­˜åœ¨ã—ã¾" "ã™ã€‚(メタデータã®ã‚¿ã‚°åã¯å¤§æ–‡å­—å°æ–‡å­—を区別ã—ãªã„ã“ã¨ã«æ³¨æ„ã—ã¦ãã ã•ã„。)" msgid "A set of URLs to access the image file kept in external store" msgstr "" "外部ストアã«ä¿æŒã•れã¦ã„るイメージファイルã«ã‚¢ã‚¯ã‚»ã‚¹ã™ã‚‹ãŸã‚ã®ä¸€é€£ã® URL" msgid "Amount of disk space (in GB) required to boot image." msgstr "イメージã®ãƒ–ートã«å¿…è¦ãªãƒ‡ã‚£ã‚¹ã‚¯ã‚¹ãƒšãƒ¼ã‚¹ã®é‡ (GB)" msgid "Amount of ram (in MB) required to boot image." 
msgstr "イメージã®ãƒ–ートã«å¿…è¦ãª RAM ã®é‡ (MB)" msgid "An identifier for the image" msgstr "イメージ㮠ID" msgid "An identifier for the image member (tenantId)" msgstr "イメージメンãƒãƒ¼ã® ID (テナント ID)" msgid "An identifier for the owner of this task" msgstr "ã“ã®ã‚¿ã‚¹ã‚¯ã®æ‰€æœ‰è€… ID" msgid "An identifier for the task" msgstr "タスク㮠ID" msgid "An image file url" msgstr "イメージファイル㮠URL" msgid "An image schema url" msgstr "イメージスキーマ㮠URL" msgid "An image self url" msgstr "イメージ自体㮠URL" #, python-format msgid "An image with identifier %s already exists" msgstr "ID %s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã¯æ—¢ã«å­˜åœ¨ã—ã¾ã™" msgid "An import task exception occurred" msgstr "インãƒãƒ¼ãƒˆã‚¿ã‚¹ã‚¯ã®ä¾‹å¤–ãŒç™ºç”Ÿã—ã¾ã—ãŸ" msgid "An object with the same identifier already exists." msgstr "åŒã˜ ID ã®ã‚ªãƒ–ã‚¸ã‚§ã‚¯ãƒˆãŒæ—¢ã«å­˜åœ¨ã—ã¾ã™ã€‚" msgid "An object with the same identifier is currently being operated on." msgstr "ç¾åœ¨ã€åŒã˜ ID ã‚’æŒã¤ã‚ªãƒ–ã‚¸ã‚§ã‚¯ãƒˆãŒæ“作ã•れã¦ã„ã¾ã™ã€‚" msgid "An object with the specified identifier was not found." msgstr "指定ã•れ㟠ID ã‚’æŒã¤ã‚ªãƒ–ジェクトãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸã€‚" msgid "An unknown exception occurred" msgstr "䏿˜Žãªä¾‹å¤–ãŒç™ºç”Ÿã—ã¾ã—ãŸ" msgid "An unknown task exception occurred" msgstr "䏿˜Žãªã‚¿ã‚¹ã‚¯ä¾‹å¤–ãŒç™ºç”Ÿã—ã¾ã—ãŸ" #, python-format msgid "Attempt to upload duplicate image: %s" msgstr "é‡è¤‡ã—ãŸã‚¤ãƒ¡ãƒ¼ã‚¸ã®ã‚¢ãƒƒãƒ—ロードを試行ã—ã¾ã™: %s" msgid "Attempted to update Location field for an image not in queued status." msgstr "" "待機状æ³ã«ãªã£ã¦ã„ãªã„イメージã®ã€Œå ´æ‰€ã€ãƒ•ィールドを更新ã—よã†ã¨ã—ã¾ã—ãŸã€‚" #, python-format msgid "Attribute '%(property)s' is read-only." msgstr "属性 '%(property)s' ã¯èª­ã¿å–り専用ã§ã™ã€‚" #, python-format msgid "Attribute '%(property)s' is reserved." msgstr "属性 '%(property)s' ã¯äºˆç´„ã•れã¦ã„ã¾ã™ã€‚" #, python-format msgid "Attribute '%s' is read-only." msgstr "属性 '%s' ã¯èª­ã¿å–り専用ã§ã™ã€‚" #, python-format msgid "Attribute '%s' is reserved." msgstr "属性 '%s' ã¯äºˆç´„ã•れã¦ã„ã¾ã™ã€‚" msgid "Attribute container_format can be only replaced for a queued image." 
msgstr "" "キューã«å…¥ã‚Œã‚‰ã‚ŒãŸã‚¤ãƒ¡ãƒ¼ã‚¸ã«ã¤ã„ã¦ã®ã¿å±žæ€§ container_format ã‚’ç½®æ›ã§ãã¾ã™ã€‚" msgid "Attribute disk_format can be only replaced for a queued image." msgstr "" "キューã«å…¥ã‚Œã‚‰ã‚ŒãŸã‚¤ãƒ¡ãƒ¼ã‚¸ã«ã¤ã„ã¦ã®ã¿å±žæ€§ disk_format ã‚’ç½®æ›ã§ãã¾ã™ã€‚" #, python-format msgid "Auth service at URL %(url)s not found." msgstr "URL %(url)s ã®èªè¨¼ã‚µãƒ¼ãƒ“スãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“。" #, python-format msgid "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgstr "" "èªè¨¼ã‚¨ãƒ©ãƒ¼ - トークンãŒãƒ•ァイルアップロード中ã«å¤±åйã—ãŸå¯èƒ½æ€§ãŒã‚りã¾ã™ã€‚ %s " "ã¸ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ‡ãƒ¼ã‚¿ã‚’削除ã—ã¾ã™ã€‚" msgid "Authorization failed." msgstr "許å¯ãŒå¤±æ•—ã—ã¾ã—ãŸã€‚" msgid "Available categories:" msgstr "使用å¯èƒ½ã‚«ãƒ†ã‚´ãƒªãƒ¼:" #, python-format msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." msgstr "" "æ­£ã—ããªã„ \"%s\" 照会フィルター形å¼ã€‚ISO 8601 DateTime 表記を使用ã—ã¦ãã ã•" "ã„。" #, python-format msgid "Bad Command: %s" msgstr "æ­£ã—ããªã„コマンド: %s" #, python-format msgid "Bad header: %(header_name)s" msgstr "ãƒ˜ãƒƒãƒ€ãƒ¼ãŒæ­£ã—ãã‚りã¾ã›ã‚“: %(header_name)s" #, python-format msgid "Bad value passed to filter %(filter)s got %(val)s" msgstr "æ­£ã—ããªã„値ãŒãƒ•ィルター %(filter)s ã«æ¸¡ã•れã€%(val)s ãŒå–å¾—ã•れã¾ã—ãŸ" #, python-format msgid "Badly formed S3 URI: %(uri)s" msgstr "S3 URI ã®å½¢å¼ãŒæ­£ã—ãã‚りã¾ã›ã‚“: %(uri)s" #, python-format msgid "Badly formed credentials '%(creds)s' in Swift URI" msgstr "Swift URI 内ã®è³‡æ ¼æƒ…å ± '%(creds)s' ã®å½¢å¼ãŒæ­£ã—ãã‚りã¾ã›ã‚“" msgid "Badly formed credentials in Swift URI." msgstr "Swift URI 内ã®è³‡æ ¼æƒ…å ±ã®å½¢å¼ãŒæ­£ã—ãã‚りã¾ã›ã‚“。" msgid "Body expected in request." msgstr "è¦æ±‚ã®æœ¬ä½“ãŒå¿…è¦ã§ã™ã€‚" msgid "Cannot be a negative value" msgstr "è² ã®å€¤ã«ã™ã‚‹ã“ã¨ã¯ã§ãã¾ã›ã‚“" msgid "Cannot be a negative value." msgstr "è² ã®å€¤ã«ã™ã‚‹ã“ã¨ã¯ã§ãã¾ã›ã‚“。" #, python-format msgid "Cannot convert image %(key)s '%(value)s' to an integer." 
msgstr "イメージ %(key)s '%(value)s' ã‚’æ•´æ•°ã«å¤‰æ›ã§ãã¾ã›ã‚“。" msgid "Cannot remove last location in the image." msgstr "ã‚¤ãƒ¡ãƒ¼ã‚¸å†…ã®æœ€å¾Œã®å ´æ‰€ã¯å‰Šé™¤ã§ãã¾ã›ã‚“。" #, python-format msgid "Cannot save data for image %(image_id)s: %(error)s" msgstr "イメージ %(image_id)s ã®ãƒ‡ãƒ¼ã‚¿ã‚’ä¿å­˜ã§ãã¾ã›ã‚“: %(error)s" msgid "Cannot set locations to empty list." msgstr "空ã®ãƒªã‚¹ãƒˆã«å ´æ‰€ã‚’設定ã™ã‚‹ã“ã¨ã¯ã§ãã¾ã›ã‚“。" msgid "Cannot upload to an unqueued image" msgstr "キューã«å…¥ã‚Œã‚‰ã‚Œã¦ã„ãªã„イメージã«å¯¾ã—ã¦ã‚¢ãƒƒãƒ—ロードã§ãã¾ã›ã‚“" #, python-format msgid "Checksum verification failed. Aborted caching of image '%s'." msgstr "" "ãƒã‚§ãƒƒã‚¯ã‚µãƒ ã®æ¤œè¨¼ã«å¤±æ•—ã—ã¾ã—ãŸã€‚イメージ '%s' ã®ã‚­ãƒ£ãƒƒã‚·ãƒ¥ã‚’打ã¡åˆ‡ã‚Šã¾ã—" "ãŸã€‚" msgid "Client disconnected before sending all data to backend" msgstr "ã™ã¹ã¦ã®ãƒ‡ãƒ¼ã‚¿ã‚’ãƒãƒƒã‚¯ã‚¨ãƒ³ãƒ‰ã¸é€ä¿¡ã™ã‚‹å‰ã«ã‚¯ãƒ©ã‚¤ã‚¢ãƒ³ãƒˆãŒåˆ‡æ–­ã•れã¾ã—ãŸ" msgid "Command not found" msgstr "コマンドãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“" msgid "Configuration option was not valid" msgstr "æ§‹æˆã‚ªãƒ—ションãŒç„¡åйã§ã—ãŸ" #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "接続エラー/URL %(url)s ã®èªè¨¼ã‚µãƒ¼ãƒ“スã«å¯¾ã™ã‚‹æ­£ã—ããªã„è¦æ±‚。" #, python-format msgid "Constructed URL: %s" msgstr "URL ã‚’æ§‹æˆã—ã¾ã—ãŸ: %s" msgid "Container format is not specified." msgstr "ã‚³ãƒ³ãƒ†ãƒŠãƒ¼ãƒ•ã‚©ãƒ¼ãƒžãƒƒãƒˆãŒæŒ‡å®šã•れã¦ã„ã¾ã›ã‚“。" msgid "Content-Type must be application/octet-stream" msgstr "Content-Type 㯠application/octet-stream ã§ãªã‘れã°ãªã‚Šã¾ã›ã‚“" #, python-format msgid "Corrupt image download for image %(image_id)s" msgstr "イメージ %(image_id)s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ€ã‚¦ãƒ³ãƒ­ãƒ¼ãƒ‰ãŒå£Šã‚Œã¦ã„ã¾ã™" #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgstr "30 ç§’é–“ã®è©¦è¡Œå¾Œã« %(host)s:%(port)s ã«ãƒã‚¤ãƒ³ãƒ‰ã§ãã¾ã›ã‚“ã§ã—ãŸ" msgid "Could not find OVF file in OVA archive file." 
msgstr "OVA アーカイブファイル内㫠OVF ファイルãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸã€‚" #, python-format msgid "Could not find metadata object %s" msgstr "メタデータオブジェクト %s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Could not find metadata tag %s" msgstr "メタデータタグ %s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Could not find namespace %s" msgstr "åå‰ç©ºé–“ %s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Could not find property %s" msgstr "プロパティー %s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ" msgid "Could not find required configuration option" msgstr "å¿…è¦ãªè¨­å®šã‚ªãƒ—ションãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Could not find task %s" msgstr "タスク %s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Could not update image: %s" msgstr "イメージを更新ã§ãã¾ã›ã‚“ã§ã—ãŸ: %s" #, python-format msgid "Couldn't create metadata namespace: %s" msgstr "メタデータåå‰ç©ºé–“ %s を作æˆã§ãã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Couldn't create metadata object: %s" msgstr "メタデータオブジェクト %s を作æˆã§ãã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Couldn't create metadata property: %s" msgstr "メタデータプロパティ %s を作æˆã§ãã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Couldn't create metadata tag: %s" msgstr "メタデータタグ %s を作æˆã§ãã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Couldn't update metadata namespace: %s" msgstr "メタデータåå‰ç©ºé–“ %s ã‚’æ›´æ–°ã§ãã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Couldn't update metadata object: %s" msgstr "メタデータオブジェクト %s ã‚’æ›´æ–°ã§ãã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Couldn't update metadata property: %s" msgstr "メタデータプロパティ %s ã‚’æ›´æ–°ã§ãã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Couldn't update metadata tag: %s" msgstr "メタデータタグ %s ã‚’æ›´æ–°ã§ãã¾ã›ã‚“ã§ã—ãŸ" msgid "Currently, OVA packages containing multiple disk are not supported." msgstr "ç¾åœ¨ã€è¤‡æ•°ã®ãƒ‡ã‚£ã‚¹ã‚¯ã‚’å«ã‚€ OVA パッケージã¯ã‚µãƒãƒ¼ãƒˆã•れã¾ã›ã‚“。" msgid "Custom property should not be greater than 255 characters." 
msgstr "カスタムプロパティ㯠255 文字より大ããã¦ã¯ã„ã‘ã¾ã›ã‚“。" #, python-format msgid "Data for image_id not found: %s" msgstr "image_id ã®ãƒ‡ãƒ¼ã‚¿ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“: %s" msgid "Data supplied was not valid." msgstr "指定ã•れãŸãƒ‡ãƒ¼ã‚¿ãŒç„¡åйã§ã—ãŸã€‚" msgid "" "Database contraction did not run. Database contraction cannot be run before " "data migration is complete. Run data migration using \"glance-manage db " "migrate\"." msgstr "" "データベースã®ç· çµãŒå®Ÿè¡Œã•れã¦ã„ã¾ã›ã‚“。データベースã®ç· çµã¯ãƒ‡ãƒ¼ã‚¿ã®ç§»è¡ŒãŒå®Œ" "了ã™ã‚‹å‰ã«ã¯å®Ÿè¡Œã§ãã¾ã›ã‚“。ã¾ãšã€\"glance-manage db migrate\" を使用ã—ã¦ãƒ‡ãƒ¼" "ã‚¿ã®ç§»è¡Œã‚’実行ã—ã¦ãã ã•ã„。" msgid "" "Database contraction did not run. Database contraction cannot be run before " "database expansion. Run database expansion first using \"glance-manage db " "expand\"" msgstr "" "データベースã®ç· çµãŒå®Ÿè¡Œã•れã¦ã„ã¾ã›ã‚“。データベースã®ç· çµã¯ãƒ‡ãƒ¼ã‚¿ãƒ™ãƒ¼ã‚¹ã®å±•" "é–‹ã®å‰ã«ã¯å®Ÿè¡Œã§ãã¾ã›ã‚“。ã¾ãšã€\"glance-manage db expand\" を使用ã—ã¦ãƒ‡ãƒ¼ã‚¿" "ベースã®å±•開を実行ã—ã¦ãã ã•ã„。" msgid "" "Database contraction failed. Couldn't find head revision of contract branch." msgstr "" "データベースã®ç· çµã«å¤±æ•—ã—ã¾ã—ãŸã€‚ç· çµãƒ–ランãƒã®ãƒ˜ãƒƒãƒ‰ãƒªãƒ“ジョンãŒè¦‹ã¤ã‹ã‚Šã¾" "ã›ã‚“。" msgid "" "Database expansion failed. Couldn't find head revision of expand branch." msgstr "" "データベースã®å±•é–‹ã«å¤±æ•—ã—ã¾ã—ãŸã€‚展開ブランãƒã®ãƒ˜ãƒƒãƒ‰ãƒªãƒ“ジョンãŒè¦‹ã¤ã‹ã‚Šã¾" "ã›ã‚“。" #, python-format msgid "" "Database expansion failed. Database expansion should have brought the " "database version up to \"%(e_rev)s\" revision. But, current revisions are: " "%(curr_revs)s " msgstr "" "データベースã®å±•é–‹ã«å¤±æ•—ã—ã¾ã—ãŸã€‚データベースã®å±•é–‹ã«ã‚ˆã£ã¦ã€ãƒ‡ãƒ¼ã‚¿ãƒ™ãƒ¼ã‚¹ã®" "ãƒãƒ¼ã‚¸ãƒ§ãƒ³ãŒãƒªãƒ“ジョン \"%(e_rev)s\" ã«ãªã£ã¦ã„ã‚‹ã¯ãšã§ã™ã€‚ ã—ã‹ã—ã€ç¾åœ¨ã®ãƒª" "ビジョン㯠%(curr_revs)s ã§ã™ã€‚" msgid "" "Database is either not under migration control or under legacy migration " "control, please run \"glance-manage db sync\" to place the database under " "alembic migration control." 
msgstr "" "データベースã¯ç§»è¡Œåˆ¶å¾¡ä¸‹ã«ã‚‚従æ¥ã®ç§»è¡Œåˆ¶å¾¡ä¸‹ã«ã‚‚ãªã„ãŸã‚ã€\"glance-manage db " "sync\" を実行ã—ã¦ç§»è¡Œåˆ¶å¾¡ä¸‹ã«ç½®ã„ã¦ãã ã•ã„。" msgid "Database is synced successfully." msgstr "データベースã®åŒæœŸã«æˆåŠŸã—ã¾ã—ãŸã€‚" msgid "Database is up to date. No migrations needed." msgstr "ãƒ‡ãƒ¼ã‚¿ãƒ™ãƒ¼ã‚¹ã¯æœ€æ–°ã§ã™ã€‚移行ã¯å¿…è¦ã‚りã¾ã›ã‚“。" msgid "Database is up to date. No upgrades needed." msgstr "ãƒ‡ãƒ¼ã‚¿ãƒ™ãƒ¼ã‚¹ã¯æœ€æ–°ã§ã™ã€‚アップグレードã¯å¿…è¦ã‚りã¾ã›ã‚“。" msgid "Date and time of image member creation" msgstr "イメージメンãƒãƒ¼ã®ä½œæˆæ—¥æ™‚" msgid "Date and time of image registration" msgstr "イメージ登録日時" msgid "Date and time of last modification of image member" msgstr "イメージメンãƒãƒ¼ã®æœ€çµ‚変更日時" msgid "Date and time of namespace creation" msgstr "åå‰ç©ºé–“ã®ä½œæˆæ—¥æ™‚" msgid "Date and time of object creation" msgstr "オブジェクトã®ä½œæˆæ—¥æ™‚" msgid "Date and time of resource type association" msgstr "リソースタイプ関連付ã‘ã®æ—¥æ™‚" msgid "Date and time of tag creation" msgstr "ã‚¿ã‚°ã®ä½œæˆæ—¥æ™‚" msgid "Date and time of the last image modification" msgstr "ã‚¤ãƒ¡ãƒ¼ã‚¸ã®æœ€çµ‚変更日時" msgid "Date and time of the last namespace modification" msgstr "åå‰ç©ºé–“ã®æœ€çµ‚変更日時" msgid "Date and time of the last object modification" msgstr "ã‚ªãƒ–ã‚¸ã‚§ã‚¯ãƒˆã®æœ€çµ‚変更日時" msgid "Date and time of the last resource type association modification" msgstr "リソースタイプ関連付ã‘ã®æœ€çµ‚変更日時" msgid "Date and time of the last tag modification" msgstr "ã‚¿ã‚°ã®æœ€çµ‚変更日時" msgid "Datetime when this resource was created" msgstr "ã“ã®ãƒªã‚½ãƒ¼ã‚¹ãŒä½œæˆã•ã‚ŒãŸæ—¥æ™‚" msgid "Datetime when this resource was updated" msgstr "ã“ã®ãƒªã‚½ãƒ¼ã‚¹ãŒæ›´æ–°ã•ã‚ŒãŸæ—¥æ™‚" msgid "Datetime when this resource would be subject to removal" msgstr "ã“ã®ãƒªã‚½ãƒ¼ã‚¹ãŒå‰Šé™¤ã•れる日時" #, python-format msgid "Denying attempt to upload image because it exceeds the quota: %s" msgstr "" "イメージをアップロードã—よã†ã¨ã—ã¾ã—ãŸãŒã€å‰²ã‚Šå½“ã¦é‡ã‚’è¶…ãˆã¦ã—ã¾ã†ãŸã‚ã€æ‹’å¦" "ã•れã¦ã„ã¾ã™: %s" #, python-format msgid "Denying attempt to upload image larger than 
%d bytes." msgstr "%d ãƒã‚¤ãƒˆã‚ˆã‚Šå¤§ãã„イメージã®ã‚¢ãƒƒãƒ—ロード試行を拒å¦ã—ã¦ã„ã¾ã™ã€‚" msgid "Descriptive name for the image" msgstr "イメージã®è¨˜è¿°å" msgid "Disk format is not specified." msgstr "ãƒ‡ã‚£ã‚¹ã‚¯ãƒ•ã‚©ãƒ¼ãƒžãƒƒãƒˆãŒæŒ‡å®šã•れã¦ã„ã¾ã›ã‚“。" #, python-format msgid "" "Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s" msgstr "" "ドライãƒãƒ¼ %(driver_name)s ã‚’æ­£ã—ã設定ã§ãã¾ã›ã‚“ã§ã—ãŸã€‚ç†ç”±: %(reason)s" msgid "" "Error decoding your request. Either the URL or the request body contained " "characters that could not be decoded by Glance" msgstr "" "è¦æ±‚ã®ãƒ‡ã‚³ãƒ¼ãƒ‰ã®ã‚¨ãƒ©ãƒ¼ã€‚URL ã¾ãŸã¯è¦æ±‚本文㫠Glance ã§ãƒ‡ã‚³ãƒ¼ãƒ‰ã§ããªã„文字ãŒ" "å«ã¾ã‚Œã¦ã„ã¾ã—ãŸã€‚" #, python-format msgid "Error fetching members of image %(image_id)s: %(inner_msg)s" msgstr "イメージ %(image_id)s ã®ãƒ¡ãƒ³ãƒãƒ¼ã®å–得中ã®ã‚¨ãƒ©ãƒ¼: %(inner_msg)s" msgid "Error in store configuration. Adding images to store is disabled." msgstr "" "ストア設定ã«ã‚¨ãƒ©ãƒ¼ãŒã‚りã¾ã™ã€‚ストアã¸ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã®è¿½åŠ ãŒç„¡åйã«ãªã£ã¦ã„ã¾ã™ã€‚" msgid "Expected a member in the form: {\"member\": \"image_id\"}" msgstr "次ã®å½¢å¼ã§ãƒ¡ãƒ³ãƒãƒ¼ã‚’予期: {\"member\": \"image_id\"}" msgid "Expected a status in the form: {\"status\": \"status\"}" msgstr "次ã®å½¢å¼ã§çŠ¶æ…‹ã‚’äºˆæœŸ: {\"status\": \"status\"}" msgid "External source should not be empty" msgstr "外部ソースã¯ç©ºã§ã‚ã£ã¦ã¯ãªã‚Šã¾ã›ã‚“" #, python-format msgid "External sources are not supported: '%s'" msgstr "外部ソースã¯ã‚µãƒãƒ¼ãƒˆã•れã¦ã„ã¾ã›ã‚“: '%s'" #, python-format msgid "Failed to activate image. Got error: %s" msgstr "イメージã®ã‚¢ã‚¯ãƒ†ã‚£ãƒ–化ã«å¤±æ•—ã—ã¾ã—ãŸã€‚å—ã‘å–ã£ãŸã‚¨ãƒ©ãƒ¼: %s" #, python-format msgid "Failed to add image metadata. 
Got error: %s" msgstr "イメージメタデータを追加ã§ãã¾ã›ã‚“ã§ã—ãŸã€‚å—ã‘å–ã£ãŸã‚¨ãƒ©ãƒ¼: %s" #, python-format msgid "Failed to find image %(image_id)s to delete" msgstr "削除ã™ã‚‹ã‚¤ãƒ¡ãƒ¼ã‚¸ %(image_id)s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Failed to find image to delete: %s" msgstr "削除ã™ã‚‹ã‚¤ãƒ¡ãƒ¼ã‚¸ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ: %s" #, python-format msgid "Failed to find image to update: %s" msgstr "æ›´æ–°ã™ã‚‹ã‚¤ãƒ¡ãƒ¼ã‚¸ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ: %s" #, python-format msgid "Failed to find resource type %(resourcetype)s to delete" msgstr "削除ã™ã‚‹ãƒªã‚½ãƒ¼ã‚¹ã‚¿ã‚¤ãƒ— %(resourcetype)s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Failed to initialize the image cache database. Got error: %s" msgstr "" "ã‚¤ãƒ¡ãƒ¼ã‚¸ã‚­ãƒ£ãƒƒã‚·ãƒ¥ãƒ‡ãƒ¼ã‚¿ãƒ™ãƒ¼ã‚¹ã‚’åˆæœŸåŒ–ã§ãã¾ã›ã‚“ã§ã—ãŸã€‚å—ã‘å–ã£ãŸã‚¨ãƒ©ãƒ¼: %s" #, python-format msgid "Failed to read %s from config" msgstr "設定ã‹ã‚‰ %s を読ã¿å–ã‚‹ã“ã¨ãŒã§ãã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "Failed to reserve image. Got error: %s" msgstr "イメージを予約ã§ãã¾ã›ã‚“ã§ã—ãŸã€‚å—ã‘å–ã£ãŸã‚¨ãƒ©ãƒ¼: %s" #, python-format msgid "Failed to sync database: ERROR: %s" msgstr "データベースã®åŒæœŸã«å¤±æ•—ã—ã¾ã—ãŸ: エラー: %s" #, python-format msgid "Failed to update image metadata. Got error: %s" msgstr "イメージメタデータを更新ã§ãã¾ã›ã‚“ã§ã—ãŸã€‚エラー: %s" #, python-format msgid "Failed to upload image %s" msgstr "イメージ %s をアップロードã§ãã¾ã›ã‚“ã§ã—ãŸ" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to HTTP error: " "%(error)s" msgstr "" "HTTP エラーãŒç™ºç”Ÿã—ãŸãŸã‚ã€ã‚¤ãƒ¡ãƒ¼ã‚¸ %(image_id)s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ‡ãƒ¼ã‚¿ã®ã‚¢ãƒƒãƒ—ロー" "ドã«å¤±æ•—ã—ã¾ã—ãŸ: %(error)s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to internal error: " "%(error)s" msgstr "" "内部エラーãŒç™ºç”Ÿã—ãŸãŸã‚ã€ã‚¤ãƒ¡ãƒ¼ã‚¸ %(image_id)s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ‡ãƒ¼ã‚¿ã‚’アップロー" "ドã§ãã¾ã›ã‚“ã§ã—ãŸ: %(error)s" #, python-format msgid "File %(path)s has invalid backing file %(bfile)s, aborting." 
msgstr "" "ファイル %(path)s ã«ç„¡åйãªãƒãƒƒã‚­ãƒ³ã‚°ãƒ•ァイル %(bfile)s ãŒã‚りã¾ã™ã€‚打ã¡åˆ‡ã‚Šã¾" "ã™ã€‚" msgid "" "File based imports are not allowed. Please use a non-local source of image " "data." msgstr "" "ファイルベースã®ã‚¤ãƒ³ãƒãƒ¼ãƒˆã¯è¨±å¯ã•れã¾ã›ã‚“。イメージデータã®éžãƒ­ãƒ¼ã‚«ãƒ«ã‚½ãƒ¼ã‚¹" "を使用ã—ã¦ãã ã•ã„。" msgid "Forbidden image access" msgstr "イメージã«ã‚¢ã‚¯ã‚»ã‚¹ã™ã‚‹æ¨©é™ãŒã‚りã¾ã›ã‚“" #, python-format msgid "Forbidden to delete a %s image." msgstr "%s イメージã®å‰Šé™¤ã¯ç¦æ­¢ã•れã¦ã„ã¾ã™ã€‚" #, python-format msgid "Forbidden to delete image: %s" msgstr "イメージã®å‰Šé™¤ã¯ç¦æ­¢ã•れã¦ã„ã¾ã™: %s" #, python-format msgid "Forbidden to modify '%(key)s' of %(status)s image." msgstr "%(status)s イメージ㮠'%(key)s' を変更ã™ã‚‹ã“ã¨ã¯ç¦æ­¢ã•れã¦ã„ã¾ã™ã€‚" #, python-format msgid "Forbidden to modify '%s' of image." msgstr "イメージ㮠'%s' を変更ã™ã‚‹ã“ã¨ã¯ç¦æ­¢ã•れã¦ã„ã¾ã™ã€‚" msgid "Forbidden to reserve image." msgstr "イメージã®äºˆç´„ã¯ç¦æ­¢ã•れã¦ã„ã¾ã™ã€‚" msgid "Forbidden to update deleted image." msgstr "削除ã•れãŸã‚¤ãƒ¡ãƒ¼ã‚¸ã®æ›´æ–°ã¯ç¦æ­¢ã•れã¦ã„ã¾ã™ã€‚" #, python-format msgid "Forbidden to update image: %s" msgstr "ã‚¤ãƒ¡ãƒ¼ã‚¸ã®æ›´æ–°ã¯ç¦æ­¢ã•れã¦ã„ã¾ã™: %s" #, python-format msgid "Forbidden upload attempt: %s" msgstr "ç¦æ­¢ã•れã¦ã„るアップロードã®è©¦è¡Œ: %s" #, python-format msgid "Forbidding request, metadata definition namespace=%s is not visible." msgstr "è¦æ±‚ã¯ç¦æ­¢ã•れã¦ã„ã¾ã™ã€‚メタデータ定義 namespace=%s を表示ã§ãã¾ã›ã‚“" #, python-format msgid "Forbidding request, task %s is not visible" msgstr "è¦æ±‚ã‚’ç¦æ­¢ã—ã¦ã„ã¾ã™ã€‚タスク %s ã¯è¡¨ç¤ºã•れã¾ã›ã‚“" msgid "Format of the container" msgstr "コンテナーã®å½¢å¼" msgid "Format of the disk" msgstr "ディスクã®å½¢å¼" #, python-format msgid "Host \"%s\" is not valid." msgstr "ホスト \"%s\" ãŒç„¡åйã§ã™ã€‚" #, python-format msgid "Host and port \"%s\" is not valid." 
msgstr "ホストãŠã‚ˆã³ãƒãƒ¼ãƒˆ \"%s\" ãŒç„¡åйã§ã™ã€‚" msgid "" "Human-readable informative message only included when appropriate (usually " "on failure)" msgstr "" "é©åˆ‡ãªå ´åˆ (通常ã¯éšœå®³ç™ºç”Ÿæ™‚) ã«ã®ã¿ã€äººé–“ãŒèª­ã¿å–れる情報メッセージãŒå«ã¾ã‚Œ" "ã¾ã™" msgid "If true, image will not be deletable." msgstr "true ã®å ´åˆã€ã‚¤ãƒ¡ãƒ¼ã‚¸ã¯å‰Šé™¤å¯èƒ½ã«ãªã‚Šã¾ã›ã‚“。" msgid "If true, namespace will not be deletable." msgstr "true ã®å ´åˆã€åå‰ç©ºé–“ã¯å‰Šé™¤å¯èƒ½ã«ãªã‚Šã¾ã›ã‚“。" #, python-format msgid "Image %(id)s could not be deleted because it is in use: %(exc)s" msgstr "イメージ %(id)s ã¯ä½¿ç”¨ä¸­ã®ãŸã‚削除ã§ãã¾ã›ã‚“ã§ã—ãŸ: %(exc)s" #, python-format msgid "Image %(id)s not found" msgstr "イメージ %(id)s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“" #, python-format msgid "" "Image %(image_id)s could not be found after upload. The image may have been " "deleted during the upload: %(error)s" msgstr "" "アップロード後ã«ã‚¤ãƒ¡ãƒ¼ã‚¸ %(image_id)s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸã€‚ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã¯" "アップロード中ã«å‰Šé™¤ã•れãŸå¯èƒ½æ€§ãŒã‚りã¾ã™: %(error)s" #, python-format msgid "Image %(image_id)s is protected and cannot be deleted." msgstr "イメージ %(image_id)s ã¯ä¿è­·ã•れã¦ã„ã‚‹ãŸã‚ã€å‰Šé™¤ã§ãã¾ã›ã‚“。" #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload, cleaning up the chunks uploaded." msgstr "" "アップロード後ã«ã‚¤ãƒ¡ãƒ¼ã‚¸ %s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸã€‚イメージã¯ã‚¢ãƒƒãƒ—ロード中" "ã«å‰Šé™¤ã•れãŸå¯èƒ½æ€§ãŒã‚りã¾ã™ã€‚アップロードã•れãŸãƒãƒ£ãƒ³ã‚¯ã‚’クリーンアップ中ã§" "ã™ã€‚" #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload." msgstr "" "アップロード後ã«ã‚¤ãƒ¡ãƒ¼ã‚¸ %s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸã€‚ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã¯ã‚¢ãƒƒãƒ—ロー" "ド中ã«å‰Šé™¤ã•れãŸå¯èƒ½æ€§ãŒã‚りã¾ã™ã€‚" #, python-format msgid "Image %s is deactivated" msgstr "イメージ %s ã¯éžã‚¢ã‚¯ãƒ†ã‚£ãƒ–化ã•れã¦ã„ã¾ã™" #, python-format msgid "Image %s is not active" msgstr "イメージ %s ã¯ã‚¢ã‚¯ãƒ†ã‚£ãƒ–ã§ã¯ã‚りã¾ã›ã‚“" #, python-format msgid "Image %s not found." 
msgstr "イメージ %s ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“。" #, python-format msgid "Image exceeds the storage quota: %s" msgstr "イメージãŒã‚¹ãƒˆãƒ¬ãƒ¼ã‚¸ã‚¯ã‚©ãƒ¼ã‚¿ã‚’è¶…ãˆã¦ã„ã¾ã™: %s" msgid "Image id is required." msgstr "イメージ ID ãŒå¿…è¦ã§ã™ã€‚" msgid "Image import is not supported at this site." msgstr "ã“ã®ã‚µã‚¤ãƒˆã§ã¯ã‚¤ãƒ¡ãƒ¼ã‚¸ã®ã‚¤ãƒ³ãƒãƒ¼ãƒˆã¯ã‚µãƒãƒ¼ãƒˆã•れã¦ã„ã¾ã›ã‚“。" msgid "Image is protected" msgstr "イメージã¯ä¿è­·ã•れã¦ã„ã¾ã™" #, python-format msgid "Image member limit exceeded for image %(id)s: %(e)s:" msgstr "イメージ %(id)s ã®ãƒ¡ãƒ³ãƒãƒ¼æ•°ãŒã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ¡ãƒ³ãƒãƒ¼ä¸Šé™ã‚’è¶…ãˆã¾ã—ãŸ: %(e)s:" #, python-format msgid "Image name too long: %d" msgstr "イメージåãŒé•·ã™ãŽã¾ã™: %d" msgid "Image operation conflicts" msgstr "イメージæ“作ãŒç«¶åˆã—ã¦ã„ã¾ã™" #, python-format msgid "" "Image status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "%(cur_status)s ã‹ã‚‰ %(new_status)s ã¸ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã®ã‚¹ãƒ†ãƒ¼ã‚¿ã‚¹ç§»è¡Œã¯è¨±å¯ã•れã¾ã›" "ã‚“" #, python-format msgid "Image storage media is full: %s" msgstr "イメージストレージã®ãƒ¡ãƒ‡ã‚£ã‚¢ãŒãƒ•ルã§ã™: %s" #, python-format msgid "Image tag limit exceeded for image %(id)s: %(e)s:" msgstr "イメージ %(id)s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã‚¿ã‚°ä¸Šé™ã‚’è¶…ãˆã¾ã—ãŸ: %(e)s:" #, python-format msgid "Image upload problem: %s" msgstr "イメージã®ã‚¢ãƒƒãƒ—ロードå•題: %s" #, python-format msgid "Image with identifier %s already exists!" msgstr "ID %s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã¯æ—¢ã«å­˜åœ¨ã—ã¾ã™ã€‚" #, python-format msgid "Image with identifier %s has been deleted." msgstr "ID %s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãŒå‰Šé™¤ã•れã¾ã—ãŸã€‚" #, python-format msgid "Image with identifier %s not found" msgstr "ID %s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“" #, python-format msgid "Image with the given id %(image_id)s was not found" msgstr "指定ã•れ㟠ID %(image_id)s ã‚’æŒã¤ã‚¤ãƒ¡ãƒ¼ã‚¸ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸ" msgid "Import request requires a 'method' field." msgstr "インãƒãƒ¼ãƒˆãƒªã‚¯ã‚¨ã‚¹ãƒˆã¯ã€'method' フィールドãŒå¿…è¦ã§ã™ã€‚" msgid "Import request requires a 'name' field." 
msgstr "インポートリクエストは、'name' フィールドが必要です。"

#, python-format
msgid ""
"Incorrect auth strategy, expected \"%(expected)s\" but received "
"\"%(received)s\""
msgstr "認証ストラテジーが誤っています。\"%(expected)s\" が必要ですが、\"%(received)s\" を受け取りました"

#, python-format
msgid "Incorrect request: %s"
msgstr "正しくない要求: %s"

#, python-format
msgid "Input does not contain '%(key)s' field"
msgstr "入力に '%(key)s' フィールドが含まれていません"

#, python-format
msgid "Insufficient permissions on image storage media: %s"
msgstr "イメージストレージのメディアに対する許可が不十分です: %s"

#, python-format
msgid "Invalid JSON pointer for this resource: '/%s'"
msgstr "このリソースの JSON ポインターは無効です: '/%s'"

#, python-format
msgid "Invalid checksum '%s': can't exceed 32 characters"
msgstr "無効なチェックサム '%s': 32文字を超えることはできません"

msgid "Invalid configuration in glance-swift conf file."
msgstr "glance-swift 設定ファイルの設定が無効です。"

msgid "Invalid configuration in property protection file."
msgstr "プロパティー保護ファイルで設定が無効です。"

#, python-format
msgid "Invalid container format '%s' for image."
msgstr "コンテナー形式 '%s' はイメージには無効です。"

#, python-format
msgid "Invalid content type %(content_type)s"
msgstr "コンテンツタイプ %(content_type)s が無効です"

#, python-format
msgid "Invalid disk format '%s' for image."
msgstr "ディスク形式 '%s' はイメージには無効です。"

#, python-format
msgid "Invalid filter value %s. The quote is not closed."
msgstr "無効なフィルター値 %s。引用符が組みになっていません。"

#, python-format
msgid ""
"Invalid filter value %s. There is no comma after closing quotation mark."
msgstr "無効なフィルター値 %s。終了引用符の後にコンマがありません。"

#, python-format
msgid ""
"Invalid filter value %s. There is no comma before opening quotation mark."
msgstr "無効ãªãƒ•ィルター値 %s。開始引用符ã®å‰ã«ã‚³ãƒ³ãƒžãŒã‚りã¾ã›ã‚“。" msgid "Invalid image id format" msgstr "イメージ ID ã®å½¢å¼ãŒç„¡åйã§ã™" #, python-format msgid "Invalid int value for max_rows: %(max_rows)s" msgstr "max_rows ã®æ•´æ•°å€¤ %(max_rows)s ã¯ç„¡åйã§ã™ã€‚" msgid "Invalid location" msgstr "無効ãªå ´æ‰€" #, python-format msgid "Invalid location %s" msgstr "無効ãªå ´æ‰€ %s" #, python-format msgid "Invalid location: %s" msgstr "無効ãªå ´æ‰€: %s" #, python-format msgid "" "Invalid location_strategy option: %(name)s. The valid strategy option(s) " "is(are): %(strategies)s" msgstr "" "location_strategy オプションãŒç„¡åйã§ã™: %(name)s。有効ãªã‚¹ãƒˆãƒ©ãƒ†ã‚¸ãƒ¼ã‚ªãƒ—ショ" "ン: %(strategies)s" msgid "Invalid locations" msgstr "無効ãªå ´æ‰€" #, python-format msgid "Invalid locations: %s" msgstr "無効ãªå ´æ‰€: %s" msgid "Invalid marker format" msgstr "マーカーフォーマットãŒç„¡åйã§ã™" msgid "Invalid marker. Image could not be found." msgstr "無効ãªãƒžãƒ¼ã‚«ãƒ¼ã§ã™ã€‚イメージãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“ã§ã—ãŸã€‚" #, python-format msgid "Invalid membership association: %s" msgstr "無効ãªãƒ¡ãƒ³ãƒãƒ¼ã‚·ãƒƒãƒ—ã®é–¢é€£ä»˜ã‘: %s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "ディスクã¨ã‚³ãƒ³ãƒ†ãƒŠãƒ¼ã®å½¢å¼ãŒç„¡åйãªå½¢ã§æ··åœ¨ã—ã¦ã„ã¾ã™ã€‚ディスクã¾ãŸã¯ã‚³ãƒ³ãƒ†" "ナーã®å½¢å¼ã‚’ 'aki'ã€'ari'ã€ã¾ãŸã¯ 'ami' ã®ã„ãšã‚Œã‹ã«è¨­å®šã™ã‚‹ã¨ãã¯ã€ã‚³ãƒ³ãƒ†" "ナーã¨ãƒ‡ã‚£ã‚¹ã‚¯ã®å½¢å¼ãŒä¸€è‡´ã—ã¦ã„ãªã‘れã°ãªã‚Šã¾ã›ã‚“。" #, python-format msgid "" "Invalid operation: `%(op)s`. It must be one of the following: %(available)s." msgstr "" "ç„¡åŠ¹ãªæ“作: `%(op)s`。以下ã®ã„ãšã‚Œã‹ã§ãªã‘れã°ãªã‚Šã¾ã›ã‚“: %(available)s。" msgid "Invalid position for adding a location." msgstr "場所ã®è¿½åŠ ä½ç½®ãŒç„¡åйã§ã™ã€‚" msgid "Invalid position for removing a location." msgstr "場所ã®å‰Šé™¤ä½ç½®ãŒç„¡åйã§ã™ã€‚" msgid "Invalid service catalog json." 
msgstr "無効なサービスカタログ JSON ファイル。"

#, python-format
msgid "Invalid sort direction: %s"
msgstr "無効なソート方向: %s"

#, python-format
msgid ""
"Invalid sort key: %(sort_key)s. It must be one of the following: "
"%(available)s."
msgstr "ソートキー %(sort_key)s は無効です。%(available)s のいずれかでなければなりません。"

#, python-format
msgid "Invalid status value: %s"
msgstr "状態の値が無効です: %s"

#, python-format
msgid "Invalid status: %s"
msgstr "無効な状況: %s"

#, python-format
msgid "Invalid time format for %s."
msgstr "%s に対する無効な時刻フォーマット。"

#, python-format
msgid "Invalid type value: %s"
msgstr "タイプ値が無効です: %s"

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition namespace "
"with the same name of %s"
msgstr "無効な更新です。結果として、同じ名前 %s でメタデータ定義名前空間が重複します。"

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition object "
"with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr "無効な更新です。結果として、同じ name=%(name)s で、namespace=%(namespace_name)s でメタデータ定義オブジェクトが重複します。"

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition property "
"with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr "無効な更新です。結果として、同じ name=%(name)s で、namespace=%(namespace_name)s でメタデータ定義プロパティーが重複します。"

#, python-format
msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s"
msgstr "パラメーター '%(param)s' の値 '%(value)s' が無効です: %(extra_msg)s"

#, python-format
msgid ""
"Invalid value '%s' for 'protected' filter. Valid values are 'true' or "
"'false'."
msgstr "'protected' フィルターの値 '%s' が無効です。許容される値は 'true' または 'false' です。"

#, python-format
msgid "Invalid value for option %(option)s: %(value)s"
msgstr "オプション %(option)s の値が無効です: %(value)s"

#, python-format
msgid "Invalid visibility value: %s"
msgstr "無効な可視性の値: %s"

msgid "It's invalid to provide multiple image sources."
msgstr "イメージソースの複数指定は無効です。"

#, python-format
msgid "It's not allowed to add locations if image status is %s."
msgstr "イメージの状態が %s の場合、場所を追加できません。"

msgid "It's not allowed to add locations if locations are invisible."
msgstr "場所が表示されない場合、場所を追加できません。"

#, python-format
msgid "It's not allowed to remove locations if image status is %s."
msgstr "イメージの状態が %s の場合、場所を削除できません。"

msgid "It's not allowed to remove locations if locations are invisible."
msgstr "場所が表示されない場合、場所を削除できません。"

#, python-format
msgid "It's not allowed to replace locations if image status is %s."
msgstr "イメージの状態が %s の場合、場所を変更できません。"

msgid "It's not allowed to update locations if locations are invisible."
msgstr "場所が表示されない場合、場所を更新できません。"

msgid "List of strings related to the image"
msgstr "イメージに関連する文字列のリスト"

msgid "Malformed JSON in request body."
msgstr "要求本体の JSON の形式が誤りです。"

msgid "Maximal age is count of days since epoch."
msgstr "最長存続時間は、エポック以降の日数です。"

#, python-format
msgid "Maximum redirects (%(redirects)s) was exceeded."
msgstr "最大リダイレクト数 (%(redirects)s) を超えました。"

#, python-format
msgid "Member %(member_id)s is duplicated for image %(image_id)s"
msgstr "イメージ %(image_id)s のメンバー %(member_id)s が重複しています"

msgid "Member can't be empty"
msgstr "「メンバー」は空にできません"

msgid "Member to be added not specified"
msgstr "追加するメンバーが指定されていません"

msgid "Membership could not be found."
msgstr "メンバーシップが見つかりませんでした。"

#, python-format
msgid ""
"Metadata definition namespace %(namespace)s is protected and cannot be "
"deleted."
msgstr "メタデータ定義名前空間 %(namespace)s は保護されており、削除できません。"

#, python-format
msgid "Metadata definition namespace not found for id=%s"
msgstr "id=%s のメタデータ定義名前空間が見つかりません"

#, python-format
msgid "Metadata definition namespace=%(namespace_name)s was not found."
msgstr "namespace=%(namespace_name)s のメタデータ定義名前空間が見つかりませんでした。"

#, python-format
msgid ""
"Metadata definition object %(object_name)s is protected and cannot be "
"deleted."
msgstr "メタデータ定義オブジェクト %(object_name)s は保護されており、削除できません。"

#, python-format
msgid "Metadata definition object not found for id=%s"
msgstr "id=%s のメタデータ定義オブジェクトが見つかりません"

#, python-format
msgid ""
"Metadata definition property %(property_name)s is protected and cannot be "
"deleted."
msgstr "メタデータ定義プロパティー %(property_name)s は保護されており、削除できません。"

#, python-format
msgid "Metadata definition property not found for id=%s"
msgstr "id=%s のメタデータ定義プロパティーが見つかりません"

#, python-format
msgid ""
"Metadata definition resource-type %(resource_type_name)s is a seeded-system "
"type and cannot be deleted."
msgstr "メタデータ定義リソースタイプ %(resource_type_name)s はシードシステムタイプであり、削除できません。"

#, python-format
msgid ""
"Metadata definition resource-type-association %(resource_type)s is protected "
"and cannot be deleted."
msgstr "メタデータ定義リソースタイプ関連付け %(resource_type)s は保護されており、削除できません。"

#, python-format
msgid ""
"Metadata definition tag %(tag_name)s is protected and cannot be deleted."
msgstr "メタデータ定義タグ %(tag_name)s は保護されており、削除できません。"

#, python-format
msgid "Metadata definition tag not found for id=%s"
msgstr "id=%s のメタデータ定義タグが見つかりません"

msgid "Minimal rows limit is 1."
msgstr "最少行数制限は 1 です。"

#, python-format
msgid "Missing required credential: %(required)s"
msgstr "必須の資格情報がありません: %(required)s"

#, python-format
msgid ""
"Multiple 'image' service matches for region %(region)s. This generally means "
"that a region is required and you have not supplied one."
msgstr "領域 %(region)s に対して複数の「イメージ」サービスが一致します。これは一般に、領域が必要であるのに、領域を指定していないことを意味します。"

msgid "Must supply a non-negative value for age."
msgstr "存続期間には負ではない値を指定してください。"

msgid "No authenticated user"
msgstr "認証されていないユーザー"

#, python-format
msgid "No image found with ID %s"
msgstr "ID が %s であるイメージは見つかりません"

#, python-format
msgid "No location found with ID %(loc)s from image %(img)s"
msgstr "イメージ %(img)s 内で ID が %(loc)s の場所は見つかりません"

msgid "No permission to share that image"
msgstr "そのイメージを共有する許可がありません"

#, python-format
msgid "Not allowed to create members for image %s."
msgstr "イメージ %s のメンバーの作成は許可されていません。"

#, python-format
msgid "Not allowed to deactivate image in status '%s'"
msgstr "状況が「%s」であるイメージの非アクティブ化は許可されていません"

#, python-format
msgid "Not allowed to delete members for image %s."
msgstr "イメージ %s のメンバーの削除は許可されていません。"

#, python-format
msgid "Not allowed to delete tags for image %s."
msgstr "イメージ %s のタグの削除は許可されていません。"

#, python-format
msgid "Not allowed to list members for image %s."
msgstr "イメージ %s のメンバーのリストは許可されていません。"

#, python-format
msgid "Not allowed to reactivate image in status '%s'"
msgstr "状況が「%s」であるイメージの再アクティブ化は許可されていません"

#, python-format
msgid "Not allowed to update members for image %s."
msgstr "イメージ %s のメンバーの更新は許可されていません。"

#, python-format
msgid "Not allowed to update tags for image %s."
msgstr "イメージ %s のタグの更新は許可されていません。"

#, python-format
msgid "Not allowed to upload image data for image %(image_id)s: %(error)s"
msgstr "イメージ %(image_id)s ではイメージデータのアップロードは許可されません: %(error)s"

msgid "Number of sort dirs does not match the number of sort keys"
msgstr "ソート方向の数がソートキーの数に一致しません"

msgid "OVA extract is limited to admin"
msgstr "OVA 抽出が実行できるのは管理者のみです"

msgid "Old and new sorting syntax cannot be combined"
msgstr "新旧のソート構文を結合することはできません"

msgid "Only shared images have members."
msgstr "共有イメージのみがメンバーを持ちます。"

#, python-format
msgid "Operation \"%s\" requires a member named \"value\"."
msgstr "操作 \"%s\" には \"value\" という名前のメンバーが必要です。"

msgid ""
"Operation objects must contain exactly one member named \"add\", \"remove\", "
"or \"replace\"."
msgstr "操作オブジェクトには、\"add\"、\"remove\"、または \"replace\" という名前のメンバーを正確に 1 つだけ含める必要があります。"

msgid ""
"Operation objects must contain only one member named \"add\", \"remove\", or "
"\"replace\"."
msgstr "操作オブジェクトには、\"add\"、\"remove\"、または \"replace\" という名前のメンバーを 1 つしか含められません。"

msgid "Operations must be JSON objects."
msgstr "操作は JSON オブジェクトでなければなりません。"

#, python-format
msgid "Original locations is not empty: %s"
msgstr "元の場所は空ではありません: %s"

msgid "Owner can't be updated by non admin."
msgstr "管理者以外は所有者を更新できません。"

msgid "Owner must be specified to create a tag."
msgstr "タグを作成するには、所有者を指定する必要があります。"

msgid "Owner of the image"
msgstr "イメージの所有者"

msgid "Owner of the namespace."
msgstr "名前空間の所有者。"

msgid "Param values can't contain 4 byte unicode."
msgstr "Param 値に 4 バイトの Unicode が含まれていてはなりません。"

msgid "Placed database under migration control at revision:"
msgstr "移行制御下にある配置されたデータベースはリビジョン:"

#, python-format
msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence."
msgstr "ポインター `%s` に、認識されているエスケープシーケンスの一部ではない \"~\" が含まれています。"

#, python-format
msgid "Pointer `%s` contains adjacent \"/\"."
msgstr "ポインター `%s` に隣接する \"/\" が含まれています。"

#, python-format
msgid "Pointer `%s` does not contains valid token."
msgstr "ポインター `%s` に有効なトークンが含まれていません。"

#, python-format
msgid "Pointer `%s` does not start with \"/\"."
msgstr "ポインター `%s` の先頭が \"/\" ではありません。"

#, python-format
msgid "Pointer `%s` end with \"/\"."
msgstr "ポインター `%s` の末尾が \"/\" です。"

#, python-format
msgid "Port \"%s\" is not valid."
msgstr "ポート \"%s\" が無効です。"

#, python-format
msgid "Process %d not running"
msgstr "プロセス %d は実行されていません"

#, python-format
msgid "Properties %s must be set prior to saving data."
msgstr "データの保存前にプロパティー %s を設定する必要があります。"

#, python-format
msgid ""
"Property %(property_name)s does not start with the expected resource type "
"association prefix of '%(prefix)s'."
msgstr "プロパティー %(property_name)s の先頭が、想定されるリソースタイプ関連付けのプレフィックス \"%(prefix)s\" ではありません。"

#, python-format
msgid "Property %s already present."
msgstr "プロパティー %s は既に存在しています。"

#, python-format
msgid "Property %s does not exist."
msgstr "プロパティー %s は存在しません。"

#, python-format
msgid "Property %s may not be removed."
msgstr "プロパティー %s は削除できません。"

#, python-format
msgid "Property %s must be set prior to saving data."
msgstr "データの保存前にプロパティー %s を設定する必要があります。"

#, python-format
msgid "Property '%s' is protected"
msgstr "プロパティー '%s' は保護されています"

msgid "Property names can't contain 4 byte unicode."
msgstr "プロパティー名に 4 バイトの Unicode が含まれていてはなりません。"

#, python-format
msgid ""
"Provided image size must match the stored image size. (provided size: "
"%(ps)d, stored size: %(ss)d)"
msgstr "指定するイメージのサイズは、保管されているイメージのサイズと一致しなければなりません。(指定サイズ: %(ps)d、保管サイズ: %(ss)d)"

#, python-format
msgid "Provided object does not match schema '%(schema)s': %(reason)s"
msgstr "指定されたオブジェクトがスキーマ '%(schema)s' と一致しません: %(reason)s"

#, python-format
msgid "Provided status of task is unsupported: %(status)s"
msgstr "指定されたタスク状況はサポートされていません: %(status)s"

#, python-format
msgid "Provided type of task is unsupported: %(type)s"
msgstr "指定されたタスクタイプはサポートされていません: %(type)s"

msgid "Provides a user friendly description of the namespace."
msgstr "分かりやすい名前空間の説明が提供されます。"

msgid "Purge command failed, check glance-manage logs for more details."
msgstr "Purge コマンドが失敗しました。詳細は glance-manage のログを確認して下さい。"

msgid "Received invalid HTTP redirect."
msgstr "無効な HTTP リダイレクトを受け取りました。"

#, python-format
msgid "Redirecting to %(uri)s for authorization."
msgstr "許可のために %(uri)s にリダイレクトしています。"

#, python-format
msgid "Registry service can't use %s"
msgstr "レジストリーサービスでは %s を使用できません"

#, python-format
msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
msgstr "レジストリーが API サーバーで正しく設定されていませんでした。理由: %(reason)s"

#, python-format
msgid "Reload of %(serv)s not supported"
msgstr "%(serv)s の再ロードはサポートされていません"

#, python-format
msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)"
msgstr "%(serv)s (pid %(pid)s) をシグナル (%(sig)s) により再ロード中"

#, python-format
msgid "Removing stale pid file %s"
msgstr "失効した pid ファイル %s を削除中"

msgid "Request body must be a JSON array of operation objects."
msgstr "要求本文は、操作オブジェクトの JSON 配列でなければなりません。"

msgid "Request must be a list of commands"
msgstr "要求はコマンドのリストである必要があります"

#, python-format
msgid "Required store %s is invalid"
msgstr "必須のストア %s が無効です"

msgid ""
"Resource type names should be aligned with Heat resource types whenever "
"possible: http://docs.openstack.org/developer/heat/template_guide/openstack."
"html"
msgstr "可能であれば、リソースタイプ名を Heat リソースタイプと位置合わせします。http://docs.openstack.org/developer/heat/template_guide/openstack.html"

msgid "Response from Keystone does not contain a Glance endpoint."
msgstr "Keystone からの応答に Glance エンドポイントが含まれていません。"

msgid "Rolling upgrades are currently supported only for MySQL and Sqlite"
msgstr "ローリングアップグレードは現在 MySQL と Sqlite のみがサポートされています。"

msgid "Scope of image accessibility"
msgstr "イメージのアクセス可能性の範囲"

msgid "Scope of namespace accessibility."
msgstr "名前空間アクセシビリティーの範囲。"

msgid "Scrubber encountered an error while trying to fetch scrub jobs."
msgstr "スクラブジョブの取得を試行中にエラーが発生しました。"

#, python-format
msgid "Server %(serv)s is stopped"
msgstr "サーバー %(serv)s は停止しています"

#, python-format
msgid "Server worker creation failed: %(reason)s."
msgstr "サーバーワーカーの作成に失敗しました: %(reason)s"

msgid "Signature verification failed"
msgstr "シグニチャーの検証が失敗しました"

msgid "Size of image file in bytes"
msgstr "イメージファイルのサイズ (バイト)"

msgid ""
"Some resource types allow more than one key / value pair per instance. For "
"example, Cinder allows user and image metadata on volumes. Only the image "
"properties metadata is evaluated by Nova (scheduling or drivers). This "
"property allows a namespace target to remove the ambiguity."
msgstr "一部のリソースタイプでは、インスタンスごとに複数のキー/値のペアが許可されています。例えば、Cinder はボリューム上のユーザーおよびイメージメタデータを許可しています。イメージプロパティーメタデータのみ、Nova (スケジュールまたはドライバー) によって評価されます。このプロパティーによって、名前空間ターゲットからあいまいさを排除できます。"

msgid "Sort direction supplied was not valid."
msgstr "指定されたソート方向が無効でした。"

msgid "Sort key supplied was not valid."
msgstr "指定されたソートキーが無効でした。"

msgid ""
"Specifies the prefix to use for the given resource type. Any properties in "
"the namespace should be prefixed with this prefix when being applied to the "
"specified resource type. Must include prefix separator (e.g. a colon :)."
msgstr "指定されたリソースタイプに使用するプレフィックスを指定します。名前空間にあるプロパティーはすべて、指定されたリソースタイプに適用されるときに、このプレフィックスが先頭に付けられます。コロン (:) などのプレフィックス区切り文字を組み込む必要があります。"

msgid "Status must be \"pending\", \"accepted\" or \"rejected\"."
msgstr "状況は、\"保留中\"、\"受諾\"、または\"拒否\" でなければなりません。"

msgid "Status not specified"
msgstr "状況が指定されていません"

msgid "Status of the image"
msgstr "イメージの状態"

#, python-format
msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed"
msgstr "%(cur_status)s から %(new_status)s への状況遷移は許可されません"

#, python-format
msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)"
msgstr "%(serv)s (pid %(pid)s) をシグナル (%(sig)s) により停止中"

#, python-format
msgid "Store for image_id not found: %s"
msgstr "image_id のストアが見つかりません: %s"

#, python-format
msgid "Store for scheme %s not found"
msgstr "スキーマ %s のストアが見つかりません"

#, python-format
msgid ""
"Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image "
"(%(actual)s) did not match. Setting image status to 'killed'."
msgstr "指定された %(attr)s (%(supplied)s) とアップロードされたイメージ (%(actual)s) から生成された %(attr)s が一致していませんでした。イメージの状況を「強制終了済み」に設定します。"

msgid "Supported values for the 'container_format' image attribute"
msgstr "'container_format' イメージ属性に対してサポートされる値"

msgid "Supported values for the 'disk_format' image attribute"
msgstr "'disk_format' イメージ属性に対してサポートされる値"

#, python-format
msgid "Suppressed respawn as %(serv)s was %(rsn)s."
msgstr "%(serv)s として抑制された再作成は %(rsn)s でした。"

msgid "System SIGHUP signal received."
msgstr "システム SIGHUP シグナルを受信しました。"

#, python-format
msgid "Task '%s' is required"
msgstr "タスク '%s' が必要です"

msgid "Task does not exist"
msgstr "タスクが存在しません"

msgid "Task failed due to Internal Error"
msgstr "内部エラーが原因でタスクが失敗しました"

msgid "Task was not configured properly"
msgstr "タスクが正しく設定されませんでした"

#, python-format
msgid "Task with the given id %(task_id)s was not found"
msgstr "指定された id %(task_id)s のタスクは見つかりませんでした"

msgid "The \"changes-since\" filter is no longer available on v2."
msgstr "\"changes-since\" フィルターは v2 上で使用できなくなりました。"

#, python-format
msgid "The CA file you specified %s does not exist"
msgstr "指定した CA ファイル %s は存在しません"

#, python-format
msgid ""
"The Image %(image_id)s object being created by this task %(task_id)s, is no "
"longer in valid status for further processing."
msgstr "このタスク %(task_id)s で作成されているイメージ %(image_id)s オブジェクトは以降の処理に有効な状況ではなくなりました。"

msgid "The Store URI was malformed."
msgstr "ストア URI の形式に誤りがありました。"

msgid ""
"The URL to the keystone service. If \"use_user_token\" is not in effect and "
"using keystone auth, then URL of keystone can be specified."
msgstr "keystone サービスの URL。\"use_user_token\" が無効で、keystone 認証を使用している場合、keystone の URL を指定できます。"

msgid ""
"The administrators password. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."
msgstr "管理者パスワード。\"use_user_token\" が無効であれば、管理資格情報を指定できます。"

msgid ""
"The administrators user name. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."
msgstr "管理者ユーザー名。\"use_user_token\" が無効であれば、管理資格情報を指定できます。"

#, python-format
msgid "The cert file you specified %s does not exist"
msgstr "指定した証明書ファイル %s は存在しません"

msgid "The current status of this task"
msgstr "このタスクの現行状況"

#, python-format
msgid ""
"The device housing the image cache directory %(image_cache_dir)s does not "
"support xattr. It is likely you need to edit your fstab and add the "
"user_xattr option to the appropriate line for the device housing the cache "
"directory."
msgstr "イメージキャッシュディレクトリー %(image_cache_dir)s が格納されているデバイスでは xattr はサポートされません。fstab を編集して、キャッシュディレクトリーが格納されているデバイスの該当する行に user_xattr オプションを追加しなければならない可能性があります。"

#, python-format
msgid ""
"The given uri is not valid. Please specify a valid uri from the following "
"list of supported uri %(supported)s"
msgstr "指定した URI が無効です。次のサポートされている URI のリストから、有効な URI を指定してください: %(supported)s"

#, python-format
msgid "The image %s has data on staging"
msgstr "イメージ %s はステージングにデータがあります"

#, python-format
msgid ""
"The image %s is already present on the target, but our check for it did not "
"find it. This indicates that we do not have permissions to see all the "
"images on the target server."
msgstr "イメージ %s は既にターゲット上にありますが、検査では見つかりませんでした。これは、ターゲットサーバー上のすべてのイメージを表示する許可を持っていないことを示します。"

#, python-format
msgid "The incoming image is too large: %s"
msgstr "入力イメージが大きすぎます: %s"

#, python-format
msgid "The key file you specified %s does not exist"
msgstr "指定した鍵ファイル %s は存在しません"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image locations. "
"Attempted: %(attempted)s, Maximum: %(maximum)s"
msgstr "許可されるイメージの場所の数の制限を超えました。試行: %(attempted)s、最大: %(maximum)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image members for this "
"image. Attempted: %(attempted)s, Maximum: %(maximum)s"
msgstr "このイメージに対して許可されるイメージメンバー数の制限を超えました。試行: %(attempted)s、最大: %(maximum)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image properties. "
"Attempted: %(attempted)s, Maximum: %(maximum)s"
msgstr "許可されるイメージプロパティー数の制限を超えました。試行: %(attempted)s、最大: %(maximum)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image properties. "
"Attempted: %(num)s, Maximum: %(quota)s"
msgstr "許可されるイメージプロパティー数の制限を超えました。試行: %(num)s、最大: %(quota)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image tags. Attempted: "
"%(attempted)s, Maximum: %(maximum)s"
msgstr "許可されるイメージタグ数の制限を超えました。試行: %(attempted)s、最大: %(maximum)s"

#, python-format
msgid "The location %(location)s already exists"
msgstr "場所 %(location)s は既に存在します"

#, python-format
msgid "The location data has an invalid ID: %d"
msgstr "場所データの ID が無効です: %d"

#, python-format
msgid ""
"The metadata definition %(record_type)s with name=%(record_name)s not "
"deleted. Other records still refer to it."
msgstr "name=%(record_name)s のメタデータ定義 %(record_type)s は削除されていません。他のレコードがまだこのメタデータ定義を参照しています。"

#, python-format
msgid "The metadata definition namespace=%(namespace_name)s already exists."
msgstr "メタデータ定義 namespace=%(namespace_name)s は既に存在します。"

#, python-format
msgid ""
"The metadata definition object with name=%(object_name)s was not found in "
"namespace=%(namespace_name)s."
msgstr "name=%(object_name)s のメタデータ定義オブジェクトが、namespace=%(namespace_name)s に見つかりませんでした。"

#, python-format
msgid ""
"The metadata definition property with name=%(property_name)s was not found "
"in namespace=%(namespace_name)s."
msgstr "name=%(property_name)s のメタデータ定義プロパティーは、namespace=%(namespace_name)s に見つかりませんでした。"

#, python-format
msgid ""
"The metadata definition resource-type association of resource-type="
"%(resource_type_name)s to namespace=%(namespace_name)s already exists."
msgstr "resource-type=%(resource_type_name)s の、namespace=%(namespace_name)s へのメタデータ定義リソースタイプ関連付けは、既に存在します。"

#, python-format
msgid ""
"The metadata definition resource-type association of resource-type="
"%(resource_type_name)s to namespace=%(namespace_name)s, was not found."
msgstr "resource-type=%(resource_type_name)s の、namespace=%(namespace_name)s へのメタデータ定義リソースタイプ関連付けが見つかりませんでした。"

#, python-format
msgid ""
"The metadata definition resource-type with name=%(resource_type_name)s, was "
"not found."
msgstr "name=%(resource_type_name)s のメタデータ定義リソースタイプが見つかりませんでした。"

#, python-format
msgid ""
"The metadata definition tag with name=%(name)s was not found in namespace="
"%(namespace_name)s."
msgstr "name=%(name)s のメタデータ定義タグが namespace=%(namespace_name)s に見つかりませんでした。"

msgid "The parameters required by task, JSON blob"
msgstr "タスクによって要求されるパラメーター、JSON blob"

msgid "The provided image is too large."
msgstr "指定されたイメージが大きすぎます。"

msgid ""
"The region for the authentication service. If \"use_user_token\" is not in "
"effect and using keystone auth, then region name can be specified."
msgstr "認証サービスの領域。\"use_user_token\" が無効で、keystone 認証を使用している場合、領域名を指定できます。"

msgid "The request returned 500 Internal Server Error."
msgstr "要求で「500 Internal Server Error」が返されました。"

msgid ""
"The request returned 503 Service Unavailable. This generally occurs on "
"service overload or other transient outage."
msgstr "要求で「503 Service Unavailable」が返されました。これは一般に、サービスの過負荷または他の一時的な障害時に起こります。"

#, python-format
msgid ""
"The request returned a 302 Multiple Choices. This generally means that you "
"have not included a version indicator in a request URI.\n"
"\n"
"The body of response returned:\n"
"%(body)s"
msgstr ""
"要求が「302 Multiple Choices」を返しました。これは通常、要求 URI にバージョン標識を含めなかったことを意味します。\n"
"\n"
"返された応答の本体:\n"
"%(body)s"

#, python-format
msgid ""
"The request returned a 413 Request Entity Too Large. This generally means "
"that rate limiting or a quota threshold was breached.\n"
"\n"
"The response body:\n"
"%(body)s"
msgstr ""
"要求で「413 Request Entity Too Large」が返されました。これは一般に、速度制限または割り当て量のしきい値に違反したことを意味します。\n"
"\n"
"応答本体:\n"
"%(body)s"

#, python-format
msgid ""
"The request returned an unexpected status: %(status)s.\n"
"\n"
"The response body:\n"
"%(body)s"
msgstr ""
"要求で予期しない状況が返されました: %(status)s。\n"
"\n"
"応答本体:\n"
"%(body)s"

msgid ""
"The requested image has been deactivated. Image data download is forbidden."
msgstr "要求されたイメージは非アクティブ化されています。イメージデータのダウンロードは禁止されています。"

msgid "The result of current task, JSON blob"
msgstr "現行タスクの結果、JSON blob"

#, python-format
msgid ""
"The size of the data %(image_size)s will exceed the limit. %(remaining)s "
"bytes remaining."
msgstr "データのサイズ %(image_size)s が制限を超えます。%(remaining)s バイト残されています。"

#, python-format
msgid "The specified member %s could not be found"
msgstr "指定されたメンバー %s は見つかりませんでした"

#, python-format
msgid "The specified metadata object %s could not be found"
msgstr "指定されたメタデータオブジェクト %s は見つかりませんでした"

#, python-format
msgid "The specified metadata tag %s could not be found"
msgstr "指定されたメタデータタグ %s が見つかりませんでした"

#, python-format
msgid "The specified namespace %s could not be found"
msgstr "指定された名前空間 %s は見つかりませんでした"

#, python-format
msgid "The specified property %s could not be found"
msgstr "指定されたプロパティー %s は見つかりませんでした"

#, python-format
msgid "The specified resource type %s could not be found "
msgstr "指定されたリソースタイプ %s は見つかりませんでした"

msgid ""
"The status of deleted image location can only be set to 'pending_delete' or "
"'deleted'"
msgstr "削除されたイメージの場所の状況は「pending_delete」または「deleted」にのみ設定できます"

msgid ""
"The status of deleted image location can only be set to 'pending_delete' or "
"'deleted'."
msgstr "削除されたイメージの場所の状況は「pending_delete」または「deleted」にのみ設定できます。"

msgid "The status of this image member"
msgstr "このイメージメンバーの状況"

msgid ""
"The strategy to use for authentication. If \"use_user_token\" is not in "
"effect, then auth strategy can be specified."
msgstr "認証に使用されるストラテジー。\"use_user_token\" が無効であれば、認証ストラテジーを指定できます。"

#, python-format
msgid ""
"The target member %(member_id)s is already associated with image "
"%(image_id)s."
msgstr "ターゲットメンバー %(member_id)s はイメージ %(image_id)s に既に関連付けられています。"

msgid ""
"The tenant name of the administrative user. "
If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgstr "" "管ç†ãƒ¦ãƒ¼ã‚¶ãƒ¼ã®ãƒ†ãƒŠãƒ³ãƒˆå。\"use_user_token\" ãŒç„¡åйã§ã‚れã°ã€ç®¡ç†ãƒ†ãƒŠãƒ³ãƒˆåã‚’" "指定ã§ãã¾ã™ã€‚" msgid "The type of task represented by this content" msgstr "ã“ã®ã‚³ãƒ³ãƒ†ãƒ³ãƒ„ã«ã‚ˆã£ã¦è¡¨ã•れるタスクã®ã‚¿ã‚¤ãƒ—" msgid "The unique namespace text." msgstr "固有ã®åå‰ç©ºé–“テキスト。" msgid "The user friendly name for the namespace. Used by UI if available." msgstr "åå‰ç©ºé–“ã®åˆ†ã‹ã‚Šã‚„ã™ã„åå‰ã€‚存在ã™ã‚‹å ´åˆã¯ã€UI ã«ã‚ˆã£ã¦ä½¿ç”¨ã•れã¾ã™ã€‚" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "%(error_key_name)s %(error_filename)s ã«é–¢ã—ã¦å•題ãŒã‚りã¾ã™ã€‚確èªã—ã¦ãã ã•" "ã„。エラー: %(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "%(error_key_name)s %(error_filename)s ã«é–¢ã—ã¦å•題ãŒã‚りã¾ã™ã€‚確èªã—ã¦ãã ã•" "ã„。OpenSSL エラー: %(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "ã”使用ã®éµãƒšã‚¢ã«é–¢ã—ã¦å•題ãŒã‚りã¾ã™ã€‚証明書 %(cert_file)s ã¨éµ %(key_file)s " "ãŒãƒšã‚¢ã«ãªã£ã¦ã„ã‚‹ã“ã¨ã‚’確èªã—ã¦ãã ã•ã„。OpenSSL エラー %(ce)s" msgid "There was an error configuring the client." msgstr "クライアントã®è¨­å®šä¸­ã«ã‚¨ãƒ©ãƒ¼ãŒç™ºç”Ÿã—ã¾ã—ãŸã€‚" msgid "There was an error connecting to a server" msgstr "サーãƒãƒ¼ã¸ã®æŽ¥ç¶šä¸­ã«ã‚¨ãƒ©ãƒ¼ãŒç™ºç”Ÿã—ã¾ã—ãŸ" msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "ã“ã®æ“作ã¯ã€Glance タスクã§ã¯ç¾åœ¨è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。ã“れらã®ã‚¿ã‚¹ã‚¯ã¯ã€" "expires_at プロパティーã«åŸºã¥ãã€æ™‚é–“ã«é”ã™ã‚‹ã¨è‡ªå‹•çš„ã«å‰Šé™¤ã•れã¾ã™ã€‚" msgid "This operation is currently not permitted on Glance images details." 
msgstr "ã“ã®æ“作ã¯ã€Glance イメージã®è©³ç´°ã§ã¯ç¾åœ¨è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "æˆåŠŸã¾ãŸã¯å¤±æ•—ã®å¾Œã§ã‚¿ã‚¹ã‚¯ãŒå­˜ç¶šã™ã‚‹æ™‚é–“ (時)" msgid "Too few arguments." msgstr "引数ãŒå°‘ãªã™ãŽã¾ã™ã€‚" #, python-format msgid "" "Total size is %(size)d bytes (%(human_size)s) across %(img_count)d images" msgstr "" "åˆè¨ˆã‚µã‚¤ã‚ºã¯ã€%(img_count)d) 個ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã§ %(size)d ãƒã‚¤ãƒˆ (%(human_size)s) " "ã§ã™" msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "URI ã«è¤‡æ•°å›žã€ã‚¹ã‚­ãƒ¼ãƒ ã‚’指定ã™ã‚‹ã“ã¨ã¯ã§ãã¾ã›ã‚“。swift://user:pass@http://" "authurl.com/v1/container/obj ã®ã‚ˆã†ãª URI を指定ã—ãŸå ´åˆã¯ã€æ¬¡ã®ã‚ˆã†ã«ã€swift" "+http:// スキームを使用ã™ã‚‹ã‚ˆã†å¤‰æ›´ã™ã‚‹å¿…è¦ãŒã‚りã¾ã™ã€‚swift+http://user:" "pass@authurl.com/v1/container/obj" msgid "URL to access the image file kept in external store" msgstr "外部ストアã«ä¿æŒã•れã¦ã„るイメージファイルã«ã‚¢ã‚¯ã‚»ã‚¹ã™ã‚‹ãŸã‚ã® URL" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "pid ファイル %(pid)s を作æˆã§ãã¾ã›ã‚“。éžãƒ«ãƒ¼ãƒˆã¨ã—ã¦å®Ÿè¡Œã—ã¾ã™ã‹?\n" "一時ファイルã«ãƒ•ォールãƒãƒƒã‚¯ä¸­ã€‚次を使用ã—㦠%(service)s サービスを\n" "åœæ­¢ã§ãã¾ã™: %(file)s %(server)s stop --pid-file %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "䏿˜Žãªæ¼”ç®—å­ '%s' ã«ã‚ˆã£ã¦ãƒ•ィルター処ç†ã‚’行ã†ã“ã¨ãŒã§ãã¾ã›ã‚“。" msgid "Unable to filter on a range with a non-numeric value." msgstr "éžæ•°å€¤ã‚’å«ã‚€ç¯„囲ã§ã¯ãƒ•ィルタリングã§ãã¾ã›ã‚“。" msgid "Unable to filter on a unknown operator." msgstr "䏿˜Žãªæ¼”ç®—å­ã«å¯¾ã—ã¦ãƒ•ィルター処ç†ã‚’行ã†ã“ã¨ãŒã§ãã¾ã›ã‚“。" msgid "Unable to filter using the specified operator." 
msgstr "指定ã•ã‚ŒãŸæ¼”ç®—å­ã‚’使用ã—ã¦ãƒ•ィルター処ç†ãŒã§ãã¾ã›ã‚“。" msgid "Unable to filter using the specified range." msgstr "指定ã•れãŸç¯„囲ã§ã¯ãƒ•ィルタリングã§ãã¾ã›ã‚“。" #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "JSON スキーマã®å¤‰æ›´ã§ '%s' ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "" "JSON スキーマã®å¤‰æ›´ã§ `op` ãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“。以下ã®ã„ãšã‚Œã‹ã§ãªã‘れã°ãªã‚Šã¾ã›" "ã‚“: %(available)s。" msgid "Unable to increase file descriptor limit. Running as non-root?" msgstr "ファイル記述å­åˆ¶é™ã‚’増加ã§ãã¾ã›ã‚“。éžãƒ«ãƒ¼ãƒˆã¨ã—ã¦å®Ÿè¡Œã—ã¾ã™ã‹?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "設定ファイル %(conf_file)s ã‹ã‚‰ %(app_name)s をロードã§ãã¾ã›ã‚“。\n" "å—ã‘å–ã£ãŸã‚¨ãƒ©ãƒ¼: %(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "スキーマをロードã§ãã¾ã›ã‚“: %(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "%s ã® paste 設定ファイルãŒè¦‹ã¤ã‹ã‚Šã¾ã›ã‚“。" #, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "" "イメージ %(image_id)s ã®é‡è¤‡ã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ‡ãƒ¼ã‚¿ã¯ã‚¢ãƒƒãƒ—ロードã§ãã¾ã›ã‚“: %(error)s" msgid "Unauthorized image access" msgstr "許å¯ã•れã¦ã„ãªã„イメージアクセス" msgid "Unexpected body type. Expected list/dict." msgstr "予期ã—ãªã„本文タイプ。予期ã•れãŸã®ã¯ãƒªã‚¹ãƒˆã¾ãŸã¯è¾žæ›¸ã§ã™ã€‚" #, python-format msgid "Unexpected response: %s" msgstr "予期ã—ãªã„応答: %s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "䏿˜Žãªèªè¨¼ã‚¹ãƒˆãƒ©ãƒ†ã‚¸ãƒ¼ '%s'" #, python-format msgid "Unknown command: %s" msgstr "䏿˜Žãªã‚³ãƒžãƒ³ãƒ‰: %s" #, python-format msgid "Unknown import method name '%s'." 
msgstr "䏿˜Žãªã‚¤ãƒ³ãƒãƒ¼ãƒˆãƒ¡ã‚½ãƒƒãƒ‰å '%s' ã§ã™ã€‚" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "ソート方å‘ãŒä¸æ˜Žã§ã™ã€‚'desc' ã¾ãŸã¯ 'asc' ã§ãªã‘れã°ãªã‚Šã¾ã›ã‚“" msgid "Unrecognized JSON Schema draft version" msgstr "èªè­˜ã•れãªã„ JSON スキーマã®ãƒ‰ãƒ©ãƒ•トãƒãƒ¼ã‚¸ãƒ§ãƒ³" msgid "Unrecognized changes-since value" msgstr "èªè­˜ã•れãªã„ changes-since 値" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "サãƒãƒ¼ãƒˆã•れãªã„ sort_dir ã§ã™ã€‚許容値: %s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "サãƒãƒ¼ãƒˆã•れãªã„ sort_key ã§ã™ã€‚許容値: %s" #, python-format msgid "Upgraded database to: %(v)s, current revision(s): %(r)s" msgstr "" "データベース㌠%(v)s ã«ã‚¢ãƒƒãƒ—グレードã•れã¾ã—ãŸã€‚ç¾åœ¨ã®ãƒªãƒ“ジョン: %(r)s" msgid "Upgraded database, current revision(s):" msgstr "データベースãŒã‚¢ãƒƒãƒ—グレードã•れã¾ã—ãŸã€‚ç¾åœ¨ã®ãƒªãƒ“ジョン:" msgid "Virtual size of image in bytes" msgstr "イメージã®ä»®æƒ³ã‚µã‚¤ã‚º (ãƒã‚¤ãƒˆ)" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "pid %(pid)s (%(file)s) ãŒåœæ­¢ã™ã‚‹ã¾ã§ 15 ç§’ãŠå¾…ã¡ãã ã•ã„。中断中ã§ã™" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "サーãƒãƒ¼ã‚’ SSL モードã§å®Ÿè¡Œã™ã‚‹å ´åˆã¯ã€cert_file オプション値㨠key_file オプ" "ション値ã®ä¸¡æ–¹ã‚’è¨­å®šãƒ•ã‚¡ã‚¤ãƒ«ã«æŒ‡å®šã™ã‚‹å¿…è¦ãŒã‚りã¾ã™" msgid "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." 
msgstr "" "レジストリーã«å¯¾ã—ã¦è¦æ±‚を行ã†ã¨ãã«ã€ãƒ¦ãƒ¼ã‚¶ãƒ¼ãƒˆãƒ¼ã‚¯ãƒ³ã‚’パススルーã™ã‚‹ã‹ã©ã†" "ã‹ã€‚サイズã®å¤§ããªãƒ•ァイルã®ã‚¢ãƒƒãƒ—ロード中ã®ãƒˆãƒ¼ã‚¯ãƒ³ã®æœ‰åŠ¹æœŸé™åˆ‡ã‚Œã«ä¼´ã†éšœå®³" "を防ããŸã‚ã«ã€ã“ã®ãƒ‘ラメーター㯠False ã«è¨­å®šã™ã‚‹ã“ã¨ãŒæŽ¨å¥¨ã•れã¾" "ã™ã€‚\"use_user_token\" ãŒç„¡åйã§ã‚ã‚‹å ´åˆã¯ã€ç®¡ç†è€…ã®ã‚¯ãƒ¬ãƒ‡ãƒ³ã‚·ãƒ£ãƒ«ã‚’指定ã§ãã¾" "ã™ã€‚" #, python-format msgid "Wrong command structure: %s" msgstr "æ­£ã—ããªã„コマンド構造: %s" msgid "You are not authenticated." msgstr "èªè¨¼ã•れã¦ã„ã¾ã›ã‚“。" #, python-format msgid "You are not authorized to complete %(action)s action." msgstr "%(action)s アクションã®å®Ÿè¡Œã‚’許å¯ã•れã¦ã„ã¾ã›ã‚“。" msgid "You are not authorized to complete this action." msgstr "ã“ã®ã‚¢ã‚¯ã‚·ãƒ§ãƒ³ã®å®Ÿè¡Œã‚’許å¯ã•れã¦ã„ã¾ã›ã‚“。" #, python-format msgid "You are not authorized to lookup image %s." msgstr "イメージ %s を調ã¹ã‚‹æ¨©é™ãŒã‚りã¾ã›ã‚“。" #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "イメージ %s ã®ãƒ¡ãƒ³ãƒãƒ¼ã‚’調ã¹ã‚‹æ¨©é™ãŒã‚りã¾ã›ã‚“。" #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "'%s' ãŒæ‰€æœ‰ã™ã‚‹åå‰ç©ºé–“ã§ã®ã‚¿ã‚°ã®ä½œæˆã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“" msgid "You are not permitted to create image members for the image." msgstr "ãã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ¡ãƒ³ãƒãƒ¼ã®ä½œæˆã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" #, python-format msgid "You are not permitted to create images owned by '%s'." 
msgstr "'%s' ã«ã‚ˆã£ã¦æ‰€æœ‰ã•れã¦ã„るイメージã®ä½œæˆã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" #, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "'%s' ã«ã‚ˆã£ã¦æ‰€æœ‰ã•れるåå‰ç©ºé–“ã®ä½œæˆã¯è¨±å¯ã•れã¾ã›ã‚“" #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "'%s' ã«ã‚ˆã£ã¦æ‰€æœ‰ã•れるオブジェクトã®ä½œæˆã¯è¨±å¯ã•れã¾ã›ã‚“" #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "'%s' ã«ã‚ˆã£ã¦æ‰€æœ‰ã•れるプロパティーã®ä½œæˆã¯è¨±å¯ã•れã¾ã›ã‚“" #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "'%s' ã«ã‚ˆã£ã¦æ‰€æœ‰ã•れる resource_type ã®ä½œæˆã¯è¨±å¯ã•れã¾ã›ã‚“" #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "所有者 %s を使用ã—ã¦ã“ã®ã‚¿ã‚¹ã‚¯ã‚’作æˆã™ã‚‹ã“ã¨ã¯è¨±å¯ã•れã¾ã›ã‚“" msgid "You are not permitted to deactivate this image." msgstr "ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã®éžã‚¢ã‚¯ãƒ†ã‚£ãƒ–化ã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" msgid "You are not permitted to delete this image." msgstr "ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã®å‰Šé™¤ã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" msgid "You are not permitted to delete this meta_resource_type." msgstr "ã“ã® meta_resource_type ã®å‰Šé™¤ã¯è¨±å¯ã•れã¾ã›ã‚“。" msgid "You are not permitted to delete this namespace." msgstr "ã“ã®åå‰ç©ºé–“ã®å‰Šé™¤ã¯è¨±å¯ã•れã¾ã›ã‚“。" msgid "You are not permitted to delete this object." msgstr "ã“ã®ã‚ªãƒ–ジェクトã®å‰Šé™¤ã¯è¨±å¯ã•れã¾ã›ã‚“。" msgid "You are not permitted to delete this property." msgstr "ã“ã®ãƒ—ロパティーã®å‰Šé™¤ã¯è¨±å¯ã•れã¾ã›ã‚“。" msgid "You are not permitted to delete this tag." msgstr "ã“ã®ã‚¿ã‚°ã®å‰Šé™¤ã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "ã“ã® %(resource)s 上㮠'%(attr)s' ã®å¤‰æ›´ã¯è¨±å¯ã•れã¾ã›ã‚“。" #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ä¸Šã® '%s' ã®å¤‰æ›´ã¯è¨±å¯ã•れã¾ã›ã‚“。" msgid "You are not permitted to modify locations for this image." 
msgstr "ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã®å ´æ‰€ã®å¤‰æ›´ã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" msgid "You are not permitted to modify tags on this image." msgstr "ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ä¸Šã®ã‚¿ã‚°ã®å¤‰æ›´ã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" msgid "You are not permitted to modify this image." msgstr "ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã®å¤‰æ›´ã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" msgid "You are not permitted to reactivate this image." msgstr "ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã®å†ã‚¢ã‚¯ãƒ†ã‚£ãƒ–化ã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" msgid "You are not permitted to set status on this task." msgstr "ã“ã®ã‚¿ã‚¹ã‚¯ã«é–¢ã™ã‚‹çжæ³ã‚’設定ã™ã‚‹ã“ã¨ã¯è¨±å¯ã•れã¾ã›ã‚“。" msgid "You are not permitted to update this namespace." msgstr "ã“ã®åå‰ç©ºé–“ã®æ›´æ–°ã¯è¨±å¯ã•れã¾ã›ã‚“。" msgid "You are not permitted to update this object." msgstr "ã“ã®ã‚ªãƒ–ã‚¸ã‚§ã‚¯ãƒˆã®æ›´æ–°ã¯è¨±å¯ã•れã¾ã›ã‚“。" msgid "You are not permitted to update this property." msgstr "ã“ã®ãƒ—ãƒ­ãƒ‘ãƒ†ã‚£ãƒ¼ã®æ›´æ–°ã¯è¨±å¯ã•れã¾ã›ã‚“。" msgid "You are not permitted to update this tag." msgstr "ã“ã®ã‚¿ã‚°ã®æ›´æ–°ã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" msgid "You are not permitted to upload data for this image." 
msgstr "ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã®ãƒ‡ãƒ¼ã‚¿ã®ã‚¢ãƒƒãƒ—ロードã¯è¨±å¯ã•れã¦ã„ã¾ã›ã‚“。" #, python-format msgid "You cannot add image member for %s" msgstr "%s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ¡ãƒ³ãƒãƒ¼ã‚’追加ã§ãã¾ã›ã‚“" #, python-format msgid "You cannot delete image member for %s" msgstr "%s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ¡ãƒ³ãƒãƒ¼ã‚’削除ã§ãã¾ã›ã‚“" #, python-format msgid "You cannot get image member for %s" msgstr "%s ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ãƒ¡ãƒ³ãƒãƒ¼ã‚’å–å¾—ã§ãã¾ã›ã‚“" #, python-format msgid "You cannot update image member %s" msgstr "イメージメンãƒãƒ¼ %s ã‚’æ›´æ–°ã§ãã¾ã›ã‚“" msgid "You do not own this image" msgstr "ã“ã®ã‚¤ãƒ¡ãƒ¼ã‚¸ã‚’所有ã—ã¦ã„ã¾ã›ã‚“" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "接続時㫠SSL を使用ã™ã‚‹ã‚ˆã†é¸æŠžã—ã€è¨¼æ˜Žæ›¸ã‚’指定ã—ã¾ã—ãŸãŒã€key_file パラメー" "ターを指定ã—ãªã‹ã£ãŸã‹ã€GLANCE_CLIENT_KEY_FILE 環境変数を設定ã—ã¾ã›ã‚“ã§ã—ãŸ" msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "接続時㫠SSL を使用ã™ã‚‹ã‚ˆã†é¸æŠžã—ã€éµã‚’指定ã—ã¾ã—ãŸãŒã€cert_file パラメーター" "を指定ã—ãªã‹ã£ãŸã‹ã€GLANCE_CLIENT_CERT_FILE 環境変数を設定ã—ã¾ã›ã‚“ã§ã—ãŸ" msgid "" "Your database is not up to date. Your first step is to run `glance-manage db " "expand`." msgstr "" "ãƒ‡ãƒ¼ã‚¿ãƒ™ãƒ¼ã‚¹ãŒæœ€æ–°ã§ã¯ã‚りã¾ã›ã‚“。最åˆã®ã‚¹ãƒ†ãƒƒãƒ—ã¯ã€`glance-manage db " "expand` ã§ã™ã€‚" msgid "" "Your database is not up to date. Your next step is to run `glance-manage db " "contract`." msgstr "" "ãƒ‡ãƒ¼ã‚¿ãƒ™ãƒ¼ã‚¹ãŒæœ€æ–°ã§ã¯ã‚りã¾ã›ã‚“。次ã®ã‚¹ãƒ†ãƒƒãƒ—ã¯ã€`glance-manage db " "contract` ã§ã™ã€‚" msgid "" "Your database is not up to date. Your next step is to run `glance-manage db " "migrate`." 
msgstr "" "ãƒ‡ãƒ¼ã‚¿ãƒ™ãƒ¼ã‚¹ãŒæœ€æ–°ã§ã¯ã‚りã¾ã›ã‚“。次ã®ã‚¹ãƒ†ãƒƒãƒ—ã¯ã€`glance-manage db migrate` " "ã§ã™ã€‚" msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__() ã§äºˆæœŸã—ãªã„キーワード引数 '%s' ãŒå¾—られã¾ã—ãŸ" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "" "æ›´æ–°ã§ %(current)s ã‹ã‚‰ %(next)s ã«ç§»è¡Œã§ãã¾ã›ã‚“ (from_state=%(from)s ãŒå¿…" "è¦)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "カスタムプロパティー (%(props)s) ãŒåŸºæœ¬ãƒ—ロパティーã¨ç«¶åˆã—ã¦ã„ã¾ã™" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "" "ã“ã®ãƒ—ラットフォームã§ã¯ eventlet ã®ã€Œpollã€ãƒãƒ–も「selectsã€ãƒãƒ–も使用ã§ãã¾" "ã›ã‚“" msgid "is_public must be None, True, or False" msgstr "is_public ã¯ã€Noneã€Trueã€ã¾ãŸã¯ False ã§ãªã‘れã°ãªã‚Šã¾ã›ã‚“" msgid "limit param must be an integer" msgstr "limit ãƒ‘ãƒ©ãƒ¡ãƒ¼ã‚¿ãƒ¼ã¯æ•´æ•°ã§ãªã‘れã°ãªã‚Šã¾ã›ã‚“" msgid "limit param must be positive" msgstr "limit ãƒ‘ãƒ©ãƒ¡ãƒ¼ã‚¿ãƒ¼ã¯æ­£ã§ãªã‘れã°ãªã‚Šã¾ã›ã‚“" msgid "md5 hash of image contents." msgstr "イメージコンテンツ㮠MD5 ãƒãƒƒã‚·ãƒ¥ã€‚" #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image() ã§äºˆæœŸã—ãªã„キーワード %s ãŒå¾—られã¾ã—ãŸ" msgid "protected must be True, or False" msgstr "protected 㯠True ã¾ãŸã¯ False ã§ãªã‘れã°ãªã‚Šã¾ã›ã‚“" #, python-format msgid "unable to launch %(serv)s. 
Got error: %(e)s" msgstr "%(serv)s ã‚’èµ·å‹•ã§ãã¾ã›ã‚“。å—ã‘å–ã£ãŸã‚¨ãƒ©ãƒ¼: %(e)s" #, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "x-openstack-request-id ãŒé•·ã™ãŽã¾ã™ã€‚最大サイズ㯠%s ã§ã™" glance-16.0.0/glance/locale/zh_CN/0000775000175100017510000000000013245511661016550 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/zh_CN/LC_MESSAGES/0000775000175100017510000000000013245511661020335 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/zh_CN/LC_MESSAGES/glance.po0000666000175100017510000020111413245511421022121 0ustar zuulzuul00000000000000# Translations template for glance. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the glance project. # # Translators: # blkart , 2015 # Dongliang Yu , 2013 # Kecheng Bi , 2014 # Tom Fifield , 2013 # 颜海峰 , 2014 # Andreas Jaeger , 2016. #zanata # howard lee , 2016. #zanata # blkart , 2017. #zanata msgid "" msgstr "" "Project-Id-Version: glance 15.0.0.0b3.dev29\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-06-23 20:54+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2017-06-24 04:45+0000\n" "Last-Translator: blkart \n" "Language: zh-CN\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Chinese (China)\n" #, python-format msgid "\t%s" msgstr "\t%s" #, python-format msgid "%(cls)s exception was raised in the last rpc call: %(val)s" msgstr "最åŽä¸€ä¸ª RPC 调用中å‘生 %(cls)s 异常:%(val)s" #, python-format msgid "%(m_id)s not found in the member list of the image %(i_id)s." msgstr "åœ¨æ˜ åƒ %(i_id)s çš„æˆå‘˜åˆ—表中找ä¸åˆ° %(m_id)s。" #, python-format msgid "%(serv)s (pid %(pid)s) is running..." msgstr "%(serv)s (pid %(pid)s) 正在è¿è¡Œ..." 
#, python-format msgid "%(serv)s appears to already be running: %(pid)s" msgstr "%(serv)s 似乎已在è¿è¡Œï¼š%(pid)s" #, python-format msgid "" "%(strategy)s is registered as a module twice. %(module)s is not being used." msgstr "已两次将 %(strategy)s 注册为模å—。未在使用 %(module)s。" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Could not load the " "filesystem store" msgstr "%(task_id)s(类型为 %(task_type)s)未正确é…置。未能装入文件系统存储器" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "%(task_id)s(类型为 %(task_type)s)未正确é…置。缺少工作目录:%(work_dir)s" #, python-format msgid "%(verb)sing %(serv)s" msgstr "正在%(verb)s %(serv)s" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "正在%(verb)s %(serv)s(借助 %(conf)s)" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s 请指定 host:port 对,其中 host 是 IPv4 地å€ã€IPv6 地å€ã€ä¸»æœºå或 FQDN。如" "果使用 IPv6 地å€ï¼Œè¯·å°†å…¶æ‹¬åœ¨æ–¹æ‹¬å·ä¸­å¹¶ä¸Žç«¯å£éš”开(å³ï¼Œâ€œ[fe80::a:b:" "c]:9876â€ï¼‰ã€‚" #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s ä¸èƒ½åŒ…å« 4 字节 Unicode 字符。" #, python-format msgid "%s is already stopped" msgstr "%s å·²åœæ­¢" #, python-format msgid "%s is stopped" msgstr "%s å·²åœæ­¢" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "当å¯ç”¨äº† keystone 认è¯ç­–ç•¥æ—¶ï¼Œéœ€è¦ --os_auth_url 选项或 OS_AUTH_URL 环境å˜" "é‡\n" msgid "A body is not expected with this request." msgstr "此请求ä¸åº”有主体。" #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." 
msgstr "" "在å称空间 %(namespace_name)s 中,已存在å称为 %(object_name)s 的元数æ®å®šä¹‰å¯¹" "象。" #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "在å称空间 %(namespace_name)s 中,已存在å称为 %(property_name)s 的元数æ®å®šä¹‰" "属性。" #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." msgstr "已存在å称为 %(resource_type_name)s 的元数æ®å®šä¹‰èµ„æºç±»åž‹ã€‚" msgid "A set of URLs to access the image file kept in external store" msgstr "用于访问外部存储器中ä¿ç•™çš„æ˜ åƒæ–‡ä»¶çš„ URL集åˆ" msgid "Amount of disk space (in GB) required to boot image." msgstr "å¼•å¯¼æ˜ åƒæ‰€éœ€çš„ç£ç›˜ç©ºé—´é‡ï¼ˆä»¥ GB 计)。" msgid "Amount of ram (in MB) required to boot image." msgstr "å¼•å¯¼æ˜ åƒæ‰€éœ€çš„ ram é‡ï¼ˆä»¥ MB 计)。" msgid "An identifier for the image" msgstr "映åƒçš„æ ‡è¯†" msgid "An identifier for the image member (tenantId)" msgstr "æ˜ åƒæˆå‘˜çš„æ ‡è¯† (tenantId)" msgid "An identifier for the owner of this task" msgstr "此任务的所有者的标识" msgid "An identifier for the task" msgstr "任务的标识" msgid "An image file url" msgstr "æ˜ åƒæ–‡ä»¶çš„ URL" msgid "An image schema url" msgstr "æ˜ åƒæ¨¡å¼çš„ URL" msgid "An image self url" msgstr "æ˜ åƒæœ¬èº«çš„ URL" #, python-format msgid "An image with identifier %s already exists" msgstr "具有标识 %s 的映åƒå·²å­˜åœ¨" msgid "An import task exception occurred" msgstr "å‘生了导入任务异常。" msgid "An object with the same identifier already exists." msgstr "具有åŒä¸€æ ‡è¯†çš„对象已存在。" msgid "An object with the same identifier is currently being operated on." msgstr "当剿­£åœ¨å¯¹å…·æœ‰åŒä¸€æ ‡è¯†çš„对象进行æ“作。" msgid "An object with the specified identifier was not found." 
msgstr "找ä¸åˆ°å…·æœ‰æŒ‡å®šæ ‡è¯†çš„对象。" msgid "An unknown exception occurred" msgstr "å‘生未知异常" msgid "An unknown task exception occurred" msgstr "å‘生未知任务异常" #, python-format msgid "Attempt to upload duplicate image: %s" msgstr "请å°è¯•上载é‡å¤æ˜ åƒï¼š%s" msgid "Attempted to update Location field for an image not in queued status." msgstr "å·²å°è¯•更新处于未排队状æ€çš„æ˜ åƒçš„“ä½ç½®â€å­—段。" #, python-format msgid "Attribute '%(property)s' is read-only." msgstr "属性“%(property)sâ€æ˜¯åªè¯»çš„。" #, python-format msgid "Attribute '%(property)s' is reserved." msgstr "属性“%(property)sâ€å·²ä¿ç•™ã€‚" #, python-format msgid "Attribute '%s' is read-only." msgstr "属性“%sâ€æ˜¯åªè¯»çš„。" #, python-format msgid "Attribute '%s' is reserved." msgstr "属性“%sâ€å·²ä¿ç•™ã€‚" msgid "Attribute container_format can be only replaced for a queued image." msgstr "åªèƒ½ä¸ºå·²æŽ’é˜Ÿçš„æ˜ åƒæ›¿æ¢å±žæ€§ container_format。" msgid "Attribute disk_format can be only replaced for a queued image." msgstr "åªèƒ½ä¸ºå·²æŽ’é˜Ÿçš„æ˜ åƒæ›¿æ¢å±žæ€§ disk_format。" #, python-format msgid "Auth service at URL %(url)s not found." msgstr "找ä¸åˆ° URL %(url)s å¤„çš„æŽˆæƒæœåŠ¡ã€‚" #, python-format msgid "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgstr "认è¯é”™è¯¯ - 文件上传期间此令牌å¯èƒ½å·²åˆ°æœŸã€‚正在删除 %s çš„æ˜ åƒæ•°æ®ã€‚" msgid "Authorization failed." msgstr "授æƒå¤±è´¥ã€‚" msgid "Available categories:" msgstr "å¯ç”¨çš„类别:" #, python-format msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." 
msgstr "无效“%sâ€æŸ¥è¯¢è¿‡æ»¤å™¨æ ¼å¼ã€‚请使用 ISO 8601 日期时间注释。" #, python-format msgid "Bad Command: %s" msgstr "命令 %s 䏿­£ç¡®" #, python-format msgid "Bad header: %(header_name)s" msgstr "头 %(header_name)s 䏿­£ç¡®" #, python-format msgid "Bad value passed to filter %(filter)s got %(val)s" msgstr "传递至过滤器 %(filter)s çš„å€¼ä¸æ­£ç¡®ï¼Œå·²èŽ·å– %(val)s" #, python-format msgid "Badly formed S3 URI: %(uri)s" msgstr "S3 URI %(uri)s 的格å¼ä¸æ­£ç¡®" #, python-format msgid "Badly formed credentials '%(creds)s' in Swift URI" msgstr "Swift URI 中凭è¯â€œ%(creds)sâ€çš„æ ¼å¼ä¸æ­£ç¡®" msgid "Badly formed credentials in Swift URI." msgstr "Swift URI 中凭è¯çš„æ ¼å¼ä¸æ­£ç¡®ã€‚" msgid "Body expected in request." msgstr "请求中需è¦ä¸»ä½“。" msgid "Cannot be a negative value" msgstr "ä¸èƒ½ä¸ºè´Ÿå€¼" msgid "Cannot be a negative value." msgstr "ä¸å¾—为负值。" #, python-format msgid "Cannot convert image %(key)s '%(value)s' to an integer." msgstr "æ— æ³•å°†æ˜ åƒ %(key)s“%(value)sâ€è½¬æ¢ä¸ºæ•´æ•°ã€‚" msgid "Cannot remove last location in the image." msgstr "ä¸èƒ½ç§»é™¤æ˜ åƒä¸­çš„æœ€åŽä¸€ä¸ªä½ç½®ã€‚" #, python-format msgid "Cannot save data for image %(image_id)s: %(error)s" msgstr "无法为镜åƒ%(image_id)sä¿å­˜æ•°æ®: %(error)s" msgid "Cannot set locations to empty list." msgstr "ä¸èƒ½å°†ä½ç½®è®¾ç½®ä¸ºç©ºåˆ—表。" msgid "Cannot upload to an unqueued image" msgstr "无法上载至未排队的映åƒ" #, python-format msgid "Checksum verification failed. Aborted caching of image '%s'." msgstr "校验和验è¯å¤±è´¥ã€‚已异常中止映åƒâ€œ%sâ€çš„高速缓存。" msgid "Client disconnected before sending all data to backend" msgstr "客户端在å‘逿‰€æœ‰æ•°æ®åˆ°åŽç«¯æ—¶æ–­å¼€äº†è¿žæŽ¥" msgid "Command not found" msgstr "找ä¸åˆ°å‘½ä»¤" msgid "Configuration option was not valid" msgstr "é…置选项无效" #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "å‘生连接错误,或者对 URL %(url)s å¤„çš„æŽˆæƒæœåŠ¡çš„è¯·æ±‚ä¸æ­£ç¡®ã€‚" #, python-format msgid "Constructed URL: %s" msgstr "已构造 URL:%s" msgid "Container format is not specified." 
msgstr "未指定容器格å¼ã€‚" msgid "Content-Type must be application/octet-stream" msgstr "Content-Type 必须是 application/octet-stream" #, python-format msgid "Corrupt image download for image %(image_id)s" msgstr "å¯¹äºŽæ˜ åƒ %(image_id)s,映åƒä¸‹è½½å·²æŸå" #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgstr "在å°è¯•时间达到 30 ç§’ä¹‹åŽæœªèƒ½ç»‘定至 %(host)s:%(port)s" msgid "Could not find OVF file in OVA archive file." msgstr "在 OVA 归档文件中找ä¸åˆ° OVF 文件。" #, python-format msgid "Could not find metadata object %s" msgstr "找ä¸åˆ°å…ƒæ•°æ®å¯¹è±¡ %s" #, python-format msgid "Could not find metadata tag %s" msgstr "找ä¸åˆ°å…ƒæ•°æ®æ ‡è®° %s" #, python-format msgid "Could not find namespace %s" msgstr "找ä¸åˆ°å称空间 %s" #, python-format msgid "Could not find property %s" msgstr "找ä¸åˆ°å±žæ€§ %s" msgid "Could not find required configuration option" msgstr "找ä¸åˆ°å¿…需的é…置选项" #, python-format msgid "Could not find task %s" msgstr "找ä¸åˆ°ä»»åŠ¡ %s" #, python-format msgid "Could not update image: %s" msgstr "未能更新映åƒï¼š%s" #, python-format msgid "Couldn't create metadata namespace: %s" msgstr "无法创建元数æ®å‘½å空间:%s" #, python-format msgid "Couldn't create metadata object: %s" msgstr "无法创建元数æ®å¯¹è±¡ï¼š%s" #, python-format msgid "Couldn't create metadata property: %s" msgstr "无法创建元数æ®å±žæ€§ï¼š%s" #, python-format msgid "Couldn't create metadata tag: %s" msgstr "æ— æ³•åˆ›å»ºå…ƒæ•°æ®æ ‡ç­¾ï¼š%s" #, python-format msgid "Couldn't update metadata namespace: %s" msgstr "无法更新元数æ®å‘½å空间:%s" #, python-format msgid "Couldn't update metadata object: %s" msgstr "无法更新元数æ®å¯¹è±¡ï¼š%s" #, python-format msgid "Couldn't update metadata property: %s" msgstr "无法更新元数æ®å±žæ€§ï¼š%s" #, python-format msgid "Couldn't update metadata tag: %s" msgstr "æ— æ³•æ›´æ–°å…ƒæ•°æ®æ ‡ç­¾ï¼š%s" msgid "Currently, OVA packages containing multiple disk are not supported." 
msgstr "当å‰åŒ…å«å¤šä¸ªç£ç›˜çš„ OVA 包ä¸å—支æŒã€‚" #, python-format msgid "Data for image_id not found: %s" msgstr "找ä¸åˆ° image_id 的数æ®ï¼š%s" msgid "Data supplied was not valid." msgstr "æä¾›çš„æ•°æ®æ— æ•ˆã€‚" msgid "Date and time of image member creation" msgstr "åˆ›å»ºæ˜ åƒæˆå‘˜çš„æ—¥æœŸå’Œæ—¶é—´" msgid "Date and time of image registration" msgstr "注册映åƒçš„æ—¥æœŸå’Œæ—¶é—´" msgid "Date and time of last modification of image member" msgstr "æœ€è¿‘ä¸€æ¬¡ä¿®æ”¹æ˜ åƒæˆå‘˜çš„æ—¥æœŸå’Œæ—¶é—´" msgid "Date and time of namespace creation" msgstr "创建å称空间的日期和时间" msgid "Date and time of object creation" msgstr "创建对象的日期和时间" msgid "Date and time of resource type association" msgstr "å…³è”资æºç±»åž‹çš„æ—¥æœŸå’Œæ—¶é—´" msgid "Date and time of tag creation" msgstr "创建标记的日期和时间" msgid "Date and time of the last image modification" msgstr "最近一次修改映åƒçš„æ—¥æœŸå’Œæ—¶é—´" msgid "Date and time of the last namespace modification" msgstr "最近一次修改å称空间的日期和时间" msgid "Date and time of the last object modification" msgstr "最近一次修改对象的日期和时间" msgid "Date and time of the last resource type association modification" msgstr "最近一次修改资æºç±»åž‹å…³è”的日期和时间" msgid "Date and time of the last tag modification" msgstr "最近一次修改标记的日期和时间" msgid "Datetime when this resource was created" msgstr "此资æºçš„创建日期时间" msgid "Datetime when this resource was updated" msgstr "此资æºçš„æ›´æ–°æ—¥æœŸæ—¶é—´" msgid "Datetime when this resource would be subject to removal" msgstr "将会移除此资æºçš„æ—¥æœŸæ—¶é—´" #, python-format msgid "Denying attempt to upload image because it exceeds the quota: %s" msgstr "正在拒ç»å°è¯•上载映åƒï¼Œå› ä¸ºå®ƒè¶…过é…é¢ï¼š%s" #, python-format msgid "Denying attempt to upload image larger than %d bytes." msgstr "正在拒ç»å°è¯•上载大å°è¶…过 %d 字节的映åƒã€‚" msgid "Descriptive name for the image" msgstr "映åƒçš„æè¿°æ€§åç§°" msgid "Disk format is not specified." msgstr "未指定ç£ç›˜æ ¼å¼ã€‚" #, python-format msgid "" "Driver %(driver_name)s could not be configured correctly. 
Reason: %(reason)s" msgstr "未能正确é…ç½®é©±åŠ¨ç¨‹åº %(driver_name)s。原因:%(reason)s" msgid "" "Error decoding your request. Either the URL or the request body contained " "characters that could not be decoded by Glance" msgstr "å¯¹è¯·æ±‚è§£ç æ—¶å‡ºé”™ã€‚Glance 无法对 URL 或请求主体包å«çš„字符进行解ç ã€‚" #, python-format msgid "Error fetching members of image %(image_id)s: %(inner_msg)s" msgstr "è®¿å­˜æ˜ åƒ %(image_id)s çš„æˆå‘˜æ—¶å‡ºé”™ï¼š%(inner_msg)s" msgid "Error in store configuration. Adding images to store is disabled." msgstr "存储é…ç½®ä¸­å‡ºé”™ã€‚å·²ç¦æ­¢å°†æ˜ åƒæ·»åŠ è‡³å­˜å‚¨å™¨ã€‚" #, python-format msgid "Error: %(exc_type)s: %(e)s" msgstr "错误: %(exc_type)s: %(e)s" msgid "Expected a member in the form: {\"member\": \"image_id\"}" msgstr "æˆå‘˜åº”为以下格å¼ï¼š{\"member\": \"image_id\"}" msgid "Expected a status in the form: {\"status\": \"status\"}" msgstr "状æ€åº”为以下格å¼ï¼š{\"status\": \"status\"}" msgid "External source should not be empty" msgstr "外部æºä¸åº”为空。" #, python-format msgid "External sources are not supported: '%s'" msgstr "外部æºä¸å—支æŒï¼šâ€œ%sâ€" #, python-format msgid "Failed to activate image. Got error: %s" msgstr "未能激活映åƒã€‚å‘生错误:%s" #, python-format msgid "Failed to add image metadata. Got error: %s" msgstr "未能添加映åƒå…ƒæ•°æ®ã€‚å‘生错误:%s" #, python-format msgid "Failed to find image %(image_id)s to delete" msgstr "未能找到è¦åˆ é™¤çš„æ˜ åƒ %(image_id)s" #, python-format msgid "Failed to find image to delete: %s" msgstr "未能找到è¦åˆ é™¤çš„æ˜ åƒï¼š%s" #, python-format msgid "Failed to find image to update: %s" msgstr "找ä¸åˆ°è¦æ›´æ–°çš„æ˜ åƒï¼š%s" #, python-format msgid "Failed to find resource type %(resourcetype)s to delete" msgstr "找ä¸åˆ°è¦åˆ é™¤çš„资æºç±»åž‹ %(resourcetype)s" #, python-format msgid "Failed to initialize the image cache database. Got error: %s" msgstr "未能åˆå§‹åŒ–映åƒé«˜é€Ÿç¼“存数æ®åº“。å‘生错误:%s" #, python-format msgid "Failed to read %s from config" msgstr "未能从é…ç½®è¯»å– %s" #, python-format msgid "Failed to reserve image. 
Got error: %s" msgstr "未能ä¿ç•™æ˜ åƒã€‚å‘生错误:%s" #, python-format msgid "Failed to update image metadata. Got error: %s" msgstr "未能更新映åƒå…ƒæ•°æ®ã€‚å‘生错误:%s" #, python-format msgid "Failed to upload image %s" msgstr "ä¸Šä¼ é•œåƒ %s失败" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to HTTP error: " "%(error)s" msgstr "由于 HTTP é”™è¯¯ï¼Œæœªèƒ½ä¸Šè½½æ˜ åƒ %(image_id)s çš„æ˜ åƒæ•°æ®ï¼š%(error)s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to internal error: " "%(error)s" msgstr "ç”±äºŽå†…éƒ¨é”™è¯¯ï¼Œæœªèƒ½ä¸Šè½½æ˜ åƒ %(image_id)s çš„æ˜ åƒæ•°æ®ï¼š%(error)s" #, python-format msgid "File %(path)s has invalid backing file %(bfile)s, aborting." msgstr "文件 %(path)s å…·æœ‰æ— æ•ˆæ”¯æŒæ–‡ä»¶ %(bfile)s,正在异常中止。" msgid "" "File based imports are not allowed. Please use a non-local source of image " "data." msgstr "ä¸å…è®¸åŸºäºŽæ–‡ä»¶çš„å¯¼å…¥ã€‚è¯·ä½¿ç”¨æ˜ åƒæ•°æ®çš„éžæœ¬åœ°æºã€‚" msgid "Forbidden image access" msgstr "ç¦æ­¢è®¿é—®æ˜ åƒ" #, python-format msgid "Forbidden to delete a %s image." msgstr "å·²ç¦æ­¢å¯¹æ˜ åƒ%s进行删除。" #, python-format msgid "Forbidden to delete image: %s" msgstr "å·²ç¦æ­¢åˆ é™¤æ˜ åƒï¼š%s" #, python-format msgid "Forbidden to modify '%(key)s' of %(status)s image." msgstr "ç¦æ­¢ä¿®æ”¹ %(status)s 映åƒçš„“%(key)sâ€" #, python-format msgid "Forbidden to modify '%s' of image." msgstr "å·²ç¦æ­¢ä¿®æ”¹æ˜ åƒçš„“%sâ€ã€‚" msgid "Forbidden to reserve image." msgstr "å·²ç¦æ­¢ä¿ç•™æ˜ åƒã€‚" msgid "Forbidden to update deleted image." msgstr "å·²ç¦æ­¢æ›´æ–°åˆ é™¤çš„æ˜ åƒã€‚" #, python-format msgid "Forbidden to update image: %s" msgstr "å·²ç¦æ­¢æ›´æ–°æ˜ åƒï¼š%s" #, python-format msgid "Forbidden upload attempt: %s" msgstr "å·²ç¦æ­¢è¿›è¡Œä¸Šè½½å°è¯•:%s" #, python-format msgid "Forbidding request, metadata definition namespace=%s is not visible." 
msgstr "æ­£åœ¨ç¦æ­¢è¯·æ±‚,元数æ®å®šä¹‰å称空间 %s ä¸å¯è§†ã€‚" #, python-format msgid "Forbidding request, task %s is not visible" msgstr "æ­£åœ¨ç¦æ­¢è¯·æ±‚,任务 %s ä¸å¯è§†" msgid "Format of the container" msgstr "容器的格å¼" msgid "Format of the disk" msgstr "ç£ç›˜æ ¼å¼" #, python-format msgid "Host \"%s\" is not valid." msgstr "主机“%sâ€æ— æ•ˆã€‚" #, python-format msgid "Host and port \"%s\" is not valid." msgstr "主机和端å£â€œ%sâ€æ— æ•ˆã€‚" msgid "" "Human-readable informative message only included when appropriate (usually " "on failure)" msgstr "人工å¯è¯»çš„ä¿¡æ¯æ€§æ¶ˆæ¯ï¼Œä»…在适当时(通常在å‘生故障时)æ‰åŒ…括" msgid "If true, image will not be deletable." msgstr "如果为 true,那么映åƒå°†ä¸å¯åˆ é™¤ã€‚" msgid "If true, namespace will not be deletable." msgstr "如果为 true,那么å称空间将ä¸å¯åˆ é™¤ã€‚" #, python-format msgid "Image %(id)s could not be deleted because it is in use: %(exc)s" msgstr "æ˜ åƒ %(id)s 未能删除,因为它正在使用中:%(exc)s" #, python-format msgid "Image %(id)s not found" msgstr "找ä¸åˆ°æ˜ åƒ %(id)s" #, python-format msgid "" "Image %(image_id)s could not be found after upload. The image may have been " "deleted during the upload: %(error)s" msgstr "镜åƒ%(image_id)sä¸Šä¼ åŽæ— æ³•找到。镜åƒåœ¨ä¸Šä¼ è¿‡ç¨‹ä¸­å¯èƒ½è¢«åˆ é™¤: %(error)s" #, python-format msgid "Image %(image_id)s is protected and cannot be deleted." msgstr "æ˜ åƒ %(image_id)s å—ä¿æŠ¤ï¼Œæ— æ³•åˆ é™¤ã€‚" #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload, cleaning up the chunks uploaded." msgstr "" "在上载之åŽï¼Œæ‰¾ä¸åˆ°æ˜ åƒ %s。å¯èƒ½å·²åœ¨ä¸Šè½½æœŸé—´åˆ é™¤è¯¥æ˜ åƒï¼Œæ­£åœ¨æ¸…除已上载的区å—。" #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload." msgstr "ä¸Šä¼ åŽæ‰¾ä¸åˆ°æ˜ åƒ %s。此映åƒå¯èƒ½å·²åœ¨ä¸Šä¼ æœŸé—´åˆ é™¤ã€‚" #, python-format msgid "Image %s is deactivated" msgstr "æ˜ åƒ %s 已喿¶ˆæ¿€æ´»" #, python-format msgid "Image %s is not active" msgstr "æ˜ åƒ %s å¤„äºŽä¸æ´»åŠ¨çŠ¶æ€" #, python-format msgid "Image %s not found." 
msgstr "找不到映像 %s " #, python-format msgid "Image exceeds the storage quota: %s" msgstr "镜像超出存储限额: %s" msgid "Image id is required." msgstr "需要映像标识。" msgid "Image is protected" msgstr "映像受保护" #, python-format msgid "Image member limit exceeded for image %(id)s: %(e)s:" msgstr "对于映像 %(id)s,超过映像成员限制:%(e)s:" #, python-format msgid "Image name too long: %d" msgstr "映像名称太长:%d" msgid "Image operation conflicts" msgstr "映像操作发生冲突" #, python-format msgid "" "Image status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "不允许映像状态从 %(cur_status)s 转变为 %(new_status)s" #, python-format msgid "Image storage media is full: %s" msgstr "映像存储介质已满:%s" #, python-format msgid "Image tag limit exceeded for image %(id)s: %(e)s:" msgstr "对于映像 %(id)s,超过映像标记限制:%(e)s:" #, python-format msgid "Image upload problem: %s" msgstr "发生映像上载问题:%s" #, python-format msgid "Image with identifier %s already exists!" msgstr "具有标识 %s 的映像已存在!" #, python-format msgid "Image with identifier %s has been deleted." 
msgstr "已删除具有标识 %s 的映像。" #, python-format msgid "Image with identifier %s not found" msgstr "找不到具有标识 %s 的映像" #, python-format msgid "Image with the given id %(image_id)s was not found" msgstr "找不到具有所给定标识 %(image_id)s 的映像" #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "授权策略不正确,期望的是“%(expected)s”,但接收到的是“%(received)s”" #, python-format msgid "Incorrect request: %s" msgstr "以下请求不正确:%s" #, python-format msgid "Input does not contain '%(key)s' field" msgstr "输入没有包含“%(key)s”字段" #, python-format msgid "Insufficient permissions on image storage media: %s" msgstr "对映像存储介质的许可权不足:%s" #, python-format msgid "Invalid JSON pointer for this resource: '/%s'" msgstr "这个资源无效的JSON指针: '/%s'" #, python-format msgid "Invalid checksum '%s': can't exceed 32 characters" msgstr "校验和“%s”无效:不得超过 32 个字符" msgid "Invalid configuration in glance-swift conf file." msgstr "glance-swift 配置文件中的配置无效。" msgid "Invalid configuration in property protection file." msgstr "属性保护文件中的配置无效。" #, python-format msgid "Invalid container format '%s' for image." msgstr "对于映像,容器格式“%s”无效。" #, python-format msgid "Invalid content type %(content_type)s" msgstr "内容类型 %(content_type)s 无效" #, python-format msgid "Invalid disk format '%s' for image." msgstr "对于映像,磁盘格式“%s”无效。" #, python-format msgid "Invalid filter value %s. The quote is not closed." msgstr "无效过滤器值 %s。缺少右引号。" #, python-format msgid "" "Invalid filter value %s. There is no comma after closing quotation mark." msgstr "无效过滤器值 %s。右引号之后没有逗号。" #, python-format msgid "" "Invalid filter value %s. There is no comma before opening quotation mark." 
msgstr "无效过滤器值 %s。左引号之前没有逗号。" msgid "Invalid image id format" msgstr "映像标识格式无效" #, python-format msgid "Invalid int value for age_in_days: %(age_in_days)s" msgstr "age_in_days的无效整形值:%(age_in_days)s" #, python-format msgid "Invalid int value for max_rows: %(max_rows)s" msgstr "max_rows的无效整形值:%(max_rows)s" msgid "Invalid location" msgstr "无效的位置" #, python-format msgid "Invalid location %s" msgstr "位置 %s 无效" #, python-format msgid "Invalid location: %s" msgstr "以下位置无效:%s" #, python-format msgid "" "Invalid location_strategy option: %(name)s. The valid strategy option(s) " "is(are): %(strategies)s" msgstr "location_strategy 选项 %(name)s 无效。有效策略选项如下:%(strategies)s" msgid "Invalid locations" msgstr "无效的位置" #, python-format msgid "Invalid locations: %s" msgstr "无效的位置:%s" msgid "Invalid marker format" msgstr "标记符格式无效" msgid "Invalid marker. Image could not be found." msgstr "标记符无效。找不到映像。" #, python-format msgid "Invalid membership association: %s" msgstr "成员资格关联无效:%s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "磁盘格式与容器格式的混合无效。将磁盘格式或容器格式设置" "为“aki”、“ari”或“ami”时,容器格式与磁盘格式必须匹配。" #, python-format msgid "" "Invalid operation: `%(op)s`. It must be one of the following: %(available)s." msgstr "操作“%(op)s”无效。它必须是下列其中一项:%(available)s。" msgid "Invalid position for adding a location." msgstr "用于添加位置 (location) 的位置 (position) 无效。" msgid "Invalid position for removing a location." msgstr "用于移除位置 (location) 的位置 (position) 无效。" msgid "Invalid service catalog json." msgstr "服务目录 json 无效。" #, python-format msgid "Invalid sort direction: %s" msgstr "排序方向无效:%s" #, python-format msgid "" "Invalid sort key: %(sort_key)s. 
It must be one of the following: " "%(available)s." msgstr "以下排序键无效:%(sort_key)s。它必须是下列其中一项:%(available)s。" #, python-format msgid "Invalid status value: %s" msgstr "状态值 %s 无效" #, python-format msgid "Invalid status: %s" msgstr "状态无效:%s" #, python-format msgid "Invalid time format for %s." msgstr "对于 %s,此时间格式无效。" #, python-format msgid "Invalid type value: %s" msgstr "类型值 %s 无效" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition namespace " "with the same name of %s" msgstr "" "更新无效。它将导致出现重复的元数据定义名称空间,该名称空间具有同一名称 %s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "更新无效。它将导致在名称空间 %(namespace_name)s 中出现重复的元数据定义对象," "该对象具有同一名称 %(name)s。" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition property " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "更新无效。它将导致在名称空间 %(namespace_name)s 中出现重复的元数据定义属性," "该属性具有同一名称 %(name)s。" #, python-format msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s" msgstr "参数“%(param)s”的值“%(value)s”无效:%(extra_msg)s" #, python-format msgid "Invalid value for option %(option)s: %(value)s" msgstr "选项 %(option)s 的以下值无效:%(value)s" #, python-format msgid "Invalid visibility value: %s" msgstr "可视性值无效:%s" msgid "It's invalid to provide multiple image sources." msgstr "提供多个镜像源无效" #, python-format msgid "It's not allowed to add locations if image status is %s." 
msgstr "如果镜åƒçжæ€ä¸º %s,则ä¸å…许添加ä½ç½®ã€‚" msgid "It's not allowed to add locations if locations are invisible." msgstr "ä¸å…许添加ä¸å¯è§†çš„ä½ç½®ã€‚" msgid "It's not allowed to remove locations if locations are invisible." msgstr "ä¸å…许移除ä¸å¯è§†çš„ä½ç½®ã€‚" msgid "It's not allowed to update locations if locations are invisible." msgstr "ä¸å…许更新ä¸å¯è§†çš„ä½ç½®ã€‚" msgid "List of strings related to the image" msgstr "与映åƒç›¸å…³çš„字符串的列表" msgid "Malformed JSON in request body." msgstr "请求主体中 JSON 的格å¼ä¸æ­£ç¡®ã€‚" msgid "Maximal age is count of days since epoch." msgstr "最大年龄是自新纪元开始计算的天数。" #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "已超过最大é‡å®šå‘次数 (%(redirects)s)。" #, python-format msgid "Member %(member_id)s is duplicated for image %(image_id)s" msgstr "å¯¹äºŽæ˜ åƒ %(image_id)s,已å¤åˆ¶æˆå‘˜ %(member_id)s" msgid "Member can't be empty" msgstr "æˆå‘˜ä¸èƒ½ä¸ºç©º" msgid "Member to be added not specified" msgstr "æœªæŒ‡å®šè¦æ·»åŠ çš„æˆå‘˜" msgid "Membership could not be found." msgstr "找ä¸åˆ°æˆå‘˜èµ„格。" #, python-format msgid "" "Metadata definition namespace %(namespace)s is protected and cannot be " "deleted." msgstr "元数æ®å®šä¹‰å称空间 %(namespace)s å—ä¿æŠ¤ï¼Œæ— æ³•åˆ é™¤ã€‚" #, python-format msgid "Metadata definition namespace not found for id=%s" msgstr "对于标识 %s,找ä¸åˆ°å…ƒæ•°æ®å®šä¹‰å称空间" #, python-format msgid "" "Metadata definition object %(object_name)s is protected and cannot be " "deleted." msgstr "元数æ®å®šä¹‰å¯¹è±¡ %(object_name)s å—ä¿æŠ¤ï¼Œæ— æ³•åˆ é™¤ã€‚" #, python-format msgid "Metadata definition object not found for id=%s" msgstr "对于标识 %s,找ä¸åˆ°å…ƒæ•°æ®å®šä¹‰å¯¹è±¡" #, python-format msgid "" "Metadata definition property %(property_name)s is protected and cannot be " "deleted." 
msgstr "元数æ®å®šä¹‰å±žæ€§ %(property_name)s å—ä¿æŠ¤ï¼Œæ— æ³•åˆ é™¤ã€‚" #, python-format msgid "Metadata definition property not found for id=%s" msgstr "对于标识 %s,找ä¸åˆ°å…ƒæ•°æ®å®šä¹‰å±žæ€§" #, python-format msgid "" "Metadata definition resource-type %(resource_type_name)s is a seeded-system " "type and cannot be deleted." msgstr "元数æ®å®šä¹‰èµ„æºç±»åž‹ %(resource_type_name)s 是ç§å­åž‹ç³»ç»Ÿç±»åž‹ï¼Œæ— æ³•删除。" #, python-format msgid "" "Metadata definition resource-type-association %(resource_type)s is protected " "and cannot be deleted." msgstr "元数æ®å®šä¹‰èµ„æºç±»åž‹å…³è” %(resource_type)s å—ä¿æŠ¤ï¼Œæ— æ³•åˆ é™¤ã€‚" #, python-format msgid "" "Metadata definition tag %(tag_name)s is protected and cannot be deleted." msgstr "元数æ®å®šä¹‰æ ‡è®° %(tag_name)s å—ä¿æŠ¤ï¼Œæ— æ³•åˆ é™¤ã€‚" #, python-format msgid "Metadata definition tag not found for id=%s" msgstr "对于标识 %s,找ä¸åˆ°å…ƒæ•°æ®å®šä¹‰æ ‡è®°" msgid "Minimal rows limit is 1." msgstr "最å°è¡Œæ•°é™åˆ¶ä¸º 1。" #, python-format msgid "Missing required credential: %(required)s" msgstr "缺少必需凭è¯ï¼š%(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "对于区域 %(region)s,存在多个“映åƒâ€æœåŠ¡åŒ¹é…项。这通常æ„味ç€éœ€è¦åŒºåŸŸå¹¶ä¸”尚未æ" "供一个区域。" msgid "No authenticated user" msgstr "ä¸å­˜åœ¨ä»»ä½•已认è¯çš„用户" #, python-format msgid "No image found with ID %s" msgstr "找ä¸åˆ°ä»»ä½•具有标识 %s 的映åƒ" #, python-format msgid "No location found with ID %(loc)s from image %(img)s" msgstr "åœ¨æ˜ åƒ %(img)s 中找ä¸åˆ°æ ‡è¯†ä¸º %(loc)s çš„ä½ç½®" msgid "No permission to share that image" msgstr "ä¸å­˜åœ¨ä»»ä½•用于共享该映åƒçš„è®¸å¯æƒ" #, python-format msgid "Not allowed to create members for image %s." msgstr "ä¸å…è®¸ä¸ºæ˜ åƒ %s 创建æˆå‘˜ã€‚" #, python-format msgid "Not allowed to deactivate image in status '%s'" msgstr "ä¸å…è®¸å–æ¶ˆæ¿€æ´»çжæ€ä¸ºâ€œ%sâ€çš„æ˜ åƒ" #, python-format msgid "Not allowed to delete members for image %s." 
msgstr "ä¸å…è®¸ä¸ºæ˜ åƒ %s 删除æˆå‘˜ã€‚" #, python-format msgid "Not allowed to delete tags for image %s." msgstr "ä¸å…è®¸ä¸ºæ˜ åƒ %s 删除标记。" #, python-format msgid "Not allowed to list members for image %s." msgstr "ä¸å…è®¸ä¸ºæ˜ åƒ %s 列示æˆå‘˜ã€‚" #, python-format msgid "Not allowed to reactivate image in status '%s'" msgstr "ä¸å…è®¸é‡æ–°æ¿€æ´»çжæ€ä¸ºâ€œ%sâ€çš„æ˜ åƒ" #, python-format msgid "Not allowed to update members for image %s." msgstr "ä¸å…è®¸ä¸ºæ˜ åƒ %s æ›´æ–°æˆå‘˜ã€‚" #, python-format msgid "Not allowed to update tags for image %s." msgstr "ä¸å…è®¸ä¸ºæ˜ åƒ %s 更新标记。" #, python-format msgid "Not allowed to upload image data for image %(image_id)s: %(error)s" msgstr "ä¸å…许为镜åƒ%(image_id)s上传数æ®:%(error)s" msgid "Number of sort dirs does not match the number of sort keys" msgstr "æŽ’åºæ–¹å‘数与排åºé”®æ•°ä¸åŒ¹é…" msgid "OVA extract is limited to admin" msgstr "OVA æŠ½å–æ“作仅é™ç®¡ç†å‘˜æ‰§è¡Œ" msgid "Old and new sorting syntax cannot be combined" msgstr "æ— æ³•ç»„åˆæ–°æ—§æŽ’åºè¯­æ³•" msgid "Only shared images have members." msgstr "åªæœ‰å·²å…±äº«çš„é•œåƒæ‹¥æœ‰æˆå‘˜." #, python-format msgid "Operation \"%s\" requires a member named \"value\"." msgstr "æ“作“%sâ€éœ€è¦å为“valueâ€çš„æˆå‘˜ã€‚" msgid "" "Operation objects must contain exactly one member named \"add\", \"remove\", " "or \"replace\"." msgstr "æ“作对象必须刚好包å«ä¸€ä¸ªå为“addâ€ã€â€œremoveâ€æˆ–“replaceâ€çš„æˆå‘˜ã€‚" msgid "" "Operation objects must contain only one member named \"add\", \"remove\", or " "\"replace\"." msgstr "æ“作对象必须仅包å«ä¸€ä¸ªå为“addâ€ã€â€œremoveâ€æˆ–“replaceâ€çš„æˆå‘˜ã€‚" msgid "Operations must be JSON objects." msgstr "æ“作必须是 JSON 对象。" #, python-format msgid "Original locations is not empty: %s" msgstr "原ä½ç½®ä¸ä¸ºç©º: %s" msgid "Owner can't be updated by non admin." msgstr "éžç®¡ç†å‘˜æ— æ³•更新所有者。" msgid "Owner must be specified to create a tag." msgstr "必须指定所有者,æ‰èƒ½åˆ›å»ºæ ‡è®°ã€‚" msgid "Owner of the image" msgstr "映åƒçš„æ‰€æœ‰è€…" msgid "Owner of the namespace." 
msgstr "å称空间的所有者。" msgid "Param values can't contain 4 byte unicode." msgstr "傿•°å€¼ä¸èƒ½åŒ…å« 4 字节 Unicode。" #, python-format msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence." msgstr "指针“%sâ€åŒ…å«å¹¶éžå¯è¯†åˆ«è½¬ä¹‰åºåˆ—的一部分的“~â€ã€‚" #, python-format msgid "Pointer `%s` contains adjacent \"/\"." msgstr "指针`%s` 包å«è¿žæŽ¥ç¬¦\"/\"." #, python-format msgid "Pointer `%s` does not contains valid token." msgstr "指针`%s` æ²¡æœ‰åŒ…å«æœ‰æ•ˆçš„å£ä»¤" #, python-format msgid "Pointer `%s` does not start with \"/\"." msgstr "指针“%sâ€æ²¡æœ‰ä»¥â€œ/â€å¼€å¤´ã€‚" #, python-format msgid "Pointer `%s` end with \"/\"." msgstr "指针`%s` 以\"/\"结æŸ." #, python-format msgid "Port \"%s\" is not valid." msgstr "端å£â€œ%sâ€æ— æ•ˆã€‚" #, python-format msgid "Process %d not running" msgstr "进程 %d 未在è¿è¡Œ" #, python-format msgid "Properties %s must be set prior to saving data." msgstr "必须在ä¿å­˜æ•°æ®ä¹‹å‰è®¾ç½®å±žæ€§ %s。" #, python-format msgid "" "Property %(property_name)s does not start with the expected resource type " "association prefix of '%(prefix)s'." msgstr "属性 %(property_name)s 未以需è¦çš„资æºç±»åž‹å…³è”å‰ç¼€â€œ%(prefix)sâ€å¼€å¤´ã€‚" #, python-format msgid "Property %s already present." msgstr "属性 %s 已存在。" #, python-format msgid "Property %s does not exist." msgstr "属性 %s ä¸å­˜åœ¨ã€‚" #, python-format msgid "Property %s may not be removed." msgstr "无法除去属性 %s。" #, python-format msgid "Property %s must be set prior to saving data." msgstr "必须在ä¿å­˜æ•°æ®ä¹‹å‰è®¾ç½®å±žæ€§ %s。" #, python-format msgid "Property '%s' is protected" msgstr "属性“%sâ€å—ä¿æŠ¤" msgid "Property names can't contain 4 byte unicode." msgstr "属性åç§°ä¸èƒ½åŒ…å« 4 字节 Unicode。" #, python-format msgid "" "Provided image size must match the stored image size. 
(provided size: " "%(ps)d, stored size: %(ss)d)" msgstr "" "æä¾›çš„æ˜ åƒå¤§å°å¿…须与存储的映åƒå¤§å°åŒ¹é…。(æä¾›çš„大å°ä¸º %(ps)d,存储的大å°ä¸º " "%(ss)d)" #, python-format msgid "Provided object does not match schema '%(schema)s': %(reason)s" msgstr "æä¾›çš„对象与模å¼â€œ%(schema)sâ€ä¸åŒ¹é…:%(reason)s" #, python-format msgid "Provided status of task is unsupported: %(status)s" msgstr "䏿”¯æŒä»»åŠ¡çš„æ‰€æä¾›çжæ€ï¼š%(status)s" #, python-format msgid "Provided type of task is unsupported: %(type)s" msgstr "䏿”¯æŒä»»åŠ¡çš„æ‰€æä¾›ç±»åž‹ï¼š%(type)s" msgid "Provides a user friendly description of the namespace." msgstr "æä¾›å称空间的用户å‹å¥½æè¿°ã€‚" msgid "Received invalid HTTP redirect." msgstr "接收到无效 HTTP é‡å®šå‘。" #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "对于授æƒï¼Œæ­£åœ¨é‡å®šå‘至 %(uri)s。" #, python-format msgid "Registry service can't use %s" msgstr "注册æœåŠ¡æ— æ³•ä½¿ç”¨ %s" #, python-format msgid "Registry was not configured correctly on API server. Reason: %(reason)s" msgstr "API æœåŠ¡å™¨ä¸Šæœªæ­£ç¡®é…置注册表。原因:%(reason)s" #, python-format msgid "Reload of %(serv)s not supported" msgstr "䏿”¯æŒé‡æ–°è£…å…¥ %(serv)s" #, python-format msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "æ­£åœ¨é‡æ–°è£…å…¥ %(serv)s(pid 为 %(pid)s),信å·ä¸º (%(sig)s)" #, python-format msgid "Removing stale pid file %s" msgstr "移除原有pid文件%s" msgid "Request body must be a JSON array of operation objects." msgstr "请求主体必须是由æ“作对象组æˆçš„ JSON 数组。" msgid "Request must be a list of commands" msgstr "请求必须为命令列表" #, python-format msgid "Required store %s is invalid" msgstr "必需的存储器 %s 无效" msgid "" "Resource type names should be aligned with Heat resource types whenever " "possible: http://docs.openstack.org/developer/heat/template_guide/openstack." "html" msgstr "" "资æºç±»åž‹å称应该尽å¯èƒ½ä¸Ž Heat 资æºç±»åž‹å¯¹é½ï¼šhttp://docs.openstack.org/" "developer/heat/template_guide/openstack.html" msgid "Response from Keystone does not contain a Glance endpoint." 
msgstr "æ¥è‡ª Keystone çš„å“åº”æ²¡æœ‰åŒ…å« Glance 端点。" msgid "Scope of image accessibility" msgstr "映åƒè¾…助功能选项的作用域" msgid "Scope of namespace accessibility." msgstr "å称空间辅助功能选项的作用域。" #, python-format msgid "Server %(serv)s is stopped" msgstr "æœåС噍 %(serv)s å·²åœæ­¢" #, python-format msgid "Server worker creation failed: %(reason)s." msgstr "æœåŠ¡å™¨å·¥ä½œç¨‹åºåˆ›å»ºå¤±è´¥ï¼š%(reason)s。" msgid "Signature verification failed" msgstr "ç­¾å认è¯å¤±è´¥" msgid "Size of image file in bytes" msgstr "æ˜ åƒæ–‡ä»¶çš„大å°ï¼Œä»¥å­—节计" msgid "" "Some resource types allow more than one key / value pair per instance. For " "example, Cinder allows user and image metadata on volumes. Only the image " "properties metadata is evaluated by Nova (scheduling or drivers). This " "property allows a namespace target to remove the ambiguity." msgstr "" "一些资æºç±»åž‹å…许æ¯ä¸ªå®žä¾‹å…·æœ‰å¤šä¸ªâ€œé”®/值â€å¯¹ã€‚例如,Cinder å…许å·ä¸Šçš„用户元数æ®" "和映åƒå…ƒæ•°æ®ã€‚仅映åƒå±žæ€§å…ƒæ•°æ®æ˜¯é€šè¿‡ Nova(调度或驱动程åºï¼‰æ±‚值。此属性å…许å" "称空间目标除去ä¸ç¡®å®šæ€§ã€‚" msgid "Sort direction supplied was not valid." msgstr "æä¾›çš„æŽ’åºæ–¹å‘无效。" msgid "Sort key supplied was not valid." msgstr "æä¾›çš„æŽ’åºé”®æ— æ•ˆã€‚" msgid "" "Specifies the prefix to use for the given resource type. Any properties in " "the namespace should be prefixed with this prefix when being applied to the " "specified resource type. Must include prefix separator (e.g. a colon :)." msgstr "" "指定è¦ç”¨äºŽç»™å®šçš„资æºç±»åž‹çš„å‰ç¼€ã€‚当应用于指定的资æºç±»åž‹æ—¶ï¼Œå称空间中的任何属" "性都应该使用此å‰ç¼€ä½œä¸ºå‰ç¼€ã€‚必须包括å‰ç¼€åˆ†éš”ç¬¦ï¼ˆä¾‹å¦‚å†’å· :)。" msgid "Status must be \"pending\", \"accepted\" or \"rejected\"." 
msgstr "状æ€å¿…须为“暂挂â€ã€â€œå·²æŽ¥å—â€æˆ–“已拒ç»â€ã€‚" msgid "Status not specified" msgstr "未指定状æ€" msgid "Status of the image" msgstr "映åƒçš„状æ€" #, python-format msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "ä¸å…许状æ€ä»Ž %(cur_status)s 转å˜ä¸º %(new_status)s" #, python-format msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "æ­£åœ¨é€šè¿‡ä¿¡å· (%(sig)s) åœæ­¢ %(serv)s (pid %(pid)s)" #, python-format msgid "Store for image_id not found: %s" msgstr "找ä¸åˆ°ç”¨äºŽ image_id 的存储器:%s" #, python-format msgid "Store for scheme %s not found" msgstr "找ä¸åˆ°ç”¨äºŽæ–¹æ¡ˆ %s 的存储器" #, python-format msgid "" "Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image status to 'killed'." msgstr "" "æä¾›çš„ %(attr)s (%(supplied)s) ä¸Žæ‰€ä¸Šè½½æ˜ åƒ (%(actual)s) 生æˆçš„ %(attr)s ä¸åŒ¹" "é…。正在将映åƒçжæ€è®¾ç½®ä¸ºâ€œå·²ç»ˆæ­¢â€ã€‚" msgid "Supported values for the 'container_format' image attribute" msgstr "“container_formatâ€æ˜ åƒå±žæ€§æ”¯æŒçš„值" msgid "Supported values for the 'disk_format' image attribute" msgstr "“disk_formatâ€æ˜ åƒå±žæ€§æ”¯æŒçš„值" #, python-format msgid "Suppressed respawn as %(serv)s was %(rsn)s." msgstr "å·²é˜»æ­¢é‡æ–°è¡ç”Ÿï¼Œå› ä¸º %(serv)s 为 %(rsn)s。" msgid "System SIGHUP signal received." msgstr "接收到系统 SIGHUP ä¿¡å·ã€‚" #, python-format msgid "Task '%s' is required" msgstr "需è¦ä»»åŠ¡â€œ%sâ€" msgid "Task does not exist" msgstr "任务ä¸å­˜åœ¨" msgid "Task failed due to Internal Error" msgstr "由于å‘生内部错误而导致任务失败" msgid "Task was not configured properly" msgstr "任务未正确é…ç½®" #, python-format msgid "Task with the given id %(task_id)s was not found" msgstr "找ä¸åˆ°å…·æœ‰ç»™å®šæ ‡è¯† %(task_id)s 的任务" msgid "The \"changes-since\" filter is no longer available on v2." 
msgstr "“changes-sinceâ€è¿‡æ»¤å™¨åœ¨ v2 上ä¸å†å¯ç”¨ã€‚" #, python-format msgid "The CA file you specified %s does not exist" msgstr "已指定的 CA 文件 %s ä¸å­˜åœ¨" #, python-format msgid "" "The Image %(image_id)s object being created by this task %(task_id)s, is no " "longer in valid status for further processing." msgstr "" "此任务 %(task_id)s æ­£åœ¨åˆ›å»ºçš„æ˜ åƒ %(image_id)s 对象ä¸å†å¤„于有效状æ€ï¼Œæ— æ³•进一" "步处ç†ã€‚" msgid "The Store URI was malformed." msgstr "存储器 URI 的格å¼ä¸æ­£ç¡®ã€‚" msgid "" "The URL to the keystone service. If \"use_user_token\" is not in effect and " "using keystone auth, then URL of keystone can be specified." msgstr "" "keystone æœåŠ¡çš„ URL。如果“use_user_tokenâ€æ²¡æœ‰ç”Ÿæ•ˆå¹¶ä¸”正在使用 keystone 认è¯ï¼Œ" "é‚£ä¹ˆå¯æŒ‡å®š keystone çš„ URL。" msgid "" "The administrators password. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "管ç†å‘˜å¯†ç ã€‚如果“use_user_tokenâ€æ²¡æœ‰ç”Ÿæ•ˆï¼Œé‚£ä¹ˆå¯æŒ‡å®šç®¡ç†å‡­è¯ã€‚" msgid "" "The administrators user name. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "管ç†å‘˜ç”¨æˆ·å。如果“use_user_tokenâ€æ²¡æœ‰ç”Ÿæ•ˆï¼Œé‚£ä¹ˆå¯æŒ‡å®šç®¡ç†å‡­è¯ã€‚" #, python-format msgid "The cert file you specified %s does not exist" msgstr "已指定的è¯ä¹¦æ–‡ä»¶ %s ä¸å­˜åœ¨" msgid "The current status of this task" msgstr "此任务的当å‰çжæ€" #, python-format msgid "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." msgstr "" "存放映åƒé«˜é€Ÿç¼“存目录 %(image_cache_dir)s çš„è®¾å¤‡ä¸æ”¯æŒ xattr。您å¯èƒ½éœ€è¦ç¼–辑 " "fstab å¹¶å°† user_xattr 选项添加至存放该高速缓存目录的设备的相应行。" #, python-format msgid "" "The given uri is not valid. 
Please specify a valid uri from the following " "list of supported uri %(supported)s" msgstr "" "给定的 URI æ— æ•ˆã€‚è¯·ä»Žå—æ”¯æŒçš„ URI %(supported)s 的以下列表中指定有效 URI" #, python-format msgid "The incoming image is too large: %s" msgstr "引入的映åƒå¤ªå¤§ï¼š%s" #, python-format msgid "The key file you specified %s does not exist" msgstr "已指定的密钥文件 %s ä¸å­˜åœ¨" #, python-format msgid "" "The limit has been exceeded on the number of allowed image locations. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "已超过关于å…许的映åƒä½ç½®æ•°çš„é™åˆ¶ã€‚å·²å°è¯•:%(attempted)s,最大值:%(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "已超过关于å…è®¸çš„æ˜ åƒæˆå‘˜æ•°ï¼ˆå¯¹äºŽæ­¤æ˜ åƒï¼‰çš„é™åˆ¶ã€‚å·²å°è¯•:%(attempted)s,最大" "值:%(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "已超过关于å…许的映åƒå±žæ€§æ•°çš„é™åˆ¶ã€‚å·²å°è¯•:%(attempted)s,最大值:%(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(num)s, Maximum: %(quota)s" msgstr "已超过关于å…许的映åƒå±žæ€§æ•°çš„é™åˆ¶ã€‚å·²å°è¯•:%(num)s,最大值:%(quota)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image tags. Attempted: " "%(attempted)s, Maximum: %(maximum)s" msgstr "" "已超过关于å…è®¸çš„æ˜ åƒæ ‡è®°æ•°çš„é™åˆ¶ã€‚å·²å°è¯•:%(attempted)s,最大值:%(maximum)s" #, python-format msgid "The location %(location)s already exists" msgstr "ä½ç½® %(location)s 已存在" #, python-format msgid "The location data has an invalid ID: %d" msgstr "ä½ç½®æ•°æ®å…·æœ‰æ— æ•ˆæ ‡è¯†ï¼š%d" #, python-format msgid "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. Other records still refer to it." 
msgstr "" "未删除å称为 %(record_name)s 的元数æ®å®šä¹‰ %(record_type)s。其他记录ä»ç„¶å¯¹å…¶è¿›" "行引用。" #, python-format msgid "The metadata definition namespace=%(namespace_name)s already exists." msgstr "元数æ®å®šä¹‰å称空间 %(namespace_name)s 已存在。" #, python-format msgid "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." msgstr "" "在å称空间 %(namespace_name)s 中,找ä¸åˆ°å称为 %(object_name)s 的元数æ®å®šä¹‰å¯¹" "象。" #, python-format msgid "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." msgstr "" "在å称空间 %(namespace_name)s 中,找ä¸åˆ°å称为 %(property_name)s 的元数æ®å®šä¹‰" "属性。" #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." msgstr "" "已存在以下两者的元数æ®å®šä¹‰èµ„æºç±»åž‹å…³è”:资æºç±»åž‹ %(resource_type_name)s 与å" "称空间 %(namespace_name)s。" #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." msgstr "" "找ä¸åˆ°ä»¥ä¸‹ä¸¤è€…的元数æ®å®šä¹‰èµ„æºç±»åž‹å…³è”:资æºç±»åž‹ %(resource_type_name)s 与å" "称空间 %(namespace_name)s。" #, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "找ä¸åˆ°å称为 %(resource_type_name)s 的元数æ®å®šä¹‰èµ„æºç±»åž‹ã€‚" #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "在å称空间 %(namespace_name)s 中,找ä¸åˆ°å称为 %(name)s 的元数æ®å®šä¹‰æ ‡è®°ã€‚" msgid "The parameters required by task, JSON blob" msgstr "任务 JSON blob æ‰€éœ€çš„å‚æ•°" msgid "The provided image is too large." msgstr "æä¾›çš„æ˜ åƒå¤ªå¤§ã€‚" msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." 
msgstr "" "ç”¨äºŽè®¤è¯æœåŠ¡çš„åŒºåŸŸã€‚å¦‚æžœâ€œuse_user_tokenâ€æ²¡æœ‰ç”Ÿæ•ˆå¹¶ä¸”正在使用 keystone 认è¯ï¼Œ" "é‚£ä¹ˆå¯æŒ‡å®šåŒºåŸŸå称。" msgid "The request returned 500 Internal Server Error." msgstr "该请求返回了“500 内部æœåŠ¡å™¨é”™è¯¯â€ã€‚" msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." msgstr "" "该请求返回了“503 æœåŠ¡ä¸å¯ç”¨â€ã€‚这通常在æœåŠ¡è¶…è´Ÿè·æˆ–其他瞬æ€åœæ­¢è¿è¡Œæ—¶å‘生。" #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "该请求返回了“302 多选项â€ã€‚这通常æ„å‘³ç€æ‚¨å°šæœªå°†ç‰ˆæœ¬æŒ‡ç¤ºå™¨åŒ…括在请求 URI 中。\n" "\n" "返回了å“应的主体:\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "该请求返回了“413 请求实体太大â€ã€‚这通常æ„味ç€å·²è¿å比率é™åˆ¶æˆ–é…é¢é˜ˆå€¼ã€‚\n" "\n" "å“应主体:\n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "该请求返回了æ„外状æ€ï¼š%(status)s。\n" "\n" "å“应主体:\n" "%(body)s" msgid "" "The requested image has been deactivated. Image data download is forbidden." msgstr "所请求映åƒå·²å–æ¶ˆæ¿€æ´»ã€‚å·²ç¦æ­¢ä¸‹è½½æ˜ åƒæ•°æ®ã€‚" msgid "The result of current task, JSON blob" msgstr "当å‰ä»»åŠ¡ JSON blob 的结果" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." 
msgstr "æ•°æ®å¤§å° %(image_size)s 将超过é™åˆ¶ã€‚将剩余 %(remaining)s 个字节。" #, python-format msgid "The specified member %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„æˆå‘˜ %s" #, python-format msgid "The specified metadata object %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„元数æ®å¯¹è±¡ %s" #, python-format msgid "The specified metadata tag %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„å…ƒæ•°æ®æ ‡è®° %s" #, python-format msgid "The specified namespace %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„å称空间 %s" #, python-format msgid "The specified property %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„属性 %s" #, python-format msgid "The specified resource type %s could not be found " msgstr "找ä¸åˆ°æŒ‡å®šçš„资æºç±»åž‹ %s" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgstr "已删除映åƒä½ç½®çš„状æ€åªèƒ½è®¾ç½®ä¸ºâ€œpending_deleteâ€æˆ–“deletedâ€" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgstr "已删除映åƒä½ç½®çš„状æ€åªèƒ½è®¾ç½®ä¸ºâ€œpending_deleteâ€æˆ–“deletedâ€ã€‚" msgid "The status of this image member" msgstr "æ­¤æ˜ åƒæˆå‘˜çš„状æ€" msgid "" "The strategy to use for authentication. If \"use_user_token\" is not in " "effect, then auth strategy can be specified." msgstr "è¦ç”¨äºŽè®¤è¯çš„策略。如果“use_user_tokenâ€æ²¡æœ‰ç”Ÿæ•ˆï¼Œé‚£ä¹ˆå¯æŒ‡å®šè®¤è¯ç­–略。" #, python-format msgid "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgstr "目标æˆå‘˜ %(member_id)s å·²å…³è”æ˜ åƒ %(image_id)s。" msgid "" "The tenant name of the administrative user. If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgstr "" "管ç†ç”¨æˆ·çš„租户å称。如果“use_user_tokenâ€æ²¡æœ‰ç”Ÿæ•ˆï¼Œé‚£ä¹ˆå¯æŒ‡å®šç®¡ç†å‘˜ç§Ÿæˆ·å称。" msgid "The type of task represented by this content" msgstr "此内容表示的任务的类型" msgid "The unique namespace text." msgstr "唯一å称空间文本。" msgid "The user friendly name for the namespace. Used by UI if available." 
msgstr "å称空间的用户å‹å¥½å称。由 UI 使用(如果å¯ç”¨ï¼‰ã€‚" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "%(error_key_name)s %(error_filename)s 存在问题。请对它进行验è¯ã€‚å‘生错误:" "%(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "%(error_key_name)s %(error_filename)s 存在问题。请对它进行验è¯ã€‚å‘生 OpenSSL " "错误:%(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "密钥对存在问题。请验è¯è¯ä¹¦ %(cert_file)s 和密钥 %(key_file)s 是å¦åº”该在一起。" "å‘生 OpenSSL 错误 %(ce)s" msgid "There was an error configuring the client." msgstr "é…置客户机时出错。" msgid "There was an error connecting to a server" msgstr "连接至æœåŠ¡å™¨æ—¶å‡ºé”™" msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "当å‰ä¸å…许对 Glance 任务执行此æ“作。到达基于 expires_at 属性的时间åŽï¼Œå®ƒä»¬ä¼š" "自动删除。" msgid "This operation is currently not permitted on Glance images details." msgstr "当å‰ä¸å…许对 Glance 映åƒè¯¦ç»†ä¿¡æ¯æ‰§è¡Œæ­¤æ“作。" msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "任务在æˆåŠŸæˆ–å¤±è´¥ä¹‹åŽç”Ÿå­˜çš„æ—¶é—´ï¼ˆä»¥å°æ—¶è®¡ï¼‰" msgid "Too few arguments." 
msgstr "å¤ªå°‘å‚æ•°" #, python-format msgid "" "Total size is %(size)d bytes (%(human_size)s) across %(img_count)d images" msgstr "总大å°ä¸º %(size)d 字节(%(human_size)s)(在 %(img_count)d 个映åƒä¸Šï¼‰" msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "URI ä¸èƒ½åŒ…嫿–¹æ¡ˆçš„多个实例。如果已指定类似于 swift://user:pass@http://" "authurl.com/v1/container/obj çš„ URI,那么需è¦å°†å®ƒæ›´æ”¹ä¸ºä½¿ç”¨ swift+http:// æ–¹" "案,类似于以下:swift+http://user:pass@authurl.com/v1/container/obj" msgid "URL to access the image file kept in external store" msgstr "用于访问外部存储器中ä¿ç•™çš„æ˜ åƒæ–‡ä»¶çš„ URL" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "无法创建 pid 文件 %(pid)sã€‚æ­£åœ¨ä»¥éž root 用户身份è¿è¡Œå—?\n" "正在回退至临时文件,å¯ä½¿ç”¨ä»¥ä¸‹å‘½ä»¤åœæ­¢ %(service)s æœåŠ¡ï¼š\n" "%(file)s %(server)s stop --pid-file %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "无法按未知è¿ç®—符“%sâ€è¿›è¡Œè¿‡æ»¤ã€‚" msgid "Unable to filter on a range with a non-numeric value." msgstr "æ— æ³•å¯¹å…·æœ‰éžæ•°å­—值的范围进行过滤。" msgid "Unable to filter on a unknown operator." msgstr "无法针对未知è¿ç®—符进行过滤。" msgid "Unable to filter using the specified operator." msgstr "无法使用指定è¿ç®—符进行过滤。" msgid "Unable to filter using the specified range." msgstr "无法使用指定的范围进行过滤。" #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "在 JSON æ¨¡å¼æ›´æ”¹ä¸­æ‰¾ä¸åˆ°â€œ%sâ€" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "在 JSON æ¨¡å¼æ›´æ”¹ä¸­æ‰¾ä¸åˆ°â€œopâ€ã€‚它必须是下列其中一项:%(available)s。" msgid "Unable to increase file descriptor limit. 
Running as non-root?" msgstr "无法增大文件æè¿°ç¬¦é™åˆ¶ã€‚æ­£åœ¨ä»¥éž root 用户身份è¿è¡Œå—?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "无法从é…置文件 %(conf_file)s 装入 %(app_name)s。\n" "å‘生错误:%(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "无法装入模å¼ï¼š%(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "对于 %s,找ä¸åˆ°ç²˜è´´é…置文件。" #, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "无法为镜åƒ%(image_id)s上传é‡å¤çš„æ•°æ®: %(error)s" msgid "Unauthorized image access" msgstr "æ— æƒè®¿é—®æ˜ åƒ" msgid "Unexpected body type. Expected list/dict." msgstr "æ„外主体类型。应该为 list/dict。" #, python-format msgid "Unexpected response: %s" msgstr "接收到æ„外å“应:%s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "授æƒç­–略“%sâ€æœªçŸ¥" #, python-format msgid "Unknown command: %s" msgstr "未知命令%s" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "æŽ’åºæ–¹å‘未知,必须为“é™åºâ€æˆ–“å‡åºâ€" msgid "Unrecognized JSON Schema draft version" msgstr "无法识别 JSON 模å¼è‰ç¨¿ç‰ˆæœ¬" msgid "Unrecognized changes-since value" msgstr "无法识别 changes-since 值" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "sort_dir ä¸å—支æŒã€‚坿ޥå—值:%s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "sort_key ä¸å—支æŒã€‚坿ޥå—值:%s" msgid "Virtual size of image in bytes" msgstr "映åƒçš„虚拟大å°ï¼Œä»¥å­—节计" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "用æ¥ç­‰å¾… pid %(pid)s (%(file)s) 终止的时间已达到 15 秒;正在放弃" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "以 SSL æ–¹å¼è¿è¡ŒæœåŠ¡å™¨æ—¶ï¼Œå¿…é¡»åœ¨é…ç½®æ–‡ä»¶ä¸­åŒæ—¶æŒ‡å®š cert_file å’Œ key_file 选项" "值" msgid "" "Whether to pass through the user token when making requests to the registry. 
" "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." msgstr "" "呿³¨å†Œè¡¨è¿›è¡Œè¯·æ±‚时是å¦ä¼ é€’用户令牌。为了防止在上载大文件期间因令牌到期而产生" "æ•…éšœï¼Œå»ºè®®å°†æ­¤å‚æ•°è®¾ç½®ä¸º False。如果“use_user_tokenâ€æœªç”Ÿæ•ˆï¼Œé‚£ä¹ˆå¯ä»¥æŒ‡å®šç®¡ç†" "凭è¯ã€‚" #, python-format msgid "Wrong command structure: %s" msgstr "命令结构 %s 䏿­£ç¡®" msgid "You are not authenticated." msgstr "您未ç»è®¤è¯ã€‚" msgid "You are not authorized to complete this action." msgstr "您无æƒå®Œæˆæ­¤æ“作。" #, python-format msgid "You are not authorized to lookup image %s." msgstr "æœªæŽˆæƒæ‚¨æŸ¥è¯¢æ˜ åƒ %s。" #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "æœªæŽˆæƒæ‚¨æŸ¥è¯¢æ˜ åƒ %s çš„æˆå‘˜ã€‚" #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "ä¸å…许在由“%sâ€æ‹¥æœ‰çš„å称空间中创建标记" msgid "You are not permitted to create image members for the image." msgstr "ä¸å…许为映åƒåˆ›å»ºæ˜ åƒæˆå‘˜ã€‚" #, python-format msgid "You are not permitted to create images owned by '%s'." msgstr "ä¸å…许创建由“%sâ€æ‹¥æœ‰çš„æ˜ åƒã€‚" #, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "ä¸å…许创建由“%sâ€æ‹¥æœ‰çš„å称空间" #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "ä¸å…许创建由“%sâ€æ‹¥æœ‰çš„对象" #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "ä¸å…许创建由“%sâ€æ‹¥æœ‰çš„属性" #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "ä¸å…许创建由“%sâ€æ‹¥æœ‰çš„ resource_type" #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "ä¸å…许采用以下身份作为所有者æ¥åˆ›å»ºæ­¤ä»»åŠ¡ï¼š%s" msgid "You are not permitted to deactivate this image." msgstr "ä¸å…è®¸å–æ¶ˆæ¿€æ´»æ­¤æ˜ åƒã€‚" msgid "You are not permitted to delete this image." 
msgstr "ä¸å…许删除此映åƒã€‚" msgid "You are not permitted to delete this meta_resource_type." msgstr "ä½ ä¸è¢«å…许删除meta_resource_type。" msgid "You are not permitted to delete this namespace." msgstr "ä¸å…许删除此å称空间。" msgid "You are not permitted to delete this object." msgstr "ä½ ä¸è¢«å…许删除这个对象。" msgid "You are not permitted to delete this property." msgstr "ä¸å…许删除此属性。" msgid "You are not permitted to delete this tag." msgstr "ä¸å…许删除此标记。" #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "ä¸å…许对此 %(resource)s 修改“%(attr)sâ€ã€‚" #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "ä¸å…许对此映åƒä¿®æ”¹â€œ%sâ€ã€‚" msgid "You are not permitted to modify locations for this image." msgstr "ä¸å…许为此映åƒä¿®æ”¹ä½ç½®ã€‚" msgid "You are not permitted to modify tags on this image." msgstr "ä¸å…许对此映åƒä¿®æ”¹æ ‡è®°ã€‚" msgid "You are not permitted to modify this image." msgstr "ä¸å…许修改此映åƒã€‚" msgid "You are not permitted to reactivate this image." msgstr "ä¸å…è®¸é‡æ–°æ¿€æ´»æ­¤æ˜ åƒã€‚" msgid "You are not permitted to set status on this task." msgstr "ä½ ä¸è¢«å…许设置这个任务的状æ€ã€‚" msgid "You are not permitted to update this namespace." msgstr "ä¸å…许更新此å称空间。" msgid "You are not permitted to update this object." msgstr "ä½ ä¸è¢«å…许更新这个对象。" msgid "You are not permitted to update this property." msgstr "ä¸å…许更新此属性。" msgid "You are not permitted to update this tag." msgstr "ä¸å…许更新此标记。" msgid "You are not permitted to upload data for this image." 
msgstr "ä¸å…许为此映åƒä¸Šè½½æ•°æ®ã€‚" #, python-format msgid "You cannot add image member for %s" msgstr "无法为 %s æ·»åŠ æ˜ åƒæˆå‘˜" #, python-format msgid "You cannot delete image member for %s" msgstr "无法为 %s åˆ é™¤æ˜ åƒæˆå‘˜" #, python-format msgid "You cannot get image member for %s" msgstr "无法为 %s èŽ·å–æ˜ åƒæˆå‘˜" #, python-format msgid "You cannot update image member %s" msgstr "æ— æ³•æ›´æ–°æ˜ åƒæˆå‘˜ %s" msgid "You do not own this image" msgstr "您未拥有此映åƒ" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "已选择在连接中使用 SSL,并且已æä¾›è¯ä¹¦ï¼Œä½†æ˜¯æœªèƒ½æä¾› key_file 傿•°æˆ–设置 " "GLANCE_CLIENT_KEY_FILE 环境å˜é‡" msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "已选择在连接中使用 SSL,并且已æä¾›å¯†é’¥ï¼Œä½†æ˜¯æœªèƒ½æä¾› cert_file 傿•°æˆ–设置 " "GLANCE_CLIENT_CERT_FILE 环境å˜é‡" msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__() å·²èŽ·å–æ„外的关键字自å˜é‡â€œ%sâ€" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "" "在更新中,无法从 %(current)s 转å˜ä¸º %(next)sï¼ˆéœ€è¦ from_state=%(from)s)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "定制属性 (%(props)s) 与基本基准冲çª" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "在此平å°ä¸Šï¼Œeventlet“pollâ€å’Œâ€œselectsâ€ä¸»æ•°æ®ä¸­å¿ƒéƒ½ä¸å¯ç”¨" msgid "is_public must be None, True, or False" msgstr "is_public 必须为“无â€ã€True 或 False" msgid "limit param must be an 
integer" msgstr "limit 傿•°å¿…须为整数" msgid "limit param must be positive" msgstr "limit 傿•°å¿…须为正数" msgid "md5 hash of image contents." msgstr "映åƒå†…容的 md5 散列。" #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image() å·²èŽ·å–æ„外的关键字 %s" msgid "protected must be True, or False" msgstr "protected 必须为 True 或 False" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "无法å¯åЍ %(serv)s。å‘生错误:%(e)s" #, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "x-openstack-request-id 太长,最大大å°ä¸º %s" glance-16.0.0/glance/locale/ko_KR/0000775000175100017510000000000013245511661016554 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/ko_KR/LC_MESSAGES/0000775000175100017510000000000013245511661020341 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/ko_KR/LC_MESSAGES/glance.po0000666000175100017510000021736113245511421022140 0ustar zuulzuul00000000000000# Translations template for glance. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the glance project. # # Translators: # HyunWoo Jo , 2014 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: glance 15.0.0.0b3.dev29\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-06-23 20:54+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:21+0000\n" "Last-Translator: Copied by Zanata \n" "Language: ko-KR\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Korean (South Korea)\n" #, python-format msgid "\t%s" msgstr "\t%s" #, python-format msgid "%(cls)s exception was raised in the last rpc call: %(val)s" msgstr "%(cls)s 예외가 마지막 rpc 호출ì—서 ë°œìƒ: %(val)s" #, python-format msgid "%(m_id)s not found in the member list of the image %(i_id)s." 
msgstr "ì´ë¯¸ì§€ %(i_id)sì˜ ë©¤ë²„ 목ë¡ì—서 %(m_id)sì„(를) ì°¾ì„ ìˆ˜ 없습니다." #, python-format msgid "%(serv)s (pid %(pid)s) is running..." msgstr "%(serv)s(pid %(pid)s)ì´(ê°€) 실행 중..." #, python-format msgid "%(serv)s appears to already be running: %(pid)s" msgstr "%(serv)sì´(ê°€) ì´ë¯¸ 실행 중으로 표시ë¨: %(pid)s" #, python-format msgid "" "%(strategy)s is registered as a module twice. %(module)s is not being used." msgstr "" "%(strategy)sì´(ê°€) 모듈로 ë‘ ë²ˆ 등ë¡ë˜ì—ˆìŠµë‹ˆë‹¤. %(module)sì´(ê°€) 사용ë˜ì§€ 사" "ìš©ë©ë‹ˆë‹¤." #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Could not load the " "filesystem store" msgstr "" "%(task_type)sì˜ %(task_id)sê°€ 제대로 구성ë˜ì§€ 않았습니다. íŒŒì¼ ì‹œìŠ¤í…œ 저장소" "를 로드할 수 없습니다." #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "%(task_type)sì˜ %(task_id)sê°€ 제대로 구성ë˜ì§€ 않았습니다. ëˆ„ë½ ìž‘ì—… 디렉토" "리: %(work_dir)s" #, python-format msgid "%(verb)sing %(serv)s" msgstr "%(serv)sì„(를) %(verb)s 중" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "%(serv)sì—서 %(conf)sê³¼(와) 함께 %(verb)s 중" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s 호스트:í¬íЏ ìŒì„ 지정하십시오. 여기서 호스트는 IPv4 주소, IPv6 주소, 호스" "트 ì´ë¦„ ë˜ëŠ” FQDN입니다. IPv6 주소를 사용하는 경우ì—는 í¬íŠ¸ì™€ 분리하여 대괄호" "로 묶으십시오(예: \"[fe80::a:b:c]:9876\")." #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%sì—는 4ë°”ì´íЏ 유니코드 문ìžë¥¼ í¬í•¨í•  수 없습니다." #, python-format msgid "%s is already stopped" msgstr "%sì´(ê°€) ì´ë¯¸ 중지ë˜ì—ˆìŠµë‹ˆë‹¤." 
#, python-format msgid "%s is stopped" msgstr "%sì´(ê°€) 중지ë¨" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "키스톤 ì¸ì¦ ì „ëžµì´ ì‚¬ìš©ë  ê²½ìš° --os_auth_url 옵션 ë˜ëŠ” OS_AUTH_URL 환경 변수" "ê°€ 필요합니다.\n" msgid "A body is not expected with this request." msgstr "ì´ ìš”ì²­ì—는 ë³¸ë¬¸ì´ ì—†ì–´ì•¼ 합니다." #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "name=%(object_name)sì¸ ë©”íƒ€ë°ì´í„° ì •ì˜ ì˜¤ë¸Œì íŠ¸ê°€ namespace=" "%(namespace_name)sì—서 ì°¾ì„ ìˆ˜ 없습니다." #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "name=%(property_name)sì¸ ë©”íƒ€ë°ì´í„° ì •ì˜ íŠ¹ì„±ì´ namespace=%(namespace_name)s" "ì— ì´ë¯¸ 존재합니다." #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." msgstr "" "name=%(resource_type_name)sì¸ ë©”íƒ€ë°ì´í„° ì •ì˜ ìžì› ìœ í˜•ì´ ì´ë¯¸ 존재합니다." msgid "A set of URLs to access the image file kept in external store" msgstr "외부 ì €ìž¥ì†Œì— ë³´ê´€ëœ ì´ë¯¸ì§€ 파ì¼ì— 액세스하기 위한 URL 세트" msgid "Amount of disk space (in GB) required to boot image." msgstr "ì´ë¯¸ì§€ë¥¼ 부팅하는 ë° í•„ìš”í•œ ë””ìŠ¤í¬ ê³µê°„ì˜ ì–‘(MB)" msgid "Amount of ram (in MB) required to boot image." 
msgstr "ì´ë¯¸ì§€ë¥¼ 부팅하는 ë° í•„ìš”í•œ RAMì˜ ì–‘(MB)" msgid "An identifier for the image" msgstr "ì´ë¯¸ì§€ì— 대한 ID" msgid "An identifier for the image member (tenantId)" msgstr "ì´ë¯¸ì§€ ë©¤ë²„ì— ëŒ€í•œ ID(tenantId)" msgid "An identifier for the owner of this task" msgstr "ì´ íƒœìŠ¤í¬ ì†Œìœ ìžì˜ ID" msgid "An identifier for the task" msgstr "태스í¬ì˜ ID" msgid "An image file url" msgstr "ì´ë¯¸ì§€ íŒŒì¼ url" msgid "An image schema url" msgstr "ì´ë¯¸ì§€ 스키마 url" msgid "An image self url" msgstr "ì´ë¯¸ì§€ ìžì²´ url" #, python-format msgid "An image with identifier %s already exists" msgstr "IDê°€ %sì¸ ì´ë¯¸ì§€ê°€ ì´ë¯¸ 존재함" msgid "An import task exception occurred" msgstr "가져오기 작업 예외 ë°œìƒ" msgid "An object with the same identifier already exists." msgstr "ë™ì¼í•œ ID를 갖는 오브ì íŠ¸ê°€ ì´ë¯¸ 존재합니다. " msgid "An object with the same identifier is currently being operated on." msgstr "ë™ì¼í•œ IDê°€ 있는 오브ì íŠ¸ê°€ 현재 ìž‘ë™ë©ë‹ˆë‹¤." msgid "An object with the specified identifier was not found." msgstr "ì§€ì •ëœ ID를 갖는 오브ì íŠ¸ë¥¼ ì°¾ì„ ìˆ˜ 없습니다." msgid "An unknown exception occurred" msgstr "알 수 없는 예외가 ë°œìƒí–ˆìŒ" msgid "An unknown task exception occurred" msgstr "알 수 없는 íƒœìŠ¤í¬ ì˜ˆì™¸ ë°œìƒ" #, python-format msgid "Attempt to upload duplicate image: %s" msgstr "중복 ì´ë¯¸ì§€ë¥¼ 업로드하려고 ì‹œë„ ì¤‘: %s" msgid "Attempted to update Location field for an image not in queued status." msgstr "" "íì— ë“¤ì–´ê°„ ìƒíƒœì— 있지 ì•Šì€ ì´ë¯¸ì§€ì— 대한 위치 필드를 ì—…ë°ì´íŠ¸í•˜ë ¤ê³  시ë„함" #, python-format msgid "Attribute '%(property)s' is read-only." msgstr "'%(property)s' ì†ì„±ì€ ì½ê¸° 전용입니다." #, python-format msgid "Attribute '%(property)s' is reserved." msgstr "'%(property)s' ì†ì„±ì€ 예약ë˜ì–´ 있습니다." #, python-format msgid "Attribute '%s' is read-only." msgstr "'%s' ì†ì„±ì€ ì½ê¸° 전용입니다." #, python-format msgid "Attribute '%s' is reserved." msgstr "'%s' ì†ì„±ì€ 예약ë˜ì–´ 있습니다." msgid "Attribute container_format can be only replaced for a queued image." 
msgstr "íì— ìžˆëŠ” ì´ë¯¸ì§€ì— 대해 ì†ì„± container_formatì„ ëŒ€ì²´í•  수 있습니다." msgid "Attribute disk_format can be only replaced for a queued image." msgstr "íì— ìžˆëŠ” ì´ë¯¸ì§€ì— 대해 ì†ì„± disk_formatì„ ëŒ€ì²´í•  수 있습니다." #, python-format msgid "Auth service at URL %(url)s not found." msgstr "URL %(url)sì˜ Auth 서비스를 ì°¾ì„ ìˆ˜ 없습니다." #, python-format msgid "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgstr "" "ì¸ì¦ 오류 - íŒŒì¼ ì—…ë¡œë“œ ì¤‘ì— í† í°ì´ 만료ë˜ì—ˆìŠµë‹ˆë‹¤. %sì˜ ì´ë¯¸ì§€ ë°ì´í„°ë¥¼ ì‚­ì œ" "합니다." msgid "Authorization failed." msgstr "권한 ë¶€ì—¬ì— ì‹¤íŒ¨í–ˆìŠµë‹ˆë‹¤." msgid "Available categories:" msgstr "사용 가능한 카테고리:" #, python-format msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." msgstr "" "ìž˜ëª»ëœ \"%s\" 쿼리 í•„í„° 형ì‹ìž…니다. ISO 8601 DateTime í‘œê¸°ë²•ì„ ì‚¬ìš©í•˜ì‹­ì‹œì˜¤." #, python-format msgid "Bad Command: %s" msgstr "ìž˜ëª»ëœ ëª…ë ¹: %s" #, python-format msgid "Bad header: %(header_name)s" msgstr "ìž˜ëª»ëœ í—¤ë”: %(header_name)s" #, python-format msgid "Bad value passed to filter %(filter)s got %(val)s" msgstr "ìž˜ëª»ëœ ê°’ì´ %(filter)s í•„í„°ì— ì „ë‹¬ë¨, %(val)s 제공" #, python-format msgid "Badly formed S3 URI: %(uri)s" msgstr "ì–‘ì‹ì´ ìž˜ëª»ëœ S3 URI: %(uri)s" #, python-format msgid "Badly formed credentials '%(creds)s' in Swift URI" msgstr "Swift URIì— ì–‘ì‹ì´ ìž˜ëª»ëœ ì‹ ìž„ ì •ë³´ '%(creds)s'" msgid "Badly formed credentials in Swift URI." msgstr "Swift URIì— ì–‘ì‹ì´ ìž˜ëª»ëœ ì‹ ìž„ ì •ë³´ê°€ 있습니다." msgid "Body expected in request." msgstr "ìš”ì²­ì— ë³¸ë¬¸ì´ ìžˆì–´ì•¼ 합니다." msgid "Cannot be a negative value" msgstr "ìŒìˆ˜ ê°’ì¼ ìˆ˜ ì—†ìŒ" msgid "Cannot be a negative value." msgstr "ìŒìˆ˜ ê°’ì´ ë  ìˆ˜ 없습니다." #, python-format msgid "Cannot convert image %(key)s '%(value)s' to an integer." msgstr "ì´ë¯¸ì§€ %(key)s '%(value)s'ì„(를) 정수로 변환할 수 없습니다." msgid "Cannot remove last location in the image." msgstr "ì´ë¯¸ì§€ì—서 마지막 위치를 제거할 수 없습니다." 
#, python-format msgid "Cannot save data for image %(image_id)s: %(error)s" msgstr "ì´ë¯¸ì§€ %(image_id)s ì— ëŒ€í•œ ë°ì´í„° 저장 불가: %(error)s" msgid "Cannot set locations to empty list." msgstr "위치를 비어 있는 목ë¡ìœ¼ë¡œ 설정할 수 없습니다." msgid "Cannot upload to an unqueued image" msgstr "íì— ë“¤ì–´ê°€ì§€ ì•Šì€ ì´ë¯¸ì§€ì— 업로드할 수 ì—†ìŒ" #, python-format msgid "Checksum verification failed. Aborted caching of image '%s'." msgstr "ì²´í¬ì„¬ ê²€ì¦ì— 실패했습니다. '%s' ì´ë¯¸ì§€ ìºì‹œê°€ 중단ë˜ì—ˆìŠµë‹ˆë‹¤." msgid "Client disconnected before sending all data to backend" msgstr "모든 ë°ì´í„°ë¥¼ 백엔드로 전송하기 ì „ì— í´ë¼ì´ì–¸íЏ ì—°ê²°ì´ ëŠì–´ì§" msgid "Command not found" msgstr "ëª…ë ¹ì„ ì°¾ì„ ìˆ˜ ì—†ìŒ" msgid "Configuration option was not valid" msgstr "구성 ì˜µì…˜ì´ ì˜¬ë°”ë¥´ì§€ 않ìŒ" #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "ì—°ê²° 오류/URL %(url)sì—서 Auth ì„œë¹„ìŠ¤ì— ëŒ€í•œ ìž˜ëª»ëœ ìš”ì²­ìž…ë‹ˆë‹¤." #, python-format msgid "Constructed URL: %s" msgstr "URLì„ êµ¬ì„±í•¨: %s" msgid "Container format is not specified." msgstr "컨테ì´ë„ˆ 형ì‹ì´ 지정ë˜ì§€ 않았습니다." msgid "Content-Type must be application/octet-stream" msgstr "Content-Typeì€ application/octet-streamì´ì–´ì•¼ 함" #, python-format msgid "Corrupt image download for image %(image_id)s" msgstr "%(image_id)s ì´ë¯¸ì§€ì— 대한 ì†ìƒëœ ì´ë¯¸ì§€ 다운로드" #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgstr "30ì´ˆ ë™ì•ˆ 시ë„한 후 %(host)s:%(port)sì— ë°”ì¸ë“œí•  수 ì—†ìŒ" msgid "Could not find OVF file in OVA archive file." msgstr "OVA ì•„ì¹´ì´ë¸Œ 파ì¼ì—서 OVF를 ì°¾ì„ ìˆ˜ 없습니다." 
#, python-format msgid "Could not find metadata object %s" msgstr "메타ë°ì´í„° 오브ì íЏ %sì„(를) ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "Could not find metadata tag %s" msgstr "메타ë°ì´í„° 태그 %sì„(를) ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "Could not find namespace %s" msgstr "%s 네임스페ì´ìŠ¤ë¥¼ ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "Could not find property %s" msgstr "특성 %sì„(를) ì°¾ì„ ìˆ˜ ì—†ìŒ" msgid "Could not find required configuration option" msgstr "필요한 구성 ì˜µì…˜ì„ ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "Could not find task %s" msgstr "íƒœìŠ¤í¬ %sì„(를) ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "Could not update image: %s" msgstr "ì´ë¯¸ì§€ë¥¼ ì—…ë°ì´íŠ¸í•  수 ì—†ìŒ: %s" msgid "Currently, OVA packages containing multiple disk are not supported." msgstr "여러 디스í¬ë¥¼ í¬í•¨í•˜ëŠ” OVA 패키지는 현재 ì§€ì›ë˜ì§€ 않습니다." #, python-format msgid "Data for image_id not found: %s" msgstr "image_idì— ëŒ€í•œ ë°ì´í„°ë¥¼ ì°¾ì„ ìˆ˜ ì—†ìŒ: %s" msgid "Data supplied was not valid." msgstr "ì œê³µëœ ë°ì´í„°ê°€ 올바르지 않습니다." 
msgid "Date and time of image member creation" msgstr "ì´ë¯¸ì§€ 멤버 작성 ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of image registration" msgstr "ì´ë¯¸ì§€ ë“±ë¡ ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of last modification of image member" msgstr "ì´ë¯¸ì§€ ë©¤ë²„ì˜ ìµœì¢… 수정 ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of namespace creation" msgstr "네임스페ì´ìФ 작성 ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of object creation" msgstr "오브ì íЏ 작성 ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of resource type association" msgstr "ìžì› 유형 ì—°ê´€ ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of tag creation" msgstr "태그 작성 ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of the last image modification" msgstr "최종 ì´ë¯¸ì§€ ìˆ˜ì •ì˜ ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of the last namespace modification" msgstr "최종 네임스페ì´ìФ ìˆ˜ì •ì˜ ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of the last object modification" msgstr "최종 오브ì íЏ ìˆ˜ì •ì˜ ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of the last resource type association modification" msgstr "최종 ìžì› 유형 ì—°ê´€ ìˆ˜ì •ì˜ ë‚ ì§œ ë° ì‹œê°„" msgid "Date and time of the last tag modification" msgstr "최종 태그 수정 ë‚ ì§œ ë° ì‹œê°„" msgid "Datetime when this resource was created" msgstr "ì´ ìžì›ì´ ìž‘ì„±ëœ Datetime" msgid "Datetime when this resource was updated" msgstr "ì´ ìžì›ì´ ì—…ë°ì´íŠ¸ëœ Datetime" msgid "Datetime when this resource would be subject to removal" msgstr "ì´ ìžì›ì´ 제거ë˜ëŠ” Datetime" #, python-format msgid "Denying attempt to upload image because it exceeds the quota: %s" msgstr "í• ë‹¹ëŸ‰ì„ ì´ˆê³¼í•˜ê¸° ë•Œë¬¸ì— ì´ë¯¸ì§€ 업로드를 거부하는 중: %s" #, python-format msgid "Denying attempt to upload image larger than %d bytes." msgstr "%dë°”ì´íŠ¸ë¥¼ 초과하는 ì´ë¯¸ì§€ì˜ 업로드를 거부하는 중입니다." msgid "Descriptive name for the image" msgstr "ì´ë¯¸ì§€ì— 대한 ì„¤ëª…ì‹ ì´ë¦„" msgid "Disk format is not specified." msgstr "ë””ìŠ¤í¬ í˜•ì‹ì´ 지정ë˜ì§€ 않았습니다." #, python-format msgid "" "Driver %(driver_name)s could not be configured correctly. 
Reason: %(reason)s" msgstr "" "%(driver_name)s 드ë¼ì´ë²„ê°€ 올바르게 구성ë˜ì§€ 않았습니다. ì´ìœ : %(reason)s" msgid "" "Error decoding your request. Either the URL or the request body contained " "characters that could not be decoded by Glance" msgstr "" "ìš”ì²­ì„ ë””ì½”ë”©í•˜ëŠ” ì¤‘ì— ì˜¤ë¥˜ê°€ ë°œìƒí–ˆìŠµë‹ˆë‹¤. URLì´ë‚˜ 요청 ë³¸ë¬¸ì— Glanceì—서 ë””" "코딩할 수 없는 문ìžê°€ í¬í•¨ë˜ì–´ 있습니다." #, python-format msgid "Error fetching members of image %(image_id)s: %(inner_msg)s" msgstr "ì´ë¯¸ì§€ %(image_id)sì˜ ë©¤ë²„ë¥¼ 페치하는 ì¤‘ì— ì˜¤ë¥˜ ë°œìƒ: %(inner_msg)s" msgid "Error in store configuration. Adding images to store is disabled." msgstr "저장소 êµ¬ì„±ì— ì˜¤ë¥˜ê°€ 있습니다. ì´ë¯¸ì§€ë¥¼ ì €ìž¥ì†Œì— ì¶”ê°€í•  수 없습니다." msgid "Expected a member in the form: {\"member\": \"image_id\"}" msgstr "{\"member\": \"image_id\"} 형ì‹ì˜ 멤버가 있어야 함" msgid "Expected a status in the form: {\"status\": \"status\"}" msgstr "{\"status\": \"status\"} 형ì‹ì˜ ìƒíƒœê°€ 있어야 함" msgid "External source should not be empty" msgstr "외부 소스는 비어있지 않아야 함" #, python-format msgid "External sources are not supported: '%s'" msgstr "외부 소스가 ì§€ì›ë˜ì§€ 않ìŒ: '%s'" #, python-format msgid "Failed to activate image. Got error: %s" msgstr "ì´ë¯¸ì§€ í™œì„±í™”ì— ì‹¤íŒ¨í–ˆìŠµë‹ˆë‹¤. 오류 ë°œìƒ: %s" #, python-format msgid "Failed to add image metadata. Got error: %s" msgstr "ì´ë¯¸ì§€ 메타ë°ì´í„° 추가 실패. 오류 ë°œìƒ: %s" #, python-format msgid "Failed to find image %(image_id)s to delete" msgstr "삭제할 %(image_id)s ì´ë¯¸ì§€ë¥¼ 찾는 ë° ì‹¤íŒ¨í•¨" #, python-format msgid "Failed to find image to delete: %s" msgstr "삭제할 image ê°€ 발견ë˜ì§€ ì•ŠìŒ : %s" #, python-format msgid "Failed to find image to update: %s" msgstr "ì—…ë°ì´íŠ¸í•  ì´ë¯¸ì§€ë¥¼ 찾는 ë° ì‹¤íŒ¨í•¨: %s" #, python-format msgid "Failed to find resource type %(resourcetype)s to delete" msgstr "삭제하기 위한 리소스 타입 %(resourcetype)s 검색 실패" #, python-format msgid "Failed to initialize the image cache database. Got error: %s" msgstr "ì´ë¯¸ì§€ ìºì‹œ ë°ì´í„°ë² ì´ìŠ¤ë¥¼ 초기화하지 못했습니다. 
오류 ë°œìƒ: %s" #, python-format msgid "Failed to read %s from config" msgstr "구성ì—서 %sì„(를) ì½ì§€ 못했ìŒ" #, python-format msgid "Failed to reserve image. Got error: %s" msgstr "ì´ë¯¸ì§€ 예약 실패, 오류 ë°œìƒ: %s" #, python-format msgid "Failed to update image metadata. Got error: %s" msgstr "ì´ë¯¸ì§€ 메타ë°ì´í„° ì—…ë°ì´íЏ 실패. 오류 ë°œìƒ: %s" #, python-format msgid "Failed to upload image %s" msgstr "ì´ë¯¸ì§€ %sì„(를) 업로드하지 못했습니다." #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to HTTP error: " "%(error)s" msgstr "" "HTTP 오류로 ì¸í•´ ì´ë¯¸ì§€ %(image_id)sì˜ ì´ë¯¸ì§€ ë°ì´í„° 업로드 실패: %(error)s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to internal error: " "%(error)s" msgstr "" "ë‚´ë¶€ 오류로 ì¸í•´ ì´ë¯¸ì§€ %(image_id)sì˜ ì´ë¯¸ì§€ ë°ì´í„° 업로드 실패: %(error)s" #, python-format msgid "File %(path)s has invalid backing file %(bfile)s, aborting." msgstr "" "íŒŒì¼ %(path)sì— ì˜¬ë°”ë¥´ì§€ ì•Šì€ ë°±ì—… íŒŒì¼ %(bfile)sì´(ê°€) 있어 중단합니다." msgid "" "File based imports are not allowed. Please use a non-local source of image " "data." msgstr "" "íŒŒì¼ ê¸°ë°˜ 가져오기는 허용ë˜ì§€ 않습니다. ì´ë¯¸ì§€ ë°ì´í„°ì˜ ë¡œì»¬ì´ ì•„ë‹Œ 소스를 사" "용하십시오." msgid "Forbidden image access" msgstr "ê¸ˆì§€ëœ ì´ë¯¸ì§€ 액세스" #, python-format msgid "Forbidden to delete a %s image." msgstr "%s ì´ë¯¸ì§€ë¥¼ 삭제하는 ê²ƒì€ ê¸ˆì§€ë˜ì–´ 있습니다." #, python-format msgid "Forbidden to delete image: %s" msgstr "ì´ë¯¸ì§€ 삭제가 금지ë¨: %s" #, python-format msgid "Forbidden to modify '%(key)s' of %(status)s image." msgstr "%(status)s ì´ë¯¸ì§€ì˜ '%(key)s' ìˆ˜ì •ì´ ê¸ˆì§€ë˜ì—ˆìŠµë‹ˆë‹¤." #, python-format msgid "Forbidden to modify '%s' of image." msgstr "ì´ë¯¸ì§€ì˜ '%s'ì„(를) 수정하는 ê²ƒì´ ê¸ˆì§€ë˜ì–´ 있습니다." msgid "Forbidden to reserve image." msgstr "ì´ë¯¸ì§€ ì˜ˆì•½ì€ ê¸ˆì§€ë˜ì–´ 있습니다." msgid "Forbidden to update deleted image." msgstr "ì‚­ì œëœ ì´ë¯¸ì§€ë¥¼ ì—…ë°ì´íŠ¸í•˜ëŠ” ê²ƒì€ ê¸ˆì§€ë˜ì–´ 있습니다." 
#, python-format msgid "Forbidden to update image: %s" msgstr "ì´ë¯¸ì§€ ì—…ë°ì´íŠ¸ê°€ 금지ë¨: %s" #, python-format msgid "Forbidden upload attempt: %s" msgstr "ê¸ˆì§€ëœ ì—…ë¡œë“œ 시ë„: %s" #, python-format msgid "Forbidding request, metadata definition namespace=%s is not visible." msgstr "ìš”ì²­ì´ ê¸ˆì§€ë˜ê³  메타ë°ì´í„° ì •ì˜ namespace=%sì´(ê°€) 표시ë˜ì§€ 않습니다." #, python-format msgid "Forbidding request, task %s is not visible" msgstr "요청 금지. íƒœìŠ¤í¬ %sì´(ê°€) 표시ë˜ì§€ 않ìŒ" msgid "Format of the container" msgstr "컨테ì´ë„ˆì˜ 형ì‹" msgid "Format of the disk" msgstr "디스í¬ì˜ 형ì‹" #, python-format msgid "Host \"%s\" is not valid." msgstr "\"%s\" 호스트가 올바르지 않습니다." #, python-format msgid "Host and port \"%s\" is not valid." msgstr "호스트 ë° í¬íЏ \"%s\"ì´(ê°€) 올바르지 않습니다." msgid "" "Human-readable informative message only included when appropriate (usually " "on failure)" msgstr "" "사용ìžê°€ ì½ì„ 수 있는 ì •ë³´ 메시지는 ì ì ˆí•œ 경우ì—ë§Œ í¬í•¨ë¨ (ì¼ë°˜ì ìœ¼ë¡œ 실패 " "시)" msgid "If true, image will not be deletable." msgstr "trueì¼ ê²½ìš° ì´ë¯¸ì§€ëŠ” ì‚­ì œ 불가능합니다." msgid "If true, namespace will not be deletable." msgstr "trueì¼ ê²½ìš° 네임스페ì´ìŠ¤ëŠ” ì‚­ì œ 불가능합니다." #, python-format msgid "Image %(id)s could not be deleted because it is in use: %(exc)s" msgstr "ì´ë¯¸ì§€ %(id)sì´(ê°€) 사용 중ì´ë¯€ë¡œ ì´ë¥¼ 삭제할 수 ì—†ìŒ: %(exc)s" #, python-format msgid "Image %(id)s not found" msgstr "%(id)s ì´ë¯¸ì§€ë¥¼ ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "" "Image %(image_id)s could not be found after upload. The image may have been " "deleted during the upload: %(error)s" msgstr "" "업로드한 ì´ë¯¸ì§€ %(image_id)sì„(를) ì°¾ì„ ìˆ˜ ì—†ìŒ. ì´ë¯¸ì§€ëŠ” 업로드 ì¤‘ì— ì‚­ì œë˜" "ì—ˆì„ ìˆ˜ 있ìŒ: %(error)s" #, python-format msgid "Image %(image_id)s is protected and cannot be deleted." msgstr "%(image_id)s ì´ë¯¸ì§€ëŠ” 보호ë˜ë¯€ë¡œ 삭제할 수 없습니다." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload, cleaning up the chunks uploaded." 
msgstr "" "업로드 í›„ì— %s ì´ë¯¸ì§€ë¥¼ ì°¾ì„ ìˆ˜ 없습니다. 업로드 ë™ì•ˆ ì´ë¯¸ì§€ê°€ ì‚­ì œë˜ì—ˆì„ 수 " "있습니다. ì—…ë¡œë“œëœ ì²­í¬ë¥¼ 정리합니다." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload." msgstr "" "업로드 후 ì´ë¯¸ì§€ %sì„(를) ì°¾ì„ ìˆ˜ 없습니다. 업로드 ì¤‘ì— ì´ë¯¸ì§€ê°€ ì‚­ë˜ì—ˆì„ 수 " "있습니다." #, python-format msgid "Image %s is deactivated" msgstr "%s ì´ë¯¸ì§€ê°€ 비활성화ë¨" #, python-format msgid "Image %s is not active" msgstr "%s ì´ë¯¸ì§€ê°€ 활성 ìƒíƒœê°€ 아님" #, python-format msgid "Image %s not found." msgstr "%s ì´ë¯¸ì§€ë¥¼ ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "Image exceeds the storage quota: %s" msgstr "ì´ë¯¸ì§€ê°€ 스토리지 í• ë‹¹ëŸ‰ì„ ì´ˆê³¼í•¨: %s" msgid "Image id is required." msgstr "ì´ë¯¸ì§€ IDê°€ 필요합니다." msgid "Image is protected" msgstr "ì´ë¯¸ì§€ê°€ 보호ë¨" #, python-format msgid "Image member limit exceeded for image %(id)s: %(e)s:" msgstr "ì´ë¯¸ì§€ %(id)sì— ëŒ€í•œ ì´ë¯¸ì§€ 멤버 한계 초과: %(e)s:" #, python-format msgid "Image name too long: %d" msgstr "ì´ë¯¸ì§€ ì´ë¦„ì´ ë„ˆë¬´ ê¹€: %d" msgid "Image operation conflicts" msgstr "ì´ë¯¸ì§€ ì¡°ìž‘ì´ ì¶©ëŒí•¨" #, python-format msgid "" "Image status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "%(cur_status)sì—서 %(new_status)s(으)ë¡œì˜ ì´ë¯¸ì§€ ìƒíƒœ ì „ì´ê°€ 허용ë˜ì§€ 않ìŒ" #, python-format msgid "Image storage media is full: %s" msgstr "ì´ë¯¸ì§€ 스토리지 미디어 ê³µê°„ì´ ê½‰ ì°¸: %s" #, python-format msgid "Image tag limit exceeded for image %(id)s: %(e)s:" msgstr "ì´ë¯¸ì§€ %(id)sì— ëŒ€í•œ ì´ë¯¸ì§€ 태그 한계 초과: %(e)s:" #, python-format msgid "Image upload problem: %s" msgstr "ì´ë¯¸ì§€ 업로드 문제: %s" #, python-format msgid "Image with identifier %s already exists!" msgstr "IDê°€ %sì¸ ì´ë¯¸ì§€ê°€ ì´ë¯¸ 존재합니다!" #, python-format msgid "Image with identifier %s has been deleted." msgstr "IDê°€ %sì¸ ì´ë¯¸ì§€ê°€ ì‚­ì œë˜ì—ˆìŠµë‹ˆë‹¤." 
#, python-format msgid "Image with identifier %s not found" msgstr "IDê°€ %sì¸ ì´ë¯¸ì§€ë¥¼ ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "Image with the given id %(image_id)s was not found" msgstr "ì§€ì •ëœ ID %(image_id)sì„(를) 가진 ì´ë¯¸ì§€ë¥¼ ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "" "ì¸ì¦ ì „ëžµì´ ì˜¬ë°”ë¥´ì§€ 않ìŒ. 예ìƒ: \"%(expected)s\", 수신: \"%(received)s\"" #, python-format msgid "Incorrect request: %s" msgstr "올바르지 ì•Šì€ ìš”ì²­: %s" #, python-format msgid "Input does not contain '%(key)s' field" msgstr "ìž…ë ¥ì— '%(key)s' 필드가 í¬í•¨ë˜ì–´ 있지 않ìŒ" #, python-format msgid "Insufficient permissions on image storage media: %s" msgstr "ì´ë¯¸ì§€ 스토리지 미디어 권한 부족 : %s" #, python-format msgid "Invalid JSON pointer for this resource: '/%s'" msgstr "ì´ ìžì›ì— 대해 올바르지 ì•Šì€ JSON í¬ì¸í„°: '/%s'" #, python-format msgid "Invalid checksum '%s': can't exceed 32 characters" msgstr "올바르지 ì•Šì€ ì²´í¬ì„¬ '%s': 32ìžë¥¼ 초과할 수 ì—†ìŒ" msgid "Invalid configuration in glance-swift conf file." msgstr "glance-swift 구성 파ì¼ì˜ êµ¬ì„±ì´ ì˜¬ë°”ë¥´ì§€ 않습니다." msgid "Invalid configuration in property protection file." msgstr "특성 보호 파ì¼ì˜ 올바르지 ì•Šì€ êµ¬ì„±ìž…ë‹ˆë‹¤." #, python-format msgid "Invalid container format '%s' for image." msgstr "ì´ë¯¸ì§€ì— 대한 컨테ì´ë„ˆ í˜•ì‹ '%s'ì´(ê°€) 올바르지 않습니다." #, python-format msgid "Invalid content type %(content_type)s" msgstr "올바르지 ì•Šì€ ì»¨í…츠 유형 %(content_type)s" #, python-format msgid "Invalid disk format '%s' for image." msgstr "ì´ë¯¸ì§€ì— 대한 ë””ìŠ¤í¬ í˜•ì‹ '%s'ì´(ê°€) 올바르지 않습니다." #, python-format msgid "Invalid filter value %s. The quote is not closed." msgstr "올바르지 ì•Šì€ í•„í„° ê°’ %s입니다. 따옴표를 ë‹«ì§€ 않았습니다." #, python-format msgid "" "Invalid filter value %s. There is no comma after closing quotation mark." msgstr "올바르지 ì•Šì€ í•„í„° ê°’ %s입니다. 닫기 따옴표 ì „ì— ì‰¼í‘œê°€ 없습니다." #, python-format msgid "" "Invalid filter value %s. There is no comma before opening quotation mark." 
msgstr "올바르지 ì•Šì€ í•„í„° ê°’ %s입니다. 열기 따옴표 ì „ì— ì‰¼í‘œê°€ 없습니다." msgid "Invalid image id format" msgstr "올바르지 ì•Šì€ ì´ë¯¸ì§€ ID 형ì‹" msgid "Invalid location" msgstr "ìž˜ëª»ëœ ìœ„ì¹˜" #, python-format msgid "Invalid location %s" msgstr "올바르지 ì•Šì€ ìœ„ì¹˜ %s" #, python-format msgid "Invalid location: %s" msgstr "올바르지 ì•Šì€ ìœ„ì¹˜: %s" #, python-format msgid "" "Invalid location_strategy option: %(name)s. The valid strategy option(s) " "is(are): %(strategies)s" msgstr "" "올바르지 ì•Šì€ location_strategy 옵션: %(name)s. 올바른 ì „ëžµ 옵션 : " "%(strategies)s" msgid "Invalid locations" msgstr "ìž˜ëª»ëœ ìœ„ì¹˜ë“¤" #, python-format msgid "Invalid locations: %s" msgstr "올바르지 ì•Šì€ ìœ„ì¹˜: %s" msgid "Invalid marker format" msgstr "올바르지 ì•Šì€ ë§ˆì»¤ 형ì‹" msgid "Invalid marker. Image could not be found." msgstr "올바르지 ì•Šì€ ë§ˆì»¤ìž…ë‹ˆë‹¤. ì´ë¯¸ì§€ë¥¼ ì°¾ì„ ìˆ˜ 없습니다." #, python-format msgid "Invalid membership association: %s" msgstr "올바르지 ì•Šì€ ë©¤ë²„ì‹­ ì—°ê´€: %s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "디스í¬ì™€ 컨테ì´ë„ˆ 형ì‹ì˜ ì¡°í•©ì´ ì˜¬ë°”ë¥´ì§€ 않습니다. 디스í¬ë‚˜ 컨테ì´ë„ˆ 형ì‹ì„ " "'aki', 'ari', ë˜ëŠ” 'ami' 중 하나로 설정할 경우 컨테ì´ë„ˆì™€ 디스í¬í˜•ì‹ì´ ì¼ì¹˜í•´" "야 합니다." #, python-format msgid "" "Invalid operation: `%(op)s`. It must be one of the following: %(available)s." msgstr "올바르지 ì•Šì€ ì¡°ìž‘: `%(op)s`. ë‹¤ìŒ ì¤‘ 하나여야 합니다. %(available)s." msgid "Invalid position for adding a location." msgstr "위치를 ì¶”ê°€í•˜ê¸°ì— ì˜¬ë°”ë¥´ì§€ ì•Šì€ í¬ì§€ì…˜ìž…니다." msgid "Invalid position for removing a location." msgstr "위치를 ì œê±°í•˜ê¸°ì— ì˜¬ë°”ë¥´ì§€ ì•Šì€ í¬ì§€ì…˜ìž…니다." msgid "Invalid service catalog json." msgstr "올바르지 ì•Šì€ ì„œë¹„ìŠ¤ 카탈로그 json입니다." #, python-format msgid "Invalid sort direction: %s" msgstr "올바르지 ì•Šì€ ì •ë ¬ ë°©í–¥: %s" #, python-format msgid "" "Invalid sort key: %(sort_key)s. It must be one of the following: " "%(available)s." 
msgstr "" "올바르지 ì•Šì€ ì •ë ¬ 키: %(sort_key)s. ë‹¤ìŒ ì¤‘ 하나여야 합니다. %(available)s." #, python-format msgid "Invalid status value: %s" msgstr "올바르지 ì•Šì€ ìƒíƒœ ê°’: %s" #, python-format msgid "Invalid status: %s" msgstr "올바르지 ì•Šì€ ìƒíƒœ: %s" #, python-format msgid "Invalid time format for %s." msgstr "%sì— ì˜¬ë°”ë¥´ì§€ ì•Šì€ ì‹œê°„ 형ì‹ìž…니다." #, python-format msgid "Invalid type value: %s" msgstr "올바르지 ì•Šì€ ìœ í˜• ê°’: %s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition namespace " "with the same name of %s" msgstr "" "올바르지 ì•Šì€ ì—…ë°ì´íŠ¸ìž…ë‹ˆë‹¤. %sê³¼(와) ë™ì¼í•œ ì´ë¦„ì˜ ë©”íƒ€ë°ì´í„° ì •ì˜ ë„¤ìž„ìŠ¤íŽ˜" "ì´ìŠ¤ê°€ 중복ë©ë‹ˆë‹¤." #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "올바르지 ì•Šì€ ì—…ë°ì´íŠ¸ìž…ë‹ˆë‹¤. namespace=%(namespace_name)sì—서 name=%(name)s" "ê³¼(와) ë™ì¼í•œ ì´ë¦„ì˜ ë©”íƒ€ë°ì´í„° ì •ì˜ ì˜¤ë¸Œì íŠ¸ê°€ 중복ë©ë‹ˆë‹¤." #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "올바르지 ì•Šì€ ì—…ë°ì´íŠ¸ìž…ë‹ˆë‹¤. namespace=%(namespace_name)sì—서 name=%(name)s" "ê³¼(와) ë™ì¼í•œ ì´ë¦„ì˜ ë©”íƒ€ë°ì´í„° ì •ì˜ ì˜¤ë¸Œì íŠ¸ê°€ 중복ë©ë‹ˆë‹¤." #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition property " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "올바르지 ì•Šì€ ì—…ë°ì´íŠ¸ìž…ë‹ˆë‹¤. 네임스페ì´ìФ=%(namespace_name)sì˜ ë™ì¼í•œ ì´ë¦„=" "%(name)s(으)로 메타ë°ì´í„° ì •ì˜ íŠ¹ì„±ì´ ì¤‘ë³µë©ë‹ˆë‹¤." 
#, python-format
msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s"
msgstr "매개변수 '%(param)s'의 올바르지 않은 값 '%(value)s': %(extra_msg)s"

#, python-format
msgid "Invalid value for option %(option)s: %(value)s"
msgstr "옵션 %(option)s에 올바르지 않은 값: %(value)s"

#, python-format
msgid "Invalid visibility value: %s"
msgstr "올바르지 않은 가시성 값: %s"

msgid "It's invalid to provide multiple image sources."
msgstr "여러 개의 이미지 소스를 제공하면 안 됩니다."

msgid "It's not allowed to add locations if locations are invisible."
msgstr "위치가 표시되지 않는 경우 위치를 추가할 수 없습니다."

msgid "It's not allowed to remove locations if locations are invisible."
msgstr "위치가 표시되지 않는 경우 위치를 제거할 수 없습니다."

msgid "It's not allowed to update locations if locations are invisible."
msgstr "위치가 표시되지 않는 경우 위치를 업데이트할 수 없습니다."

msgid "List of strings related to the image"
msgstr "이미지와 관련된 문자열 목록"

msgid "Malformed JSON in request body."
msgstr "요청 본문에서 JSON의 형식이 올바르지 않습니다."

msgid "Maximal age is count of days since epoch."
msgstr "최대 기간은 epoch 이후의 일 수입니다."

#, python-format
msgid "Maximum redirects (%(redirects)s) was exceeded."
msgstr "최대 경로 재지정 수(%(redirects)s)를 초과했습니다."

#, python-format
msgid "Member %(member_id)s is duplicated for image %(image_id)s"
msgstr "멤버 %(member_id)s이(가) 이미지 %(image_id)s에 대해 중복됨"

msgid "Member can't be empty"
msgstr "멤버는 비어 있을 수 없음"

msgid "Member to be added not specified"
msgstr "추가할 멤버를 지정하지 않음"

msgid "Membership could not be found."
msgstr "멤버십을 찾을 수 없습니다."

#, python-format
msgid ""
"Metadata definition namespace %(namespace)s is protected and cannot be "
"deleted."
msgstr "메타데이터 정의 네임스페이스 %(namespace)s이(가) 보호되므로 삭제할 수 없습니다."
#, python-format
msgid "Metadata definition namespace not found for id=%s"
msgstr "id=%s에 대한 메타데이터 정의 네임스페이스를 찾을 수 없음"

#, python-format
msgid ""
"Metadata definition object %(object_name)s is protected and cannot be "
"deleted."
msgstr "메타데이터 정의 오브젝트 %(object_name)s이(가) 보호되므로 삭제할 수 없습니다."

#, python-format
msgid "Metadata definition object not found for id=%s"
msgstr "id=%s에 대한 메타데이터 정의 오브젝트를 찾을 수 없음"

#, python-format
msgid ""
"Metadata definition property %(property_name)s is protected and cannot be "
"deleted."
msgstr "메타데이터 정의 특성 %(property_name)s이(가) 보호되므로 삭제할 수 없습니다."

#, python-format
msgid "Metadata definition property not found for id=%s"
msgstr "id=%s에 대한 메타데이터 정의 특성을 찾을 수 없음"

#, python-format
msgid ""
"Metadata definition resource-type %(resource_type_name)s is a seeded-system "
"type and cannot be deleted."
msgstr "메타데이터 정의 resource-type %(resource_type_name)s은(는) 시드(seed) 시스템 유형이므로 삭제할 수 없습니다."

#, python-format
msgid ""
"Metadata definition resource-type-association %(resource_type)s is protected "
"and cannot be deleted."
msgstr "메타데이터 정의 resource-type-association %(resource_type)s이(가) 보호되므로 삭제할 수 없습니다."

#, python-format
msgid ""
"Metadata definition tag %(tag_name)s is protected and cannot be deleted."
msgstr "메타데이터 정의 태그 %(tag_name)s은(는) 보호되므로 삭제할 수 없습니다."

#, python-format
msgid "Metadata definition tag not found for id=%s"
msgstr "id=%s에 대한 메타데이터 정의 태그를 찾을 수 없음"

msgid "Minimal rows limit is 1."
msgstr "최소 행 제한은 1입니다."

#, python-format
msgid "Missing required credential: %(required)s"
msgstr "필수 신임 정보 누락: %(required)s"

#, python-format
msgid ""
"Multiple 'image' service matches for region %(region)s. This generally means "
"that a region is required and you have not supplied one."
msgstr "다중 '이미지' 서비스가 %(region)s 리젼에 일치합니다. 이는 일반적으로 리젼이 필요하지만 아직 리젼을 제공하지 않은 경우 발생합니다."

msgid "No authenticated user"
msgstr "인증된 사용자가 없음"

#, python-format
msgid "No image found with ID %s"
msgstr "ID가 %s인 이미지를 찾을 수 없음"

#, python-format
msgid "No location found with ID %(loc)s from image %(img)s"
msgstr "%(img)s 이미지에서 ID가 %(loc)s인 위치를 찾을 수 없음"

msgid "No permission to share that image"
msgstr "해당 이미지를 공유할 권한이 없음"

#, python-format
msgid "Not allowed to create members for image %s."
msgstr "이미지 %s의 멤버를 작성할 수 없습니다."

#, python-format
msgid "Not allowed to deactivate image in status '%s'"
msgstr "'%s' 상태의 이미지를 비활성화할 수 없음"

#, python-format
msgid "Not allowed to delete members for image %s."
msgstr "이미지 %s의 멤버를 삭제할 수 없습니다."

#, python-format
msgid "Not allowed to delete tags for image %s."
msgstr "이미지 %s의 태그를 삭제할 수 없습니다."

#, python-format
msgid "Not allowed to list members for image %s."
msgstr "이미지 %s의 멤버를 나열할 수 없습니다."

#, python-format
msgid "Not allowed to reactivate image in status '%s'"
msgstr "'%s' 상태의 이미지를 다시 활성화할 수 없음"

#, python-format
msgid "Not allowed to update members for image %s."
msgstr "이미지 %s의 멤버를 업데이트할 수 없습니다."

#, python-format
msgid "Not allowed to update tags for image %s."
msgstr "이미지 %s의 태그를 업데이트할 수 없습니다."
#, python-format
msgid "Not allowed to upload image data for image %(image_id)s: %(error)s"
msgstr "이미지 %(image_id)s에 대한 이미지 데이터의 업로드가 허용되지 않음: %(error)s"

msgid "Number of sort dirs does not match the number of sort keys"
msgstr "정렬 방향 수가 정렬 키 수와 일치하지 않음"

msgid "OVA extract is limited to admin"
msgstr "관리자만 OVA를 추출할 수 있음"

msgid "Old and new sorting syntax cannot be combined"
msgstr "이전 및 새 정렬 구문은 결합할 수 없음"

#, python-format
msgid "Operation \"%s\" requires a member named \"value\"."
msgstr "\"%s\" 오퍼레이션에는 \"value\"라는 이름의 멤버가 필요합니다."

msgid ""
"Operation objects must contain exactly one member named \"add\", \"remove\", "
"or \"replace\"."
msgstr "조작 오브젝트에는 \"add\", \"remove\" 또는 \"replace\" 멤버 중 정확히 하나만 포함되어야 합니다."

msgid ""
"Operation objects must contain only one member named \"add\", \"remove\", or "
"\"replace\"."
msgstr "조작 오브젝트에는 \"add\", \"remove\" 또는 \"replace\" 멤버 중 하나만 포함되어야 합니다."

msgid "Operations must be JSON objects."
msgstr "오퍼레이션은 JSON 오브젝트여야 합니다."

#, python-format
msgid "Original locations is not empty: %s"
msgstr "원본 위치가 비어있지 않음: %s"

msgid "Owner can't be updated by non admin."
msgstr "비관리자는 소유자를 업데이트할 수 없습니다."

msgid "Owner must be specified to create a tag."
msgstr "태그를 작성하려면 소유자를 지정해야 합니다."

msgid "Owner of the image"
msgstr "이미지의 소유자"

msgid "Owner of the namespace."
msgstr "네임스페이스의 소유자입니다."

msgid "Param values can't contain 4 byte unicode."
msgstr "매개변수 값에 4바이트 유니코드를 포함할 수 없습니다."

#, python-format
msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence."
msgstr "`%s` 포인터에 인식되는 이스케이프 시퀀스가 아닌 \"~\"가 포함되어 있습니다."

#, python-format
msgid "Pointer `%s` contains adjacent \"/\"."
msgstr "포인터 `%s`에 인접 \"/\"가 포함됩니다."

#, python-format
msgid "Pointer `%s` does not contains valid token."
msgstr "í¬ì¸í„° `%s`ì— ì˜¬ë°”ë¥¸ 토í°ì´ í¬í•¨ë˜ì–´ 있지 않습니다." #, python-format msgid "Pointer `%s` does not start with \"/\"." msgstr "`%s` í¬ì¸í„°ê°€ \"/\"로 시작하지 않습니다." #, python-format msgid "Pointer `%s` end with \"/\"." msgstr "í¬ì¸í„° `%s`ì´(ê°€) \"/\"로 ë납니다." #, python-format msgid "Port \"%s\" is not valid." msgstr "\"%s\" í¬íŠ¸ê°€ 올바르지 않습니다." #, python-format msgid "Process %d not running" msgstr "프로세스 %dì´(ê°€) 실행 중ì´ì§€ 않ìŒ" #, python-format msgid "Properties %s must be set prior to saving data." msgstr "ë°ì´í„°ë¥¼ 저장하기 ì „ì— %s íŠ¹ì„±ì„ ì„¤ì •í•´ì•¼ 합니다." #, python-format msgid "" "Property %(property_name)s does not start with the expected resource type " "association prefix of '%(prefix)s'." msgstr "" "특성 %(property_name)sì´(ê°€) ì˜ˆìƒ ìžì› 유형 ì—°ê´€ ì ‘ë‘ë¶€ì¸ '%(prefix)s'(으)로 " "시작하지 않습니다." #, python-format msgid "Property %s already present." msgstr "%s íŠ¹ì„±ì´ ì´ë¯¸ 존재합니다." #, python-format msgid "Property %s does not exist." msgstr "%s íŠ¹ì„±ì´ ì¡´ìž¬í•˜ì§€ 않습니다." #, python-format msgid "Property %s may not be removed." msgstr "%s íŠ¹ì„±ì„ ì œê±°í•  수 없습니다." #, python-format msgid "Property %s must be set prior to saving data." msgstr "ë°ì´í„°ë¥¼ 저장하기 ì „ì— %s íŠ¹ì„±ì„ ì„¤ì •í•´ì•¼ 합니다." #, python-format msgid "Property '%s' is protected" msgstr "'%s' íŠ¹ì„±ì´ ë³´í˜¸ë¨ " msgid "Property names can't contain 4 byte unicode." msgstr "특성 ì´ë¦„ì— 4ë°”ì´íЏ 유니코드를 í¬í•¨í•  수 없습니다." #, python-format msgid "" "Provided image size must match the stored image size. (provided size: " "%(ps)d, stored size: %(ss)d)" msgstr "" "ì œê³µëœ ì´ë¯¸ì§€ í¬ê¸°ê°€ ì €ìž¥ëœ ì´ë¯¸ì§€ í¬ê¸°ì™€ ì¼ì¹˜í•´ì•¼ 합니다(ì œê³µëœ í¬ê¸°: " "%(ps)d, ì €ìž¥ëœ í¬ê¸°: %(ss)d)." 
#, python-format
msgid "Provided object does not match schema '%(schema)s': %(reason)s"
msgstr "제공된 오브젝트가 스키마 '%(schema)s'에 일치하지 않음: %(reason)s"

#, python-format
msgid "Provided status of task is unsupported: %(status)s"
msgstr "제공된 태스크의 상태가 지원되지 않음: %(status)s"

#, python-format
msgid "Provided type of task is unsupported: %(type)s"
msgstr "제공된 태스크 유형이 지원되지 않음: %(type)s"

msgid "Provides a user friendly description of the namespace."
msgstr "사용자에게 익숙한 네임스페이스 설명을 제공합니다."

msgid "Received invalid HTTP redirect."
msgstr "올바르지 않은 HTTP 경로 재지정을 수신했습니다."

#, python-format
msgid "Redirecting to %(uri)s for authorization."
msgstr "권한 부여를 위해 %(uri)s(으)로 경로 재지정 중입니다."

#, python-format
msgid "Registry service can't use %s"
msgstr "레지스트리 서비스에서 %s을(를) 사용할 수 없음"

#, python-format
msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
msgstr "API 서버에서 레지스트리가 올바르게 구성되지 않았습니다. 이유: %(reason)s"

#, python-format
msgid "Reload of %(serv)s not supported"
msgstr "%(serv)s을(를) 다시 로드할 수 없음"

#, python-format
msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)"
msgstr "신호(%(sig)s)와 함께 %(serv)s(pid %(pid)s) 다시 로드 중"

#, python-format
msgid "Removing stale pid file %s"
msgstr "시간이 경과된 pid 파일 %s을(를) 제거하는 중"

msgid "Request body must be a JSON array of operation objects."
msgstr "요청 본문은 오퍼레이션 오브젝트의 JSON 배열이어야 합니다."

msgid "Request must be a list of commands"
msgstr "요청은 명령 목록이어야 합니다"

#, python-format
msgid "Required store %s is invalid"
msgstr "필수 저장소 %s이(가) 올바르지 않음"

msgid ""
"Resource type names should be aligned with Heat resource types whenever "
"possible: http://docs.openstack.org/developer/heat/template_guide/openstack."
"html"
msgstr "자원 유형 이름은 가능하면 Heat 자원 유형에 맞게 지정되어야 합니다: http://docs.openstack.org/developer/heat/template_guide/openstack.html"

msgid "Response from Keystone does not contain a Glance endpoint."
msgstr "Keystone의 응답에 Glance 엔드포인트가 들어있지 않습니다."

msgid "Scope of image accessibility"
msgstr "이미지 접근성의 범위"

msgid "Scope of namespace accessibility."
msgstr "네임스페이스 접근성의 범위입니다."

#, python-format
msgid "Server %(serv)s is stopped"
msgstr "서버 %(serv)s이(가) 중지됨"

#, python-format
msgid "Server worker creation failed: %(reason)s."
msgstr "서버 작업자 작성에 실패함: %(reason)s."

msgid "Signature verification failed"
msgstr "서명 검증 실패"

msgid "Size of image file in bytes"
msgstr "이미지 파일의 크기(바이트)"

msgid ""
"Some resource types allow more than one key / value pair per instance. For "
"example, Cinder allows user and image metadata on volumes. Only the image "
"properties metadata is evaluated by Nova (scheduling or drivers). This "
"property allows a namespace target to remove the ambiguity."
msgstr "일부 자원 유형은 인스턴스당 둘 이상의 키/값 쌍을 허용합니다. 예를 들어, Cinder는 볼륨에 사용자 및 이미지 메타데이터를 허용합니다. 이미지 특성 메타데이터만 Nova(스케줄링 또는 드라이버)에 의해 평가됩니다. 이 특성은 모호성을 제거하기 위해 네임스페이스 대상을 허용합니다."

msgid "Sort direction supplied was not valid."
msgstr "제공된 정렬 방향이 올바르지 않습니다."

msgid "Sort key supplied was not valid."
msgstr "제공된 정렬 키가 올바르지 않습니다."

msgid ""
"Specifies the prefix to use for the given resource type. Any properties in "
"the namespace should be prefixed with this prefix when being applied to the "
"specified resource type. Must include prefix separator (e.g. a colon :)."
msgstr "제공된 자원 유형에 사용할 접두부를 지정합니다. 지정된 자원 유형에 적용되는 경우 네임스페이스의 모든 특성은 이 접두부로 시작해야 합니다. 접두부 구분 기호(예: 콜론 :)를 포함해야 합니다."
msgid "Status must be \"pending\", \"accepted\" or \"rejected\"." msgstr "ìƒíƒœëŠ” \"보류 중\", \"수ë½ë¨\" ë˜ëŠ” \"ê±°ë¶€ë¨\"ì´ì–´ì•¼ 합니다." msgid "Status not specified" msgstr "ìƒíƒœë¥¼ 지정하지 않ìŒ" msgid "Status of the image" msgstr "ì´ë¯¸ì§€ì˜ ìƒíƒœ" #, python-format msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "%(cur_status)sì—서 %(new_status)s(으)ë¡œì˜ ìƒíƒœ ì „ì´ê°€ 허용ë˜ì§€ 않ìŒ" #, python-format msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "신호(%(sig)s)와 함께 %(serv)s(pid %(pid)s) 중지 중" #, python-format msgid "Store for image_id not found: %s" msgstr "image_idì— ëŒ€í•œ 저장소를 ì°¾ì„ ìˆ˜ ì—†ìŒ: %s" #, python-format msgid "Store for scheme %s not found" msgstr "%s ìŠ¤í‚¤ë§ˆì— ëŒ€í•œ 저장소를 ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "" "Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image status to 'killed'." msgstr "" "ì œê³µëœ %(attr)s (%(supplied)s) ë° %(attr)s (ì—…ë¡œë“œëœ ì´ë¯¸ì§€ %(actual)s(으)로" "부터 ìƒì„±ë¨)ì´(ê°€) ì¼ì¹˜í•˜ì§€ 않ìŒ. ì´ë¯¸ì§€ ìƒíƒœë¥¼ 'ê°•ì œ 종료ë¨'으로 설정." msgid "Supported values for the 'container_format' image attribute" msgstr "'container_format' ì´ë¯¸ì§€ ì†ì„±ì— 대해 ì§€ì›ë˜ëŠ” ê°’" msgid "Supported values for the 'disk_format' image attribute" msgstr "'disk_format' ì´ë¯¸ì§€ ì†ì„±ì— 대해 ì§€ì›ë˜ëŠ” ê°’" #, python-format msgid "Suppressed respawn as %(serv)s was %(rsn)s." msgstr "%(serv)sì´(ê°€) %(rsn)sì´ë¯€ë¡œ 재파ìƒì´ 억제ë˜ì—ˆìŠµë‹ˆë‹¤." msgid "System SIGHUP signal received." msgstr "시스템 SIGHUP 신호를 수신했습니다." 
#, python-format
msgid "Task '%s' is required"
msgstr "태스크 '%s'이(가) 필요함"

msgid "Task does not exist"
msgstr "태스크가 존재하지 않음"

msgid "Task failed due to Internal Error"
msgstr "내부 오류로 인해 태스크 실패"

msgid "Task was not configured properly"
msgstr "태스크가 제대로 구성되지 않음"

#, python-format
msgid "Task with the given id %(task_id)s was not found"
msgstr "지정된 ID가 %(task_id)s인 태스크를 찾을 수 없음"

msgid "The \"changes-since\" filter is no longer available on v2."
msgstr "\"changes-since\" 필터는 v2에서 더 이상 사용할 수 없습니다."

#, python-format
msgid "The CA file you specified %s does not exist"
msgstr "사용자가 지정한 CA 파일 %s이(가) 존재하지 않음"

#, python-format
msgid ""
"The Image %(image_id)s object being created by this task %(task_id)s, is no "
"longer in valid status for further processing."
msgstr "이 태스크 %(task_id)s에서 작성 중인 이미지 %(image_id)s 오브젝트는 더 이상 향후 처리에 사용할 수 있는 올바른 상태가 아닙니다."

msgid "The Store URI was malformed."
msgstr "저장소 URI의 형식이 올바르지 않습니다."

msgid ""
"The URL to the keystone service. If \"use_user_token\" is not in effect and "
"using keystone auth, then URL of keystone can be specified."
msgstr "키스톤 서비스에 대한 URL입니다. \"use_user_token\"이(가) 적용되지 않고 키스톤 인증을 사용하는 경우 키스톤의 URL을 지정할 수 있습니다."

msgid ""
"The administrators password. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."
msgstr "관리자 비밀번호입니다. \"use_user_token\"이(가) 적용되지 않는 경우 관리 신임 정보를 지정할 수 있습니다."

msgid ""
"The administrators user name. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."
msgstr "관리 사용자 이름입니다. \"use_user_token\"이(가) 적용되지 않는 경우 관리 신임 정보를 지정할 수 있습니다."
#, python-format
msgid "The cert file you specified %s does not exist"
msgstr "사용자가 지정한 인증 파일 %s이(가) 존재하지 않음"

msgid "The current status of this task"
msgstr "이 태스크의 현재 상태"

#, python-format
msgid ""
"The device housing the image cache directory %(image_cache_dir)s does not "
"support xattr. It is likely you need to edit your fstab and add the "
"user_xattr option to the appropriate line for the device housing the cache "
"directory."
msgstr "이미지 캐시 디렉토리 %(image_cache_dir)s이(가) 있는 디바이스가 xattr을 지원하지 않습니다. fstab을 편집하여 캐시 디렉토리가 있는 디바이스에 해당하는 행에 user_xattr 옵션을 추가해야 할 수 있습니다."

#, python-format
msgid ""
"The given uri is not valid. Please specify a valid uri from the following "
"list of supported uri %(supported)s"
msgstr "제공된 uri가 올바르지 않습니다. 다음 지원 uri 목록에서 올바른 uri를 지정하십시오. %(supported)s"

#, python-format
msgid "The incoming image is too large: %s"
msgstr "수신 이미지가 너무 큼: %s"

#, python-format
msgid "The key file you specified %s does not exist"
msgstr "사용자가 지정한 키 파일 %s이(가) 존재하지 않음"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image locations. "
"Attempted: %(attempted)s, Maximum: %(maximum)s"
msgstr "허용된 이미지 위치 수의 한계가 초과되었습니다. 시도함: %(attempted)s, 최대: %(maximum)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image members for this "
"image. Attempted: %(attempted)s, Maximum: %(maximum)s"
msgstr "이 이미지에 대해 허용된 이미지 멤버 수의 한계가 초과되었습니다. 시도함: %(attempted)s, 최대: %(maximum)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image properties. "
"Attempted: %(attempted)s, Maximum: %(maximum)s"
msgstr "허용된 이미지 특성 수의 한계가 초과되었습니다. 시도함: %(attempted)s, 최대: %(maximum)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image properties. "
"Attempted: %(num)s, Maximum: %(quota)s"
msgstr "허용된 이미지 특성 수의 한계가 초과되었습니다. 시도함: %(num)s, 최대: %(quota)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image tags. Attempted: "
"%(attempted)s, Maximum: %(maximum)s"
msgstr "허용된 이미지 태그 수의 한계가 초과되었습니다. 시도함: %(attempted)s, 최대: %(maximum)s"

#, python-format
msgid "The location %(location)s already exists"
msgstr "위치 %(location)s이(가) 이미 있음"

#, python-format
msgid "The location data has an invalid ID: %d"
msgstr "위치 데이터의 ID가 올바르지 않음: %d"

#, python-format
msgid ""
"The metadata definition %(record_type)s with name=%(record_name)s not "
"deleted. Other records still refer to it."
msgstr "name=%(record_name)s인 메타데이터 정의 %(record_type)s이(가) 삭제되지 않았습니다. 다른 레코드가 여전히 이를 참조합니다."

#, python-format
msgid "The metadata definition namespace=%(namespace_name)s already exists."
msgstr "메타데이터 정의 namespace=%(namespace_name)s이(가) 이미 존재합니다."

#, python-format
msgid ""
"The metadata definition object with name=%(object_name)s was not found in "
"namespace=%(namespace_name)s."
msgstr "name=%(object_name)s인 메타데이터 정의 오브젝트를 namespace=%(namespace_name)s에서 찾을 수 없습니다."

#, python-format
msgid ""
"The metadata definition property with name=%(property_name)s was not found "
"in namespace=%(namespace_name)s."
msgstr "name=%(property_name)s인 메타데이터 정의 특성을 namespace=%(namespace_name)s에서 찾을 수 없습니다."

#, python-format
msgid ""
"The metadata definition resource-type association of resource-type="
"%(resource_type_name)s to namespace=%(namespace_name)s already exists."
msgstr "resource-type=%(resource_type_name)s의 메타데이터 정의 자원 유형 연관이 namespace=%(namespace_name)s에 이미 존재합니다."
#, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." msgstr "" "resource-type=%(resource_type_name)sì˜ ë©”íƒ€ë°ì´í„° ì •ì˜ ìžì› 유형 ì—°ê´€ì´ " "namespace=%(namespace_name)sì—서 ì°¾ì„ ìˆ˜ 없습니다." #, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "" "name=%(resource_type_name)sì¸ ë©”íƒ€ë°ì´í„° ì •ì˜ ìžì› ìœ í˜•ì„ ì°¾ì„ ìˆ˜ 없습니다." #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "name=%(name)sì¸ ë©”íƒ€ë°ì´í„° ì •ì˜ íƒœê·¸ë¥¼ namespace=%(namespace_name)sì—서 ì°¾ì„ " "수 없습니다." msgid "The parameters required by task, JSON blob" msgstr "태스í¬ì—서 필요로 하는 매개변수, JSON blob" msgid "The provided image is too large." msgstr "ì œê³µëœ ì´ë¯¸ì§€ê°€ 너무 í½ë‹ˆë‹¤." msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." msgstr "" "ì¸ì¦ ì„œë¹„ìŠ¤ì— ëŒ€í•œ 리젼입니다. If \"use_user_token\"ì´(ê°€) ì ìš©ë˜ì§€ 않는 ê²½" "ìš° 키스톤 ê¶Œí•œì„ ì‚¬ìš©í•œ ë‹¤ìŒ ë¦¬ì ¼ ì´ë¦„ì„ ì§€ì •í•  수 있습니다." msgid "The request returned 500 Internal Server Error." msgstr "요청 시 500 ë‚´ë¶€ 서버 오류가 리턴ë˜ì—ˆìŠµë‹ˆë‹¤." msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." msgstr "" "요청ì—서 '503 서비스 사용 불가능'ì„ ë¦¬í„´í–ˆìŠµë‹ˆë‹¤. ì´ëŠ” ì¼ë°˜ì ìœ¼ë¡œ 서비스 과부" "하나 기타 ì¼ì‹œì  ì •ì „ì¼ ê²½ìš° ë°œìƒí•©ë‹ˆë‹¤." #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "ìš”ì²­ì´ 302 다중 ì„ íƒì‚¬í•­ì„ 리턴했습니다. 
ì´ëŠ” ì¼ë°˜ì ìœ¼ë¡œ 요청 URIì— ë²„ì „ 표시" "기를 í¬í•¨í•˜ì§€ 않았ìŒì„ ì˜ë¯¸í•©ë‹ˆë‹¤.\n" "\n" "ë¦¬í„´ëœ ì‘ë‹µì˜ ë³¸ë¬¸:\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "요청ì—서 '413 요청 엔티티가 너무 í¼'ì„ ë¦¬í„´í–ˆìŠµë‹ˆë‹¤. ì´ëŠ” ì¼ë°˜ì ìœ¼ë¡œ 등급 한" "ë„나 할당량 ìž„ê³„ê°’ì„ ìœ„ë°˜í–ˆìŒì„ ì˜ë¯¸í•©ë‹ˆë‹¤.\n" "\n" "ì‘답 본문:\n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "ìš”ì²­ì´ ì˜ˆìƒì¹˜ ì•Šì€ ìƒíƒœë¥¼ 리턴함: %(status)s.\n" "\n" "ì‘답 본문:\n" "%(body)s" msgid "" "The requested image has been deactivated. Image data download is forbidden." msgstr "" "ìš”ì²­ëœ ì´ë¯¸ì§€ê°€ 비활성화ë˜ì—ˆìŠµë‹ˆë‹¤. ì´ë¯¸ì§€ ë°ì´í„° 다운로드가 금지ë©ë‹ˆë‹¤." msgid "The result of current task, JSON blob" msgstr "현재 태스í¬ì˜ ê²°ê³¼, JSON blob" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." msgstr "" "ë°ì´í„° í¬ê¸° %(image_size)sì´(ê°€) ë‚¨ì€ í•œë„ ë°”ì´íЏ %(remaining)sì„(를) 초과합" "니다." 
#, python-format
msgid "The specified member %s could not be found"
msgstr "지정된 멤버 %s을(를) 찾을 수 없음"

#, python-format
msgid "The specified metadata object %s could not be found"
msgstr "지정된 메타데이터 오브젝트 %s을(를) 찾을 수 없음"

#, python-format
msgid "The specified metadata tag %s could not be found"
msgstr "지정된 메타데이터 태그 %s을(를) 찾을 수 없음"

#, python-format
msgid "The specified namespace %s could not be found"
msgstr "지정된 네임스페이스 %s을(를) 찾을 수 없음"

#, python-format
msgid "The specified property %s could not be found"
msgstr "지정된 특성 %s을(를) 찾을 수 없음"

#, python-format
msgid "The specified resource type %s could not be found "
msgstr "지정된 자원 유형 %s을(를) 찾을 수 없음"

msgid ""
"The status of deleted image location can only be set to 'pending_delete' or "
"'deleted'"
msgstr "삭제된 이미지 위치의 상태는 'pending_delete' 또는 'deleted'로만 설정할 수 있음"

msgid ""
"The status of deleted image location can only be set to 'pending_delete' or "
"'deleted'."
msgstr "삭제된 이미지 위치의 상태는 'pending_delete' 또는 'deleted'로만 설정할 수 있습니다."

msgid "The status of this image member"
msgstr "이 이미지 멤버의 상태"

msgid ""
"The strategy to use for authentication. If \"use_user_token\" is not in "
"effect, then auth strategy can be specified."
msgstr "인증에 사용할 전략입니다. \"use_user_token\"이(가) 적용되지 않는 경우 인증 전략을 지정할 수 있습니다."

#, python-format
msgid ""
"The target member %(member_id)s is already associated with image "
"%(image_id)s."
msgstr "대상 멤버 %(member_id)s이(가) 이미 이미지 %(image_id)s과(와) 연관되어 있습니다."

msgid ""
"The tenant name of the administrative user. If \"use_user_token\" is not in "
"effect, then admin tenant name can be specified."
msgstr "관리 사용자의 테넌트 이름입니다. \"use_user_token\"이(가) 적용되지 않는 경우 관리 테넌트 이름을 지정할 수 있습니다."
msgid "The type of task represented by this content" msgstr "ì´ ì»¨í…츠ì—서 나타내는 태스í¬ì˜ 유형" msgid "The unique namespace text." msgstr "고유 네임스페ì´ìФ í…스트입니다." msgid "The user friendly name for the namespace. Used by UI if available." msgstr "" "사용ìžì—게 ìµìˆ™í•œ 네임스페ì´ìŠ¤ì˜ ì´ë¦„입니다. 가능한 경우 UIì—서 사용ë©ë‹ˆë‹¤." #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "%(error_key_name)s %(error_filename)sì— ë¬¸ì œì ì´ 있습니다. 문제ì ì„ 확ì¸í•˜ì‹­" "시오. 오류: %(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "%(error_key_name)s %(error_filename)sì— ë¬¸ì œì ì´ 있습니다. 문제ì ì„ 확ì¸í•˜ì‹­" "시오. OpenSSL 오류: %(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "키 ìŒì— 문제ì ì´ 있습니다. ì¸ì¦ %(cert_file)s ë° í‚¤ %(key_file)sì´(ê°€) 함께 " "있는지 확ì¸í•˜ì‹­ì‹œì˜¤. OpenSSL 오류 %(ce)s" msgid "There was an error configuring the client." msgstr "í´ë¼ì´ì–¸íЏ 구성 오류가 있었습니다." msgid "There was an error connecting to a server" msgstr "서버 ì—°ê²° 오류가 있었습니다." msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "해당 ë™ìž‘ì€ í˜„ìž¬ Glance ìž‘ì—…ì— ëŒ€í•´ì„œëŠ” 허용ë˜ì§€ 않습니다. ì´ë“¤ì€ expires_at " "íŠ¹ì„±ì— ê¸°ë°˜í•œ ì‹œê°„ì— ë„달하면 ìžë™ìœ¼ë¡œ ì‚­ì œë©ë‹ˆë‹¤." msgid "This operation is currently not permitted on Glance images details." msgstr "해당 ë™ìž‘ì€ í˜„ìž¬ Glance ì´ë¯¸ì§€ ì„¸ë¶€ì‚¬í•­ì— ëŒ€í•´ì„œëŠ” 허용ë˜ì§€ 않습니다." msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "ì´í›„ì— íƒœìŠ¤í¬ê°€ í™œì„±ì´ ë˜ëŠ” 시간(시), 성공 ë˜ëŠ” 실패" msgid "Too few arguments." msgstr "ì¸ìˆ˜ê°€ 너무 ì ìŠµë‹ˆë‹¤." 
msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "URI는 ìŠ¤í‚´ì˜ ë‘˜ ì´ìƒì˜ ë°œìƒì„ í¬í•¨í•  수 없습니다. 다ìŒê³¼ 유사한 URI를 지정한 " "경우 swift://user:pass@http://authurl.com/v1/container/obj, 다ìŒê³¼ ê°™ì´ swift" "+http:// ìŠ¤í‚´ì„ ì‚¬ìš©í•˜ë„ë¡ ë³€ê²½í•´ì•¼ 합니다. swift+http://user:pass@authurl." "com/v1/container/obj" msgid "URL to access the image file kept in external store" msgstr "외부 ì €ìž¥ì†Œì— ë³´ê´€ëœ ì´ë¯¸ì§€ 파ì¼ì— 액세스하기 위한 URL" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "pid íŒŒì¼ %(pid)sì„(를) 작성할 수 없습니다. 비루트로 실행 중ì¸ì§€ 확ì¸í•˜ì‹­ì‹œ" "오.\n" "임시 파ì¼ë¡œ ëŒì•„ê°€ 다ìŒì„ 사용하여 %(service)s 서비스를 중지할 수 있습니다.\n" " %(file)s %(server)s stop --pid-file %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "알 수 없는 ì—°ì‚°ìž '%s'(으)로 í•„í„°ë§í•  수 없습니다." msgid "Unable to filter on a range with a non-numeric value." msgstr "숫ìžê°€ 아닌 ê°’ì„ ì‚¬ìš©í•˜ì—¬ 범위ì—서 í•„í„°ë§í•  수 없습니다." msgid "Unable to filter on a unknown operator." msgstr "알 수 없는 ì—°ì‚°ìžë¥¼ í•„í„°ë§í•  수 없습니다." msgid "Unable to filter using the specified operator." msgstr "ì§€ì •ëœ ì—°ì‚°ìžë¥¼ 사용하여 í•„í„°ë§í•  수 없습니다." msgid "Unable to filter using the specified range." msgstr "ì§€ì •ëœ ë²”ìœ„ë¥¼ 사용하여 í•„í„°ë§í•  수 없습니다." #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "JSON 스키마 변경ì—서 '%s'ì„(를) ì°¾ì„ ìˆ˜ ì—†ìŒ" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "" "JSON 스키마 변경ì—서 `op`를 ì°¾ì„ ìˆ˜ 없습니다. ë‹¤ìŒ ì¤‘ 하나여야 합니다. " "%(available)s." msgid "Unable to increase file descriptor limit. 
Running as non-root?" msgstr "" "íŒŒì¼ ë””ìŠ¤í¬ë¦½í„° 한계를 늘릴 수 없습니다. 비루트로 실행 중ì¸ì§€ 확ì¸í•˜ì‹­ì‹œì˜¤." #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "구성 íŒŒì¼ %(conf_file)sì—서 %(app_name)sì„(를) 로드할 수 없습니다.\n" "오류 ë°œìƒ: %(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "스키마를 로드할 수 ì—†ìŒ: %(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "%sì— ëŒ€í•œ 붙여넣기 구성 파ì¼ì„ ì°¾ì„ ìˆ˜ 없습니다." #, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "" "ì´ë¯¸ì§€ %(image_id)sì— ëŒ€í•œ 중복 ì´ë¯¸ì§€ ë°ì´í„°ë¥¼ 업로드할 수 ì—†ìŒ: %(error)s" msgid "Unauthorized image access" msgstr "권한 없는 ì´ë¯¸ì§€ 액세스" msgid "Unexpected body type. Expected list/dict." msgstr "ì˜ˆê¸°ì¹˜ì•Šì€ ë³¸ë¬¸ 타입. list/dict를 예ìƒí•©ë‹ˆë‹¤." #, python-format msgid "Unexpected response: %s" msgstr "예ìƒì¹˜ ì•Šì€ ì‘답: %s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "알 수 없는 auth ì „ëžµ '%s'" #, python-format msgid "Unknown command: %s" msgstr "알 수 없는 명령: %s" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "알 수 없는 ì •ë ¬ 방향입니다. 'desc' ë˜ëŠ” 'asc'여야 함" msgid "Unrecognized JSON Schema draft version" msgstr "ì¸ì‹ë˜ì§€ 않는 JSON 스키마 드래프트 버전" msgid "Unrecognized changes-since value" msgstr "ì¸ì‹ë˜ì§€ 않는 changes-since ê°’" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "ì§€ì›ë˜ì§€ 않는 sort_dir. 허용 가능한 ê°’: %s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "ì§€ì›ë˜ì§€ 않는 sort_key. 
허용 가능한 ê°’: %s" msgid "Virtual size of image in bytes" msgstr "ì´ë¯¸ì§€ì˜ ê°€ìƒ í¬ê¸°(ë°”ì´íЏ)" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "pid %(pid)s(%(file)s)ì´ ì¢…ë£Œë  ë•Œê¹Œì§€ 15ì´ˆ 대기함, í¬ê¸°í•˜ëŠ” 중" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "서버를 SSL 모드ì—서 실행할 때 구성 파ì¼ì— cert_file ë° key_file 옵션 ê°’ì„ ëª¨" "ë‘ ì§€ì •í•´ì•¼ 함" msgid "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." msgstr "" "ë ˆì§€ìŠ¤íŠ¸ë¦¬ì— ëŒ€í•´ ìš”ì²­ì„ ìž‘ì„±í•  때 ì‚¬ìš©ìž í† í°ì„ 통과할지 여부입니다. í° íŒŒì¼" "ì„ ì—…ë¡œë“œí•˜ëŠ” ë™ì•ˆ í† í° ë§Œê¸°ì— ëŒ€í•œ 실패를 방지하려면 ì´ ë§¤ê°œë³€ìˆ˜ë¥¼ False로 " "설정하는 ê²ƒì´ ì¢‹ìŠµë‹ˆë‹¤. \"use_user_token\"ì´ ì ìš©ë˜ì§€ ì•Šì€ ê²½ìš° 관리 ì‹ ìž„ ì •" "보를 지정할 수 있습니다." #, python-format msgid "Wrong command structure: %s" msgstr "ìž˜ëª»ëœ ëª…ë ¹ 구조: %s" msgid "You are not authenticated." msgstr "ì¸ì¦ë˜ì§€ ì•Šì€ ì‚¬ìš©ìžìž…니다." msgid "You are not authorized to complete this action." msgstr "ì´ ì¡°ì¹˜ë¥¼ 완료할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not authorized to lookup image %s." msgstr "ì´ë¯¸ì§€ %sì„(를) 검색할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "ì´ë¯¸ì§€ %sì˜ ë©¤ë²„ë¥¼ 검색할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "'%s' ì†Œìœ ì˜ ë„¤ìž„ìŠ¤íŽ˜ì´ìŠ¤ì— íƒœê·¸ë¥¼ 작성할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to create image members for the image." msgstr "ì´ë¯¸ì§€ì— 대한 ì´ë¯¸ì§€ 멤버를 작성할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not permitted to create images owned by '%s'." 
msgstr "'%s' ì†Œìœ ì˜ ì´ë¯¸ì§€ë¥¼ 작성할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "'%s' ì†Œìœ ì˜ ë„¤ìž„ìŠ¤íŽ˜ì´ìŠ¤ë¥¼ 작성할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "'%s' ì†Œìœ ì˜ ì˜¤ë¸Œì íŠ¸ë¥¼ 작성할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "'%s' ì†Œìœ ì˜ íŠ¹ì„±ì„ ìž‘ì„±í•  ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "'%s' ì†Œìœ ì˜ resource_typeì„ ìž‘ì„±í•  ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "ë‹¤ìŒ ì†Œìœ ìžë¡œ ì´ íƒœìŠ¤í¬ë¥¼ 작성하ë„ë¡ í—ˆìš©ë˜ì§€ 않았습니다. %s" msgid "You are not permitted to deactivate this image." msgstr "ì´ ì´ë¯¸ì§€ë¥¼ 비활성화할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to delete this image." msgstr "ì´ ì´ë¯¸ì§€ë¥¼ 삭제할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to delete this meta_resource_type." msgstr "ì´ meta_resource_typeì„ ì‚­ì œí•  ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to delete this namespace." msgstr "ì´ ë„¤ìž„ìŠ¤íŽ˜ì´ìŠ¤ë¥¼ 삭제할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to delete this object." msgstr "ì´ ì˜¤ë¸Œì íŠ¸ë¥¼ 삭제할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to delete this property." msgstr "ì´ íŠ¹ì„±ì„ ì‚­ì œí•  ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to delete this tag." msgstr "ì´ íƒœê·¸ë¥¼ 삭제할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "ì´ %(resource)sì—서 '%(attr)s'ì„(를) 수정하ë„ë¡ í—ˆìš©ë˜ì§€ 않았습니다." #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "ì´ ì´ë¯¸ì§€ì—서 '%s'ì„(를) 수정할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to modify locations for this image." 
msgstr "ì´ ì´ë¯¸ì§€ì˜ 위치를 수정할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to modify tags on this image." msgstr "ì´ ì´ë¯¸ì§€ì˜ 태그를 수정할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to modify this image." msgstr "ì´ ì´ë¯¸ì§€ë¥¼ 수정할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to reactivate this image." msgstr "ì´ ì´ë¯¸ì§€ë¥¼ 재활성화할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to set status on this task." msgstr "ì´ íƒœìŠ¤í¬ì—서 ìƒíƒœë¥¼ 설정하ë„ë¡ í—ˆìš©ë˜ì§€ 않았습니다." msgid "You are not permitted to update this namespace." msgstr "ì´ ë„¤ìž„ìŠ¤íŽ˜ì´ìŠ¤ë¥¼ ì—…ë°ì´íŠ¸í•  ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to update this object." msgstr "ì´ ì˜¤ë¸Œì íŠ¸ë¥¼ ì—…ë°ì´íŠ¸í•  ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to update this property." msgstr "ì´ íŠ¹ì„±ì„ ì—…ë°ì´íŠ¸í•  ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to update this tag." msgstr "ì´ íƒœê·¸ë¥¼ ì—…ë°ì´íŠ¸í•  ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." msgid "You are not permitted to upload data for this image." msgstr "ì´ ì´ë¯¸ì§€ì— 대한 ë°ì´í„°ë¥¼ 작성할 ê¶Œí•œì´ ì—†ìŠµë‹ˆë‹¤." #, python-format msgid "You cannot add image member for %s" msgstr "%sì— ëŒ€í•œ ì´ë¯¸ì§€ 멤버를 추가할 수 ì—†ìŒ" #, python-format msgid "You cannot delete image member for %s" msgstr "%sì— ëŒ€í•œ ì´ë¯¸ì§€ 멤버를 삭제할 수 ì—†ìŒ" #, python-format msgid "You cannot get image member for %s" msgstr "%sì— ëŒ€í•œ ì´ë¯¸ì§€ 멤버를 가져올 수 ì—†ìŒ" #, python-format msgid "You cannot update image member %s" msgstr "ì´ë¯¸ì§€ 멤버 %sì„(를) ì—…ë°ì´íŠ¸í•  수 ì—†ìŒ" msgid "You do not own this image" msgstr "ì´ ì´ë¯¸ì§€ë¥¼ 소유하지 않ìŒ" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "ì—°ê²°ì— SSLì„ ì‚¬ìš©í•˜ë„ë¡ ì„ íƒí•˜ê³  ì¸ì¦ì„ 제공했지만 key_file 매개변수를 제공하" "ì§€ 못했거나 GLANCE_CLIENT_KEY_FILE 환경 변수를 설정하지 못했습니다." 
msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "ì—°ê²°ì— SSLì„ ì‚¬ìš©í•˜ë„ë¡ ì„ íƒí•˜ê³  키를 제공했지만 cert_file 매개변수를 제공하" "ì§€ 못했거나 GLANCE_CLIENT_CERT_FILE 환경 변수를 설정하지 못했습니다." msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__()ê°€ 예ìƒì¹˜ 못한 키워드 ì¸ìˆ˜ '%s'ì„(를) 가져옴" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "" "ì—…ë°ì´íЏì—서 %(current)sì—서 %(next)s(으)로 ìƒíƒœ ì „ì´í•  수 (from_state=" "%(from)sì„(를) ì›í•¨)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "ì‚¬ìš©ìž ì •ì˜ íŠ¹ì„± (%(props)s)ì´(ê°€) 기본 특성과 ì¶©ëŒí•¨" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "ì´ í”Œëž«í¼ì—서 eventlet 'poll'ì´ë‚˜ 'selects' 허브를 ëª¨ë‘ ì‚¬ìš©í•  수 ì—†ìŒ" msgid "is_public must be None, True, or False" msgstr "is_publicì€ None, True ë˜ëŠ” False여야 함" msgid "limit param must be an integer" msgstr "limit 매개변수는 정수여야 함" msgid "limit param must be positive" msgstr "limit 매개변수가 양수여야 함" msgid "md5 hash of image contents." msgstr "ì´ë¯¸ì§€ 컨í…ì¸ ì˜ md5 해시입니다." #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image()ê°€ 예ìƒì¹˜ 못한 키워드 %sì„(를) 가져옴" msgid "protected must be True, or False" msgstr "protected는 True ë˜ëŠ” False여야 함" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "%(serv)sì„(를) 실행할 수 ì—†ìŒ. 
오류 ë°œìƒ: %(e)s" #, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "x-openstack-request-idê°€ 너무 ê¹€, 최대 í¬ê¸° %s" glance-16.0.0/glance/locale/en_GB/0000775000175100017510000000000013245511661016521 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/en_GB/LC_MESSAGES/0000775000175100017510000000000013245511661020306 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/en_GB/LC_MESSAGES/glance.po0000666000175100017510000056352113245511426022114 0ustar zuulzuul00000000000000# Translations template for glance. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the glance project. # # Translators: # Abigail Brady , Bastien Nocera , 2012 # Andi Chandler , 2013 # Andreas Jaeger , 2016. #zanata # Andi Chandler , 2017. #zanata # Andi Chandler , 2018. #zanata msgid "" msgstr "" "Project-Id-Version: glance VERSION\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2018-02-16 17:15+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2018-02-16 02:04+0000\n" "Last-Translator: Andi Chandler \n" "Language: en-GB\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: English (United Kingdom)\n" #, python-format msgid "\t%s" msgstr "\t%s" msgid "" "\n" "AES key for encrypting store location metadata.\n" "\n" "Provide a string value representing the AES cipher to use for\n" "encrypting Glance store metadata.\n" "\n" "NOTE: The AES key to use must be set to a random string of length\n" "16, 24 or 32 bytes.\n" "\n" "Possible values:\n" " * String value representing a valid AES key\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "AES key for encrypting store location metadata.\n" "\n" "Provide a string value representing the AES cipher to use for\n" "encrypting Glance store metadata.\n" "\n" "NOTE: The AES key to 
use must be set to a random string of length\n" "16, 24 or 32 bytes.\n" "\n" "Possible values:\n" " * String value representing a valid AES key\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Absolute path to a private key file.\n" "\n" "Provide a string value representing a valid absolute path to a\n" "private key file which is required to establish the client-server\n" "connection.\n" "\n" "Possible values:\n" " * Absolute path to the private key file\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Absolute path to a private key file.\n" "\n" "Provide a string value representing a valid absolute path to a\n" "private key file which is required to establish the client-server\n" "connection.\n" "\n" "Possible values:\n" " * Absolute path to the private key file\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Absolute path to the CA file.\n" "\n" "Provide a string value representing a valid absolute path to\n" "the Certificate Authority file to use for client authentication.\n" "\n" "A CA file typically contains necessary trusted certificates to\n" "use for the client authentication. This is essential to ensure\n" "that a secure connection is established to the server via the\n" "internet.\n" "\n" "Possible values:\n" " * Valid absolute path to the CA file\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Absolute path to the CA file.\n" "\n" "Provide a string value representing a valid absolute path to\n" "the Certificate Authority file to use for client authentication.\n" "\n" "A CA file typically contains necessary trusted certificates to\n" "use for the client authentication. 
This is essential to ensure\n" "that a secure connection is established to the server via the\n" "internet.\n" "\n" "Possible values:\n" " * Valid absolute path to the CA file\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Absolute path to the Certificate Authority file.\n" "\n" "Provide a string value representing a valid absolute path to the\n" "certificate authority file to use for establishing a secure\n" "connection to the registry server.\n" "\n" "NOTE: This option must be set if ``registry_client_protocol`` is\n" "set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE\n" "environment variable may be set to a filepath of the CA file.\n" "This option is ignored if the ``registry_client_insecure`` option\n" "is set to ``True``.\n" "\n" "Possible values:\n" " * String value representing a valid absolute path to the CA\n" " file.\n" "\n" "Related options:\n" " * registry_client_protocol\n" " * registry_client_insecure\n" "\n" msgstr "" "\n" "Absolute path to the Certificate Authority file.\n" "\n" "Provide a string value representing a valid absolute path to the\n" "certificate authority file to use for establishing a secure\n" "connection to the registry server.\n" "\n" "NOTE: This option must be set if ``registry_client_protocol`` is\n" "set to ``https``. 
Alternatively, the GLANCE_CLIENT_CA_FILE\n" "environment variable may be set to a file path of the CA file.\n" "This option is ignored if the ``registry_client_insecure`` option\n" "is set to ``True``.\n" "\n" "Possible values:\n" " * String value representing a valid absolute path to the CA\n" " file.\n" "\n" "Related options:\n" " * registry_client_protocol\n" " * registry_client_insecure\n" "\n" msgid "" "\n" "Absolute path to the certificate file.\n" "\n" "Provide a string value representing a valid absolute path to the\n" "certificate file to use for establishing a secure connection to\n" "the registry server.\n" "\n" "NOTE: This option must be set if ``registry_client_protocol`` is\n" "set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE\n" "environment variable may be set to a filepath of the certificate\n" "file.\n" "\n" "Possible values:\n" " * String value representing a valid absolute path to the\n" " certificate file.\n" "\n" "Related options:\n" " * registry_client_protocol\n" "\n" msgstr "" "\n" "Absolute path to the certificate file.\n" "\n" "Provide a string value representing a valid absolute path to the\n" "certificate file to use for establishing a secure connection to\n" "the registry server.\n" "\n" "NOTE: This option must be set if ``registry_client_protocol`` is\n" "set to ``https``. 
Alternatively, the GLANCE_CLIENT_CERT_FILE\n" "environment variable may be set to a filepath of the certificate\n" "file.\n" "\n" "Possible values:\n" " * String value representing a valid absolute path to the\n" " certificate file.\n" "\n" "Related options:\n" " * registry_client_protocol\n" "\n" msgid "" "\n" "Absolute path to the certificate file.\n" "\n" "Provide a string value representing a valid absolute path to the\n" "certificate file which is required to start the API service\n" "securely.\n" "\n" "A certificate file typically is a public key container and includes\n" "the server's public key, server name, server information and the\n" "signature which was a result of the verification process using the\n" "CA certificate. This is required for a secure connection\n" "establishment.\n" "\n" "Possible values:\n" " * Valid absolute path to the certificate file\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Absolute path to the certificate file.\n" "\n" "Provide a string value representing a valid absolute path to the\n" "certificate file which is required to start the API service\n" "securely.\n" "\n" "A certificate file typically is a public key container and includes\n" "the server's public key, server name, server information and the\n" "signature which was a result of the verification process using the\n" "CA certificate. This is required for a secure connection\n" "establishment.\n" "\n" "Possible values:\n" " * Valid absolute path to the certificate file\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Absolute path to the directory where JSON metadefs files are stored.\n" "\n" "Glance Metadata Definitions (\"metadefs\") are served from the database,\n" "but are stored in files in the JSON format. 
The files in this\n" "directory are used to initialize the metadefs in the database.\n" "Additionally, when metadefs are exported from the database, the files\n" "are written to this directory.\n" "\n" "NOTE: If you plan to export metadefs, make sure that this directory\n" "has write permissions set for the user being used to run the\n" "glance-api service.\n" "\n" "Possible values:\n" " * String value representing a valid absolute pathname\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Absolute path to the directory where JSON metadefs files are stored.\n" "\n" "Glance Metadata Definitions (\"metadefs\") are served from the database,\n" "but are stored in files in the JSON format. The files in this\n" "directory are used to initialize the metadefs in the database.\n" "Additionally, when metadefs are exported from the database, the files\n" "are written to this directory.\n" "\n" "NOTE: If you plan to export metadefs, make sure that this directory\n" "has write permissions set for the user being used to run the\n" "glance-api service.\n" "\n" "Possible values:\n" " * String value representing a valid absolute pathname\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Absolute path to the private key file.\n" "\n" "Provide a string value representing a valid absolute path to the\n" "private key file to use for establishing a secure connection to\n" "the registry server.\n" "\n" "NOTE: This option must be set if ``registry_client_protocol`` is\n" "set to ``https``. 
Alternatively, the GLANCE_CLIENT_KEY_FILE\n" "environment variable may be set to a filepath of the key file.\n" "\n" "Possible values:\n" " * String value representing a valid absolute path to the key\n" " file.\n" "\n" "Related options:\n" " * registry_client_protocol\n" "\n" msgstr "" "\n" "Absolute path to the private key file.\n" "\n" "Provide a string value representing a valid absolute path to the\n" "private key file to use for establishing a secure connection to\n" "the registry server.\n" "\n" "NOTE: This option must be set if ``registry_client_protocol`` is\n" "set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE\n" "environment variable may be set to a filepath of the key file.\n" "\n" "Possible values:\n" " * String value representing a valid absolute path to the key\n" " file.\n" "\n" "Related options:\n" " * registry_client_protocol\n" "\n" msgid "" "\n" "Absolute path to the work directory to use for asynchronous\n" "task operations.\n" "\n" "The directory set here will be used to operate over images -\n" "normally before they are imported in the destination store.\n" "\n" "NOTE: When providing a value for ``work_dir``, please make sure\n" "that enough space is provided for concurrent tasks to run\n" "efficiently without running out of space.\n" "\n" "A rough estimation can be done by multiplying the number of\n" "``max_workers`` with an average image size (e.g 500MB). The image\n" "size estimation should be done based on the average size in your\n" "deployment. Note that depending on the tasks running you may need\n" "to multiply this number by some factor depending on what the task\n" "does. For example, you may want to double the available size if\n" "image conversion is enabled. 
All this being said, remember these\n" "are just estimations and you should do them based on the worst\n" "case scenario and be prepared to act in case they were wrong.\n" "\n" "Possible values:\n" " * String value representing the absolute path to the working\n" " directory\n" "\n" "Related Options:\n" " * None\n" "\n" msgstr "" "\n" "Absolute path to the work directory to use for asynchronous\n" "task operations.\n" "\n" "The directory set here will be used to operate over images -\n" "normally before they are imported in the destination store.\n" "\n" "NOTE: When providing a value for ``work_dir``, please make sure\n" "that enough space is provided for concurrent tasks to run\n" "efficiently without running out of space.\n" "\n" "A rough estimation can be done by multiplying the number of\n" "``max_workers`` with an average image size (e.g 500MB). The image\n" "size estimation should be done based on the average size in your\n" "deployment. Note that depending on the tasks running you may need\n" "to multiply this number by some factor depending on what the task\n" "does. For example, you may want to double the available size if\n" "image conversion is enabled. All this being said, remember these\n" "are just estimations and you should do them based on the worst\n" "case scenario and be prepared to act in case they were wrong.\n" "\n" "Possible values:\n" " * String value representing the absolute path to the working\n" " directory\n" "\n" "Related Options:\n" " * None\n" "\n" msgid "" "\n" "Address the registry server is hosted on.\n" "\n" "Possible values:\n" " * A valid IP or hostname\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Address the registry server is hosted on.\n" "\n" "Possible values:\n" " * A valid IP or hostname\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Allow limited access to unauthenticated users.\n" "\n" "Assign a boolean to determine API access for unathenticated\n" "users. 
When set to False, the API cannot be accessed by\n" "unauthenticated users. When set to True, unauthenticated users can\n" "access the API with read-only privileges. This however only applies\n" "when using ContextMiddleware.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Allow limited access to unauthenticated users.\n" "\n" "Assign a boolean to determine API access for unauthenticated\n" "users. When set to False, the API cannot be accessed by\n" "unauthenticated users. When set to True, unauthenticated users can\n" "access the API with read-only privileges. This however only applies\n" "when using ContextMiddleware.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Allow users to add additional/custom properties to images.\n" "\n" "Glance defines a standard set of properties (in its schema) that\n" "appear on every image. These properties are also known as\n" "``base properties``. In addition to these properties, Glance\n" "allows users to add custom properties to images. These are known\n" "as ``additional properties``.\n" "\n" "By default, this configuration option is set to ``True`` and users\n" "are allowed to add additional properties. The number of additional\n" "properties that can be added to an image can be controlled via\n" "``image_property_quota`` configuration option.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * image_property_quota\n" "\n" msgstr "" "\n" "Allow users to add additional/custom properties to images.\n" "\n" "Glance defines a standard set of properties (in its schema) that\n" "appear on every image. These properties are also known as\n" "``base properties``. In addition to these properties, Glance\n" "allows users to add custom properties to images. 
These are known\n" "as ``additional properties``.\n" "\n" "By default, this configuration option is set to ``True`` and users\n" "are allowed to add additional properties. The number of additional\n" "properties that can be added to an image can be controlled via\n" "``image_property_quota`` configuration option.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * image_property_quota\n" "\n" msgid "" "\n" "Base directory for image cache.\n" "\n" "This is the location where image data is cached and served out of. All " "cached\n" "images are stored directly under this directory. This directory also " "contains\n" "three subdirectories, namely, ``incomplete``, ``invalid`` and ``queue``.\n" "\n" "The ``incomplete`` subdirectory is the staging area for downloading images. " "An\n" "image is first downloaded to this directory. When the image download is\n" "successful it is moved to the base directory. However, if the download " "fails,\n" "the partially downloaded image file is moved to the ``invalid`` " "subdirectory.\n" "\n" "The ``queue``subdirectory is used for queuing images for download. This is\n" "used primarily by the cache-prefetcher, which can be scheduled as a " "periodic\n" "task like cache-pruner and cache-cleaner, to cache images ahead of their " "usage.\n" "Upon receiving the request to cache an image, Glance touches a file in the\n" "``queue`` directory with the image id as the file name. The cache-" "prefetcher,\n" "when running, polls for the files in ``queue`` directory and starts\n" "downloading them in the order they were created. 
When the download is\n" "successful, the zero-sized file is deleted from the ``queue`` directory.\n" "If the download fails, the zero-sized file remains and it'll be retried the\n" "next time cache-prefetcher runs.\n" "\n" "Possible values:\n" " * A valid path\n" "\n" "Related options:\n" " * ``image_cache_sqlite_db``\n" "\n" msgstr "" "\n" "Base directory for image cache.\n" "\n" "This is the location where image data is cached and served out of. All " "cached\n" "images are stored directly under this directory. This directory also " "contains\n" "three subdirectories, namely, ``incomplete``, ``invalid`` and ``queue``.\n" "\n" "The ``incomplete`` subdirectory is the staging area for downloading images. " "An\n" "image is first downloaded to this directory. When the image download is\n" "successful it is moved to the base directory. However, if the download " "fails,\n" "the partially downloaded image file is moved to the ``invalid`` " "subdirectory.\n" "\n" "The ``queue``subdirectory is used for queuing images for download. This is\n" "used primarily by the cache-prefetcher, which can be scheduled as a " "periodic\n" "task like cache-pruner and cache-cleaner, to cache images ahead of their " "usage.\n" "Upon receiving the request to cache an image, Glance touches a file in the\n" "``queue`` directory with the image id as the file name. The cache-" "prefetcher,\n" "when running, polls for the files in ``queue`` directory and starts\n" "downloading them in the order they were created. 
When the download is\n" "successful, the zero-sized file is deleted from the ``queue`` directory.\n" "If the download fails, the zero-sized file remains and it'll be retried the\n" "next time cache-prefetcher runs.\n" "\n" "Possible values:\n" " * A valid path\n" "\n" "Related options:\n" " * ``image_cache_sqlite_db``\n" "\n" msgid "" "\n" "Default publisher_id for outgoing Glance notifications.\n" "\n" "This is the value that the notification driver will use to identify\n" "messages for events originating from the Glance service. Typically,\n" "this is the hostname of the instance that generated the message.\n" "\n" "Possible values:\n" " * Any reasonable instance identifier, for example: image.host1\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Default publisher_id for outgoing Glance notifications.\n" "\n" "This is the value that the notification driver will use to identify\n" "messages for events originating from the Glance service. Typically,\n" "this is the hostname of the instance that generated the message.\n" "\n" "Possible values:\n" " * Any reasonable instance identifier, for example: image.host1\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Deploy the v1 API Registry service.\n" "\n" "When this option is set to ``True``, the Registry service\n" "will be enabled in Glance for v1 API requests.\n" "\n" "NOTES:\n" " * Use of Registry is mandatory in v1 API, so this option must\n" " be set to ``True`` if the ``enable_v1_api`` option is enabled.\n" "\n" " * If deploying only the v2 OpenStack Images API, this option,\n" " which is enabled by default, should be disabled.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * enable_v1_api\n" "\n" msgstr "" "\n" "Deploy the v1 API Registry service.\n" "\n" "When this option is set to ``True``, the Registry service\n" "will be enabled in Glance for v1 API requests.\n" "\n" "NOTES:\n" " * Use of Registry is mandatory in v1 API, so this option must\n" " be 
set to ``True`` if the ``enable_v1_api`` option is enabled.\n" "\n" " * If deploying only the v2 OpenStack Images API, this option,\n" " which is enabled by default, should be disabled.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * enable_v1_api\n" "\n" msgid "" "\n" "Deploy the v1 OpenStack Images API.\n" "\n" "When this option is set to ``True``, Glance service will respond to\n" "requests on registered endpoints conforming to the v1 OpenStack\n" "Images API.\n" "\n" "NOTES:\n" " * If this option is enabled, then ``enable_v1_registry`` must\n" " also be set to ``True`` to enable mandatory usage of Registry\n" " service with v1 API.\n" "\n" " * If this option is disabled, then the ``enable_v1_registry``\n" " option, which is enabled by default, is also recommended\n" " to be disabled.\n" "\n" " * This option is separate from ``enable_v2_api``, both v1 and v2\n" " OpenStack Images API can be deployed independent of each\n" " other.\n" "\n" " * If deploying only the v2 Images API, this option, which is\n" " enabled by default, should be disabled.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * enable_v1_registry\n" " * enable_v2_api\n" "\n" msgstr "" "\n" "Deploy the v1 OpenStack Images API.\n" "\n" "When this option is set to ``True``, Glance service will respond to\n" "requests on registered endpoints conforming to the v1 OpenStack\n" "Images API.\n" "\n" "NOTES:\n" " * If this option is enabled, then ``enable_v1_registry`` must\n" " also be set to ``True`` to enable mandatory usage of Registry\n" " service with v1 API.\n" "\n" " * If this option is disabled, then the ``enable_v1_registry``\n" " option, which is enabled by default, is also recommended\n" " to be disabled.\n" "\n" " * This option is separate from ``enable_v2_api``, both v1 and v2\n" " OpenStack Images API can be deployed independent of each\n" " other.\n" "\n" " * If deploying only the v2 Images API, this option, which is\n" 
" enabled by default, should be disabled.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * enable_v1_registry\n" " * enable_v2_api\n" "\n" msgid "" "\n" "Deploy the v2 API Registry service.\n" "\n" "When this option is set to ``True``, the Registry service\n" "will be enabled in Glance for v2 API requests.\n" "\n" "NOTES:\n" " * Use of Registry is optional in v2 API, so this option\n" " must only be enabled if both ``enable_v2_api`` is set to\n" " ``True`` and the ``data_api`` option is set to\n" " ``glance.db.registry.api``.\n" "\n" " * If deploying only the v1 OpenStack Images API, this option,\n" " which is enabled by default, should be disabled.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * enable_v2_api\n" " * data_api\n" "\n" msgstr "" "\n" "Deploy the v2 API Registry service.\n" "\n" "When this option is set to ``True``, the Registry service\n" "will be enabled in Glance for v2 API requests.\n" "\n" "NOTES:\n" " * Use of Registry is optional in v2 API, so this option\n" " must only be enabled if both ``enable_v2_api`` is set to\n" " ``True`` and the ``data_api`` option is set to\n" " ``glance.db.registry.api``.\n" "\n" " * If deploying only the v1 OpenStack Images API, this option,\n" " which is enabled by default, should be disabled.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * enable_v2_api\n" " * data_api\n" "\n" msgid "" "\n" "Deploy the v2 OpenStack Images API.\n" "\n" "When this option is set to ``True``, Glance service will respond\n" "to requests on registered endpoints conforming to the v2 OpenStack\n" "Images API.\n" "\n" "NOTES:\n" " * If this option is disabled, then the ``enable_v2_registry``\n" " option, which is enabled by default, is also recommended\n" " to be disabled.\n" "\n" " * This option is separate from ``enable_v1_api``, both v1 and v2\n" " OpenStack Images API can be deployed independent of each\n" " other.\n" "\n" " * If 
deploying only the v1 Images API, this option, which is\n" " enabled by default, should be disabled.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * enable_v2_registry\n" " * enable_v1_api\n" "\n" msgstr "" "\n" "Deploy the v2 OpenStack Images API.\n" "\n" "When this option is set to ``True``, Glance service will respond\n" "to requests on registered endpoints conforming to the v2 OpenStack\n" "Images API.\n" "\n" "NOTES:\n" " * If this option is disabled, then the ``enable_v2_registry``\n" " option, which is enabled by default, is also recommended\n" " to be disabled.\n" "\n" " * This option is separate from ``enable_v1_api``, both v1 and v2\n" " OpenStack Images API can be deployed independent of each\n" " other.\n" "\n" " * If deploying only the v1 Images API, this option, which is\n" " enabled by default, should be disabled.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * enable_v2_registry\n" " * enable_v1_api\n" "\n" msgid "" "\n" "Deployment flavor to use in the server application pipeline.\n" "\n" "Provide a string value representing the appropriate deployment\n" "flavor used in the server application pipleline. This is typically\n" "the partial name of a pipeline in the paste configuration file with\n" "the service name removed.\n" "\n" "For example, if your paste section name in the paste configuration\n" "file is [pipeline:glance-api-keystone], set ``flavor`` to\n" "``keystone``.\n" "\n" "Possible values:\n" " * String value representing a partial pipeline name.\n" "\n" "Related Options:\n" " * config_file\n" "\n" msgstr "" "\n" "Deployment flavour to use in the server application pipeline.\n" "\n" "Provide a string value representing the appropriate deployment\n" "flavour used in the server application pipeline. 
This is typically\n" "the partial name of a pipeline in the paste configuration file with\n" "the service name removed.\n" "\n" "For example, if your paste section name in the paste configuration\n" "file is [pipeline:glance-api-keystone], set ``flavor`` to\n" "``keystone``.\n" "\n" "Possible values:\n" " * String value representing a partial pipeline name.\n" "\n" "Related Options:\n" " * config_file\n" "\n" msgid "" "\n" "Dictionary contains metadata properties to be injected in image.\n" "\n" "Possible values:\n" " * Dictionary containing key/value pairs. Key characters\n" " length should be <= 255. For example: k1:v1,k2:v2\n" "\n" "\n" msgstr "" "\n" "Dictionary containing metadata properties to be injected in the image.\n" "\n" "Possible values:\n" " * Dictionary containing key/value pairs. Key characters\n" " length should be <= 255. For example: k1:v1,k2:v2\n" "\n" "\n" msgid "" "\n" "Digest algorithm to use for digital signature.\n" "\n" "Provide a string value representing the digest algorithm to\n" "use for generating digital signatures. By default, ``sha256``\n" "is used.\n" "\n" "To get a list of the available algorithms supported by the version\n" "of OpenSSL on your platform, run the command:\n" "``openssl list-message-digest-algorithms``.\n" "Examples are 'sha1', 'sha256', and 'sha512'.\n" "\n" "NOTE: ``digest_algorithm`` is not related to Glance's image signing\n" "and verification. It is only used to sign the universally unique\n" "identifier (UUID) as a part of the certificate file and key file\n" "validation.\n" "\n" "Possible values:\n" " * An OpenSSL message digest algorithm identifier\n" "\n" "Relation options:\n" " * None\n" "\n" msgstr "" "\n" "Digest algorithm to use for digital signature.\n" "\n" "Provide a string value representing the digest algorithm to\n" "use for generating digital signatures. 
By default, ``sha256``\n" "is used.\n" "\n" "To get a list of the available algorithms supported by the version\n" "of OpenSSL on your platform, run the command:\n" "``openssl list-message-digest-algorithms``.\n" "Examples are 'sha1', 'sha256', and 'sha512'.\n" "\n" "NOTE: ``digest_algorithm`` is not related to Glance's image signing\n" "and verification. It is only used to sign the universally unique\n" "identifier (UUID) as a part of the certificate file and key file\n" "validation.\n" "\n" "Possible values:\n" " * An OpenSSL message digest algorithm identifier\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Enables the Image Import workflow introduced in Pike\n" "\n" "As '[DEFAULT]/node_staging_uri' is required for the Image\n" "Import, it's disabled per default in Pike, enabled per\n" "default in Queens and removed in Rocky. This allows Glance to\n" "operate with previous version configs upon upgrade.\n" "\n" "Setting this option to False will disable the endpoints related\n" "to Image Import Refactoring work.\n" "\n" "Related options:\n" " * [DEFAULT]/node_staging_uri" msgstr "" "\n" "Enables the Image Import workflow introduced in Pike\n" "\n" "As '[DEFAULT]/node_staging_uri' is required for the Image\n" "Import, it is disabled by default in Pike, enabled by\n" "default in Queens and removed in Rocky. This allows Glance to\n" "operate with previous version configs upon upgrade.\n" "\n" "Setting this option to False will disable the endpoints related\n" "to Image Import Refactoring work.\n" "\n" "Related options:\n" " * [DEFAULT]/node_staging_uri" msgid "" "\n" "File containing the swift account(s) configurations.\n" "\n" "Include a string value representing the path to a configuration\n" "file that has references for each of the configured Swift\n" "account(s)/backing stores. By default, no file path is specified\n" "and customized Swift referencing is diabled. 
Configuring this option\n" "is highly recommended while using Swift storage backend for image\n" "storage as it helps avoid storage of credentials in the\n" "database.\n" "\n" "Possible values:\n" " * None\n" " * String value representing a valid configuration file path\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "File containing the swift account(s) configurations.\n" "\n" "Include a string value representing the path to a configuration\n" "file that has references for each of the configured Swift\n" "account(s)/backing stores. By default, no file path is specified\n" "and customized Swift referencing is disabled. Configuring this option\n" "is highly recommended while using Swift storage backend for image\n" "storage as it helps avoid storage of credentials in the\n" "database.\n" "\n" "Possible values:\n" " * None\n" " * String value representing a valid configuration file path\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Glance registry service is deprecated for removal.\n" "\n" "More information can be found from the spec:\n" "http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/" "glance/deprecate-registry.html\n" msgstr "" "\n" "Glance registry service is deprecated for removal.\n" "\n" "More information can be found from the spec:\n" "http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/" "glance/deprecate-registry.html\n" msgid "" "\n" "Host address of the pydev server.\n" "\n" "Provide a string value representing the hostname or IP of the\n" "pydev server to use for debugging. The pydev server listens for\n" "debug connections on this address, facilitating remote debugging\n" "in Glance.\n" "\n" "Possible values:\n" " * Valid hostname\n" " * Valid IP address\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Host address of the pydev server.\n" "\n" "Provide a string value representing the hostname or IP of the\n" "pydev server to use for debugging. 
The pydev server listens for\n" "debug connections on this address, facilitating remote debugging\n" "in Glance.\n" "\n" "Possible values:\n" " * Valid hostname\n" " * Valid IP address\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "IP address to bind the glance servers to.\n" "\n" "Provide an IP address to bind the glance server to. The default\n" "value is ``0.0.0.0``.\n" "\n" "Edit this option to enable the server to listen on one particular\n" "IP address on the network card. This facilitates selection of a\n" "particular network interface for the server.\n" "\n" "Possible values:\n" " * A valid IPv4 address\n" " * A valid IPv6 address\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "IP address to bind the Glance servers to.\n" "\n" "Provide an IP address to bind the Glance server to. The default\n" "value is ``0.0.0.0``.\n" "\n" "Edit this option to enable the server to listen on one particular\n" "IP address on the network card. This facilitates selection of a\n" "particular network interface for the server.\n" "\n" "Possible values:\n" " * A valid IPv4 address\n" " * A valid IPv6 address\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Image import plugins to be enabled for task processing.\n" "\n" "Provide list of strings reflecting to the task Objects\n" "that should be included to the Image Import flow. 
The\n" "task objects needs to be defined in the 'glance/async/\n" "flows/plugins/*' and may be implemented by OpenStack\n" "Glance project team, deployer or 3rd party.\n" "\n" "By default no plugins are enabled and to take advantage\n" "of the plugin model the list of plugins must be set\n" "explicitly in the glance-image-import.conf file.\n" "\n" "The allowed values for this option is comma separated\n" "list of object names in between ``[`` and ``]``.\n" "\n" "Possible values:\n" " * no_op (only logs debug level message that the\n" " plugin has been executed)\n" " * Any provided Task object name to be included\n" " in to the flow.\n" msgstr "" "\n" "Image import plugins to be enabled for task processing.\n" "\n" "Provide a list of strings referring to the task objects\n" "that should be included in the Image Import flow. The\n" "task objects need to be defined in the 'glance/async/\n" "flows/plugins/*' and may be implemented by the OpenStack\n" "Glance project team, a deployer or a third party.\n" "\n" "By default, no plugins are enabled; to take advantage\n" "of the plugin model, the list of plugins must be set\n" "explicitly in the glance-image-import.conf file.\n" "\n" "The allowed value for this option is a comma-separated\n" "list of object names between ``[`` and ``]``.\n" "\n" "Possible values:\n" " * no_op (only logs a debug-level message that the\n" " plugin has been executed)\n" " * Any provided Task object name to be included\n" " into the flow.\n" msgid "" "\n" "Limit the request ID length.\n" "\n" "Provide an integer value to limit the length of the request ID to\n" "the specified length. The default value is 64. 
Users can change this\n" "to any ineteger value between 0 and 16384 however keeping in mind that\n" "a larger value may flood the logs.\n" "\n" "Possible values:\n" " * Integer value between 0 and 16384\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Limit the request ID length.\n" "\n" "Provide an integer value to limit the length of the request ID to\n" "the specified length. The default value is 64. Users can change this\n" "to any integer value between 0 and 16384; however, keep in mind that\n" "a larger value may flood the logs.\n" "\n" "Possible values:\n" " * Integer value between 0 and 16384\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "List of allowed exception modules to handle RPC exceptions.\n" "\n" "Provide a comma separated list of modules whose exceptions are\n" "permitted to be recreated upon receiving exception data via an RPC\n" "call made to Glance. The default list includes\n" "``glance.common.exception``, ``builtins``, and ``exceptions``.\n" "\n" "The RPC protocol permits interaction with Glance via calls across a\n" "network or within the same system. Including a list of exception\n" "namespaces with this option enables RPC to propagate the exceptions\n" "back to the users.\n" "\n" "Possible values:\n" " * A comma separated list of valid exception modules\n" "\n" "Related options:\n" " * None\n" msgstr "" "\n" "List of allowed exception modules to handle RPC exceptions.\n" "\n" "Provide a comma separated list of modules whose exceptions are\n" "permitted to be recreated upon receiving exception data via an RPC\n" "call made to Glance. The default list includes\n" "``glance.common.exception``, ``builtins``, and ``exceptions``.\n" "\n" "The RPC protocol permits interaction with Glance via calls across a\n" "network or within the same system. 
Including a list of exception\n" "namespaces with this option enables RPC to propagate the exceptions\n" "back to the users.\n" "\n" "Possible values:\n" " * A comma separated list of valid exception modules\n" "\n" "Related options:\n" " * None\n" msgid "" "\n" "List of enabled Image Import Methods\n" "\n" "Both 'glance-direct' and 'web-download' are enabled by default.\n" "\n" "Related options:\n" " * [DEFAULT]/node_staging_uri\n" " * [DEFAULT]/enable_image_import" msgstr "" "\n" "List of enabled Image Import Methods\n" "\n" "Both 'glance-direct' and 'web-download' are enabled by default.\n" "\n" "Related options:\n" " * [DEFAULT]/node_staging_uri\n" " * [DEFAULT]/enable_image_import" msgid "" "\n" "List of notifications to be disabled.\n" "\n" "Specify a list of notifications that should not be emitted.\n" "A notification can be given either as a notification type to\n" "disable a single event notification, or as a notification group\n" "prefix to disable all event notifications within a group.\n" "\n" "Possible values:\n" " A comma-separated list of individual notification types or\n" " notification groups to be disabled. 
Currently supported groups:\n" " * image\n" " * image.member\n" " * task\n" " * metadef_namespace\n" " * metadef_object\n" " * metadef_property\n" " * metadef_resource_type\n" " * metadef_tag\n" " For a complete listing and description of each event refer to:\n" " http://docs.openstack.org/developer/glance/notifications.html\n" "\n" " The values must be specified as: .\n" " For example: image.create,task.success,metadef_tag\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "List of notifications to be disabled.\n" "\n" "Specify a list of notifications that should not be emitted.\n" "A notification can be given either as a notification type to\n" "disable a single event notification, or as a notification group\n" "prefix to disable all event notifications within a group.\n" "\n" "Possible values:\n" " A comma-separated list of individual notification types or\n" " notification groups to be disabled. Currently supported groups:\n" " * image\n" " * image.member\n" " * task\n" " * metadef_namespace\n" " * metadef_object\n" " * metadef_property\n" " * metadef_resource_type\n" " * metadef_tag\n" " For a complete listing and description of each event refer to:\n" " http://docs.openstack.org/developer/glance/notifications.html\n" "\n" " The values must be specified as: .\n" " For example: image.create,task.success,metadef_tag\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Maximum amount of image storage per tenant.\n" "\n" "This enforces an upper limit on the cumulative storage consumed by all " "images\n" "of a tenant across all stores. This is a per-tenant limit.\n" "\n" "The default unit for this configuration option is Bytes. However, storage\n" "units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,\n" "``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and\n" "TeraBytes respectively. Note that there should not be any space between the\n" "value and unit. Value ``0`` signifies no quota enforcement. 
Negative values\n" "are invalid and result in errors.\n" "\n" "Possible values:\n" " * A string that is a valid concatenation of a non-negative integer\n" " representing the storage value and an optional string literal\n" " representing storage units as mentioned above.\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Maximum amount of image storage per tenant.\n" "\n" "This enforces an upper limit on the cumulative storage consumed by all " "images\n" "of a tenant across all stores. This is a per-tenant limit.\n" "\n" "The default unit for this configuration option is Bytes. However, storage\n" "units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,\n" "``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and\n" "TeraBytes respectively. Note that there should not be any space between the\n" "value and unit. Value ``0`` signifies no quota enforcement. Negative values\n" "are invalid and result in errors.\n" "\n" "Possible values:\n" " * A string that is a valid concatenation of a non-negative integer\n" " representing the storage value and an optional string literal\n" " representing storage units as mentioned above.\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Maximum line size of message headers.\n" "\n" "Provide an integer value representing a length to limit the size of\n" "message headers. The default value is 16384.\n" "\n" "NOTE: ``max_header_line`` may need to be increased when using large\n" "tokens (typically those generated by the Keystone v3 API with big\n" "service catalogs). 
However, it is to be kept in mind that larger\n" "values for ``max_header_line`` would flood the logs.\n" "\n" "Setting ``max_header_line`` to 0 sets no limit for the line size of\n" "message headers.\n" "\n" "Possible values:\n" " * 0\n" " * Positive integer\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Maximum line size of message headers.\n" "\n" "Provide an integer value representing a length to limit the size of\n" "message headers. The default value is 16384.\n" "\n" "NOTE: ``max_header_line`` may need to be increased when using large\n" "tokens (typically those generated by the Keystone v3 API with big\n" "service catalogues). However, keep in mind that larger\n" "values for ``max_header_line`` would flood the logs.\n" "\n" "Setting ``max_header_line`` to 0 sets no limit for the line size of\n" "message headers.\n" "\n" "Possible values:\n" " * 0\n" " * Positive integer\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Maximum number of image members per image.\n" "\n" "This limits the maximum of users an image can be shared with. Any negative\n" "value is interpreted as unlimited.\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Maximum number of image members per image.\n" "\n" "This limits the maximum number of users an image can be shared with. Any negative\n" "value is interpreted as unlimited.\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Maximum number of locations allowed on an image.\n" "\n" "Any negative value is interpreted as unlimited.\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Maximum number of locations allowed on an image.\n" "\n" "Any negative value is interpreted as unlimited.\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Maximum number of properties allowed on an image.\n" "\n" "This enforces an upper limit on the number of additional properties an " "image\n" "can have. 
Any negative value is interpreted as unlimited.\n" "\n" "NOTE: This won't have any impact if additional properties are disabled. " "Please\n" "refer to ``allow_additional_image_properties``.\n" "\n" "Related options:\n" " * ``allow_additional_image_properties``\n" "\n" msgstr "" "\n" "Maximum number of properties allowed on an image.\n" "\n" "This enforces an upper limit on the number of additional properties an " "image\n" "can have. Any negative value is interpreted as unlimited.\n" "\n" "NOTE: This won't have any impact if additional properties are disabled. " "Please\n" "refer to ``allow_additional_image_properties``.\n" "\n" "Related options:\n" " * ``allow_additional_image_properties``\n" "\n" msgid "" "\n" "Maximum number of results that could be returned by a request.\n" "\n" "As described in the help text of ``limit_param_default``, some\n" "requests may return multiple results. The number of results to be\n" "returned are governed either by the ``limit`` parameter in the\n" "request or the ``limit_param_default`` configuration option.\n" "The value in either case, can't be greater than the absolute maximum\n" "defined by this configuration option. Anything greater than this\n" "value is trimmed down to the maximum value defined here.\n" "\n" "NOTE: Setting this to a very large value may slow down database\n" " queries and increase response times. Setting this to a\n" " very low value may result in poor user experience.\n" "\n" "Possible values:\n" " * Any positive integer\n" "\n" "Related options:\n" " * limit_param_default\n" "\n" msgstr "" "\n" "Maximum number of results that could be returned by a request.\n" "\n" "As described in the help text of ``limit_param_default``, some\n" "requests may return multiple results. 
The number of results to be\n" "returned is governed either by the ``limit`` parameter in the\n" "request or the ``limit_param_default`` configuration option.\n" "The value in either case cannot be greater than the absolute maximum\n" "defined by this configuration option. Anything greater than this\n" "value is trimmed down to the maximum value defined here.\n" "\n" "NOTE: Setting this to a very large value may slow down database\n" " queries and increase response times. Setting this to a\n" " very low value may result in poor user experience.\n" "\n" "Possible values:\n" " * Any positive integer\n" "\n" "Related options:\n" " * limit_param_default\n" "\n" msgid "" "\n" "Maximum number of tags allowed on an image.\n" "\n" "Any negative value is interpreted as unlimited.\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Maximum number of tags allowed on an image.\n" "\n" "Any negative value is interpreted as unlimited.\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Maximum size of image a user can upload in bytes.\n" "\n" "An image upload greater than the size mentioned here would result\n" "in an image creation failure. This configuration option defaults to\n" "1099511627776 bytes (1 TiB).\n" "\n" "NOTES:\n" " * This value should only be increased after careful\n" " consideration and must be set less than or equal to\n" " 8 EiB (9223372036854775808).\n" " * This value must be set with careful consideration of the\n" " backend storage capacity. Setting this to a very low value\n" " may result in a large number of image failures. And, setting\n" " this to a very large value may result in faster consumption\n" " of storage. 
Hence, this must be set according to the nature of\n" " images created and storage capacity available.\n" "\n" "Possible values:\n" " * Any positive number less than or equal to 9223372036854775808\n" "\n" msgstr "" "\n" "Maximum size of image a user can upload in bytes.\n" "\n" "An image upload greater than the size mentioned here would result\n" "in an image creation failure. This configuration option defaults to\n" "1,099,511,627,776 bytes (1 TiB).\n" "\n" "NOTES:\n" " * This value should only be increased after careful\n" " consideration and must be set less than or equal to\n" " 8 EiB (9,223,372,036,854,775,808).\n" " * This value must be set with careful consideration of the\n" " backend storage capacity. Setting this to a very low value\n" " may result in a large number of image failures. And, setting\n" " this to a very large value may result in faster consumption\n" " of storage. Hence, this must be set according to the nature of\n" " images created and storage capacity available.\n" "\n" "Possible values:\n" " * Any positive number less than or equal to 9,223,372,036,854,775,808\n" "\n" msgid "" "\n" "Name of the paste configuration file.\n" "\n" "Provide a string value representing the name of the paste\n" "configuration file to use for configuring piplelines for\n" "server application deployments.\n" "\n" "NOTES:\n" " * Provide the name or the path relative to the glance directory\n" " for the paste configuration file and not the absolute path.\n" " * The sample paste configuration file shipped with Glance need\n" " not be edited in most cases as it comes with ready-made\n" " pipelines for all common deployment flavors.\n" "\n" "If no value is specified for this option, the ``paste.ini`` file\n" "with the prefix of the corresponding Glance service's configuration\n" "file name will be searched for in the known configuration\n" "directories. 
(For example, if this option is missing from or has no\n" "value set in ``glance-api.conf``, the service will look for a file\n" "named ``glance-api-paste.ini``.) If the paste configuration file is\n" "not found, the service will not start.\n" "\n" "Possible values:\n" " * A string value representing the name of the paste configuration\n" " file.\n" "\n" "Related Options:\n" " * flavor\n" "\n" msgstr "" "\n" "Name of the paste configuration file.\n" "\n" "Provide a string value representing the name of the paste\n" "configuration file to use for configuring pipelines for\n" "server application deployments.\n" "\n" "NOTES:\n" " * Provide the name or the path relative to the Glance directory\n" " for the paste configuration file and not the absolute path.\n" " * The sample paste configuration file shipped with Glance need\n" " not be edited in most cases as it comes with ready-made\n" " pipelines for all common deployment flavours.\n" "\n" "If no value is specified for this option, the ``paste.ini`` file\n" "with the prefix of the corresponding Glance service's configuration\n" "file name will be searched for in the known configuration\n" "directories. (For example, if this option is missing from or has no\n" "value set in ``glance-api.conf``, the service will look for a file\n" "named ``glance-api-paste.ini``.) If the paste configuration file is\n" "not found, the service will not start.\n" "\n" "Possible values:\n" " * A string value representing the name of the paste configuration\n" " file.\n" "\n" "Related Options:\n" " * flavor\n" "\n" msgid "" "\n" "Number of Glance worker processes to start.\n" "\n" "Provide a non-negative integer value to set the number of child\n" "process workers to service requests. By default, the number of CPUs\n" "available is set as the value for ``workers`` limited to 8. For\n" "example if the processor count is 6, 6 workers will be used, if the\n" "processor count is 24 only 8 workers will be used. 
The limit will only\n" "apply to the default value, if 24 workers is configured, 24 is used.\n" "\n" "Each worker process is made to listen on the port set in the\n" "configuration file and contains a greenthread pool of size 1000.\n" "\n" "NOTE: Setting the number of workers to zero, triggers the creation\n" "of a single API process with a greenthread pool of size 1000.\n" "\n" "Possible values:\n" " * 0\n" " * Positive integer value (typically equal to the number of CPUs)\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Number of Glance worker processes to start.\n" "\n" "Provide a non-negative integer value to set the number of child\n" "process workers to service requests. By default, the number of CPUs\n" "available is set as the value for ``workers`` limited to 8. For\n" "example, if the processor count is 6, 6 workers will be used; if the\n" "processor count is 24, only 8 workers will be used. The limit only\n" "applies to the default value; if 24 workers are configured, 24 are used.\n" "\n" "Each worker process is made to listen on the port set in the\n" "configuration file and contains a greenthread pool of size 1000.\n" "\n" "NOTE: Setting the number of workers to zero triggers the creation\n" "of a single API process with a greenthread pool of size 1000.\n" "\n" "Possible values:\n" " * 0\n" " * Positive integer value (typically equal to the number of CPUs)\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Port number on which the server will listen.\n" "\n" "Provide a valid port number to bind the server's socket to. This\n" "port is then set to identify processes and forward network messages\n" "that arrive at the server. 
The default bind_port value for the API\n" "server is 9292 and for the registry server is 9191.\n" "\n" "Possible values:\n" " * A valid port number (0 to 65535)\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Port number on which the server will listen.\n" "\n" "Provide a valid port number to bind the server's socket to. This\n" "port is then set to identify processes and forward network messages\n" "that arrive at the server. The default bind_port value for the API\n" "server is 9292 and for the registry server is 9191.\n" "\n" "Possible values:\n" " * A valid port number (0 to 65535)\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Port number that the pydev server will listen on.\n" "\n" "Provide a port number to bind the pydev server to. The pydev\n" "process accepts debug connections on this port and facilitates\n" "remote debugging in Glance.\n" "\n" "Possible values:\n" " * A valid port number\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Port number that the pydev server will listen on.\n" "\n" "Provide a port number to bind the pydev server to. 
The pydev\n" "process accepts debug connections on this port and facilitates\n" "remote debugging in Glance.\n" "\n" "Possible values:\n" " * A valid port number\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Port the registry server is listening on.\n" "\n" "Possible values:\n" " * A valid port number\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Port the registry server is listening on.\n" "\n" "Possible values:\n" " * A valid port number\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Preference order of storage backends.\n" "\n" "Provide a comma separated list of store names in the order in\n" "which images should be retrieved from storage backends.\n" "These store names must be registered with the ``stores``\n" "configuration option.\n" "\n" "NOTE: The ``store_type_preference`` configuration option is applied\n" "only if ``store_type`` is chosen as a value for the\n" "``location_strategy`` configuration option. An empty list will not\n" "change the location order.\n" "\n" "Possible values:\n" " * Empty list\n" " * Comma separated list of registered store names. Legal values are:\n" " * file\n" " * http\n" " * rbd\n" " * swift\n" " * sheepdog\n" " * cinder\n" " * vmware\n" "\n" "Related options:\n" " * location_strategy\n" " * stores\n" "\n" msgstr "" "\n" "Preference order of storage backends.\n" "\n" "Provide a comma separated list of store names in the order in\n" "which images should be retrieved from storage backends.\n" "These store names must be registered with the ``stores``\n" "configuration option.\n" "\n" "NOTE: The ``store_type_preference`` configuration option is applied\n" "only if ``store_type`` is chosen as a value for the\n" "``location_strategy`` configuration option. An empty list will not\n" "change the location order.\n" "\n" "Possible values:\n" " * Empty list\n" " * Comma separated list of registered store names. 
Legal values are:\n" " * file\n" " * http\n" " * rbd\n" " * swift\n" " * sheepdog\n" " * cinder\n" " * vmware\n" "\n" "Related options:\n" " * location_strategy\n" " * stores\n" "\n" msgid "" "\n" "Protocol to use for communication with the registry server.\n" "\n" "Provide a string value representing the protocol to use for\n" "communication with the registry server. By default, this option is\n" "set to ``http`` and the connection is not secure.\n" "\n" "This option can be set to ``https`` to establish a secure connection\n" "to the registry server. In this case, provide a key to use for the\n" "SSL connection using the ``registry_client_key_file`` option. Also\n" "include the CA file and cert file using the options\n" "``registry_client_ca_file`` and ``registry_client_cert_file``\n" "respectively.\n" "\n" "Possible values:\n" " * http\n" " * https\n" "\n" "Related options:\n" " * registry_client_key_file\n" " * registry_client_cert_file\n" " * registry_client_ca_file\n" "\n" msgstr "" "\n" "Protocol to use for communication with the registry server.\n" "\n" "Provide a string value representing the protocol to use for\n" "communication with the registry server. By default, this option is\n" "set to ``http`` and the connection is not secure.\n" "\n" "This option can be set to ``https`` to establish a secure connection\n" "to the registry server. In this case, provide a key to use for the\n" "SSL connection using the ``registry_client_key_file`` option. Also\n" "include the CA file and cert file using the options\n" "``registry_client_ca_file`` and ``registry_client_cert_file``\n" "respectively.\n" "\n" "Possible values:\n" " * http\n" " * https\n" "\n" "Related options:\n" " * registry_client_key_file\n" " * registry_client_cert_file\n" " * registry_client_ca_file\n" "\n" msgid "" "\n" "Public url endpoint to use for Glance versions response.\n" "\n" "This is the public url endpoint that will appear in the Glance\n" "\"versions\" response. 
If no value is specified, the endpoint that is\n" "displayed in the version's response is that of the host running the\n" "API service. Change the endpoint to represent the proxy URL if the\n" "API service is running behind a proxy. If the service is running\n" "behind a load balancer, add the load balancer's URL for this value.\n" "\n" "Possible values:\n" " * None\n" " * Proxy URL\n" " * Load balancer URL\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Public URL endpoint to use for Glance versions response.\n" "\n" "This is the public URL endpoint that will appear in the Glance\n" "\"versions\" response. If no value is specified, the endpoint that is\n" "displayed in the version's response is that of the host running the\n" "API service. Change the endpoint to represent the proxy URL if the\n" "API service is running behind a proxy. If the service is running\n" "behind a load balancer, add the load balancer's URL for this value.\n" "\n" "Possible values:\n" " * None\n" " * Proxy URL\n" " * Load balancer URL\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Python module path of data access API.\n" "\n" "Specifies the path to the API to use for accessing the data model.\n" "This option determines how the image catalog data will be accessed.\n" "\n" "Possible values:\n" " * glance.db.sqlalchemy.api\n" " * glance.db.registry.api\n" " * glance.db.simple.api\n" "\n" "If this option is set to ``glance.db.sqlalchemy.api`` then the image\n" "catalog data is stored in and read from the database via the\n" "SQLAlchemy Core and ORM APIs.\n" "\n" "Setting this option to ``glance.db.registry.api`` will force all\n" "database access requests to be routed through the Registry service.\n" "This avoids data access from the Glance API nodes for an added layer\n" "of security, scalability and manageability.\n" "\n" "NOTE: In v2 OpenStack Images API, the registry service is optional.\n" "In order to use the Registry API in v2, the option\n" 
"``enable_v2_registry`` must be set to ``True``.\n" "\n" "Finally, when this configuration option is set to\n" "``glance.db.simple.api``, image catalog data is stored in and read\n" "from an in-memory data structure. This is primarily used for testing.\n" "\n" "Related options:\n" " * enable_v2_api\n" " * enable_v2_registry\n" "\n" msgstr "" "\n" "Python module path of data access API.\n" "\n" "Specifies the path to the API to use for accessing the data model.\n" "This option determines how the image catalogue data will be accessed.\n" "\n" "Possible values:\n" " * glance.db.sqlalchemy.api\n" " * glance.db.registry.api\n" " * glance.db.simple.api\n" "\n" "If this option is set to ``glance.db.sqlalchemy.api`` then the image\n" "catalogue data is stored in and read from the database via the\n" "SQLAlchemy Core and ORM APIs.\n" "\n" "Setting this option to ``glance.db.registry.api`` will force all\n" "database access requests to be routed through the Registry service.\n" "This avoids data access from the Glance API nodes for an added layer\n" "of security, scalability and manageability.\n" "\n" "NOTE: In v2 OpenStack Images API, the registry service is optional.\n" "In order to use the Registry API in v2, the option\n" "``enable_v2_registry`` must be set to ``True``.\n" "\n" "Finally, when this configuration option is set to\n" "``glance.db.simple.api``, image catalogue data is stored in and read\n" "from an in-memory data structure. This is primarily used for testing.\n" "\n" "Related options:\n" " * enable_v2_api\n" " * enable_v2_registry\n" "\n" msgid "" "\n" "Reference to default Swift account/backing store parameters.\n" "\n" "Provide a string value representing a reference to the default set\n" "of parameters required for using swift account/backing store for\n" "image storage. The default reference value for this configuration\n" "option is 'ref1'. 
This configuration option dereferences the\n" "parameters and facilitates image storage in Swift storage backend\n" "every time a new image is added.\n" "\n" "Possible values:\n" " * A valid string value\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Reference to default Swift account/backing store parameters.\n" "\n" "Provide a string value representing a reference to the default set\n" "of parameters required for using Swift account/backing store for\n" "image storage. The default reference value for this configuration\n" "option is 'ref1'. This configuration option dereferences the\n" "parameters and facilitates image storage in Swift storage backend\n" "every time a new image is added.\n" "\n" "Possible values:\n" " * A valid string value\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Role used to identify an authenticated user as administrator.\n" "\n" "Provide a string value representing a Keystone role to identify an\n" "administrative user. Users with this role will be granted\n" "administrative privileges. The default value for this option is\n" "'admin'.\n" "\n" "Possible values:\n" " * A string value which is a valid Keystone role\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Role used to identify an authenticated user as administrator.\n" "\n" "Provide a string value representing a Keystone role to identify an\n" "administrative user. Users with this role will be granted\n" "administrative privileges. The default value for this option is\n" "'admin'.\n" "\n" "Possible values:\n" " * A string value which is a valid Keystone role\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Rule format for property protection.\n" "\n" "Provide the desired way to set property protection on Glance\n" "image properties. The two permissible values are ``roles``\n" "and ``policies``. 
The default value is ``roles``.\n" "\n" "If the value is ``roles``, the property protection file must\n" "contain a comma separated list of user roles indicating\n" "permissions for each of the CRUD operations on each property\n" "being protected. If set to ``policies``, a policy defined in\n" "policy.json is used to express property protections for each\n" "of the CRUD operations. Examples of how property protections\n" "are enforced based on ``roles`` or ``policies`` can be found at:\n" "https://docs.openstack.org/glance/latest/admin/property-protections." "html#examples\n" "\n" "Possible values:\n" " * roles\n" " * policies\n" "\n" "Related options:\n" " * property_protection_file\n" "\n" msgstr "" "\n" "Rule format for property protection.\n" "\n" "Provide the desired way to set property protection on Glance\n" "image properties. The two permissible values are ``roles``\n" "and ``policies``. The default value is ``roles``.\n" "\n" "If the value is ``roles``, the property protection file must\n" "contain a comma separated list of user roles indicating\n" "permissions for each of the CRUD operations on each property\n" "being protected. If set to ``policies``, a policy defined in\n" "policy.json is used to express property protections for each\n" "of the CRUD operations. Examples of how property protections\n" "are enforced based on ``roles`` or ``policies`` can be found at:\n" "https://docs.openstack.org/glance/latest/admin/property-protections." "html#examples\n" "\n" "Possible values:\n" " * roles\n" " * policies\n" "\n" "Related options:\n" " * property_protection_file\n" "\n" msgid "" "\n" "Run scrubber as a daemon.\n" "\n" "This boolean configuration option indicates whether scrubber should\n" "run as a long-running process that wakes up at regular intervals to\n" "scrub images. 
The wake up interval can be specified using the\n" "configuration option ``wakeup_time``.\n" "\n" "If this configuration option is set to ``False``, which is the\n" "default value, scrubber runs once to scrub images and exits. In this\n" "case, if the operator wishes to implement continuous scrubbing of\n" "images, scrubber needs to be scheduled as a cron job.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * ``wakeup_time``\n" "\n" msgstr "" "\n" "Run scrubber as a daemon.\n" "\n" "This boolean configuration option indicates whether scrubber should\n" "run as a long-running process that wakes up at regular intervals to\n" "scrub images. The wake up interval can be specified using the\n" "configuration option ``wakeup_time``.\n" "\n" "If this configuration option is set to ``False``, which is the\n" "default value, scrubber runs once to scrub images and exits. In this\n" "case, if the operator wishes to implement continuous scrubbing of\n" "images, scrubber needs to be scheduled as a cron job.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * ``wakeup_time``\n" "\n" msgid "" "\n" "Send headers received from identity when making requests to\n" "registry.\n" "\n" "Typically, Glance registry can be deployed in multiple flavors,\n" "which may or may not include authentication. For example,\n" "``trusted-auth`` is a flavor that does not require the registry\n" "service to authenticate the requests it receives. However, the\n" "registry service may still need a user context to be populated to\n" "serve the requests. This can be achieved by the caller\n" "(the Glance API usually) passing through the headers it received\n" "from authenticating with identity for the same request. 
The typical\n" "headers sent are ``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``,\n" "``X-Identity-Status`` and ``X-Service-Catalog``.\n" "\n" "Provide a boolean value to determine whether to send the identity\n" "headers to provide tenant and user information along with the\n" "requests to registry service. By default, this option is set to\n" "``False``, which means that user and tenant information is not\n" "available readily. It must be obtained by authenticating. Hence, if\n" "this is set to ``False``, ``flavor`` must be set to value that\n" "either includes authentication or authenticated user context.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * flavor\n" "\n" msgstr "" "\n" "Send headers received from identity when making requests to\n" "registry.\n" "\n" "Typically, Glance registry can be deployed in multiple flavours,\n" "which may or may not include authentication. For example,\n" "``trusted-auth`` is a flavour that does not require the registry\n" "service to authenticate the requests it receives. However, the\n" "registry service may still need a user context to be populated to\n" "serve the requests. This can be achieved by the caller\n" "(the Glance API usually) passing through the headers it received\n" "from authenticating with identity for the same request. The typical\n" "headers sent are ``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``,\n" "``X-Identity-Status`` and ``X-Service-Catalog``.\n" "\n" "Provide a boolean value to determine whether to send the identity\n" "headers to provide tenant and user information along with the\n" "requests to registry service. By default, this option is set to\n" "``False``, which means that user and tenant information is not\n" "available readily. It must be obtained by authenticating. 
Hence, if\n" "this is set to ``False``, ``flavor`` must be set to a value that\n" "either includes authentication or authenticated user context.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * flavor\n" "\n" msgid "" "\n" "Set keep alive option for HTTP over TCP.\n" "\n" "Provide a boolean value to determine sending of keep alive packets.\n" "If set to ``False``, the server returns the header\n" "\"Connection: close\". If set to ``True``, the server returns a\n" "\"Connection: Keep-Alive\" in its responses. This enables retention of\n" "the same TCP connection for HTTP conversations instead of opening a\n" "new one with each new request.\n" "\n" "This option must be set to ``False`` if the client socket connection\n" "needs to be closed explicitly after the response is received and\n" "read successfully by the client.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Set keep alive option for HTTP over TCP.\n" "\n" "Provide a boolean value to determine sending of keep alive packets.\n" "If set to ``False``, the server returns the header\n" "\"Connection: close\". If set to ``True``, the server returns a\n" "\"Connection: Keep-Alive\" in its responses. 
This enables retention of\n" "the same TCP connection for HTTP conversations instead of opening a\n" "new one with each new request.\n" "\n" "This option must be set to ``False`` if the client socket connection\n" "needs to be closed explicitly after the response is received and\n" "read successfully by the client.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Set the desired image conversion format.\n" "\n" "Provide a valid image format to which you want images to be\n" "converted before they are stored for consumption by Glance.\n" "Appropriate image format conversions are desirable for specific\n" "storage backends in order to facilitate efficient handling of\n" "bandwidth and usage of the storage infrastructure.\n" "\n" "By default, ``conversion_format`` is not set and must be set\n" "explicitly in the configuration file.\n" "\n" "The allowed values for this option are ``raw``, ``qcow2`` and\n" "``vmdk``. The ``raw`` format is the unstructured disk format and\n" "should be chosen when RBD or Ceph storage backends are used for\n" "image storage. ``qcow2`` is supported by the QEMU emulator that\n" "expands dynamically and supports Copy on Write. 
The ``vmdk`` is\n" "another common disk format supported by many common virtual machine\n" "monitors like VMWare Workstation.\n" "\n" "Possible values:\n" " * qcow2\n" " * raw\n" " * vmdk\n" "\n" "Related options:\n" " * disk_formats\n" "\n" msgstr "" "\n" "Set the desired image conversion format.\n" "\n" "Provide a valid image format to which you want images to be\n" "converted before they are stored for consumption by Glance.\n" "Appropriate image format conversions are desirable for specific\n" "storage backends in order to facilitate efficient handling of\n" "bandwidth and usage of the storage infrastructure.\n" "\n" "By default, ``conversion_format`` is not set and must be set\n" "explicitly in the configuration file.\n" "\n" "The allowed values for this option are ``raw``, ``qcow2`` and\n" "``vmdk``. The ``raw`` format is the unstructured disk format and\n" "should be chosen when RBD or Ceph storage backends are used for\n" "image storage. ``qcow2`` is supported by the QEMU emulator that\n" "expands dynamically and supports Copy on Write. The ``vmdk`` is\n" "another common disk format supported by many common virtual machine\n" "monitors like VMWare Workstation.\n" "\n" "Possible values:\n" " * qcow2\n" " * raw\n" " * vmdk\n" "\n" "Related options:\n" " * disk_formats\n" "\n" msgid "" "\n" "Set the image owner to tenant or the authenticated user.\n" "\n" "Assign a boolean value to determine the owner of an image. When set to\n" "True, the owner of the image is the tenant. 
When set to False, the\n" "owner of the image will be the authenticated user issuing the request.\n" "Setting it to False makes the image private to the associated user and\n" "sharing with other users within the same tenant (or \"project\")\n" "requires explicit image sharing via image membership.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Set the image owner to tenant or the authenticated user.\n" "\n" "Assign a boolean value to determine the owner of an image. When set to\n" "True, the owner of the image is the tenant. When set to False, the\n" "owner of the image will be the authenticated user issuing the request.\n" "Setting it to False makes the image private to the associated user and\n" "sharing with other users within the same tenant (or \"project\")\n" "requires explicit image sharing via image membership.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Set the number of engine executable tasks.\n" "\n" "Provide an integer value to limit the number of workers that can be\n" "instantiated on the hosts. In other words, this number defines the\n" "number of parallel tasks that can be executed at the same time by\n" "the taskflow engine. This value can be greater than one when the\n" "engine mode is set to parallel.\n" "\n" "Possible values:\n" " * Integer value greater than or equal to 1\n" "\n" "Related options:\n" " * engine_mode\n" "\n" msgstr "" "\n" "Set the number of engine executable tasks.\n" "\n" "Provide an integer value to limit the number of workers that can be\n" "instantiated on the hosts. In other words, this number defines the\n" "number of parallel tasks that can be executed at the same time by\n" "the taskflow engine. 
This value can be greater than one when the\n" "engine mode is set to parallel.\n" "\n" "Possible values:\n" " * Integer value greater than or equal to 1\n" "\n" "Related options:\n" " * engine_mode\n" "\n" msgid "" "\n" "Set the number of incoming connection requests.\n" "\n" "Provide a positive integer value to limit the number of requests in\n" "the backlog queue. The default queue size is 4096.\n" "\n" "An incoming connection to a TCP listener socket is queued before a\n" "connection can be established with the server. Setting the backlog\n" "for a TCP socket ensures a limited queue size for incoming traffic.\n" "\n" "Possible values:\n" " * Positive integer\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Set the number of incoming connection requests.\n" "\n" "Provide a positive integer value to limit the number of requests in\n" "the backlog queue. The default queue size is 4096.\n" "\n" "An incoming connection to a TCP listener socket is queued before a\n" "connection can be established with the server. Setting the backlog\n" "for a TCP socket ensures a limited queue size for incoming traffic.\n" "\n" "Possible values:\n" " * Positive integer\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Set the taskflow engine mode.\n" "\n" "Provide a string type value to set the mode in which the taskflow\n" "engine would schedule tasks to the workers on the hosts. Based on\n" "this mode, the engine executes tasks either in single or multiple\n" "threads. The possible values for this configuration option are:\n" "``serial`` and ``parallel``. When set to ``serial``, the engine runs\n" "all the tasks in a single thread which results in serial execution\n" "of tasks. Setting this to ``parallel`` makes the engine run tasks in\n" "multiple threads. 
This results in parallel execution of tasks.\n" "\n" "Possible values:\n" " * serial\n" " * parallel\n" "\n" "Related options:\n" " * max_workers\n" "\n" msgstr "" "\n" "Set the taskflow engine mode.\n" "\n" "Provide a string type value to set the mode in which the taskflow\n" "engine would schedule tasks to the workers on the hosts. Based on\n" "this mode, the engine executes tasks either in single or multiple\n" "threads. The possible values for this configuration option are:\n" "``serial`` and ``parallel``. When set to ``serial``, the engine runs\n" "all the tasks in a single thread which results in serial execution\n" "of tasks. Setting this to ``parallel`` makes the engine run tasks in\n" "multiple threads. This results in parallel execution of tasks.\n" "\n" "Possible values:\n" " * serial\n" " * parallel\n" "\n" "Related options:\n" " * max_workers\n" "\n" msgid "" "\n" "Set the wait time before a connection recheck.\n" "\n" "Provide a positive integer value representing time in seconds which\n" "is set as the idle wait time before a TCP keep alive packet can be\n" "sent to the host. The default value is 600 seconds.\n" "\n" "Setting ``tcp_keepidle`` helps verify at regular intervals that a\n" "connection is intact and prevents frequent TCP connection\n" "reestablishment.\n" "\n" "Possible values:\n" " * Positive integer value representing time in seconds\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Set the wait time before a connection recheck.\n" "\n" "Provide a positive integer value representing time in seconds which\n" "is set as the idle wait time before a TCP keep alive packet can be\n" "sent to the host. 
The default value is 600 seconds.\n" "\n" "Setting ``tcp_keepidle`` helps verify at regular intervals that a\n" "connection is intact and prevents frequent TCP connection\n" "re-establishment.\n" "\n" "Possible values:\n" " * Positive integer value representing time in seconds\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Set verification of the registry server certificate.\n" "\n" "Provide a boolean value to determine whether or not to validate\n" "SSL connections to the registry server. By default, this option\n" "is set to ``False`` and the SSL connections are validated.\n" "\n" "If set to ``True``, the connection to the registry server is not\n" "validated via a certifying authority and the\n" "``registry_client_ca_file`` option is ignored. This is the\n" "registry's equivalent of specifying --insecure on the command line\n" "using glanceclient for the API.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * registry_client_protocol\n" " * registry_client_ca_file\n" "\n" msgstr "" "\n" "Set verification of the registry server certificate.\n" "\n" "Provide a boolean value to determine whether or not to validate\n" "SSL connections to the registry server. By default, this option\n" "is set to ``False`` and the SSL connections are validated.\n" "\n" "If set to ``True``, the connection to the registry server is not\n" "validated via a certifying authority and the\n" "``registry_client_ca_file`` option is ignored. This is the\n" "registry's equivalent of specifying --insecure on the command line\n" "using glanceclient for the API.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * registry_client_protocol\n" " * registry_client_ca_file\n" "\n" msgid "" "\n" "Show all image locations when returning an image.\n" "\n" "This configuration option indicates whether to show all the image\n" "locations when returning image details to the user. 
When multiple\n" "image locations exist for an image, the locations are ordered based\n" "on the location strategy indicated by the configuration opt\n" "``location_strategy``. The image locations are shown under the\n" "image property ``locations``.\n" "\n" "NOTES:\n" " * Revealing image locations can present a GRAVE SECURITY RISK as\n" " image locations can sometimes include credentials. Hence, this\n" " is set to ``False`` by default. Set this to ``True`` with\n" " EXTREME CAUTION and ONLY IF you know what you are doing!\n" " * If an operator wishes to avoid showing any image location(s)\n" " to the user, then both this option and\n" " ``show_image_direct_url`` MUST be set to ``False``.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * show_image_direct_url\n" " * location_strategy\n" "\n" msgstr "" "\n" "Show all image locations when returning an image.\n" "\n" "This configuration option indicates whether to show all the image\n" "locations when returning image details to the user. When multiple\n" "image locations exist for an image, the locations are ordered based\n" "on the location strategy indicated by the configuration option\n" "``location_strategy``. The image locations are shown under the\n" "image property ``locations``.\n" "\n" "NOTES:\n" " * Revealing image locations can present a GRAVE SECURITY RISK as\n" " image locations can sometimes include credentials. Hence, this\n" " is set to ``False`` by default. 
Set this to ``True`` with\n" " EXTREME CAUTION and ONLY IF you know what you are doing!\n" " * If an operator wishes to avoid showing any image location(s)\n" " to the user, then both this option and\n" " ``show_image_direct_url`` MUST be set to ``False``.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * show_image_direct_url\n" " * location_strategy\n" "\n" msgid "" "\n" "Show direct image location when returning an image.\n" "\n" "This configuration option indicates whether to show the direct image\n" "location when returning image details to the user. The direct image\n" "location is where the image data is stored in backend storage. This\n" "image location is shown under the image property ``direct_url``.\n" "\n" "When multiple image locations exist for an image, the best location\n" "is displayed based on the location strategy indicated by the\n" "configuration option ``location_strategy``.\n" "\n" "NOTES:\n" " * Revealing image locations can present a GRAVE SECURITY RISK as\n" " image locations can sometimes include credentials. Hence, this\n" " is set to ``False`` by default. Set this to ``True`` with\n" " EXTREME CAUTION and ONLY IF you know what you are doing!\n" " * If an operator wishes to avoid showing any image location(s)\n" " to the user, then both this option and\n" " ``show_multiple_locations`` MUST be set to ``False``.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * show_multiple_locations\n" " * location_strategy\n" "\n" msgstr "" "\n" "Show direct image location when returning an image.\n" "\n" "This configuration option indicates whether to show the direct image\n" "location when returning image details to the user. The direct image\n" "location is where the image data is stored in backend storage. 
This\n" "image location is shown under the image property ``direct_url``.\n" "\n" "When multiple image locations exist for an image, the best location\n" "is displayed based on the location strategy indicated by the\n" "configuration option ``location_strategy``.\n" "\n" "NOTES:\n" " * Revealing image locations can present a GRAVE SECURITY RISK as\n" " image locations can sometimes include credentials. Hence, this\n" " is set to ``False`` by default. Set this to ``True`` with\n" " EXTREME CAUTION and ONLY IF you know what you are doing!\n" " * If an operator wishes to avoid showing any image location(s)\n" " to the user, then both this option and\n" " ``show_multiple_locations`` MUST be set to ``False``.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * show_multiple_locations\n" " * location_strategy\n" "\n" msgid "" "\n" "Specify name of user roles to be ignored for injecting metadata\n" "properties in the image.\n" "\n" "Possible values:\n" " * List containing user roles. For example: [admin,member]\n" "\n" msgstr "" "\n" "Specify name of user roles to be ignored for injecting metadata\n" "properties in the image.\n" "\n" "Possible values:\n" " * List containing user roles. For example: [admin,member]\n" "\n" msgid "" "\n" "Strategy to determine the preference order of image locations.\n" "\n" "This configuration option indicates the strategy to determine\n" "the order in which an image's locations must be accessed to\n" "serve the image's data. Glance then retrieves the image data\n" "from the first responsive active location it finds in this list.\n" "\n" "This option takes one of two possible values ``location_order``\n" "and ``store_type``. The default value is ``location_order``,\n" "which suggests that image data be served by using locations in\n" "the order they are stored in Glance. 
The ``store_type`` value\n" "sets the image location preference based on the order in which\n" "the storage backends are listed as a comma separated list for\n" "the configuration option ``store_type_preference``.\n" "\n" "Possible values:\n" " * location_order\n" " * store_type\n" "\n" "Related options:\n" " * store_type_preference\n" "\n" msgstr "" "\n" "Strategy to determine the preference order of image locations.\n" "\n" "This configuration option indicates the strategy to determine\n" "the order in which an image's locations must be accessed to\n" "serve the image's data. Glance then retrieves the image data\n" "from the first responsive active location it finds in this list.\n" "\n" "This option takes one of two possible values ``location_order``\n" "and ``store_type``. The default value is ``location_order``,\n" "which suggests that image data be served by using locations in\n" "the order they are stored in Glance. The ``store_type`` value\n" "sets the image location preference based on the order in which\n" "the storage backends are listed as a comma separated list for\n" "the configuration option ``store_type_preference``.\n" "\n" "Possible values:\n" " * location_order\n" " * store_type\n" "\n" "Related options:\n" " * store_type_preference\n" "\n" msgid "" "\n" "Task executor to be used to run task scripts.\n" "\n" "Provide a string value representing the executor to use for task\n" "executions. By default, ``TaskFlow`` executor is used.\n" "\n" "``TaskFlow`` helps make task executions easy, consistent, scalable\n" "and reliable. It also enables creation of lightweight task objects\n" "and/or functions that are combined together into flows in a\n" "declarative manner.\n" "\n" "Possible values:\n" " * taskflow\n" "\n" "Related Options:\n" " * None\n" "\n" msgstr "" "\n" "Task executor to be used to run task scripts.\n" "\n" "Provide a string value representing the executor to use for task\n" "executions. 
By default, ``TaskFlow`` executor is used.\n" "\n" "``TaskFlow`` helps make task executions easy, consistent, scalable\n" "and reliable. It also enables creation of lightweight task objects\n" "and/or functions that are combined together into flows in a\n" "declarative manner.\n" "\n" "Possible values:\n" " * taskflow\n" "\n" "Related Options:\n" " * None\n" "\n" msgid "" "\n" "The URL provides location where the temporary data will be stored\n" "\n" "This option is for Glance internal use only. Glance will save the\n" "image data uploaded by the user to 'staging' endpoint during the\n" "image import process.\n" "\n" "This option does not change the 'staging' API endpoint by any means.\n" "\n" "NOTE: It is discouraged to use same path as [task]/work_dir\n" "\n" "NOTE: 'file://' is the only option\n" "api_image_import flow will support for now.\n" "\n" "NOTE: The staging path must be on shared filesystem available to all\n" "Glance API nodes.\n" "\n" "Possible values:\n" " * String starting with 'file://' followed by absolute FS path\n" "\n" "Related options:\n" " * [task]/work_dir\n" " * [DEFAULT]/enable_image_import (*deprecated*)\n" "\n" msgstr "" "\n" "The URL provides location where the temporary data will be stored\n" "\n" "This option is for Glance internal use only. 
Glance will save the\n" "image data uploaded by the user to the 'staging' endpoint during the\n" "image import process.\n" "\n" "This option does not change the 'staging' API endpoint by any means.\n" "\n" "NOTE: It is discouraged to use the same path as [task]/work_dir\n" "\n" "NOTE: 'file://' is the only option the\n" "api_image_import flow will support for now.\n" "\n" "NOTE: The staging path must be on a shared filesystem available to all\n" "Glance API nodes.\n" "\n" "Possible values:\n" " * String starting with 'file://' followed by absolute FS path\n" "\n" "Related options:\n" " * [task]/work_dir\n" " * [DEFAULT]/enable_image_import (*deprecated*)\n" "\n" msgid "" "\n" "The amount of time, in seconds, an incomplete image remains in the cache.\n" "\n" "Incomplete images are images for which download is in progress. Please see " "the\n" "description of configuration option ``image_cache_dir`` for more detail.\n" "Sometimes, due to various reasons, it is possible the download may hang and\n" "the incompletely downloaded image remains in the ``incomplete`` directory.\n" "This configuration option sets a time limit on how long the incomplete " "images\n" "should remain in the ``incomplete`` directory before they are cleaned up.\n" "Once an incomplete image spends more time than is specified here, it'll be\n" "removed by cache-cleaner on its next run.\n" "\n" "It is recommended to run cache-cleaner as a periodic task on the Glance API\n" "nodes to keep the incomplete images from occupying disk space.\n" "\n" "Possible values:\n" " * Any non-negative integer\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "The amount of time, in seconds, an incomplete image remains in the cache.\n" "\n" "Incomplete images are images for which download is in progress. 
Please see " "the\n" "description of configuration option ``image_cache_dir`` for more detail.\n" "Sometimes, due to various reasons, it is possible the download may hang and\n" "the incompletely downloaded image remains in the ``incomplete`` directory.\n" "This configuration option sets a time limit on how long the incomplete " "images\n" "should remain in the ``incomplete`` directory before they are cleaned up.\n" "Once an incomplete image spends more time than is specified here, it'll be\n" "removed by cache-cleaner on its next run.\n" "\n" "It is recommended to run cache-cleaner as a periodic task on the Glance API\n" "nodes to keep the incomplete images from occupying disk space.\n" "\n" "Possible values:\n" " * Any non-negative integer\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "The amount of time, in seconds, to delay image scrubbing.\n" "\n" "When delayed delete is turned on, an image is put into ``pending_delete``\n" "state upon deletion until the scrubber deletes its image data. Typically, " "soon\n" "after the image is put into ``pending_delete`` state, it is available for\n" "scrubbing. However, scrubbing can be delayed until a later point using this\n" "configuration option. This option denotes the time period an image spends " "in\n" "``pending_delete`` state before it is available for scrubbing.\n" "\n" "It is important to realize that this has storage implications. The larger " "the\n" "``scrub_time``, the longer the time to reclaim backend storage from deleted\n" "images.\n" "\n" "Possible values:\n" " * Any non-negative integer\n" "\n" "Related options:\n" " * ``delayed_delete``\n" "\n" msgstr "" "\n" "The amount of time, in seconds, to delay image scrubbing.\n" "\n" "When delayed delete is turned on, an image is put into ``pending_delete``\n" "state upon deletion until the scrubber deletes its image data. Typically, " "soon\n" "after the image is put into ``pending_delete`` state, it is available for\n" "scrubbing. 
However, scrubbing can be delayed until a later point using this\n" "configuration option. This option denotes the time period an image spends " "in\n" "``pending_delete`` state before it is available for scrubbing.\n" "\n" "It is important to realise that this has storage implications. The larger " "the\n" "``scrub_time``, the longer the time to reclaim backend storage from deleted\n" "images.\n" "\n" "Possible values:\n" " * Any non-negative integer\n" "\n" "Related options:\n" " * ``delayed_delete``\n" "\n" msgid "" "\n" "The default number of results to return for a request.\n" "\n" "Responses to certain API requests, like list images, may return\n" "multiple items. The number of results returned can be explicitly\n" "controlled by specifying the ``limit`` parameter in the API request.\n" "However, if a ``limit`` parameter is not specified, this\n" "configuration value will be used as the default number of results to\n" "be returned for any API request.\n" "\n" "NOTES:\n" " * The value of this configuration option may not be greater than\n" " the value specified by ``api_limit_max``.\n" " * Setting this to a very large value may slow down database\n" " queries and increase response times. Setting this to a\n" " very low value may result in poor user experience.\n" "\n" "Possible values:\n" " * Any positive integer\n" "\n" "Related options:\n" " * api_limit_max\n" "\n" msgstr "" "\n" "The default number of results to return for a request.\n" "\n" "Responses to certain API requests, like list images, may return\n" "multiple items. 
The number of results returned can be explicitly\n" "controlled by specifying the ``limit`` parameter in the API request.\n" "However, if a ``limit`` parameter is not specified, this\n" "configuration value will be used as the default number of results to\n" "be returned for any API request.\n" "\n" "NOTES:\n" " * The value of this configuration option may not be greater than\n" " the value specified by ``api_limit_max``.\n" " * Setting this to a very large value may slow down database\n" " queries and increase response times. Setting this to a\n" " very low value may result in poor user experience.\n" "\n" "Possible values:\n" " * Any positive integer\n" "\n" "Related options:\n" " * api_limit_max\n" "\n" msgid "" "\n" "The driver to use for image cache management.\n" "\n" "This configuration option provides the flexibility to choose between the\n" "different image-cache drivers available. An image-cache driver is " "responsible\n" "for providing the essential functions of image-cache like write images to/" "read\n" "images from cache, track age and usage of cached images, provide a list of\n" "cached images, fetch size of the cache, queue images for caching and clean " "up\n" "the cache, etc.\n" "\n" "The essential functions of a driver are defined in the base class\n" "``glance.image_cache.drivers.base.Driver``. All image-cache drivers " "(existing\n" "and prospective) must implement this interface. Currently available drivers\n" "are ``sqlite`` and ``xattr``. These drivers primarily differ in the way " "they\n" "store the information about cached images:\n" " * The ``sqlite`` driver uses a sqlite database (which sits on every " "glance\n" " node locally) to track the usage of cached images.\n" " * The ``xattr`` driver uses the extended attributes of files to store " "this\n" " information. 
It also requires a filesystem that sets ``atime`` on the " "files\n" " when accessed.\n" "\n" "Possible values:\n" " * sqlite\n" " * xattr\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "The driver to use for image cache management.\n" "\n" "This configuration option provides the flexibility to choose between the\n" "different image-cache drivers available. An image-cache driver is " "responsible\n" "for providing the essential functions of image-cache like write images to/" "read\n" "images from cache, track age and usage of cached images, provide a list of\n" "cached images, fetch size of the cache, queue images for caching and clean " "up\n" "the cache, etc.\n" "\n" "The essential functions of a driver are defined in the base class\n" "``glance.image_cache.drivers.base.Driver``. All image-cache drivers " "(existing\n" "and prospective) must implement this interface. Currently available drivers\n" "are ``sqlite`` and ``xattr``. These drivers primarily differ in the way " "they\n" "store the information about cached images:\n" " * The ``sqlite`` driver uses a sqlite database (which sits on every " "glance\n" " node locally) to track the usage of cached images.\n" " * The ``xattr`` driver uses the extended attributes of files to store " "this\n" " information. 
It also requires a filesystem that sets ``atime`` on the " "files\n" " when accessed.\n" "\n" "Possible values:\n" " * sqlite\n" " * xattr\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "The location of the property protection file.\n" "\n" "Provide a valid path to the property protection file which contains\n" "the rules for property protections and the roles/policies associated\n" "with them.\n" "\n" "A property protection file, when set, restricts the Glance image\n" "properties to be created, read, updated and/or deleted by a specific\n" "set of users that are identified by either roles or policies.\n" "If this configuration option is not set, by default, property\n" "protections won't be enforced. If a value is specified and the file\n" "is not found, the glance-api service will fail to start.\n" "More information on property protections can be found at:\n" "https://docs.openstack.org/glance/latest/admin/property-protections.html\n" "\n" "Possible values:\n" " * Empty string\n" " * Valid path to the property protection configuration file\n" "\n" "Related options:\n" " * property_protection_rule_format\n" "\n" msgstr "" "\n" "The location of the property protection file.\n" "\n" "Provide a valid path to the property protection file which contains\n" "the rules for property protections and the roles/policies associated\n" "with them.\n" "\n" "A property protection file, when set, restricts the Glance image\n" "properties to be created, read, updated and/or deleted by a specific\n" "set of users that are identified by either roles or policies.\n" "If this configuration option is not set, by default, property\n" "protections won't be enforced. 
If a value is specified and the file\n" "is not found, the glance-api service will fail to start.\n" "More information on property protections can be found at:\n" "https://docs.openstack.org/glance/latest/admin/property-protections.html\n" "\n" "Possible values:\n" " * Empty string\n" " * Valid path to the property protection configuration file\n" "\n" "Related options:\n" " * property_protection_rule_format\n" "\n" msgid "" "\n" "The relative path to sqlite file database that will be used for image cache\n" "management.\n" "\n" "This is a relative path to the sqlite file database that tracks the age and\n" "usage statistics of image cache. The path is relative to image cache base\n" "directory, specified by the configuration option ``image_cache_dir``.\n" "\n" "This is a lightweight database with just one table.\n" "\n" "Possible values:\n" " * A valid relative path to sqlite file database\n" "\n" "Related options:\n" " * ``image_cache_dir``\n" "\n" msgstr "" "\n" "The relative path to sqlite file database that will be used for image cache\n" "management.\n" "\n" "This is a relative path to the sqlite file database that tracks the age and\n" "usage statistics of image cache. The path is relative to image cache base\n" "directory, specified by the configuration option ``image_cache_dir``.\n" "\n" "This is a lightweight database with just one table.\n" "\n" "Possible values:\n" " * A valid relative path to sqlite file database\n" "\n" "Related options:\n" " * ``image_cache_dir``\n" "\n" msgid "" "\n" "The size of thread pool to be used for scrubbing images.\n" "\n" "When there are a large number of images to scrub, it is beneficial to scrub\n" "images in parallel so that the scrub queue stays in control and the backend\n" "storage is reclaimed in a timely fashion. This configuration option denotes\n" "the maximum number of images to be scrubbed in parallel. The default value " "is\n" "one, which signifies serial scrubbing. 
Any value above one indicates " "parallel\n" "scrubbing.\n" "\n" "Possible values:\n" " * Any non-zero positive integer\n" "\n" "Related options:\n" " * ``delayed_delete``\n" "\n" msgstr "" "\n" "The size of thread pool to be used for scrubbing images.\n" "\n" "When there are a large number of images to scrub, it is beneficial to scrub\n" "images in parallel so that the scrub queue stays in control and the backend\n" "storage is reclaimed in a timely fashion. This configuration option denotes\n" "the maximum number of images to be scrubbed in parallel. The default value " "is\n" "one, which signifies serial scrubbing. Any value above one indicates " "parallel\n" "scrubbing.\n" "\n" "Possible values:\n" " * Any non-zero positive integer\n" "\n" "Related options:\n" " * ``delayed_delete``\n" "\n" msgid "" "\n" "The upper limit on cache size, in bytes, after which the cache-pruner " "cleans\n" "up the image cache.\n" "\n" "NOTE: This is just a threshold for cache-pruner to act upon. It is NOT a\n" "hard limit beyond which the image cache would never grow. In fact, " "depending\n" "on how often the cache-pruner runs and how quickly the cache fills, the " "image\n" "cache can far exceed the size specified here very easily. Hence, care must " "be\n" "taken to appropriately schedule the cache-pruner and in setting this limit.\n" "\n" "Glance caches an image when it is downloaded. Consequently, the size of the\n" "image cache grows over time as the number of downloads increases. To keep " "the\n" "cache size from becoming unmanageable, it is recommended to run the\n" "cache-pruner as a periodic task. When the cache pruner is kicked off, it\n" "compares the current size of image cache and triggers a cleanup if the " "image\n" "cache grew beyond the size specified here. 
After the cleanup, the size of\n" "cache is less than or equal to size specified here.\n" "\n" "Possible values:\n" " * Any non-negative integer\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "The upper limit on cache size, in bytes, after which the cache-pruner " "cleans\n" "up the image cache.\n" "\n" "NOTE: This is just a threshold for cache-pruner to act upon. It is NOT a\n" "hard limit beyond which the image cache would never grow. In fact, " "depending\n" "on how often the cache-pruner runs and how quickly the cache fills, the " "image\n" "cache can far exceed the size specified here very easily. Hence, care must " "be\n" "taken to appropriately schedule the cache-pruner and in setting this limit.\n" "\n" "Glance caches an image when it is downloaded. Consequently, the size of the\n" "image cache grows over time as the number of downloads increases. To keep " "the\n" "cache size from becoming unmanageable, it is recommended to run the\n" "cache-pruner as a periodic task. When the cache pruner is kicked off, it\n" "compares the current size of image cache and triggers a clean-up if the " "image\n" "cache grew beyond the size specified here. After the clean-up, the size of\n" "cache is less than or equal to size specified here.\n" "\n" "Possible values:\n" " * Any non-negative integer\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "This option is deprecated for removal in Rocky.\n" "\n" "It was introduced to make sure that the API is not enabled\n" "before the '[DEFAULT]/node_staging_uri' is defined and is\n" "long term redundant." msgstr "" "\n" "This option is deprecated and scheduled for removal in Rocky.\n" "\n" "It was introduced to make sure that the API is not enabled\n" "before the '[DEFAULT]/node_staging_uri' is defined and is\n" "long term redundant." msgid "" "\n" "Time interval, in seconds, between scrubber runs in daemon mode.\n" "\n" "Scrubber can be run either as a cron job or daemon. 
When run as a daemon, " "this\n" "configuration time specifies the time period between two runs. When the\n" "scrubber wakes up, it fetches and scrubs all ``pending_delete`` images that\n" "are available for scrubbing after taking ``scrub_time`` into consideration.\n" "\n" "If the wakeup time is set to a large number, there may be a large number of\n" "images to be scrubbed for each run. Also, this impacts how quickly the " "backend\n" "storage is reclaimed.\n" "\n" "Possible values:\n" " * Any non-negative integer\n" "\n" "Related options:\n" " * ``daemon``\n" " * ``delayed_delete``\n" "\n" msgstr "" "\n" "Time interval, in seconds, between scrubber runs in daemon mode.\n" "\n" "Scrubber can be run either as a cron job or daemon. When run as a daemon, " "this\n" "configuration time specifies the time period between two runs. When the\n" "scrubber wakes up, it fetches and scrubs all ``pending_delete`` images that\n" "are available for scrubbing after taking ``scrub_time`` into consideration.\n" "\n" "If the wakeup time is set to a large number, there may be a large number of\n" "images to be scrubbed for each run. Also, this impacts how quickly the " "backend\n" "storage is reclaimed.\n" "\n" "Possible values:\n" " * Any non-negative integer\n" "\n" "Related options:\n" " * ``daemon``\n" " * ``delayed_delete``\n" "\n" msgid "" "\n" "Timeout for client connections' socket operations.\n" "\n" "Provide a valid integer value representing time in seconds to set\n" "the period of wait before an incoming connection can be closed. The\n" "default value is 900 seconds.\n" "\n" "The value zero implies wait forever.\n" "\n" "Possible values:\n" " * Zero\n" " * Positive integer\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Timeout for client connections' socket operations.\n" "\n" "Provide a valid integer value representing time in seconds to set\n" "the period of wait before an incoming connection can be closed. 
The\n" "default value is 900 seconds.\n" "\n" "The value zero implies wait forever.\n" "\n" "Possible values:\n" " * Zero\n" " * Positive integer\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Timeout value for registry requests.\n" "\n" "Provide an integer value representing the period of time in seconds\n" "that the API server will wait for a registry request to complete.\n" "The default value is 600 seconds.\n" "\n" "A value of 0 implies that a request will never timeout.\n" "\n" "Possible values:\n" " * Zero\n" " * Positive integer\n" "\n" "Related options:\n" " * None\n" "\n" msgstr "" "\n" "Timeout value for registry requests.\n" "\n" "Provide an integer value representing the period of time in seconds\n" "that the API server will wait for a registry request to complete.\n" "The default value is 600 seconds.\n" "\n" "A value of 0 implies that a request will never timeout.\n" "\n" "Possible values:\n" " * Zero\n" " * Positive integer\n" "\n" "Related options:\n" " * None\n" "\n" msgid "" "\n" "Turn on/off delayed delete.\n" "\n" "Typically when an image is deleted, the ``glance-api`` service puts the " "image\n" "into ``deleted`` state and deletes its data at the same time. Delayed " "delete\n" "is a feature in Glance that delays the actual deletion of image data until " "a\n" "later point in time (as determined by the configuration option " "``scrub_time``).\n" "When delayed delete is turned on, the ``glance-api`` service puts the image\n" "into ``pending_delete`` state upon deletion and leaves the image data in " "the\n" "storage backend for the image scrubber to delete at a later time. 
The image\n" "scrubber will move the image into ``deleted`` state upon successful " "deletion\n" "of image data.\n" "\n" "NOTE: When delayed delete is turned on, image scrubber MUST be running as a\n" "periodic task to prevent the backend storage from filling up with undesired\n" "usage.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * ``scrub_time``\n" " * ``wakeup_time``\n" " * ``scrub_pool_size``\n" "\n" msgstr "" "\n" "Turn on/off delayed delete.\n" "\n" "Typically when an image is deleted, the ``glance-api`` service puts the " "image\n" "into ``deleted`` state and deletes its data at the same time. Delayed " "delete\n" "is a feature in Glance that delays the actual deletion of image data until " "a\n" "later point in time (as determined by the configuration option " "``scrub_time``).\n" "When delayed delete is turned on, the ``glance-api`` service puts the image\n" "into ``pending_delete`` state upon deletion and leaves the image data in " "the\n" "storage backend for the image scrubber to delete at a later time. The image\n" "scrubber will move the image into ``deleted`` state upon successful " "deletion\n" "of image data.\n" "\n" "NOTE: When delayed delete is turned on, image scrubber MUST be running as a\n" "periodic task to prevent the backend storage from filling up with undesired\n" "usage.\n" "\n" "Possible values:\n" " * True\n" " * False\n" "\n" "Related options:\n" " * ``scrub_time``\n" " * ``wakeup_time``\n" " * ``scrub_pool_size``\n" "\n" #, python-format msgid "%(cls)s exception was raised in the last rpc call: %(val)s" msgstr "%(cls)s exception was raised in the last rpc call: %(val)s" #, python-format msgid "%(m_id)s not found in the member list of the image %(i_id)s." msgstr "%(m_id)s not found in the member list of the image %(i_id)s." #, python-format msgid "%(serv)s (pid %(pid)s) is running..." msgstr "%(serv)s (pid %(pid)s) is running..." 
#, python-format msgid "%(serv)s appears to already be running: %(pid)s" msgstr "%(serv)s appears to already be running: %(pid)s" #, python-format msgid "" "%(strategy)s is registered as a module twice. %(module)s is not being used." msgstr "" "%(strategy)s is registered as a module twice. %(module)s is not being used." #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Could not load the " "filesystem store" msgstr "" "%(task_id)s of %(task_type)s not configured properly. Could not load the " "filesystem store" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Value of " "node_staging_uri must be in format 'file://'" msgstr "" "%(task_id)s of %(task_type)s not configured properly. Value of " "node_staging_uri must be in format 'file://'" #, python-format msgid "%(verb)sing %(serv)s" msgstr "%(verb)sing %(serv)s" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "%(verb)sing %(serv)s with %(conf)s" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s can't contain 4 byte unicode characters." 
#, python-format msgid "%s is already stopped" msgstr "%s is already stopped" #, python-format msgid "%s is stopped" msgstr "%s is stopped" #, python-format msgid "'%(param)s' value out of range, must not exceed %(max)d." msgstr "'%(param)s' value out of range, must not exceed %(max)d." msgid "'node_staging_uri' is not set correctly. Could not load staging store." msgstr "'node_staging_uri' is not set correctly. Could not load staging store." msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "Keystone authentication strategy is enabled\n" msgid "A body is not expected with this request." msgstr "A body is not expected with this request." #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." msgstr "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." #, python-format msgid "" "A metadata tag with name=%(name)s already exists in namespace=" "%(namespace_name)s. (Please note that metadata tag names are case " "insensitive)." msgstr "" "A metadata tag with name=%(name)s already exists in namespace=" "%(namespace_name)s. (Please note that metadata tag names are case " "insensitive)." 
msgid "A set of URLs to access the image file kept in external store" msgstr "A set of URLs to access the image file kept in external store" msgid "Amount of disk space (in GB) required to boot image." msgstr "Amount of disk space (in GB) required to boot image." msgid "Amount of ram (in MB) required to boot image." msgstr "Amount of ram (in MB) required to boot image." msgid "An identifier for the image" msgstr "An identifier for the image" msgid "An identifier for the image member (tenantId)" msgstr "An identifier for the image member (tenantId)" msgid "An identifier for the owner of this task" msgstr "An identifier for the owner of this task" msgid "An identifier for the task" msgstr "An identifier for the task" msgid "An image file url" msgstr "An image file URL" msgid "An image schema url" msgstr "An image schema URL" msgid "An image self url" msgstr "An image self URL" #, python-format msgid "An image with identifier %s already exists" msgstr "An image with identifier %s already exists" msgid "An import task exception occurred" msgstr "An import task exception occurred" msgid "An object with the same identifier already exists." msgstr "An object with the same identifier already exists." msgid "An object with the same identifier is currently being operated on." msgstr "An object with the same identifier is currently being operated on." msgid "An object with the specified identifier was not found." msgstr "An object with the specified identifier was not found." msgid "An unknown exception occurred" msgstr "An unknown exception occurred" msgid "An unknown task exception occurred" msgstr "An unknown task exception occurred" #, python-format msgid "Attempt to upload duplicate image: %s" msgstr "Attempt to upload duplicate image: %s" msgid "Attempted to update Location field for an image not in queued status." msgstr "Attempted to update Location field for an image not in queued status." #, python-format msgid "Attribute '%(property)s' is read-only." 
msgstr "Attribute '%(property)s' is read-only." #, python-format msgid "Attribute '%(property)s' is reserved." msgstr "Attribute '%(property)s' is reserved." #, python-format msgid "Attribute '%s' is read-only." msgstr "Attribute '%s' is read-only." #, python-format msgid "Attribute '%s' is reserved." msgstr "Attribute '%s' is reserved." msgid "Attribute container_format can be only replaced for a queued image." msgstr "Attribute container_format can be only replaced for a queued image." msgid "Attribute disk_format can be only replaced for a queued image." msgstr "Attribute disk_format can be only replaced for a queued image." msgid "" "Auth key for the user authenticating against the Swift authentication " "service." msgstr "" "Auth key for the user authenticating against the Swift authentication " "service." #, python-format msgid "Auth service at URL %(url)s not found." msgstr "Auth service at URL %(url)s not found." #, python-format msgid "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgstr "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgid "Authorization failed." msgstr "Authorisation failed." msgid "Available categories:" msgstr "Available categories:" #, python-format msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." msgstr "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." 
#, python-format msgid "Bad Command: %s" msgstr "Bad Command: %s" #, python-format msgid "Bad header: %(header_name)s" msgstr "Bad header: %(header_name)s" #, python-format msgid "Bad value passed to filter %(filter)s got %(val)s" msgstr "Bad value passed to filter %(filter)s got %(val)s" #, python-format msgid "Badly formed S3 URI: %(uri)s" msgstr "Badly formed S3 URI: %(uri)s" #, python-format msgid "Badly formed credentials '%(creds)s' in Swift URI" msgstr "Badly formed credentials '%(creds)s' in Swift URI" msgid "Badly formed credentials in Swift URI." msgstr "Badly formed credentials in Swift URI." msgid "Body expected in request." msgstr "Body expected in request." msgid "" "CONF.workers should be set to 0 or 1 when using the db.simple.api backend. " "Fore more info, see https://bugs.launchpad.net/glance/+bug/1619508" msgstr "" "CONF.workers should be set to 0 or 1 when using the db.simple.api backend. " "Fore more info, see https://bugs.launchpad.net/glance/+bug/1619508" msgid "Cannot be a negative value" msgstr "Cannot be a negative value" msgid "Cannot be a negative value." msgstr "Cannot be a negative value." #, python-format msgid "Cannot convert image %(key)s '%(value)s' to an integer." msgstr "Cannot convert image %(key)s '%(value)s' to an integer." msgid "Cannot remove last location in the image." msgstr "Cannot remove last location in the image." #, python-format msgid "Cannot save data for image %(image_id)s: %(error)s" msgstr "Cannot save data for image %(image_id)s: %(error)s" msgid "Cannot set locations to empty list." msgstr "Cannot set locations to empty list." msgid "Cannot upload to an unqueued image" msgstr "Cannot upload to an unqueued image" #, python-format msgid "Checksum verification failed. Aborted caching of image '%s'." msgstr "Checksum verification failed. Aborted caching of image '%s'." 
msgid "Client disconnected before sending all data to backend" msgstr "Client disconnected before sending all data to backend" msgid "Command not found" msgstr "Command not found" msgid "Configuration option was not valid" msgstr "Configuration option was not valid" #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "Connect error/bad request to Auth service at URL %(url)s." #, python-format msgid "Constructed URL: %s" msgstr "Constructed URL: %s" msgid "Container format is not specified." msgstr "Container format is not specified." msgid "Content-Type must be application/octet-stream" msgstr "Content-Type must be application/octet-stream" #, python-format msgid "Corrupt image download for image %(image_id)s" msgstr "Corrupt image download for image %(image_id)s" #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgstr "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgid "Could not find OVF file in OVA archive file." msgstr "Could not find OVF file in OVA archive file." 
#, python-format msgid "Could not find metadata object %s" msgstr "Could not find metadata object %s" #, python-format msgid "Could not find metadata tag %s" msgstr "Could not find metadata tag %s" #, python-format msgid "Could not find namespace %s" msgstr "Could not find namespace %s" #, python-format msgid "Could not find property %s" msgstr "Could not find property %s" msgid "Could not find required configuration option" msgstr "Could not find required configuration option" #, python-format msgid "Could not find task %s" msgstr "Could not find task %s" #, python-format msgid "Could not update image: %s" msgstr "Could not update image: %s" #, python-format msgid "Couldn't create metadata namespace: %s" msgstr "Couldn't create metadata namespace: %s" #, python-format msgid "Couldn't create metadata object: %s" msgstr "Couldn't create metadata object: %s" #, python-format msgid "Couldn't create metadata property: %s" msgstr "Couldn't create metadata property: %s" #, python-format msgid "Couldn't create metadata tag: %s" msgstr "Couldn't create metadata tag: %s" #, python-format msgid "Couldn't update metadata namespace: %s" msgstr "Couldn't update metadata namespace: %s" #, python-format msgid "Couldn't update metadata object: %s" msgstr "Couldn't update metadata object: %s" #, python-format msgid "Couldn't update metadata property: %s" msgstr "Couldn't update metadata property: %s" #, python-format msgid "Couldn't update metadata tag: %s" msgstr "Couldn't update metadata tag: %s" msgid "Currently, OVA packages containing multiple disk are not supported." msgstr "Currently, OVA packages containing multiple disk are not supported." msgid "Custom property should not be greater than 255 characters." msgstr "Custom property should not be greater than 255 characters." #, python-format msgid "Data for image_id not found: %s" msgstr "Data for image_id not found: %s" msgid "" "Data migration did not run. Data migration cannot be run before database " "expansion. 
Run database expansion first using \"glance-manage db expand\""
msgstr ""
"Data migration did not run. Data migration cannot be run before database "
"expansion. Run database expansion first using \"glance-manage db expand\""

msgid "Data supplied was not valid."
msgstr "Data supplied was not valid."

msgid ""
"Database contraction did not run. Database contraction cannot be run before "
"data migration is complete. Run data migration using \"glance-manage db "
"migrate\"."
msgstr ""
"Database contraction did not run. Database contraction cannot be run before "
"data migration is complete. Run data migration using \"glance-manage db "
"migrate\"."

msgid ""
"Database contraction did not run. Database contraction cannot be run before "
"database expansion. Run database expansion first using \"glance-manage db "
"expand\""
msgstr ""
"Database contraction did not run. Database contraction cannot be run before "
"database expansion. Run database expansion first using \"glance-manage db "
"expand\""

msgid ""
"Database contraction failed. Couldn't find head revision of contract branch."
msgstr ""
"Database contraction failed. Couldn't find head revision of contract branch."

#, python-format
msgid ""
"Database contraction failed. Database contraction should have brought the "
"database version up to \"%(e_rev)s\" revision. But, current revisions are: "
"%(curr_revs)s "
msgstr ""
"Database contraction failed. Database contraction should have brought the "
"database version up to \"%(e_rev)s\" revision. But, current revisions are: "
"%(curr_revs)s "

msgid ""
"Database expansion failed. Couldn't find head revision of expand branch."
msgstr ""
"Database expansion failed. Couldn't find head revision of expand branch."

#, python-format
msgid ""
"Database expansion failed. Database expansion should have brought the "
"database version up to \"%(e_rev)s\" revision. But, current revisions are: "
"%(curr_revs)s "
msgstr ""
"Database expansion failed. Database expansion should have brought the "
"database version up to \"%(e_rev)s\" revision. But, current revisions are: "
"%(curr_revs)s "

msgid "Database is currently not under Alembic's migration control."
msgstr "Database is currently not under Alembic's migration control."

msgid ""
"Database is either not under migration control or under legacy migration "
"control, please run \"glance-manage db sync\" to place the database under "
"alembic migration control."
msgstr ""
"Database is either not under migration control or under legacy migration "
"control, please run \"glance-manage db sync\" to place the database under "
"alembic migration control."

msgid "Database is synced successfully."
msgstr "Database is synced successfully."

msgid "Database is up to date. No migrations needed."
msgstr "Database is up to date. No migrations needed."

msgid "Database is up to date. No upgrades needed."
msgstr "Database is up to date. No upgrades needed."

msgid "Date and time of image member creation"
msgstr "Date and time of image member creation"

msgid "Date and time of image registration"
msgstr "Date and time of image registration"

msgid "Date and time of last modification of image member"
msgstr "Date and time of last modification of image member"

msgid "Date and time of namespace creation"
msgstr "Date and time of namespace creation"

msgid "Date and time of object creation"
msgstr "Date and time of object creation"

msgid "Date and time of resource type association"
msgstr "Date and time of resource type association"

msgid "Date and time of tag creation"
msgstr "Date and time of tag creation"

msgid "Date and time of the last image modification"
msgstr "Date and time of the last image modification"

msgid "Date and time of the last namespace modification"
msgstr "Date and time of the last namespace modification"

msgid "Date and time of the last object modification"
msgstr "Date and time of the last object modification"

msgid "Date and time of the last resource type association modification"
msgstr "Date and time of the last resource type association modification"

msgid "Date and time of the last tag modification"
msgstr "Date and time of the last tag modification"

msgid "Datetime when this resource was created"
msgstr "Datetime when this resource was created"

msgid "Datetime when this resource was updated"
msgstr "Datetime when this resource was updated"

msgid "Datetime when this resource would be subject to removal"
msgstr "Datetime when this resource would be subject to removal"

#, python-format
msgid "Denying attempt to upload image because it exceeds the quota: %s"
msgstr "Denying attempt to upload image because it exceeds the quota: %s"

#, python-format
msgid "Denying attempt to upload image larger than %d bytes."
msgstr "Denying attempt to upload image larger than %d bytes."

msgid "Descriptive name for the image"
msgstr "Descriptive name for the image"

msgid "Disk format is not specified."
msgstr "Disk format is not specified."

#, python-format
msgid ""
"Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s"
msgstr ""
"Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s"

msgid ""
"Error decoding your request. Either the URL or the request body contained "
"characters that could not be decoded by Glance"
msgstr ""
"Error decoding your request. Either the URL or the request body contained "
"characters that could not be decoded by Glance"

#, python-format
msgid "Error fetching members of image %(image_id)s: %(inner_msg)s"
msgstr "Error fetching members of image %(image_id)s: %(inner_msg)s"

msgid "Error in store configuration. Adding images to store is disabled."
msgstr "Error in store configuration. Adding images to store is disabled."

#, python-format
msgid "Error: %(exc_type)s: %(e)s"
msgstr "Error: %(exc_type)s: %(e)s"

msgid "Expected a member in the form: {\"member\": \"image_id\"}"
msgstr "Expected a member in the form: {\"member\": \"image_id\"}"

msgid "Expected a status in the form: {\"status\": \"status\"}"
msgstr "Expected a status in the form: {\"status\": \"status\"}"

msgid "External source should not be empty"
msgstr "External source should not be empty"

#, python-format
msgid "External sources are not supported: '%s'"
msgstr "External sources are not supported: '%s'"

#, python-format
msgid "Failed to activate image. Got error: %s"
msgstr "Failed to activate image. Got error: %s"

#, python-format
msgid "Failed to add image metadata. Got error: %s"
msgstr "Failed to add image metadata. Got error: %s"

#, python-format
msgid "Failed to find image %(image_id)s to delete"
msgstr "Failed to find image %(image_id)s to delete"

#, python-format
msgid "Failed to find image to delete: %s"
msgstr "Failed to find image to delete: %s"

#, python-format
msgid "Failed to find image to update: %s"
msgstr "Failed to find image to update: %s"

#, python-format
msgid "Failed to find resource type %(resourcetype)s to delete"
msgstr "Failed to find resource type %(resourcetype)s to delete"

#, python-format
msgid "Failed to initialize the image cache database. Got error: %s"
msgstr "Failed to initialise the image cache database. Got error: %s"

#, python-format
msgid "Failed to read %s from config"
msgstr "Failed to read %s from config"

#, python-format
msgid "Failed to reserve image. Got error: %s"
msgstr "Failed to reserve image. Got error: %s"

#, python-format
msgid "Failed to sync database: ERROR: %s"
msgstr "Failed to sync database: ERROR: %s"

#, python-format
msgid "Failed to update image metadata. Got error: %s"
msgstr "Failed to update image metadata. Got error: %s"

#, python-format
msgid "Failed to upload image %s"
msgstr "Failed to upload image %s"

#, python-format
msgid ""
"Failed to upload image data for image %(image_id)s due to HTTP error: "
"%(error)s"
msgstr ""
"Failed to upload image data for image %(image_id)s due to HTTP error: "
"%(error)s"

#, python-format
msgid ""
"Failed to upload image data for image %(image_id)s due to internal error: "
"%(error)s"
msgstr ""
"Failed to upload image data for image %(image_id)s due to internal error: "
"%(error)s"

#, python-format
msgid "File %(path)s has invalid backing file %(bfile)s, aborting."
msgstr "File %(path)s has invalid backing file %(bfile)s, aborting."

msgid ""
"File based imports are not allowed. Please use a non-local source of image "
"data."
msgstr ""
"File based imports are not allowed. Please use a non-local source of image "
"data."

msgid "Forbidden image access"
msgstr "Forbidden image access"

#, python-format
msgid "Forbidden to delete a %s image."
msgstr "Forbidden to delete a %s image."

#, python-format
msgid "Forbidden to delete image: %s"
msgstr "Forbidden to delete image: %s"

#, python-format
msgid "Forbidden to modify '%(key)s' of %(status)s image."
msgstr "Forbidden to modify '%(key)s' of %(status)s image."

#, python-format
msgid "Forbidden to modify '%s' of image."
msgstr "Forbidden to modify '%s' of image."

msgid "Forbidden to reserve image."
msgstr "Forbidden to reserve image."

msgid "Forbidden to update deleted image."
msgstr "Forbidden to update deleted image."

#, python-format
msgid "Forbidden to update image: %s"
msgstr "Forbidden to update image: %s"

#, python-format
msgid "Forbidden upload attempt: %s"
msgstr "Forbidden upload attempt: %s"

#, python-format
msgid "Forbidding request, metadata definition namespace=%s is not visible."
msgstr "Forbidding request, metadata definition namespace=%s is not visible."

#, python-format
msgid "Forbidding request, task %s is not visible"
msgstr "Forbidding request, task %s is not visible"

msgid "Format of the container"
msgstr "Format of the container"

msgid "Format of the disk"
msgstr "Format of the disk"

#, python-format
msgid "Host \"%s\" is not valid."
msgstr "Host \"%s\" is not valid."

#, python-format
msgid "Host and port \"%s\" is not valid."
msgstr "Host and port \"%s\" is not valid."

msgid ""
"Human-readable informative message only included when appropriate (usually "
"on failure)"
msgstr ""
"Human-readable informative message only included when appropriate (usually "
"on failure)"

msgid "If true, image will not be deletable."
msgstr "If true, image will not be deletable."

msgid "If true, namespace will not be deletable."
msgstr "If true, namespace will not be deletable."

#, python-format
msgid "Image %(id)s could not be deleted because it is in use: %(exc)s"
msgstr "Image %(id)s could not be deleted because it is in use: %(exc)s"

#, python-format
msgid "Image %(id)s not found"
msgstr "Image %(id)s not found"

#, python-format
msgid ""
"Image %(image_id)s could not be found after upload. The image may have been "
"deleted during the upload: %(error)s"
msgstr ""
"Image %(image_id)s could not be found after upload. The image may have been "
"deleted during the upload: %(error)s"

#, python-format
msgid "Image %(image_id)s is protected and cannot be deleted."
msgstr "Image %(image_id)s is protected and cannot be deleted."

#, python-format
msgid ""
"Image %s could not be found after upload. The image may have been deleted "
"during the upload, cleaning up the chunks uploaded."
msgstr ""
"Image %s could not be found after upload. The image may have been deleted "
"during the upload, cleaning up the chunks uploaded."

#, python-format
msgid ""
"Image %s could not be found after upload. The image may have been deleted "
"during the upload."
msgstr ""
"Image %s could not be found after upload. The image may have been deleted "
"during the upload."

#, python-format
msgid "Image %s is deactivated"
msgstr "Image %s is deactivated"

#, python-format
msgid "Image %s is not active"
msgstr "Image %s is not active"

#, python-format
msgid "Image %s not found."
msgstr "Image %s not found."

#, python-format
msgid "Image exceeds the storage quota: %s"
msgstr "Image exceeds the storage quota: %s"

msgid "Image id is required."
msgstr "Image id is required."

msgid "Image import is not supported at this site."
msgstr "Image import is not supported at this site."

msgid "Image is protected"
msgstr "Image is protected"

#, python-format
msgid "Image member limit exceeded for image %(id)s: %(e)s:"
msgstr "Image member limit exceeded for image %(id)s: %(e)s:"

#, python-format
msgid "Image name too long: %d"
msgstr "Image name too long: %d"

msgid "Image operation conflicts"
msgstr "Image operation conflicts"

#, python-format
msgid ""
"Image status transition from %(cur_status)s to %(new_status)s is not allowed"
msgstr ""
"Image status transition from %(cur_status)s to %(new_status)s is not allowed"

#, python-format
msgid "Image storage media is full: %s"
msgstr "Image storage media is full: %s"

#, python-format
msgid "Image tag limit exceeded for image %(id)s: %(e)s:"
msgstr "Image tag limit exceeded for image %(id)s: %(e)s:"

#, python-format
msgid "Image upload problem: %s"
msgstr "Image upload problem: %s"

#, python-format
msgid "Image with identifier %s already exists!"
msgstr "Image with identifier %s already exists!"

#, python-format
msgid "Image with identifier %s has been deleted."
msgstr "Image with identifier %s has been deleted."

#, python-format
msgid "Image with identifier %s not found"
msgstr "Image with identifier %s not found"

#, python-format
msgid "Image with the given id %(image_id)s was not found"
msgstr "Image with the given id %(image_id)s was not found"

msgid "Import request requires a 'method' field."
msgstr "Import request requires a 'method' field."

msgid "Import request requires a 'name' field."
msgstr "Import request requires a 'name' field."

#, python-format
msgid ""
"Incorrect auth strategy, expected \"%(expected)s\" but received "
"\"%(received)s\""
msgstr ""
"Incorrect auth strategy, expected \"%(expected)s\" but received "
"\"%(received)s\""

#, python-format
msgid "Incorrect request: %s"
msgstr "Incorrect request: %s"

#, python-format
msgid "Input does not contain '%(key)s' field"
msgstr "Input does not contain '%(key)s' field"

msgid "Input to api_image_import task is empty."
msgstr "Input to api_image_import task is empty."

#, python-format
msgid "Insufficient permissions on image storage media: %s"
msgstr "Insufficient permissions on image storage media: %s"

#, python-format
msgid "Invalid JSON pointer for this resource: '/%s'"
msgstr "Invalid JSON pointer for this resource: '/%s'"

#, python-format
msgid "Invalid checksum '%s': can't exceed 32 characters"
msgstr "Invalid checksum '%s': can't exceed 32 characters"

msgid "Invalid configuration in glance-swift conf file."
msgstr "Invalid configuration in glance-swift conf file."

msgid "Invalid configuration in property protection file."
msgstr "Invalid configuration in property protection file."

#, python-format
msgid "Invalid container format '%s' for image."
msgstr "Invalid container format '%s' for image."

#, python-format
msgid "Invalid content type %(content_type)s"
msgstr "Invalid content type %(content_type)s"

#, python-format
msgid ""
"Invalid data migration script '%(script)s'. A valid data migration script "
"must implement functions 'has_migrations' and 'migrate'."
msgstr ""
"Invalid data migration script '%(script)s'. A valid data migration script "
"must implement functions 'has_migrations' and 'migrate'."

#, python-format
msgid "Invalid disk format '%s' for image."
msgstr "Invalid disk format '%s' for image."

#, python-format
msgid "Invalid filter value %s. The quote is not closed."
msgstr "Invalid filter value %s. The quote is not closed."

#, python-format
msgid ""
"Invalid filter value %s. There is no comma after closing quotation mark."
msgstr ""
"Invalid filter value %s. There is no comma after closing quotation mark."

#, python-format
msgid ""
"Invalid filter value %s. There is no comma before opening quotation mark."
msgstr ""
"Invalid filter value %s. There is no comma before opening quotation mark."

msgid "Invalid image id format"
msgstr "Invalid image id format"

#, python-format
msgid "Invalid int value for age_in_days: %(age_in_days)s"
msgstr "Invalid int value for age_in_days: %(age_in_days)s"

#, python-format
msgid "Invalid int value for max_rows: %(max_rows)s"
msgstr "Invalid int value for max_rows: %(max_rows)s"

msgid "Invalid location"
msgstr "Invalid location"

#, python-format
msgid "Invalid location %s"
msgstr "Invalid location %s"

#, python-format
msgid "Invalid location: %s"
msgstr "Invalid location: %s"

#, python-format
msgid ""
"Invalid location_strategy option: %(name)s. The valid strategy option(s) "
"is(are): %(strategies)s"
msgstr ""
"Invalid location_strategy option: %(name)s. The valid strategy option(s) "
"is(are): %(strategies)s"

msgid "Invalid locations"
msgstr "Invalid locations"

#, python-format
msgid "Invalid locations: %s"
msgstr "Invalid locations: %s"

msgid "Invalid marker format"
msgstr "Invalid marker format"

msgid "Invalid marker. Image could not be found."
msgstr "Invalid marker. Image could not be found."

#, python-format
msgid "Invalid membership association: %s"
msgstr "Invalid membership association: %s"

msgid ""
"Invalid mix of disk and container formats. When setting a disk or container "
"format to one of 'aki', 'ari', or 'ami', the container and disk formats must "
"match."
msgstr ""
"Invalid mix of disk and container formats. When setting a disk or container "
"format to one of 'aki', 'ari', or 'ami', the container and disk formats must "
"match."

#, python-format
msgid ""
"Invalid operation: `%(op)s`. It must be one of the following: %(available)s."
msgstr ""
"Invalid operation: `%(op)s`. It must be one of the following: %(available)s."

msgid "Invalid position for adding a location."
msgstr "Invalid position for adding a location."

msgid "Invalid position for removing a location."
msgstr "Invalid position for removing a location."

msgid "Invalid service catalog json."
msgstr "Invalid service catalogue json."

#, python-format
msgid "Invalid sort direction: %s"
msgstr "Invalid sort direction: %s"

#, python-format
msgid ""
"Invalid sort key: %(sort_key)s. It must be one of the following: "
"%(available)s."
msgstr ""
"Invalid sort key: %(sort_key)s. It must be one of the following: "
"%(available)s."

#, python-format
msgid "Invalid status value: %s"
msgstr "Invalid status value: %s"

#, python-format
msgid "Invalid status: %s"
msgstr "Invalid status: %s"

#, python-format
msgid "Invalid time format for %s."
msgstr "Invalid time format for %s."

#, python-format
msgid "Invalid type value: %s"
msgstr "Invalid type value: %s"

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition namespace "
"with the same name of %s"
msgstr ""
"Invalid update. It would result in a duplicate metadata definition namespace "
"with the same name of %s"

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition object "
"with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr ""
"Invalid update. It would result in a duplicate metadata definition object "
"with the same name=%(name)s in namespace=%(namespace_name)s."

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition property "
"with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr ""
"Invalid update. It would result in a duplicate metadata definition property "
"with the same name=%(name)s in namespace=%(namespace_name)s."

#, python-format
msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s"
msgstr "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s"

#, python-format
msgid ""
"Invalid value '%s' for 'protected' filter. Valid values are 'true' or "
"'false'."
msgstr ""
"Invalid value '%s' for 'protected' filter. Valid values are 'true' or "
"'false'."

#, python-format
msgid "Invalid value for option %(option)s: %(value)s"
msgstr "Invalid value for option %(option)s: %(value)s"

#, python-format
msgid "Invalid visibility value: %s"
msgstr "Invalid visibility value: %s"

msgid "It's invalid to provide multiple image sources."
msgstr "It's invalid to provide multiple image sources."

#, python-format
msgid "It's not allowed to add locations if image status is %s."
msgstr "It's not allowed to add locations if image status is %s."

msgid "It's not allowed to add locations if locations are invisible."
msgstr "It's not allowed to add locations if locations are invisible."

#, python-format
msgid "It's not allowed to remove locations if image status is %s."
msgstr "It's not allowed to remove locations if image status is %s."

msgid "It's not allowed to remove locations if locations are invisible."
msgstr "It's not allowed to remove locations if locations are invisible."

#, python-format
msgid "It's not allowed to replace locations if image status is %s."
msgstr "It's not allowed to replace locations if image status is %s."

msgid "It's not allowed to update locations if locations are invisible."
msgstr "It's not allowed to update locations if locations are invisible."

msgid "List of strings related to the image"
msgstr "List of strings related to the image"

msgid "Malformed JSON in request body."
msgstr "Malformed JSON in request body."

msgid "Maximal age is count of days since epoch."
msgstr "Maximal age is count of days since epoch."

#, python-format
msgid "Maximum redirects (%(redirects)s) was exceeded."
msgstr "Maximum redirects (%(redirects)s) was exceeded."

#, python-format
msgid "Member %(member_id)s is duplicated for image %(image_id)s"
msgstr "Member %(member_id)s is duplicated for image %(image_id)s"

msgid "Member can't be empty"
msgstr "Member can't be empty"

msgid "Member to be added not specified"
msgstr "Member to be added not specified"

msgid "Membership could not be found."
msgstr "Membership could not be found."

#, python-format
msgid ""
"Metadata definition namespace %(namespace)s is protected and cannot be "
"deleted."
msgstr ""
"Metadata definition namespace %(namespace)s is protected and cannot be "
"deleted."

#, python-format
msgid "Metadata definition namespace not found for id=%s"
msgstr "Metadata definition namespace not found for id=%s"

#, python-format
msgid "Metadata definition namespace=%(namespace_name)s was not found."
msgstr "Metadata definition namespace=%(namespace_name)s was not found."

#, python-format
msgid ""
"Metadata definition object %(object_name)s is protected and cannot be "
"deleted."
msgstr ""
"Metadata definition object %(object_name)s is protected and cannot be "
"deleted."

#, python-format
msgid "Metadata definition object not found for id=%s"
msgstr "Metadata definition object not found for id=%s"

#, python-format
msgid ""
"Metadata definition property %(property_name)s is protected and cannot be "
"deleted."
msgstr ""
"Metadata definition property %(property_name)s is protected and cannot be "
"deleted."

#, python-format
msgid "Metadata definition property not found for id=%s"
msgstr "Metadata definition property not found for id=%s"

#, python-format
msgid ""
"Metadata definition resource-type %(resource_type_name)s is a seeded-system "
"type and cannot be deleted."
msgstr ""
"Metadata definition resource-type %(resource_type_name)s is a seeded-system "
"type and cannot be deleted."

#, python-format
msgid ""
"Metadata definition resource-type-association %(resource_type)s is protected "
"and cannot be deleted."
msgstr ""
"Metadata definition resource-type-association %(resource_type)s is protected "
"and cannot be deleted."

#, python-format
msgid ""
"Metadata definition tag %(tag_name)s is protected and cannot be deleted."
msgstr ""
"Metadata definition tag %(tag_name)s is protected and cannot be deleted."

#, python-format
msgid "Metadata definition tag not found for id=%s"
msgstr "Metadata definition tag not found for id=%s"

#, python-format
msgid "Migrated %s rows"
msgstr "Migrated %s rows"

msgid "Minimal rows limit is 1."
msgstr "Minimal rows limit is 1."

msgid "Missing required 'image_id' field"
msgstr "Missing required 'image_id' field"

#, python-format
msgid "Missing required credential: %(required)s"
msgstr "Missing required credential: %(required)s"

#, python-format
msgid ""
"Multiple 'image' service matches for region %(region)s. This generally means "
"that a region is required and you have not supplied one."
msgstr ""
"Multiple 'image' service matches for region %(region)s. This generally means "
"that a region is required and you have not supplied one."

msgid "Must supply a non-negative value for age."
msgstr "Must supply a non-negative value for age."

msgid "No authenticated user"
msgstr "No authenticated user"

#, python-format
msgid "No image found with ID %s"
msgstr "No image found with ID %s"

#, python-format
msgid "No location found with ID %(loc)s from image %(img)s"
msgstr "No location found with ID %(loc)s from image %(img)s"

msgid "No permission to share that image"
msgstr "No permission to share that image"

#, python-format
msgid "Not allowed to create members for image %s."
msgstr "Not allowed to create members for image %s."

#, python-format
msgid "Not allowed to deactivate image in status '%s'"
msgstr "Not allowed to deactivate image in status '%s'"

#, python-format
msgid "Not allowed to delete members for image %s."
msgstr "Not allowed to delete members for image %s."

#, python-format
msgid "Not allowed to delete tags for image %s."
msgstr "Not allowed to delete tags for image %s."

#, python-format
msgid "Not allowed to list members for image %s."
msgstr "Not allowed to list members for image %s."

#, python-format
msgid "Not allowed to reactivate image in status '%s'"
msgstr "Not allowed to reactivate image in status '%s'"

#, python-format
msgid "Not allowed to update members for image %s."
msgstr "Not allowed to update members for image %s."

#, python-format
msgid "Not allowed to update tags for image %s."
msgstr "Not allowed to update tags for image %s."

#, python-format
msgid "Not allowed to upload image data for image %(image_id)s: %(error)s"
msgstr "Not allowed to upload image data for image %(image_id)s: %(error)s"

msgid "Number of sort dirs does not match the number of sort keys"
msgstr "Number of sort dirs does not match the number of sort keys"

msgid "OVA extract is limited to admin"
msgstr "OVA extract is limited to admin"

msgid "Old and new sorting syntax cannot be combined"
msgstr "Old and new sorting syntax cannot be combined"

msgid "Only shared images have members."
msgstr "Only shared images have members."

#, python-format
msgid "Operation \"%s\" requires a member named \"value\"."
msgstr "Operation \"%s\" requires a member named \"value\"."

msgid ""
"Operation objects must contain exactly one member named \"add\", \"remove\", "
"or \"replace\"."
msgstr ""
"Operation objects must contain exactly one member named \"add\", \"remove\", "
"or \"replace\"."

msgid ""
"Operation objects must contain only one member named \"add\", \"remove\", or "
"\"replace\"."
msgstr ""
"Operation objects must contain only one member named \"add\", \"remove\", or "
"\"replace\"."

msgid "Operations must be JSON objects."
msgstr "Operations must be JSON objects."

#, python-format
msgid "Original locations is not empty: %s"
msgstr "Original locations is not empty: %s"

msgid "Owner can't be updated by non admin."
msgstr "Owner can't be updated by non admin."

msgid "Owner must be specified to create a tag."
msgstr "Owner must be specified to create a tag."

msgid "Owner of the image"
msgstr "Owner of the image"

msgid "Owner of the namespace."
msgstr "Owner of the namespace."

msgid "Param values can't contain 4 byte unicode."
msgstr "Param values can't contain 4 byte Unicode."

msgid "Placed database under migration control at revision:"
msgstr "Placed database under migration control at revision:"

msgid "Placing database under Alembic's migration control at revision:"
msgstr "Placing database under Alembic's migration control at revision:"

#, python-format
msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence."
msgstr "Pointer `%s` contains \"~\" not part of a recognised escape sequence."

#, python-format
msgid "Pointer `%s` contains adjacent \"/\"."
msgstr "Pointer `%s` contains adjacent \"/\"."

#, python-format
msgid "Pointer `%s` does not contains valid token."
msgstr "Pointer `%s` does not contain a valid token."

#, python-format
msgid "Pointer `%s` does not start with \"/\"."
msgstr "Pointer `%s` does not start with \"/\"."

#, python-format
msgid "Pointer `%s` end with \"/\"."
msgstr "Pointer `%s` ends with \"/\"."

#, python-format
msgid "Port \"%s\" is not valid."
msgstr "Port \"%s\" is not valid."

#, python-format
msgid "Process %d not running"
msgstr "Process %d not running"

#, python-format
msgid "Properties %s must be set prior to saving data."
msgstr "Properties %s must be set prior to saving data."

#, python-format
msgid ""
"Property %(property_name)s does not start with the expected resource type "
"association prefix of '%(prefix)s'."
msgstr ""
"Property %(property_name)s does not start with the expected resource type "
"association prefix of '%(prefix)s'."

#, python-format
msgid "Property %s already present."
msgstr "Property %s already present."

#, python-format
msgid "Property %s does not exist."
msgstr "Property %s does not exist."

#, python-format
msgid "Property %s may not be removed."
msgstr "Property %s may not be removed."

#, python-format
msgid "Property %s must be set prior to saving data."
msgstr "Property %s must be set prior to saving data."

#, python-format
msgid "Property '%s' is protected"
msgstr "Property '%s' is protected"

msgid "Property names can't contain 4 byte unicode."
msgstr "Property names can't contain 4 byte Unicode."

#, python-format
msgid ""
"Provided image size must match the stored image size. (provided size: "
"%(ps)d, stored size: %(ss)d)"
msgstr ""
"Provided image size must match the stored image size. (provided size: "
"%(ps)d, stored size: %(ss)d)"

#, python-format
msgid "Provided object does not match schema '%(schema)s': %(reason)s"
msgstr "Provided object does not match schema '%(schema)s': %(reason)s"

#, python-format
msgid "Provided status of task is unsupported: %(status)s"
msgstr "Provided status of task is unsupported: %(status)s"

#, python-format
msgid "Provided type of task is unsupported: %(type)s"
msgstr "Provided type of task is unsupported: %(type)s"

msgid "Provides a user friendly description of the namespace."
msgstr "Provides a user friendly description of the namespace."

msgid "Purge command failed, check glance-manage logs for more details."
msgstr "Purge command failed, check glance-manage logs for more details."

msgid "Received invalid HTTP redirect."
msgstr "Received invalid HTTP redirect."

#, python-format
msgid "Redirecting to %(uri)s for authorization."
msgstr "Redirecting to %(uri)s for authorisation."

#, python-format
msgid "Registry service can't use %s"
msgstr "Registry service can't use %s"

#, python-format
msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
msgstr ""
"Registry was not configured correctly on API server. Reason: %(reason)s"

#, python-format
msgid "Reload of %(serv)s not supported"
msgstr "Reload of %(serv)s not supported"

#, python-format
msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)"
msgstr "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Removing stale pid file %s"

msgid "Request body must be a JSON array of operation objects."
msgstr "Request body must be a JSON array of operation objects."

msgid "Request must be a list of commands"
msgstr "Request must be a list of commands"

#, python-format
msgid "Required store %s is invalid"
msgstr "Required store %s is invalid"

msgid ""
"Resource type names should be aligned with Heat resource types whenever "
"possible: http://docs.openstack.org/developer/heat/template_guide/openstack."
"html"
msgstr ""
"Resource type names should be aligned with Heat resource types whenever "
"possible: http://docs.openstack.org/developer/heat/template_guide/openstack."
"html"

msgid "Response from Keystone does not contain a Glance endpoint."
msgstr "Response from Keystone does not contain a Glance endpoint."

msgid "Rolling upgrades are currently supported only for MySQL and Sqlite"
msgstr "Rolling upgrades are currently supported only for MySQL and Sqlite"

msgid "Scope of image accessibility"
msgstr "Scope of image accessibility"

msgid "Scope of namespace accessibility."
msgstr "Scope of namespace accessibility."

msgid "Scrubber encountered an error while trying to fetch scrub jobs."
msgstr "Scrubber encountered an error while trying to fetch scrub jobs."

#, python-format
msgid "Server %(serv)s is stopped"
msgstr "Server %(serv)s is stopped"

#, python-format
msgid "Server worker creation failed: %(reason)s."
msgstr "Server worker creation failed: %(reason)s."

msgid "Signature verification failed"
msgstr "Signature verification failed"

msgid "Size of image file in bytes"
msgstr "Size of image file in bytes"

msgid ""
"Some resource types allow more than one key / value pair per instance. For "
"example, Cinder allows user and image metadata on volumes. Only the image "
"properties metadata is evaluated by Nova (scheduling or drivers). This "
"property allows a namespace target to remove the ambiguity."
msgstr ""
"Some resource types allow more than one key / value pair per instance. For "
"example, Cinder allows user and image metadata on volumes. Only the image "
"properties metadata is evaluated by Nova (scheduling or drivers). This "
"property allows a namespace target to remove the ambiguity."

msgid "Sort direction supplied was not valid."
msgstr "Sort direction supplied was not valid."

msgid "Sort key supplied was not valid."
msgstr "Sort key supplied was not valid."

msgid ""
"Specifies the prefix to use for the given resource type. Any properties in "
"the namespace should be prefixed with this prefix when being applied to the "
"specified resource type. Must include prefix separator (e.g. a colon :)."
msgstr ""
"Specifies the prefix to use for the given resource type. Any properties in "
"the namespace should be prefixed with this prefix when being applied to the "
"specified resource type. Must include prefix separator (e.g. a colon :)."

msgid "Specifying both 'visibility' and 'is_public' is not permiitted."
msgstr "Specifying both 'visibility' and 'is_public' is not permitted."

msgid "Status must be \"pending\", \"accepted\" or \"rejected\"."
msgstr "Status must be \"pending\", \"accepted\" or \"rejected\"."

msgid "Status not specified"
msgstr "Status not specified"

msgid "Status of the image"
msgstr "Status of the image"

#, python-format
msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed"
msgstr "Status transition from %(cur_status)s to %(new_status)s is not allowed"

#, python-format
msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)"
msgstr "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)"

#, python-format
msgid "Store for image_id not found: %s"
msgstr "Store for image_id not found: %s"

#, python-format
msgid "Store for scheme %s not found"
msgstr "Store for scheme %s not found"

#, python-format
msgid ""
"Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image "
"(%(actual)s) did not match. Setting image status to 'killed'."
msgstr ""
"Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image "
"(%(actual)s) did not match. Setting image status to 'killed'."

msgid "Supported values for the 'container_format' image attribute"
msgstr "Supported values for the 'container_format' image attribute"

msgid "Supported values for the 'disk_format' image attribute"
msgstr "Supported values for the 'disk_format' image attribute"

#, python-format
msgid "Suppressed respawn as %(serv)s was %(rsn)s."
msgstr "Suppressed re-spawn as %(serv)s was %(rsn)s."

msgid "System SIGHUP signal received."
msgstr "System SIGHUP signal received."

#, python-format
msgid "Task '%s' is required"
msgstr "Task '%s' is required"

msgid "Task does not exist"
msgstr "Task does not exist"

msgid "Task failed due to Internal Error"
msgstr "Task failed due to Internal Error"

msgid "Task was not configured properly"
msgstr "Task was not configured properly"

#, python-format
msgid "Task with the given id %(task_id)s was not found"
msgstr "Task with the given id %(task_id)s was not found"

msgid "The \"changes-since\" filter is no longer available on v2."
msgstr "The \"changes-since\" filter is no longer available on v2."

#, python-format
msgid "The CA file you specified %s does not exist"
msgstr "The CA file you specified %s does not exist"

msgid ""
"The HTTP header used to determine the scheme for the original request, even "
"if it was removed by an SSL terminating proxy. Typical value is "
"\"HTTP_X_FORWARDED_PROTO\"."
msgstr ""
"The HTTP header used to determine the scheme for the original request, even "
"if it was removed by an SSL terminating proxy. Typical value is "
"\"HTTP_X_FORWARDED_PROTO\"."

#, python-format
msgid ""
"The Image %(image_id)s object being created by this task %(task_id)s, is no "
"longer in valid status for further processing."
msgstr ""
"The Image %(image_id)s object being created by this task %(task_id)s, is no "
"longer in valid status for further processing."

msgid ""
"The Images (Glance) version 1 API has been DEPRECATED in the Newton release "
"and will be removed on or after Pike release, following the standard "
"OpenStack deprecation policy. Hence, the configuration options specific to "
"the Images (Glance) v1 API are hereby deprecated and subject to removal. "
"Operators are advised to deploy the Images (Glance) v2 API."
msgstr ""
"The Images (Glance) version 1 API has been DEPRECATED in the Newton release "
"and will be removed on or after Pike release, following the standard "
"OpenStack deprecation policy. Hence, the configuration options specific to "
"the Images (Glance) v1 API are hereby deprecated and subject to removal. "
"Operators are advised to deploy the Images (Glance) v2 API."

msgid ""
"The Images (Glance) version 1 API has been DEPRECATED in the Newton release. "
"It will be removed on or after Pike release, following the standard "
"OpenStack deprecation policy. Once we remove the Images (Glance) v1 API, "
"only the Images (Glance) v2 API can be deployed and will be enabled by "
"default making this option redundant."
msgstr ""
"The Images (Glance) version 1 API has been DEPRECATED in the Newton release. "
"It will be removed on or after Pike release, following the standard "
"OpenStack deprecation policy. Once we remove the Images (Glance) v1 API, "
"only the Images (Glance) v2 API can be deployed and will be enabled by "
"default making this option redundant."

msgid "The Store URI was malformed."
msgstr "The Store URI was malformed."

msgid ""
"The URL to the keystone service. If \"use_user_token\" is not in effect and "
"using keystone auth, then URL of keystone can be specified."
msgstr ""
"The URL to the Keystone service. If \"use_user_token\" is not in effect and "
"using Keystone auth, then URL of Keystone can be specified."

msgid "The address where the Swift authentication service is listening."
msgstr "The address where the Swift authentication service is listening."

msgid ""
"The administrators password. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."
msgstr ""
"The administrator's password. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."

msgid ""
"The administrators user name. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."
msgstr ""
"The administrator's user name. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."
#, python-format msgid "The cert file you specified %s does not exist" msgstr "The cert file you specified %s does not exist" msgid "" "The current database version is not supported any more. Please upgrade to " "Liberty release first." msgstr "" "The current database version is not supported any more. Please upgrade to " "Liberty release first." msgid "The current status of this task" msgstr "The current status of this task" #, python-format msgid "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." msgstr "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." #, python-format msgid "" "The given uri is not valid. Please specify a valid uri from the following " "list of supported uri %(supported)s" msgstr "" "The given URI is not valid. Please specify a valid URI from the following " "list of supported URI %(supported)s" #, python-format msgid "The image %s has data on staging" msgstr "The image %s has data on staging" #, python-format msgid "" "The image %s is already present on the target, but our check for it did not " "find it. This indicates that we do not have permissions to see all the " "images on the target server." msgstr "" "The image %s is already present on the target, but our check for it did not " "find it. This indicates that we do not have permissions to see all the " "images on the target server." 
#, python-format msgid "The incoming image is too large: %s" msgstr "The incoming image is too large: %s" #, python-format msgid "The key file you specified %s does not exist" msgstr "The key file you specified %s does not exist" #, python-format msgid "" "The limit has been exceeded on the number of allowed image locations. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "The limit has been exceeded on the number of allowed image locations. " "Attempted: %(attempted)s, Maximum: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(attempted)s, Maximum: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(num)s, Maximum: %(quota)s" msgstr "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(num)s, Maximum: %(quota)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image tags. Attempted: " "%(attempted)s, Maximum: %(maximum)s" msgstr "" "The limit has been exceeded on the number of allowed image tags. Attempted: " "%(attempted)s, Maximum: %(maximum)s" #, python-format msgid "The location %(location)s already exists" msgstr "The location %(location)s already exists" #, python-format msgid "The location data has an invalid ID: %d" msgstr "The location data has an invalid ID: %d" #, python-format msgid "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. 
Other records still refer to it." msgstr "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. Other records still refer to it." #, python-format msgid "The metadata definition namespace=%(namespace_name)s already exists." msgstr "The metadata definition namespace=%(namespace_name)s already exists." #, python-format msgid "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." msgstr "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." #, python-format msgid "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." msgstr "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." msgstr "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." msgstr "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." #, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." 
msgid "The parameters required by task, JSON blob" msgstr "The parameters required by task, JSON blob" msgid "The provided image is too large." msgstr "The provided image is too large." msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." msgstr "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using Keystone auth, then region name can be specified." msgid "The request returned 500 Internal Server Error." msgstr "The request returned 500 Internal Server Error." msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." msgstr "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgid "" "The requested image has been deactivated. Image data download is forbidden." 
msgstr "" "The requested image has been deactivated. Image data download is forbidden." msgid "The result of current task, JSON blob" msgstr "The result of current task, JSON blob" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." msgstr "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." #, python-format msgid "The specified member %s could not be found" msgstr "The specified member %s could not be found" #, python-format msgid "The specified metadata object %s could not be found" msgstr "The specified metadata object %s could not be found" #, python-format msgid "The specified metadata tag %s could not be found" msgstr "The specified metadata tag %s could not be found" #, python-format msgid "The specified namespace %s could not be found" msgstr "The specified namespace %s could not be found" #, python-format msgid "The specified property %s could not be found" msgstr "The specified property %s could not be found" #, python-format msgid "The specified resource type %s could not be found " msgstr "The specified resource type %s could not be found " msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgstr "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgstr "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgid "The status of this image member" msgstr "The status of this image member" msgid "" "The strategy to use for authentication. If \"use_user_token\" is not in " "effect, then auth strategy can be specified." msgstr "" "The strategy to use for authentication. If \"use_user_token\" is not in " "effect, then auth strategy can be specified." 
#, python-format msgid "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgstr "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgid "" "The tenant name of the administrative user. If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgstr "" "The tenant name of the administrative user. If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgid "The type of task represented by this content" msgstr "The type of task represented by this content" msgid "The unique namespace text." msgstr "The unique namespace text." msgid "The user friendly name for the namespace. Used by UI if available." msgstr "The user friendly name for the namespace. Used by UI if available." msgid "The user to authenticate against the Swift authentication service." msgstr "The user to authenticate against the Swift authentication service." #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgid "There was an error configuring the client." msgstr "There was an error configuring the client." 
msgid "There was an error connecting to a server" msgstr "There was an error connecting to a server" msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgid "This operation is currently not permitted on Glance images details." msgstr "This operation is currently not permitted on Glance images details." msgid "" "This option will be removed in the Pike release or later because the same " "functionality can be achieved with greater granularity by using policies. " "Please see the Newton release notes for more information." msgstr "" "This option will be removed in the Pike release or later because the same " "functionality can be achieved with greater granularity by using policies. " "Please see the Newton release notes for more information." msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "" "Time in hours for which a task lives after, either succeeding or failing" msgid "Too few arguments." msgstr "Too few arguments." 
#, python-format msgid "" "Total size is %(size)d bytes (%(human_size)s) across %(img_count)d images" msgstr "" "Total size is %(size)d bytes (%(human_size)s) across %(img_count)d images" msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" #, python-format msgid "URI for web-download does not pass filtering: %s" msgstr "URI for web-download does not pass filtering: %s" msgid "URL to access the image file kept in external store" msgstr "URL to access the image file kept in external store" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "Unable to filter by unknown operator '%s'." msgid "Unable to filter on a range with a non-numeric value." msgstr "Unable to filter on a range with a non-numeric value." msgid "Unable to filter on a unknown operator." msgstr "Unable to filter on a unknown operator." msgid "Unable to filter using the specified operator." msgstr "Unable to filter using the specified operator." msgid "Unable to filter using the specified range." msgstr "Unable to filter using the specified range." 
#, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "Unable to find '%s' in JSON Schema change" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgid "Unable to increase file descriptor limit. Running as non-root?" msgstr "Unable to increase file descriptor limit. Running as non-root?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "Unable to load schema: %(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "Unable to locate paste config file for %s." msgid "" "Unable to place database under Alembic's migration control. Unknown database " "state, can't proceed further." msgstr "" "Unable to place database under Alembic's migration control. Unknown database " "state, can't proceed further." #, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgid "Unauthorized image access" msgstr "Unauthorised image access" msgid "Unexpected body type. Expected list/dict." msgstr "Unexpected body type. Expected list/dict." #, python-format msgid "Unexpected response: %s" msgstr "Unexpected response: %s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "Unknown auth strategy '%s'" #, python-format msgid "Unknown command: %s" msgstr "Unknown command: %s" #, python-format msgid "Unknown import method name '%s'." msgstr "Unknown import method name '%s'." 
msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Unknown sort direction, must be 'desc' or 'asc'" msgid "Unrecognized JSON Schema draft version" msgstr "Unrecognised JSON Schema draft version" msgid "Unrecognized changes-since value" msgstr "Unrecognised changes-since value" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "Unsupported sort_dir. Acceptable values: %s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "Unsupported sort_key. Acceptable values: %s" #, python-format msgid "Upgraded database to: %(v)s, current revision(s): %(r)s" msgstr "Upgraded database to: %(v)s, current revision(s): %(r)s" msgid "Upgraded database, current revision(s):" msgstr "Upgraded database, current revision(s):" #, python-format msgid "Uploading the image failed due to: %(exc)s" msgstr "Uploading the image failed due to: %(exc)s" msgid "Use the http_proxy_to_wsgi middleware instead." msgstr "Use the http_proxy_to_wsgi middleware instead." msgid "Virtual size of image in bytes" msgstr "Virtual size of image in bytes" msgid "" "Visibility must be one of \"community\", \"public\", \"private\", or \"shared" "\"" msgstr "" "Visibility must be one of \"community\", \"public\", \"private\", or \"shared" "\"" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgid "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." 
msgstr "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." #, python-format msgid "Wrong command structure: %s" msgstr "Wrong command structure: %s" msgid "You are not authenticated." msgstr "You are not authenticated." #, python-format msgid "You are not authorized to complete %(action)s action." msgstr "You are not authorised to complete %(action)s action." msgid "You are not authorized to complete this action." msgstr "You are not authorised to complete this action." #, python-format msgid "You are not authorized to lookup image %s." msgstr "You are not authorised to lookup image %s." #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "You are not authorised to lookup the members of the image %s." #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "You are not permitted to create a tag in the namespace owned by '%s'" msgid "You are not permitted to create image members for the image." msgstr "You are not permitted to create image members for the image." #, python-format msgid "You are not permitted to create images owned by '%s'." msgstr "You are not permitted to create images owned by '%s'." 
#, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "You are not permitted to create namespace owned by '%s'" #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "You are not permitted to create object owned by '%s'" #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "You are not permitted to create property owned by '%s'" #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "You are not permitted to create resource_type owned by '%s'" #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "You are not permitted to create this task with owner as: %s" msgid "You are not permitted to deactivate this image." msgstr "You are not permitted to deactivate this image." msgid "You are not permitted to delete this image." msgstr "You are not permitted to delete this image." msgid "You are not permitted to delete this meta_resource_type." msgstr "You are not permitted to delete this meta_resource_type." msgid "You are not permitted to delete this namespace." msgstr "You are not permitted to delete this namespace." msgid "You are not permitted to delete this object." msgstr "You are not permitted to delete this object." msgid "You are not permitted to delete this property." msgstr "You are not permitted to delete this property." msgid "You are not permitted to delete this tag." msgstr "You are not permitted to delete this tag." #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "You are not permitted to modify '%(attr)s' on this %(resource)s." #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "You are not permitted to modify '%s' on this image." msgid "You are not permitted to modify locations for this image." msgstr "You are not permitted to modify locations for this image." 
msgid "You are not permitted to modify tags on this image." msgstr "You are not permitted to modify tags on this image." msgid "You are not permitted to modify this image." msgstr "You are not permitted to modify this image." msgid "You are not permitted to reactivate this image." msgstr "You are not permitted to reactivate this image." msgid "You are not permitted to set status on this task." msgstr "You are not permitted to set status on this task." msgid "You are not permitted to update this namespace." msgstr "You are not permitted to update this namespace." msgid "You are not permitted to update this object." msgstr "You are not permitted to update this object." msgid "You are not permitted to update this property." msgstr "You are not permitted to update this property." msgid "You are not permitted to update this tag." msgstr "You are not permitted to update this tag." msgid "You are not permitted to upload data for this image." msgstr "You are not permitted to upload data for this image." 
#, python-format msgid "You cannot add image member for %s" msgstr "You cannot add image member for %s" #, python-format msgid "You cannot delete image member for %s" msgstr "You cannot delete image member for %s" #, python-format msgid "You cannot get image member for %s" msgstr "You cannot get image member for %s" #, python-format msgid "You cannot update image member %s" msgstr "You cannot update image member %s" msgid "You do not own this image" msgstr "You do not own this image" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgid "" "Your database is not up to date. Your first step is to run `glance-manage db " "expand`." msgstr "" "Your database is not up to date. Your first step is to run `glance-manage db " "expand`." msgid "" "Your database is not up to date. Your next step is to run `glance-manage db " "contract`." msgstr "" "Your database is not up to date. Your next step is to run `glance-manage db " "contract`." msgid "" "Your database is not up to date. Your next step is to run `glance-manage db " "migrate`." msgstr "" "Your database is not up to date. Your next step is to run `glance-manage db " "migrate`." 
msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__() got unexpected keyword argument '%s'" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "custom properties (%(props)s) conflict with base properties" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "eventlet 'poll' nor 'selects' hubs are available on this platform" msgid "is_public must be None, True, or False" msgstr "is_public must be None, True, or False" msgid "limit param must be an integer" msgstr "limit param must be an integer" msgid "limit param must be positive" msgstr "limit param must be positive" msgid "md5 hash of image contents." msgstr "MD5 hash of image contents." #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image() got unexpected keywords %s" msgid "protected must be True, or False" msgstr "protected must be True, or False" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "unable to launch %(serv)s. 
Got error: %(e)s"

#, python-format
msgid "x-openstack-request-id is too long, max size %s"
msgstr "x-openstack-request-id is too long, max size %s"

glance-16.0.0/glance/locale/tr_TR/LC_MESSAGES/glance.po

# Translations template for glance.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the glance project.
#
# Translators:
# Andreas Jaeger , 2015
# Andreas Jaeger , 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: glance 15.0.0.0b3.dev29\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-06-23 20:54+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-12 05:22+0000\n"
"Last-Translator: Copied by Zanata \n"
"Language: tr-TR\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.9.6\n"
"Language-Team: Turkish (Turkey)\n"

#, python-format
msgid "\t%s"
msgstr "\t%s"

#, python-format
msgid "%(cls)s exception was raised in the last rpc call: %(val)s"
msgstr "Son rpc çağrısında %(cls)s istisnası oluştu: %(val)s"

#, python-format
msgid "%(serv)s (pid %(pid)s) is running..."
msgstr "%(serv)s (pid %(pid)s) çalıştırılıyor..."

#, python-format
msgid "%(serv)s appears to already be running: %(pid)s"
msgstr "%(serv)s çalışıyor görünüyor: %(pid)s"

#, python-format
msgid ""
"%(strategy)s is registered as a module twice. %(module)s is not being used."
msgstr ""
"%(strategy)s bir birim olarak iki kez kaydedildi. %(module)s kullanılmıyor."

#, python-format
msgid ""
"%(task_id)s of %(task_type)s not configured properly. 
Could not load the " "filesystem store" msgstr "" "%(task_type)s görev türündeki %(task_id)s düzgün bir ÅŸekilde " "yapılandırılamadı. Dosya sistem deposuna yüklenemedi" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "%(task_type)s görev türündeki %(task_id)s düzgün bir ÅŸekilde " "yapılandırılamadı. Eksik çalışma dizini: %(work_dir)s" #, python-format msgid "%(verb)sing %(serv)s" msgstr "%(verb)sing %(serv)s" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "%(conf)s ile %(verb)sing %(serv)s" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s Lütfen istemcinin bir IPv4, IPv6 adresi, makine adı ya da FQDN olduÄŸu bir " "istemci:baÄŸlantı noktası çifti belirtin. EÄŸer IPv6 kullanılırsa, baÄŸlantı " "noktasından ayrı parantez içine alın (örneÄŸin, \"[fe80::a:b:c]:9876\")." #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s 4 bayt unicode karakterler içeremez." #, python-format msgid "%s is already stopped" msgstr "%s zaten durdurulmuÅŸ" #, python-format msgid "%s is stopped" msgstr "%s durduruldu" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "--os_auth_url seçeneÄŸi ya da OS_AUTH_URL ortam deÄŸiÅŸkeni, keystone kimlik " "doÄŸrulama stratejisi etkinken gereklidir\n" #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Ad=%(object_name)s ile bir metadata tanım nesnesi ad alanında=" "%(namespace_name)s zaten var." #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." 
msgstr ""
"Ad=%(property_name)s ile bir metadata tanım özelliği ad alanında="
"%(namespace_name)s zaten mevcut."

#, python-format
msgid ""
"A metadata definition resource-type with name=%(resource_type_name)s already "
"exists."
msgstr ""
"Ad=%(resource_type_name)s ile bir metadata tanım kaynak-türü zaten mevcut."

msgid "A set of URLs to access the image file kept in external store"
msgstr "Harici depoda tutulan imaj dosyasına erişilecek URL kümesi"

msgid "Amount of disk space (in GB) required to boot image."
msgstr "İmajı ön yüklemek için gereken disk alanı miktarı (GB olarak)."

msgid "Amount of ram (in MB) required to boot image."
msgstr "İmaj ön yüklemesi için gereken (MB olarak) bellek miktarı."

msgid "An identifier for the image"
msgstr "İmaj için bir tanımlayıcı"

msgid "An identifier for the image member (tenantId)"
msgstr "İmaj üyesi için bir tanımlayıcı (tenantId)"

msgid "An identifier for the owner of this task"
msgstr "Görevin sahibi için bir tanımlayıcı"

msgid "An identifier for the task"
msgstr "Görev için bir tanımlayıcı"

#, python-format
msgid "An image with identifier %s already exists"
msgstr "%s belirteçli imaj zaten var"

msgid "An object with the same identifier already exists."
msgstr "Aynı tanımlayıcı ile bir nesne zaten mevcut."

msgid "An object with the same identifier is currently being operated on."
msgstr "Aynı tanımlayıcıya sahip bir nesne şu anda işleniyor."

msgid "An object with the specified identifier was not found."
msgstr "Belirtilen tanımlayıcı ile bir nesne bulunamadı."

msgid "An unknown exception occurred"
msgstr "Bilinmeyen olağandışı bir durum oluştu"

msgid "An unknown task exception occurred"
msgstr "Bilinmeyen bir görev olağandışı durumu oluştu"

#, python-format
msgid "Attempt to upload duplicate image: %s"
msgstr "Çift imaj yükleme girişimi: %s"

msgid "Attempted to update Location field for an image not in queued status."
msgstr ""
"Durumu kuyruğa alınmış olmayan bir imaj için Konum alanı güncellemesi "
"denendi."
#, python-format
msgid "Attribute '%(property)s' is read-only."
msgstr "'%(property)s' özniteliği salt okunurdur."

#, python-format
msgid "Attribute '%(property)s' is reserved."
msgstr "'%(property)s' özniteliği ayrılmıştır."

#, python-format
msgid "Attribute '%s' is read-only."
msgstr "'%s' özniteliği salt okunurdur."

#, python-format
msgid "Attribute '%s' is reserved."
msgstr "'%s' özniteliği ayrılmıştır."

msgid "Attribute container_format can be only replaced for a queued image."
msgstr ""
"container_format özniteliği sadece kuyruğa alınmış bir imaj için "
"değiştirilebilir."

msgid "Attribute disk_format can be only replaced for a queued image."
msgstr ""
"disk_format özniteliği sadece kuyruğa alınmış bir imaj için değiştirilebilir."

#, python-format
msgid "Auth service at URL %(url)s not found."
msgstr "%(url)s URL'inde kimlik doğrulama servisi bulunamadı."

msgid "Authorization failed."
msgstr "Yetkilendirme başarısız oldu."

msgid "Available categories:"
msgstr "Kullanılabilir kategoriler:"

#, python-format
msgid "Bad Command: %s"
msgstr "Hatalı Komut: %s"

#, python-format
msgid "Bad header: %(header_name)s"
msgstr "Kötü başlık: %(header_name)s"

#, python-format
msgid "Bad value passed to filter %(filter)s got %(val)s"
msgstr "%(filter)s süzgecine geçirilen hatalı değer %(val)s var"

#, python-format
msgid "Badly formed S3 URI: %(uri)s"
msgstr "Hatalı oluşturulmuş S3 URI: %(uri)s"

#, python-format
msgid "Badly formed credentials '%(creds)s' in Swift URI"
msgstr "Swift URI içinde hatalı oluşturulmuş kimlik bilgileri '%(creds)s'"

msgid "Badly formed credentials in Swift URI."
msgstr "Swift URI içinde hatalı oluşturulmuş kimlik bilgileri."

msgid "Body expected in request."
msgstr "İstekte gövde bekleniyor."

msgid "Cannot be a negative value"
msgstr "Negatif bir değer olamaz"

msgid "Cannot be a negative value."
msgstr "Negatif bir değer olamaz."

#, python-format
msgid "Cannot convert image %(key)s '%(value)s' to an integer."
msgstr "%(key)s '%(value)s' imaj değeri bir tam sayıya dönüştürülemez."

#, python-format
msgid "Cannot save data for image %(image_id)s: %(error)s"
msgstr "%(image_id)s imajı için veri kaydedilemiyor: %(error)s"

msgid "Cannot upload to an unqueued image"
msgstr "Kuyruğa alınmamış imaj yüklenemez"

#, python-format
msgid "Checksum verification failed. Aborted caching of image '%s'."
msgstr ""
"Sağlama doğrulama başarısız oldu. '%s' imajını önbelleğe alma işlemi "
"durduruldu."

msgid "Client disconnected before sending all data to backend"
msgstr ""
"İstemci tüm verileri art alanda çalışan uygulamaya göndermeden önce "
"bağlantıyı kesti"

msgid "Command not found"
msgstr "Komut bulunamadı"

msgid "Configuration option was not valid"
msgstr "Yapılandırma seçeneği geçerli değildi"

#, python-format
msgid "Connect error/bad request to Auth service at URL %(url)s."
msgstr ""
"%(url)s URL'indeki kimlik doğrulama servisine bağlantı hatası/hatalı istek."

#, python-format
msgid "Constructed URL: %s"
msgstr "URL inşa edildi: %s"

msgid "Container format is not specified."
msgstr "Kap biçimi belirtilmemiş."
msgid "Content-Type must be application/octet-stream"
msgstr "İçerik-Türü uygulama/sekiz bitli bayt akışı olmalıdır"

#, python-format
msgid "Corrupt image download for image %(image_id)s"
msgstr "%(image_id)s imajı için bozuk imaj indirme"

#, python-format
msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds"
msgstr "30 saniyelik denemeden sonra %(host)s:%(port)s bağlanamadı"

#, python-format
msgid "Could not find metadata object %s"
msgstr "Metadata nesnesi %s bulunamadı"

#, python-format
msgid "Could not find metadata tag %s"
msgstr "%s metadata etiketi bulunamadı"

#, python-format
msgid "Could not find namespace %s"
msgstr "%s ad alanı bulunamadı"

#, python-format
msgid "Could not find property %s"
msgstr "%s özelliği bulunamadı"

msgid "Could not find required configuration option"
msgstr "Gerekli yapılandırma seçeneği bulunamadı"

#, python-format
msgid "Could not find task %s"
msgstr "%s görevi bulunamadı"

#, python-format
msgid "Could not update image: %s"
msgstr "İmaj güncellenemiyor: %s"

msgid "Data supplied was not valid."
msgstr "Sağlanan veri geçersizdir."

msgid "Date and time of image member creation"
msgstr "İmaj üyesi oluşturma tarih ve saati"

msgid "Date and time of last modification of image member"
msgstr "İmaj üyesi son değişiklik tarih ve saati"

msgid "Datetime when this resource was created"
msgstr "Bu kaynak oluşturulduğundaki tarih saat"

msgid "Datetime when this resource was updated"
msgstr "Bu kaynak güncellendiğindeki tarih saat"

msgid "Datetime when this resource would be subject to removal"
msgstr "Bu kaynağın kaldırılacağı tarih zaman"

#, python-format
msgid "Denying attempt to upload image because it exceeds the quota: %s"
msgstr "İmaj yükleme girişimi kotayı aştığından dolayı reddediliyor: %s"

#, python-format
msgid "Denying attempt to upload image larger than %d bytes."
msgstr "%d bayttan büyük bir imajın yükleme girişimi reddediliyor."
msgid "Descriptive name for the image"
msgstr "İmaj için açıklayıcı ad"

msgid "Disk format is not specified."
msgstr "Disk biçimi belirtilmemiş."

#, python-format
msgid ""
"Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s"
msgstr ""
"%(driver_name)s sürücüsü düzgün bir şekilde yapılandırılamadı. Nedeni: "
"%(reason)s"

msgid "Error in store configuration. Adding images to store is disabled."
msgstr ""
"Depolama yapılandırmasında hata. Depolamak için imaj ekleme devre dışıdır."

#, python-format
msgid "External sources are not supported: '%s'"
msgstr "Harici kaynaklar desteklenmiyor: '%s'"

#, python-format
msgid "Failed to activate image. Got error: %s"
msgstr "İmaj etkinleştirme işlemi başarısız oldu. Alınan hata: %s"

#, python-format
msgid "Failed to add image metadata. Got error: %s"
msgstr "İmaj metadata ekleme işlemi başarısız oldu. Alınan hata: %s"

#, python-format
msgid "Failed to find image %(image_id)s to delete"
msgstr "Silinecek %(image_id)s imajını bulma işlemi başarısız oldu"

#, python-format
msgid "Failed to find image to delete: %s"
msgstr "Silinecek imaj bulunamadı: %s"

#, python-format
msgid "Failed to find image to update: %s"
msgstr "Güncellenecek imaj bulma işlemi başarısız oldu: %s"

#, python-format
msgid "Failed to find resource type %(resourcetype)s to delete"
msgstr "Silinecek %(resourcetype)s kaynak türü bulma işlemi başarısız oldu"

#, python-format
msgid "Failed to initialize the image cache database. Got error: %s"
msgstr "İmaj önbellek veritabanı başlatılamadı. Alınan hata: %s"

#, python-format
msgid "Failed to read %s from config"
msgstr "Yapılandırmadan %s okunamadı"

#, python-format
msgid "Failed to reserve image. Got error: %s"
msgstr "İmaj ayırma işlemi başarısız oldu. Alınan hata: %s"

#, python-format
msgid "Failed to update image metadata. Got error: %s"
msgstr "İmaj metadata güncelleme işlemi başarısız oldu. Alınan hata: %s"

#, python-format
msgid "Failed to upload image %s"
msgstr "%s imajı yükleme işlemi başarısız oldu"

#, python-format
msgid ""
"Failed to upload image data for image %(image_id)s due to HTTP error: "
"%(error)s"
msgstr ""
"HTTP hatası nedeniyle %(image_id)s imajı için imaj verisi yüklenemedi: "
"%(error)s"

#, python-format
msgid ""
"Failed to upload image data for image %(image_id)s due to internal error: "
"%(error)s"
msgstr ""
"Dahili hata nedeniyle %(image_id)s imajı için imaj verisi yüklenemedi: "
"%(error)s"

msgid ""
"File based imports are not allowed. Please use a non-local source of image "
"data."
msgstr ""
"Dosya tabanlı içeri aktarmalara izin verilmez. Lütfen imaj verilerinin yerel "
"olmayan bir kaynağını kullanın."

#, python-format
msgid "Forbidden to delete a %s image."
msgstr "%s imajını silmek yasak."

#, python-format
msgid "Forbidden to delete image: %s"
msgstr "İmaj silmek yasak: %s"

msgid "Forbidden to reserve image."
msgstr "İmaj ayırmak yasak."

msgid "Forbidden to update deleted image."
msgstr "Silinen imajın güncellenmesi yasak."

#, python-format
msgid "Forbidden to update image: %s"
msgstr "İmaj güncellemek yasak: %s"

#, python-format
msgid "Forbidden upload attempt: %s"
msgstr "Yükleme girişimi yasak: %s"

#, python-format
msgid "Forbidding request, metadata definition namespace=%s is not visible."
msgstr "Yasak istek, üstveri tanım ad alanı=%s görünür değil."

#, python-format
msgid "Forbidding request, task %s is not visible"
msgstr "Yasak istek, %s görevi görünür değil"

msgid "Format of the container"
msgstr "Kabın biçimi"

msgid "Format of the disk"
msgstr "Diskin biçimi"

#, python-format
msgid "Host \"%s\" is not valid."
msgstr "İstemci \"%s\" geçersizdir."

#, python-format
msgid "Host and port \"%s\" is not valid."
msgstr "İstemci ve bağlantı noktası \"%s\" geçersizdir."
msgid ""
"Human-readable informative message only included when appropriate (usually "
"on failure)"
msgstr ""
"Okunabilir bilgilendirme iletisi sadece uygun olduğunda (genellikle "
"başarısızlıkta) dahildir"

msgid "If true, image will not be deletable."
msgstr "Eğer seçiliyse, imaj silinemeyecektir."

msgid "If true, namespace will not be deletable."
msgstr "Eğer seçiliyse, ad alanı silinemeyecektir."

#, python-format
msgid "Image %(id)s could not be deleted because it is in use: %(exc)s"
msgstr "%(id)s imajı kullanımda olduğundan dolayı silinemedi: %(exc)s"

#, python-format
msgid "Image %(id)s not found"
msgstr "%(id)s imajı bulunamadı"

#, python-format
msgid ""
"Image %(image_id)s could not be found after upload. The image may have been "
"deleted during the upload: %(error)s"
msgstr ""
"%(image_id)s imajı yüklemeden sonra bulunamadı. İmaj yükleme sırasında "
"silinmiş olabilir: %(error)s"

#, python-format
msgid "Image %(image_id)s is protected and cannot be deleted."
msgstr "%(image_id)s imajı korumalıdır ve silinemez."

#, python-format
msgid ""
"Image %s could not be found after upload. The image may have been deleted "
"during the upload, cleaning up the chunks uploaded."
msgstr ""
"%s imajı yüklendikten sonra bulunamadı. İmaj yükleme sırasında silinmiş, "
"yüklenen parçalar temizlenmiş olabilir."

#, python-format
msgid "Image %s is deactivated"
msgstr "%s imajı devrede değil"

#, python-format
msgid "Image %s is not active"
msgstr "%s imajı etkin değil"

#, python-format
msgid "Image %s not found."
msgstr "%s imajı bulunamadı."

#, python-format
msgid "Image exceeds the storage quota: %s"
msgstr "İmaj depolama kotasını aşar: %s"

msgid "Image id is required."
msgstr "İmaj kimliği gereklidir."
msgid "Image is protected"
msgstr "İmaj korumalıdır"

#, python-format
msgid "Image member limit exceeded for image %(id)s: %(e)s:"
msgstr "%(id)s imajı için üye sınırı aşıldı: %(e)s:"

#, python-format
msgid "Image name too long: %d"
msgstr "İmaj adı çok uzun: %d"

msgid "Image operation conflicts"
msgstr "İmaj işlem çatışmaları"

#, python-format
msgid ""
"Image status transition from %(cur_status)s to %(new_status)s is not allowed"
msgstr ""
"%(cur_status)s durumundan %(new_status)s durumuna imaj durum geçişine izin "
"verilmez"

#, python-format
msgid "Image storage media is full: %s"
msgstr "İmaj depolama ortamı dolu: %s"

#, python-format
msgid "Image tag limit exceeded for image %(id)s: %(e)s:"
msgstr "%(id)s imajı için etiket sınırı aşıldı: %(e)s:"

#, python-format
msgid "Image upload problem: %s"
msgstr "İmaj yükleme sorunu: %s"

#, python-format
msgid "Image with identifier %s already exists!"
msgstr "%s tanımlayıcısı ile imaj zaten mevcut!"

#, python-format
msgid "Image with identifier %s has been deleted."
msgstr "%s tanımlayıcılı imaj silindi."
#, python-format
msgid "Image with identifier %s not found"
msgstr "%s tanımlayıcısı ile imaj bulunamadı"

#, python-format
msgid "Image with the given id %(image_id)s was not found"
msgstr "Verilen %(image_id)s ile imaj bulunamadı"

#, python-format
msgid ""
"Incorrect auth strategy, expected \"%(expected)s\" but received "
"\"%(received)s\""
msgstr ""
"Hatalı yetki stratejisi, beklenen değer, \"%(expected)s\" ancak alınan "
"değer, \"%(received)s\""

#, python-format
msgid "Incorrect request: %s"
msgstr "Hatalı istek: %s"

#, python-format
msgid "Input does not contain '%(key)s' field"
msgstr "Girdi '%(key)s' alanı içermez"

#, python-format
msgid "Insufficient permissions on image storage media: %s"
msgstr "İmaj depolama ortamında yetersiz izinler: %s"

#, python-format
msgid "Invalid JSON pointer for this resource: '/%s'"
msgstr "Bu kaynak için geçersiz JSON işaretçisi: '/%s'"

#, python-format
msgid "Invalid checksum '%s': can't exceed 32 characters"
msgstr "Geçersiz sağlama '%s': 32 karakterden uzun olamaz"

msgid "Invalid configuration in glance-swift conf file."
msgstr "glance-swift yapılandırma dosyasında geçersiz yapılandırma."

msgid "Invalid configuration in property protection file."
msgstr "Özellik koruma dosyasında geçersiz yapılandırma."

#, python-format
msgid "Invalid container format '%s' for image."
msgstr "İmaj için geçersiz kap biçimi '%s'."

#, python-format
msgid "Invalid content type %(content_type)s"
msgstr "Geçersiz içerik türü %(content_type)s"

#, python-format
msgid "Invalid disk format '%s' for image."
msgstr "İmaj için geçersiz disk biçimi '%s'."

msgid "Invalid image id format"
msgstr "Geçersiz imaj id biçimi"

msgid "Invalid location"
msgstr "Geçersiz konum"

#, python-format
msgid "Invalid location %s"
msgstr "Geçersiz konum %s"

#, python-format
msgid "Invalid location: %s"
msgstr "Geçersiz konum: %s"

#, python-format
msgid ""
"Invalid location_strategy option: %(name)s. The valid strategy option(s) "
"is(are): %(strategies)s"
msgstr ""
"Geçersiz location_strategy seçeneği: %(name)s. Geçerli strateji "
"seçenek(leri): %(strategies)s"

msgid "Invalid locations"
msgstr "Geçersiz konumlar"

#, python-format
msgid "Invalid locations: %s"
msgstr "Geçersiz konumlar: %s"

msgid "Invalid marker format"
msgstr "Geçersiz işaretçi biçimi"

msgid "Invalid marker. Image could not be found."
msgstr "Geçersiz işaretçi. İmaj bulunamadı."

#, python-format
msgid "Invalid membership association: %s"
msgstr "Geçersiz üyelik ilişkisi: %s"

msgid ""
"Invalid mix of disk and container formats. When setting a disk or container "
"format to one of 'aki', 'ari', or 'ami', the container and disk formats must "
"match."
msgstr ""
"Geçersiz disk ve kap biçimleri karışımı. Bir disk ya da kap biçimi 'aki', "
"'ari' ya da 'ami' biçimlerinden biri olarak ayarlanırsa, kap ve disk biçimi "
"eşleşmelidir."

#, python-format
msgid ""
"Invalid operation: `%(op)s`. It must be one of the following: %(available)s."
msgstr ""
"Geçersiz işlem: `%(op)s`. Şu seçeneklerden biri olmalıdır: %(available)s."

msgid "Invalid position for adding a location."
msgstr "Yer eklemek için geçersiz konum."

msgid "Invalid position for removing a location."
msgstr "Yer kaldırmak için geçersiz konum."

msgid "Invalid service catalog json."
msgstr "Geçersiz json servis katalogu."

#, python-format
msgid "Invalid sort direction: %s"
msgstr "Geçersiz sıralama yönü: %s"

#, python-format
msgid ""
"Invalid sort key: %(sort_key)s. It must be one of the following: "
"%(available)s."
msgstr ""
"Geçersiz sıralama anahtarı: %(sort_key)s. Şu seçeneklerden biri olmalıdır: "
"%(available)s."

#, python-format
msgid "Invalid status value: %s"
msgstr "Geçersiz durum değeri: %s"

#, python-format
msgid "Invalid status: %s"
msgstr "Geçersiz durum: %s"

#, python-format
msgid "Invalid type value: %s"
msgstr "Geçersiz tür değeri: %s"

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition namespace "
"with the same name of %s"
msgstr ""
"Geçersiz güncelleme. Aynı %s adıyla çift metadata tanım ad alanı ile "
"sonuçlanır"

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition object "
"with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr ""
"Geçersiz güncelleme. Ad alanı=%(namespace_name)s içinde aynı ad=%(name)s "
"ile çift metadata tanım nesnesi olmasına neden olacaktır."

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition object "
"with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr ""
"Geçersiz güncelleme. Ad alanında=%(namespace_name)s aynı ad=%(name)s ile "
"çift metadata tanım nesnesi ile sonuçlanır."

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition property "
"with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr ""
"Geçersiz güncelleme. Ad alanı=%(namespace_name)s içinde aynı ad=%(name)s ile "
"çift metadata tanım özelliği olmasına neden olacaktır."

#, python-format
msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s"
msgstr ""
"'%(param)s' parametresi için '%(value)s' geçersiz değeri: %(extra_msg)s"

#, python-format
msgid "Invalid value for option %(option)s: %(value)s"
msgstr "%(option)s seçeneği için geçersiz değer: %(value)s"

#, python-format
msgid "Invalid visibility value: %s"
msgstr "Geçersiz görünürlük değeri: %s"

msgid "It's invalid to provide multiple image sources."
msgstr "Birden fazla imaj kaynağı sağlamak için geçersizdir."

msgid "List of strings related to the image"
msgstr "İmaj ile ilgili karakter dizilerinin listesi"

msgid "Malformed JSON in request body."
msgstr "İstek gövdesinde bozuk JSON."

#, python-format
msgid "Maximum redirects (%(redirects)s) was exceeded."
msgstr "Yeniden yönlendirmelerin sınırı (%(redirects)s) aşıldı."
#, python-format
msgid "Member %(member_id)s is duplicated for image %(image_id)s"
msgstr "Üye %(member_id)s %(image_id)s imajı için çoğaltıldı"

msgid "Member can't be empty"
msgstr "Üye boş olamaz"

msgid "Member to be added not specified"
msgstr "Eklenecek üye belirtilmemiş"

msgid "Membership could not be found."
msgstr "Üyelik bulunamadı."

#, python-format
msgid ""
"Metadata definition namespace %(namespace)s is protected and cannot be "
"deleted."
msgstr "Metadata tanım ad alanı %(namespace)s korumalıdır ve silinemez."

#, python-format
msgid "Metadata definition namespace not found for id=%s"
msgstr "id=%s için metadata tanım ad alanı bulunamadı"

#, python-format
msgid ""
"Metadata definition object %(object_name)s is protected and cannot be "
"deleted."
msgstr "Metadata tanım nesnesi %(object_name)s korumalıdır ve silinemez."

#, python-format
msgid "Metadata definition object not found for id=%s"
msgstr "id=%s için metadata tanım nesnesi bulunamadı"

#, python-format
msgid ""
"Metadata definition property %(property_name)s is protected and cannot be "
"deleted."
msgstr "Metadata tanım özelliği %(property_name)s korumalıdır ve silinemez."

#, python-format
msgid "Metadata definition property not found for id=%s"
msgstr "id=%s için metadata tanım özelliği bulunamadı"

#, python-format
msgid ""
"Metadata definition resource-type %(resource_type_name)s is a seeded-system "
"type and cannot be deleted."
msgstr ""
"%(resource_type_name)s metadata tanım kaynak-türü sınıflanmış bir sistem "
"türüdür ve silinemez."

#, python-format
msgid ""
"Metadata definition resource-type-association %(resource_type)s is protected "
"and cannot be deleted."
msgstr ""
"Metadata tanım kaynak-tür-ilişkisi %(resource_type)s korumalıdır ve "
"silinemez."

#, python-format
msgid ""
"Metadata definition tag %(tag_name)s is protected and cannot be deleted."
msgstr "Metadata tanım etiketi %(tag_name)s korumalıdır ve silinemez."
#, python-format
msgid "Metadata definition tag not found for id=%s"
msgstr "id=%s için metadata tanım etiketi bulunamadı"

#, python-format
msgid "Missing required credential: %(required)s"
msgstr "Gerekli olan kimlik eksik: %(required)s"

#, python-format
msgid ""
"Multiple 'image' service matches for region %(region)s. This generally means "
"that a region is required and you have not supplied one."
msgstr ""
"%(region)s bölgesi için birden fazla 'image' servisi eşleşir. Bu genellikle, "
"bir bölgenin gerekli olduğu ve sağlamadığınız anlamına gelir."

msgid "No authenticated user"
msgstr "Kimlik denetimi yapılmamış kullanıcı"

#, python-format
msgid "No image found with ID %s"
msgstr "%s bilgileri ile hiçbir imaj bulunamadı"

#, python-format
msgid "No location found with ID %(loc)s from image %(img)s"
msgstr "%(img)s imajından %(loc)s bilgisi ile hiçbir konum bulunamadı"

msgid "No permission to share that image"
msgstr "Bu imajı paylaşma izni yok"

#, python-format
msgid "Not allowed to create members for image %s."
msgstr "%s imajı için üye oluşturulmasına izin verilmedi."

#, python-format
msgid "Not allowed to deactivate image in status '%s'"
msgstr "'%s' durumundaki imajın etkinliğini kaldırmaya izin verilmez"

#, python-format
msgid "Not allowed to delete members for image %s."
msgstr "%s imajı için üyelerin silinmesine izin verilmedi."

#, python-format
msgid "Not allowed to delete tags for image %s."
msgstr "%s imajı için etiketlerin silinmesine izin verilmedi."

#, python-format
msgid "Not allowed to list members for image %s."
msgstr "%s imajı için üyelerin listelenmesine izin verilmedi."

#, python-format
msgid "Not allowed to reactivate image in status '%s'"
msgstr "'%s' durumundaki imajı yeniden etkinleştirmeye izin verilmez"

#, python-format
msgid "Not allowed to update members for image %s."
msgstr "%s imajı için üyelerin güncellenmesine izin verilmedi."

#, python-format
msgid "Not allowed to update tags for image %s."
msgstr "%s imajı için etiketlerin güncellenmesine izin verilmez."

#, python-format
msgid "Not allowed to upload image data for image %(image_id)s: %(error)s"
msgstr ""
"%(image_id)s imajı için imaj verisi yüklenmesine izin verilmedi: %(error)s"

msgid "Number of sort dirs does not match the number of sort keys"
msgstr ""
"Sıralama dizinlerinin sayısı, sıralama anahtarlarının sayısıyla eşleşmez"

msgid "Old and new sorting syntax cannot be combined"
msgstr "Eski ve yeni sıralama sözdizimi birleştirilemez"

#, python-format
msgid "Operation \"%s\" requires a member named \"value\"."
msgstr "\"%s\" işlemi \"value\" olarak adlandırılan bir üye ister."

msgid ""
"Operation objects must contain exactly one member named \"add\", \"remove\", "
"or \"replace\"."
msgstr ""
"İşlem nesneleri \"add\", \"remove\" ya da \"replace\" olarak adlandırılan "
"tam olarak bir üye içermelidir."

msgid ""
"Operation objects must contain only one member named \"add\", \"remove\", or "
"\"replace\"."
msgstr ""
"İşlem nesneleri, \"add\", \"remove\" ya da \"replace\" olarak adlandırılan "
"sadece bir üye içermelidir."

msgid "Operations must be JSON objects."
msgstr "İşlemler JSON nesnesi olmalıdır."

#, python-format
msgid "Original locations is not empty: %s"
msgstr "Özgün konumlar boş değil: %s"

msgid "Owner must be specified to create a tag."
msgstr "Etiket oluşturmak için sahibi belirtilmelidir."

msgid "Owner of the image"
msgstr "İmajın sahibi"

msgid "Owner of the namespace."
msgstr "Ad alanı sahibi."

msgid "Param values can't contain 4 byte unicode."
msgstr "Param değerleri 4 bayt unicode içeremez."

#, python-format
msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence."
msgstr ""
"`%s` işaretçisi tanınmayan bir vazgeçme dizisinin parçası olmayan \"~\" "
"içerir."

#, python-format
msgid "Pointer `%s` contains adjacent \"/\"."
msgstr "`%s` işaretçisi bitişik \"/\" içerir."

#, python-format
msgid "Pointer `%s` does not contains valid token."
msgstr "`%s` işaretçisi geçerli jeton içermez."

#, python-format
msgid "Pointer `%s` does not start with \"/\"."
msgstr "`%s` işaretçisi \"/\" ile başlamaz."

#, python-format
msgid "Pointer `%s` end with \"/\"."
msgstr "`%s` işaretçisi \"/\" ile sonlanır."

#, python-format
msgid "Port \"%s\" is not valid."
msgstr "Bağlantı noktası \"%s\" geçersizdir."

#, python-format
msgid "Process %d not running"
msgstr "%d süreci çalışmıyor"

#, python-format
msgid "Properties %s must be set prior to saving data."
msgstr "%s özellikleri veri kaydetmeden önce ayarlanmış olmalıdır."

#, python-format
msgid ""
"Property %(property_name)s does not start with the expected resource type "
"association prefix of '%(prefix)s'."
msgstr ""
"%(property_name)s özelliği beklenen kaynak tür ilişkilendirme ön eki "
"'%(prefix)s' ile başlamaz."

#, python-format
msgid "Property %s already present."
msgstr "Özellik %s zaten mevcut."

#, python-format
msgid "Property %s does not exist."
msgstr "Özellik %s mevcut değil."

#, python-format
msgid "Property %s may not be removed."
msgstr "Özellik %s kaldırılamayabilir."

#, python-format
msgid "Property %s must be set prior to saving data."
msgstr "%s özelliği veri kaydetmeden önce ayarlanmış olmalıdır."

#, python-format
msgid "Property '%s' is protected"
msgstr "'%s' özelliği korumalıdır"

msgid "Property names can't contain 4 byte unicode."
msgstr "Özellik adları 4 bayt unicode içeremez."

#, python-format
msgid ""
"Provided image size must match the stored image size. (provided size: "
"%(ps)d, stored size: %(ss)d)"
msgstr ""
"Sağlanan imaj boyutu depolanan imaj boyutu ile eşleşmelidir. (sağlanan "
"boyut: %(ps)d, depolanan boyut: %(ss)d)"

#, python-format
msgid "Provided object does not match schema '%(schema)s': %(reason)s"
msgstr "Sağlanan nesne '%(schema)s' şeması ile eşleşmez: %(reason)s"

#, python-format
msgid "Provided status of task is unsupported: %(status)s"
msgstr "Sağlanan görev durumu desteklenmiyor: %(status)s"

#, python-format
msgid "Provided type of task is unsupported: %(type)s"
msgstr "Sağlanan görev türü desteklenmiyor: %(type)s"

msgid "Provides a user friendly description of the namespace."
msgstr "Ad alanı için kullanıcı dostu bir açıklama sağlar."

msgid "Received invalid HTTP redirect."
msgstr "Geçersiz HTTP yeniden yönlendirme isteği alındı."

#, python-format
msgid "Redirecting to %(uri)s for authorization."
msgstr "Yetkilendirme için %(uri)s adresine yeniden yönlendiriliyor."

#, python-format
msgid "Registry service can't use %s"
msgstr "Kayıt defteri servisi %s kullanamaz"

#, python-format
msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
msgstr ""
"Kayıt defteri API sunucusunda doğru bir şekilde yapılandırılamadı. Nedeni: "
"%(reason)s"

#, python-format
msgid "Reload of %(serv)s not supported"
msgstr "%(serv)s yeniden yükleme desteklenmiyor"

#, python-format
msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)"
msgstr "(%(sig)s) sinyali ile %(serv)s (pid %(pid)s) yeniden yükleniyor"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Eski pid dosyası %s kaldırılıyor"

msgid "Request body must be a JSON array of operation objects."
msgstr "İstek gövdesi işlem nesnelerinin bir JSON dizisi olmalıdır."

msgid "Request must be a list of commands"
msgstr "İstek komutların bir listesi olmalıdır"

#, python-format
msgid "Required store %s is invalid"
msgstr "İstenen depo %s geçersizdir"

msgid ""
"Resource type names should be aligned with Heat resource types whenever "
"possible: http://docs.openstack.org/developer/heat/template_guide/openstack."
"html"
msgstr ""
"Kaynak tür adları her fırsatta, Heat kaynak türleri ile hizalanmalıdır: "
"http://docs.openstack.org/developer/heat/template_guide/openstack.html"

msgid "Response from Keystone does not contain a Glance endpoint."
msgstr "Keystone yanıtı bir Glance uç noktası içermiyor."

msgid "Scope of image accessibility"
msgstr "İmaj erişilebilirlik kapsamı"

msgid "Scope of namespace accessibility."
msgstr "Ad alanı erişebilirlik kapsamı."

#, python-format
msgid "Server %(serv)s is stopped"
msgstr "Sunucu %(serv)s durdurulur"

#, python-format
msgid "Server worker creation failed: %(reason)s."
msgstr "Sunucu işçisi oluşturma işlemi başarısız oldu: %(reason)s."

msgid ""
"Some resource types allow more than one key / value pair per instance. For "
"example, Cinder allows user and image metadata on volumes. Only the image "
"properties metadata is evaluated by Nova (scheduling or drivers). This "
"property allows a namespace target to remove the ambiguity."
msgstr ""
"Bazı kaynak türleri her sunucu başına birden fazla anahtar / değer çiftine "
"izin verir. Örneğin, Cinder mantıksal sürücü üzerinde kullanıcı ve imaj "
"metadatalarına izin verir. Sadece imaj özellikleri metadataları Nova ile "
"değerlendirilir (zamanlama ya da sürücüler). Bu özellik belirsizliği "
"kaldırmak için bir ad alanı hedefine olanak sağlar."

msgid "Sort direction supplied was not valid."
msgstr "Sağlanan sıralama yönü geçersizdir."

msgid "Sort key supplied was not valid."
msgstr "Sağlanan sıralama anahtarı geçersizdir."

msgid ""
"Specifies the prefix to use for the given resource type. Any properties in "
"the namespace should be prefixed with this prefix when being applied to the "
"specified resource type. Must include prefix separator (e.g. a colon :)."
msgstr ""
"Verilen kaynak türü için kullanılacak öneki belirtir. Ad alanındaki her "
"özellik belirtilen kaynak türüne uygulanırken önek eklenmelidir. Önek "
"ayıracı içermelidir (örneğin; :)."
msgid "Status must be \"pending\", \"accepted\" or \"rejected\"." msgstr "Durum \"bekliyor\", \"kabul edildi\" ya da \"reddedildi\" olmalıdır." msgid "Status not specified" msgstr "Durum belirtilmemiÅŸ" #, python-format msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "%(cur_status)s mevcut durumundan %(new_status)s yeni duruma geçiÅŸe izin " "verilmez" #, python-format msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "(%(sig)s) sinyali ile %(serv)s (pid %(pid)s) durduruluyor" #, python-format msgid "Store for image_id not found: %s" msgstr "image_id için depo bulunamadı: %s" #, python-format msgid "Store for scheme %s not found" msgstr "%s ÅŸeması için depo bulunamadı" #, python-format msgid "" "Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image status to 'killed'." msgstr "" "Verilen %(attr)s (%(supplied)s) ve yüklenen imajdan (%(actual)s) oluÅŸturulan " "%(attr)s uyuÅŸmadı. Görüntü durumu ayarlama 'killed'." msgid "Supported values for the 'container_format' image attribute" msgstr "'container_format' imaj özniteliÄŸi için desteklenen deÄŸerler" msgid "Supported values for the 'disk_format' image attribute" msgstr "'disk_format' imaj özniteliÄŸi için desteklenen deÄŸerler" #, python-format msgid "Suppressed respawn as %(serv)s was %(rsn)s." msgstr "%(serv)s olarak yeniden oluÅŸturulması durdurulan, %(rsn)s idi." msgid "System SIGHUP signal received." msgstr "Sistem SIGHUP sinyali aldı." #, python-format msgid "Task '%s' is required" msgstr "'%s' görevi gereklidir" msgid "Task does not exist" msgstr "Görev mevcut deÄŸil" msgid "Task failed due to Internal Error" msgstr "Görev Dahili Hata nedeniyle baÅŸarısız oldu" msgid "Task was not configured properly" msgstr "Görev düzgün bir ÅŸekilde yapılandırılmadı." 
#, python-format msgid "Task with the given id %(task_id)s was not found" msgstr "Verilen %(task_id)s ile görev bulunamadı" msgid "The \"changes-since\" filter is no longer available on v2." msgstr "" "\"belli bir zamandan sonraki deÄŸiÅŸiklikler\" süzgeci v2 sürümünde artık " "mevcut deÄŸil." #, python-format msgid "The CA file you specified %s does not exist" msgstr "Belirtilen %s CA dosyası mevcut deÄŸil" #, python-format msgid "" "The Image %(image_id)s object being created by this task %(task_id)s, is no " "longer in valid status for further processing." msgstr "" "%(task_id)s görevi ile oluÅŸturulan %(image_id)s imaj nesnesi, artık ileri " "iÅŸlem için geçerli durumda deÄŸildir." msgid "The Store URI was malformed." msgstr "Depo URI'si bozulmuÅŸ." msgid "" "The URL to the keystone service. If \"use_user_token\" is not in effect and " "using keystone auth, then URL of keystone can be specified." msgstr "" "Keystone hizmeti için URL. EÄŸer \"use_user_token\" yürürlükte deÄŸilse ve " "keystone kimlik doÄŸrulaması kullanılıyorsa, o zaman keystone URL'i " "belirtilebilir." msgid "" "The administrators password. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "Yönetici parolası. EÄŸer \"use_user_token\" yürürlükte deÄŸilse, o zaman " "yönetici kimlik bilgileri belirtilebilir." msgid "" "The administrators user name. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "Yönetici kullanıcı adı. EÄŸer \"use_user_token\" yürürlükte deÄŸilse, o zaman " "yönetici kimlik bilgileri belirtilebilir." #, python-format msgid "The cert file you specified %s does not exist" msgstr "Belirtilen %s sertifika dosyası mevcut deÄŸil" msgid "The current status of this task" msgstr "Görevin ÅŸu anki durumu" #, python-format msgid "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. 
It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." msgstr "" "İmaj önbellek dizininin %(image_cache_dir)s yer aldığı aygıt xattr " "desteklemiyor. Önbellek dizini içeren aygıt için fstab düzenlemeniz ve uygun " "satıra user_xattr seçeneÄŸi eklemeniz gerekebilir." #, python-format msgid "" "The given uri is not valid. Please specify a valid uri from the following " "list of supported uri %(supported)s" msgstr "" "Verilen uri geçersizdir. Lütfen, desteklenen uri listesinden %(supported)s " "geçerli bir uri belirtin" #, python-format msgid "The incoming image is too large: %s" msgstr "Gelen imaj çok büyük: %s" #, python-format msgid "The key file you specified %s does not exist" msgstr "BelirttiÄŸiniz %s anahtar dosyası mevcut deÄŸil" #, python-format msgid "" "The limit has been exceeded on the number of allowed image locations. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "İzin verilen imaj konumlarının sayı sınırı aşıldı.Denenen: %(attempted)s, " "Azami: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Bu imaj için izin verilen imaj üye sınırı aşıldı.Denenen: %(attempted)s, En " "fazla: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "İzin verilen imaj özelliklerinin sayı sınırı aşıldı.Denenen: %(attempted)s, " "Azami: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(num)s, Maximum: %(quota)s" msgstr "" "İmaj özelliklerinde izin verilen sınır aşıldı.Denenen: %(num)s, En fazla: " "%(quota)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image tags. 
Attempted: " "%(attempted)s, Maximum: %(maximum)s" msgstr "" "İzin verilen imaj etiketlerinin sayı sınırı aşıldı.Denenen: %(attempted)s, " "Azami: %(maximum)s" #, python-format msgid "The location %(location)s already exists" msgstr "%(location)s konumu zaten mevcut" #, python-format msgid "The location data has an invalid ID: %d" msgstr "Konum verisi geçersiz bir kimliÄŸe sahip: %d" #, python-format msgid "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. Other records still refer to it." msgstr "" "Ad=%(record_name)s ile metadata tanımı %(record_type)s silinebilir deÄŸil. " "DiÄŸer kayıtlar hala onu gösteriyor." #, python-format msgid "The metadata definition namespace=%(namespace_name)s already exists." msgstr "Metadata tanım ad alanı=%(namespace_name)s zaten mevcut." #, python-format msgid "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." msgstr "" "Ad=%(object_name)s ile metadata tanım nesnesi ad alanında=%(namespace_name)s " "bulunamadı." #, python-format msgid "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." msgstr "" "Ad=%(property_name)s ile metadata tanım özelliÄŸi ad alanında=" "%(namespace_name)s bulunamadı." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." msgstr "" "Ad alanına=%(namespace_name)s kaynak türünün=%(resource_type_name)s metadata " "tanım kaynak tür iliÅŸkisi zaten mevcut." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." msgstr "" "Kaynak türünün=%(resource_type_name)s ad alanında=%(namespace_name)s, " "metadata tanım kaynak-tür iliÅŸkisi bulunamadı." 
#, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "Ad=%(resource_type_name)s ile metadata tanım kaynak-türü bulunamadı." #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "Ad=%(name)s ile metadata tanım etiketi ad alanında=%(namespace_name)s " "bulunamadı." msgid "The parameters required by task, JSON blob" msgstr "JSON blob, görev tarafından istenen parameteler" msgid "The provided image is too large." msgstr "Getirilen imaj çok büyük." msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." msgstr "" "Kimlik doÄŸrulama servisi için bölge. EÄŸer \"use_user_token\" yürürlükte " "deÄŸilse ve keystone kimlik doÄŸrulaması kullanılıyorsa, bölge adı " "belirtilebilir." msgid "The request returned 500 Internal Server Error." msgstr "İstek geri 500 İç Sunucu Hatası döndürdü." msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." msgstr "" "İstek 503 Hizmet Kullanılamıyor kodu döndürdü. Bu genellikle, hizmetin aşırı " "yük altında olduÄŸu ya da geçici kesintiler oluÅŸtuÄŸu anlamına gelir." #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "İstek 302 Çok Seçenek kodu döndürdü. Bu genellikle, istek URI'sinin bir " "sürüm göstergesi içermediÄŸi anlamına gelir.\n" "\n" "Dönen yanıtın gövdesi:\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "İstek 413 Girilen Veri Çok Büyük kodu döndürdü. 
Bu genellikle, hız " "sınırlayıcı ya da kota eÅŸiÄŸi ihlali anlamına gelir.\n" "\n" "Yanıt gövdesi:\n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "İstek beklenmeyen bir durum döndürdü: %(status)s.\n" "\n" "Yanıt:\n" "%(body)s" msgid "" "The requested image has been deactivated. Image data download is forbidden." msgstr "İstenen imaj devrede deÄŸil. İmaj verisi indirmek yasak." msgid "The result of current task, JSON blob" msgstr "Åžu anki görevin sonucu, JSON blob" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." msgstr "%(image_size)s veri boyutu sınırı aÅŸacak. Kalan bayt %(remaining)s " #, python-format msgid "The specified member %s could not be found" msgstr "Belirtilen üye %s bulunamadı" #, python-format msgid "The specified metadata object %s could not be found" msgstr "Belirtilen metadata nesnesi %s bulunamadı" #, python-format msgid "The specified metadata tag %s could not be found" msgstr "Belirtilen metadata etiketi %s bulunamadı" #, python-format msgid "The specified namespace %s could not be found" msgstr "Belirtilen ad alanı %s bulunamadı" #, python-format msgid "The specified property %s could not be found" msgstr "Belirtilen özellik %s bulunamadı" #, python-format msgid "The specified resource type %s could not be found " msgstr "Belirtilen kaynak türü %s bulunamadı " msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgstr "" "Silinen imaj konumunun durumu sadece 'pending_delete' ya da 'deleted' olarak " "ayarlanabilir" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgstr "" "Silinen imaj konum durumu sadece 'pending_delete' ya da 'deleted' olarak " "ayarlanabilir." 
msgid "The status of this image member" msgstr "Bu imaj üyesinin durumu" msgid "" "The strategy to use for authentication. If \"use_user_token\" is not in " "effect, then auth strategy can be specified." msgstr "" "Kimlik doÄŸrulama için kullanılacak strateji. EÄŸer \"use_user_token\" " "yürürlükte deÄŸilse, o zaman kimlik doÄŸrulama stratejisi belirtilebilir." #, python-format msgid "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgstr "" "Hedef üye %(member_id)s, %(image_id)s imajı ile zaten iliÅŸkilendirilmiÅŸtir." msgid "" "The tenant name of the administrative user. If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgstr "" "İdari kullaıcının kiracı adı. EÄŸer \"use_user_token\" yürürlükte deÄŸilse, o " "zaman yönetici kiracı adı belirtilebilir." msgid "The type of task represented by this content" msgstr "Bu içerik ile sunulan görev türü" msgid "The unique namespace text." msgstr "EÅŸsiz ad alanı metni." msgid "The user friendly name for the namespace. Used by UI if available." msgstr "" "Kullanıcı dostu ad alanı adı. EÄŸer mevcut ise, kullanıcı arayüzü tarafından " "kullanılır." #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "%(error_key_name)s %(error_filename)s ile ilgili bir sorun var. Lütfen " "doÄŸrulayın. Hata: %(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "%(error_key_name)s %(error_filename)s ile ilgili bir sorun var. Lütfen " "doÄŸrulayın. OpenSSL hatası: %(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "Anahtar çiftiniz ile ilgili bir sorun var. 
Lütfen sertifika %(cert_file)s " "ve anahtarın %(key_file)s birbirine ait olduÄŸunu doÄŸrulayın. OpenSSL hatası " "%(ce)s" msgid "There was an error configuring the client." msgstr "İstemci yapılandırılırken bir hata meydana geldi." msgid "There was an error connecting to a server" msgstr "Sunucuya baÄŸlanırken bir hata meydana geldi" msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "Åžu anda Glance Görevleri üzerinde bu iÅŸleme izin verilmiyor. Onlar " "expires_at özellikliÄŸine göre süreleri dolduktan sonra otomatik silinirler." msgid "This operation is currently not permitted on Glance images details." msgstr "Bu iÅŸleme ÅŸu anda Glance imaj ayrıntılarında izin verilmez." msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "" "Bir görevin baÅŸarılı ya da baÅŸarısız olarak sonuçlanmasından sonra saat " "olarak yaÅŸayacağı süre" msgid "Too few arguments." msgstr "Çok fazla deÄŸiÅŸken." msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "URI bir ÅŸemanın birden fazla olayını içeremez. EÄŸer URI'yi swift://user:" "pass@http://authurl.com/v1/container/obj gibi belirttiyseniz, swift+http:// " "ÅŸemasını kullanmak için onu deÄŸiÅŸtirmeniz gerekir, ÅŸu ÅŸekilde: swift+http://" "user:pass@authurl.com/v1/container/obj" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "Pid dosyası %(pid)s oluÅŸturulamadı. 
Root olmadan çalıştırılsın mı?\n" "Geçici bir dosyaya geri düşüyor, ÅŸu komutları kullanarak %(service)s " "servisini durdurabilirsiniz:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgid "Unable to filter on a range with a non-numeric value." msgstr "Sayısal olmayan deÄŸer ile bir aralıkta süzme yapılamadı." msgid "Unable to filter using the specified range." msgstr "Belirtilen aralık kullanılarak süzme yapılamadı." #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "JSON Åžema deÄŸiÅŸikliÄŸinde '%s' bulunamadı" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "" "JSON Åžema deÄŸiÅŸikliÄŸinde `op` bulunamadı. Åžu seçeneklerden biri olmalıdır: " "%(available)s." msgid "Unable to increase file descriptor limit. Running as non-root?" msgstr "Dosya tanıtıcı sınır arttırılamadı. Root olmadan çalıştırılsın mı?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "%(conf_file)s yapılandırma dosyasından %(app_name)s uygulaması yüklenemedi.\n" "Alınan: %(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "Åžema yüklenemedi: %(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "%s için yapıştırma yapılandırma dosyası yerleÅŸtirilemedi." 
#, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "%(image_id)s imajı için çift imaj verisi yüklenemedi: %(error)s" msgid "Unauthorized image access" msgstr "Yetkisiz imaj eriÅŸimi" #, python-format msgid "Unexpected response: %s" msgstr "Beklenmeyen yanıt: %s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "Bilinmeyen kimlik doÄŸrulama stratejisi '%s'" #, python-format msgid "Unknown command: %s" msgstr "Bilinmeyen komut: %s" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Bilinmeyen sıralama yönü, 'desc' or 'asc' olmalıdır" msgid "Unrecognized JSON Schema draft version" msgstr "Tanınmayan JSON Åžeması taslak sürümü" msgid "Unrecognized changes-since value" msgstr "Belli bir zamandan sonraki tanınmayan deÄŸiÅŸiklik deÄŸeri" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "Desteklenmeyen sort_dir. Kabul edilen deÄŸerler: %s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "Desteklenmeyen sort_key. Kabul edilen deÄŸerler: %s" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "" "%(pid)s (%(file)s) pid'i öldürmek için 15 saniye beklendi; vazgeçiliyor" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Sunucu SSL kipte çalışırken, cert_file ve key_file deÄŸerlerinin ikisinide " "yapılandırma dosyanızda belirtmelisiniz" msgid "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." msgstr "" "Kayıt defteri sunucusuna istek yaparken kullanıcı jetonunun geçirilip " "geçirilmemesi. 
Büyük dosyaların yüklenmesi sırasında jetonun süresinin sona " "ermesi ile oluÅŸacak hataları engellemek için, bu parametrenin seçilmemiÅŸ " "olarak ayarlanması önerilir. EÄŸer \"use_user_token\" yürürlükte deÄŸilse, o " "zaman yönetici kimlik bilgileri belirtilebilir." #, python-format msgid "Wrong command structure: %s" msgstr "Hatalı komut yapısı: %s" msgid "You are not authenticated." msgstr "KimliÄŸiniz doÄŸrulanamadı." msgid "You are not authorized to complete this action." msgstr "Bu eylemi tamamlamak için yetkili deÄŸilsiniz." #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "'%s''ye ait ad alanında bir etiket oluÅŸturma izniniz yok" msgid "You are not permitted to create image members for the image." msgstr "İmaj için üye oluÅŸturma izniniz yok." #, python-format msgid "You are not permitted to create images owned by '%s'." msgstr "'%s''ye ait imaj oluÅŸturma izniniz yok." #, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "'%s''ye ait ad alanı oluÅŸturma izniniz yok" #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "'%s''ye ait nesne oluÅŸturma izniniz yok" #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "'%s''ye ait özellik oluÅŸturma izniniz yok." #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "'%s''ye ait resource_type oluÅŸturma izniniz yok." #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "Sahibi olarak bu görevi oluÅŸturma izniniz yok: %s" msgid "You are not permitted to delete this image." msgstr "Bu imajı silme izniniz yok." msgid "You are not permitted to delete this meta_resource_type." msgstr "meta_resource_type silme izniniz yok." msgid "You are not permitted to delete this namespace." msgstr "Bu ad alanını silme izniniz yok." msgid "You are not permitted to delete this object." 
msgstr "Bu nesneyi silme izniniz yok." msgid "You are not permitted to delete this property." msgstr "Bu özelliÄŸi silme izniniz yok." msgid "You are not permitted to delete this tag." msgstr "Bu etiketi silme izniniz yok." #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "Bu %(resource)s üzerinde '%(attr)s' deÄŸiÅŸtirme izniniz yok." #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "Bu imajda '%s' deÄŸiÅŸtirme izniniz yok." msgid "You are not permitted to modify locations for this image." msgstr "Bu imajın konumunu deÄŸiÅŸtirme izniniz yok." msgid "You are not permitted to modify tags on this image." msgstr "Bu imaj üzerindeki etiketleri deÄŸiÅŸtirme izniniz yok." msgid "You are not permitted to modify this image." msgstr "Bu imajı deÄŸiÅŸtirme izniniz yok." msgid "You are not permitted to set status on this task." msgstr "Bu görev üzerinde durum ayarlama izniniz yok." msgid "You are not permitted to update this namespace." msgstr "Bu ad alanını güncelleme izniniz yok." msgid "You are not permitted to update this object." msgstr "Bu nesneyi güncelleme izniniz yok." msgid "You are not permitted to update this property." msgstr "Bu özelliÄŸi güncelleme izniniz yok." msgid "You are not permitted to update this tag." msgstr "Bu etiketi güncelleme izniniz yok." msgid "You are not permitted to upload data for this image." msgstr "Bu imaj için veri yükleme izniniz yok." 
#, python-format msgid "You cannot add image member for %s" msgstr "%s için imaj üyesi ekleyemiyorsunuz" #, python-format msgid "You cannot delete image member for %s" msgstr "%s için imaj üyesini silemiyorsunuz" #, python-format msgid "You cannot get image member for %s" msgstr "%s için imaj üyesini alamıyorsunuz" #, python-format msgid "You cannot update image member %s" msgstr "%s imaj üyesini güncelleyemiyorsunuz" msgid "You do not own this image" msgstr "Bu imajın sahibi deÄŸilsiniz" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "BaÄŸlanırken SSL kullanmayı seçtiniz ve bir sertifika saÄŸladınız, ancak ya " "key_file parametresi saÄŸlamayı ya da GLANCE_CLIENT_KEY_FILE deÄŸiÅŸkeni " "ayarlama iÅŸlemini baÅŸaramadınız." msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "BaÄŸlanırken SSL kullanmayı seçtiniz ve bir anahtar saÄŸladınız, ancak ya " "cert_file parametresi saÄŸlamayı ya da GLANCE_CLIENT_CERT_FILE deÄŸiÅŸkeni " "ayarlama iÅŸlemini baÅŸaramadınız." 
msgid "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$"
msgstr "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$"

#, python-format
msgid "__init__() got unexpected keyword argument '%s'"
msgstr "__init__() beklenmeyen anahtar sözcük değişkeni '%s' aldı"

#, python-format
msgid "cannot transition from %(current)s to %(next)s in update (wanted from_state=%(from)s)"
msgstr "güncellemede (istenen from_state=%(from)s), %(current)s mevcut durumundan %(next)s sonrakine geçiş olamaz"

#, python-format
msgid "custom properties (%(props)s) conflict with base properties"
msgstr "özel özellikler (%(props)s) temel özellikler ile çatışır"

msgid "eventlet 'poll' nor 'selects' hubs are available on this platform"
msgstr "bu platformda ne eventlet 'poll' ne de 'selects' havuzları kullanılabilir"

msgid "is_public must be None, True, or False"
msgstr "is_public None, True ya da False olmalıdır"

msgid "limit param must be an integer"
msgstr "limit parametresi tam sayı olmak zorunda"

msgid "limit param must be positive"
msgstr "limit parametresi pozitif olmak zorunda"

#, python-format
msgid "new_image() got unexpected keywords %s"
msgstr "new_image() beklenmeyen anahtar sözcük %s aldı"

msgid "protected must be True, or False"
msgstr "protected True ya da False olmalıdır"

#, python-format
msgid "unable to launch %(serv)s. Got error: %(e)s"
msgstr "%(serv)s başlatılamadı. Alınan hata: %(e)s"

glance-16.0.0/glance/locale/fr/LC_MESSAGES/glance.po

# Translations template for glance.
# Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the glance project. # # Translators: # Arnaud Legendre , 2013 # Christophe kryskool , 2013 # EVEILLARD , 2013-2014 # Maxime COQUEREL , 2014 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: glance 15.0.0.0b3.dev29\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-06-23 20:54+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:20+0000\n" "Last-Translator: Copied by Zanata \n" "Language: fr\n" "Plural-Forms: nplurals=2; plural=(n > 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: French\n" #, python-format msgid "\t%s" msgstr "\t%s" #, python-format msgid "%(cls)s exception was raised in the last rpc call: %(val)s" msgstr "" "Une exception %(cls)s s'est produite dans le dernier appel d'une procédure " "distante : %(val)s" #, python-format msgid "%(m_id)s not found in the member list of the image %(i_id)s." msgstr "%(m_id)s introuvable dans la liste des membres de l'image %(i_id)s." #, python-format msgid "%(serv)s (pid %(pid)s) is running..." msgstr "%(serv)s (pid %(pid)s) est en cours d'exécution..." #, python-format msgid "%(serv)s appears to already be running: %(pid)s" msgstr "%(serv)s semble déjà en cours d'exécution : %(pid)s" #, python-format msgid "" "%(strategy)s is registered as a module twice. %(module)s is not being used." msgstr "" "%(strategy)s est enregistré deux fois comme module. %(module)s n'est pas " "utilisé." #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Could not load the " "filesystem store" msgstr "" "%(task_id)s de %(task_type)s ne sont pas configurés correctement. Impossible " "de charger le magasin de système de fichiers" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. 
Missing work dir: " "%(work_dir)s" msgstr "" "%(task_id)s de %(task_type)s ne sont pas configurés correctement. Rép de " "travail manquant : %(work_dir)s" #, python-format msgid "%(verb)sing %(serv)s" msgstr "%(verb)s %(serv)s" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "Opération %(verb)s en cours sur %(serv)s avec %(conf)s" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s Veuillez indiquer une paire hôte:port, dans laquelle hôte est une adresse " "IPv4, une adresse IPv6, un nom d'hôte ou un nom de domaine complet. Si vous " "utilisez une adresse IPv6, faites-la figurer entre crochets de façon à la " "séparer du port (par ex., \"[fe80::a:b:c]:9876\")." #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s ne peut pas contenir de caractère Unicode de 4 octets." #, python-format msgid "%s is already stopped" msgstr "%s est déjà stoppé" #, python-format msgid "%s is stopped" msgstr "%s est arrêté" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "Option --os_auth_url ou variable d'environnement OS_AUTH_URL requise lorsque " "la stratégie d'authentification keystone est activée\n" msgid "A body is not expected with this request." msgstr "Un corps n'est pas attendu avec cette demande." #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Un objet de la définition de métadonnées avec le nom %(object_name)s existe " "déjà dans l'espace de nom %(namespace_name)s." #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." 
msgstr "" "Une propriété de la définition de métadonnées avec le nom %(property_name)s " "existe déjà dans l'espace de nom %(namespace_name)s." #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." msgstr "" "Un type de ressource de la définition de métadonnées avec le nom " "%(resource_type_name)s existe déjà." msgid "A set of URLs to access the image file kept in external store" msgstr "" "Ensemble d'URL pour accéder au fichier image conservé dans le magasin externe" msgid "Amount of disk space (in GB) required to boot image." msgstr "" "Quantité d'espace disque (en Go) requise pour l'image d'initialisation." msgid "Amount of ram (in MB) required to boot image." msgstr "Quantité de mémoire RAM (en Mo) requise pour l'image d'initialisation." msgid "An identifier for the image" msgstr "Identificateur de l'image" msgid "An identifier for the image member (tenantId)" msgstr "Identificateur pour le membre de l'image (tenantId)" msgid "An identifier for the owner of this task" msgstr "Un identificateur pour le propriétaire de cette tâche" msgid "An identifier for the task" msgstr "Un identificateur pour la tâche" msgid "An image file url" msgstr "URL d'un fichier image" msgid "An image schema url" msgstr "URL d'un schéma d'image" msgid "An image self url" msgstr "URL d'une image self" #, python-format msgid "An image with identifier %s already exists" msgstr "Une image avec l'identificateur %s existe déjà" msgid "An import task exception occurred" msgstr "Une exception liée à la tâche d'importation s'est produite" msgid "An object with the same identifier already exists." msgstr "Un objet avec le même identificateur existe déjà." msgid "An object with the same identifier is currently being operated on." msgstr "Un objet avec le même identificateur est déjà en cours d'utilisation." msgid "An object with the specified identifier was not found." msgstr "Un objet avec l'identificateur spécifié est introuvable." 
msgid "An unknown exception occurred"
msgstr "Une exception inconnue s'est produite"

msgid "An unknown task exception occurred"
msgstr "Une exception de tâche inconnue s'est produite"

#, python-format
msgid "Attempt to upload duplicate image: %s"
msgstr "Tentative de téléchargement d'image en double : %s"

msgid "Attempted to update Location field for an image not in queued status."
msgstr ""
"Vous avez tenté de mettre à jour la zone Emplacement pour une image qui n'a "
"pas le statut en file d'attente."

#, python-format
msgid "Attribute '%(property)s' is read-only."
msgstr "L'attribut '%(property)s' est en lecture seule."

#, python-format
msgid "Attribute '%(property)s' is reserved."
msgstr "L'attribut '%(property)s' est réservé."

#, python-format
msgid "Attribute '%s' is read-only."
msgstr "L'attribut '%s' est en lecture seule."

#, python-format
msgid "Attribute '%s' is reserved."
msgstr "L'attribut '%s' est réservé."

msgid "Attribute container_format can be only replaced for a queued image."
msgstr ""
"L'attribut container_format ne peut être remplacé que pour une image mise en "
"file d'attente."

msgid "Attribute disk_format can be only replaced for a queued image."
msgstr ""
"L'attribut disk_format ne peut être remplacé que pour une image mise en file "
"d'attente."

#, python-format
msgid "Auth service at URL %(url)s not found."
msgstr "Service d'authentification à l'URL %(url)s introuvable."

#, python-format
msgid ""
"Authentication error - the token may have expired during file upload. "
"Deleting image data for %s."
msgstr ""
"Erreur d'authentification - le jeton a peut-être expiré lors du "
"téléchargement de fichier. Suppression des données d'image pour %s."

msgid "Authorization failed."
msgstr "Echec de l'autorisation."

msgid "Available categories:"
msgstr "Catégories disponibles :"

#, python-format
msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation."
msgstr ""
"Format de filtre de requête \"%s\" incorrect. Utilisez la notation de date "
"et heure ISO 8601."

#, python-format
msgid "Bad Command: %s"
msgstr "Commande erronée : %s"

#, python-format
msgid "Bad header: %(header_name)s"
msgstr "En-tête incorrect : %(header_name)s"

#, python-format
msgid "Bad value passed to filter %(filter)s got %(val)s"
msgstr "Valeur incorrecte transmise au filtre %(filter)s, %(val)s obtenu"

#, python-format
msgid "Badly formed S3 URI: %(uri)s"
msgstr "URI S3 incorrecte : %(uri)s"

#, python-format
msgid "Badly formed credentials '%(creds)s' in Swift URI"
msgstr "Données d'identification incorrectes '%(creds)s' dans l'URI Swift"

msgid "Badly formed credentials in Swift URI."
msgstr "Données d'identification incorrectes dans l'URI Swift."

msgid "Body expected in request."
msgstr "Corps attendu dans la demande."

msgid "Cannot be a negative value"
msgstr "Ne peut pas être une valeur négative"

msgid "Cannot be a negative value."
msgstr "Ne peut pas être une valeur négative."

#, python-format
msgid "Cannot convert image %(key)s '%(value)s' to an integer."
msgstr "Impossible de convertir l'image %(key)s '%(value)s' en entier."

msgid "Cannot remove last location in the image."
msgstr "Impossible de supprimer le dernier emplacement dans l'image."

#, python-format
msgid "Cannot save data for image %(image_id)s: %(error)s"
msgstr ""
"Les données pour l'image %(image_id)s ne peuvent pas être sauvegardées : "
"erreur %(error)s"

msgid "Cannot set locations to empty list."
msgstr "Impossible de définir des emplacements avec une liste vide."

msgid "Cannot upload to an unqueued image"
msgstr "Téléchargement impossible dans une image non placée en file d'attente"

#, python-format
msgid "Checksum verification failed. Aborted caching of image '%s'."
msgstr ""
"Echec de vérification du total de contrôle. Mise en cache de l'image '%s' "
"annulée."
msgid "Client disconnected before sending all data to backend"
msgstr "Client déconnecté avant l'envoi de toutes les données au backend"

msgid "Command not found"
msgstr "La commande n'a pas été trouvée"

msgid "Configuration option was not valid"
msgstr "L'option de configuration n'était pas valide"

#, python-format
msgid "Connect error/bad request to Auth service at URL %(url)s."
msgstr ""
"Erreur de connexion/demande erronée pour le service d'authentification à "
"l'URL %(url)s."

#, python-format
msgid "Constructed URL: %s"
msgstr "URL construite : %s"

msgid "Container format is not specified."
msgstr "Le format de conteneur n'a pas été spécifié."

msgid "Content-Type must be application/octet-stream"
msgstr "L'en-tête Content-Type doit être application/octet-stream"

#, python-format
msgid "Corrupt image download for image %(image_id)s"
msgstr "Téléchargement endommagé pour l'image %(image_id)s"

#, python-format
msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds"
msgstr "Liaison impossible à %(host)s:%(port)s après 30 secondes de tentatives"

msgid "Could not find OVF file in OVA archive file."
msgstr "Fichier OVF introuvable dans le fichier archive OVA."

#, python-format
msgid "Could not find metadata object %s"
msgstr "L'objet de métadonnées %s est introuvable"

#, python-format
msgid "Could not find metadata tag %s"
msgstr "Balise de métadonnées %s introuvable"

#, python-format
msgid "Could not find namespace %s"
msgstr "Espace de nom %s introuvable"

#, python-format
msgid "Could not find property %s"
msgstr "Propriété %s introuvable"

msgid "Could not find required configuration option"
msgstr "Option de configuration obligatoire introuvable"

#, python-format
msgid "Could not find task %s"
msgstr "La tâche %s est introuvable"

#, python-format
msgid "Could not update image: %s"
msgstr "Impossible de mettre à jour l'image : %s"

msgid "Currently, OVA packages containing multiple disk are not supported."
msgstr ""
"Actuellement, les packages OVA contenant plusieurs disques ne sont pas pris "
"en charge."

#, python-format
msgid "Data for image_id not found: %s"
msgstr "Données d'image_id introuvables : %s"

msgid "Data supplied was not valid."
msgstr "Les données fournies n'étaient pas valides."

msgid "Date and time of image member creation"
msgstr "Date et heure de création du membre de l'image"

msgid "Date and time of image registration"
msgstr "Date et heure d'enregistrement de l'image"

msgid "Date and time of last modification of image member"
msgstr "Date et heure de dernière modification du membre de l'image"

msgid "Date and time of namespace creation"
msgstr "Date et heure de création de l'espace de nom"

msgid "Date and time of object creation"
msgstr "Date et heure de création de l'objet"

msgid "Date and time of resource type association"
msgstr "Date et heure d'association de type de ressource"

msgid "Date and time of tag creation"
msgstr "Date et heure de création de la balise"

msgid "Date and time of the last image modification"
msgstr "Date et heure de dernière modification de l'image"

msgid "Date and time of the last namespace modification"
msgstr "Date et heure de dernière modification de l'espace de nom"

msgid "Date and time of the last object modification"
msgstr "Date et heure de dernière modification de l'objet"

msgid "Date and time of the last resource type association modification"
msgstr ""
"Date et heure de dernière modification d'association de type de ressource"

msgid "Date and time of the last tag modification"
msgstr "Date et heure de dernière modification de la balise"

msgid "Datetime when this resource was created"
msgstr "Date-heure à laquelle cette ressource a été créée"

msgid "Datetime when this resource was updated"
msgstr "Date-heure à laquelle cette ressource a été mise à jour"

msgid "Datetime when this resource would be subject to removal"
msgstr "Date-heure à laquelle cette ressource serait soumise à une suppression"

#, python-format
msgid "Denying attempt to upload image because it exceeds the quota: %s"
msgstr ""
"Refus de la tentative de téléchargement d'une image qui dépasse le quota : %s"

#, python-format
msgid "Denying attempt to upload image larger than %d bytes."
msgstr ""
"Refus de la tentative de téléchargement d'une image dont la taille est "
"supérieure à %d octets."

msgid "Descriptive name for the image"
msgstr "Nom descriptif de l'image"

msgid "Disk format is not specified."
msgstr "Le format de disque n'a pas été spécifié."

#, python-format
msgid ""
"Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s"
msgstr ""
"Impossible de configurer le pilote %(driver_name)s correctement. Cause : "
"%(reason)s"

msgid ""
"Error decoding your request. Either the URL or the request body contained "
"characters that could not be decoded by Glance"
msgstr ""
"Erreur lors du décodage de votre demande. L'URL ou le corps de la demande "
"contiennent des caractères que Glance ne peut pas décoder"

#, python-format
msgid "Error fetching members of image %(image_id)s: %(inner_msg)s"
msgstr ""
"Erreur lors de l'extraction des membres de l'image %(image_id)s : "
"%(inner_msg)s"

msgid "Error in store configuration. Adding images to store is disabled."
msgstr ""
"Erreur de configuration du magasin. L'ajout d'images au magasin est "
"désactivé."

msgid "Expected a member in the form: {\"member\": \"image_id\"}"
msgstr "Membre attendu sous la forme : {\"member\": \"image_id\"}"

msgid "Expected a status in the form: {\"status\": \"status\"}"
msgstr "Statut attendu sous la forme : {\"status\": \"status\"}"

msgid "External source should not be empty"
msgstr "La source externe ne doit pas être vide"

#, python-format
msgid "External sources are not supported: '%s'"
msgstr "Sources externes non prises en charge : '%s'"

#, python-format
msgid "Failed to activate image. Got error: %s"
msgstr "Echec de l'activation de l'image. Erreur obtenue : %s"

#, python-format
msgid "Failed to add image metadata. Got error: %s"
msgstr "Impossible d'ajouter les métadonnées d'image. Erreur obtenue : %s"

#, python-format
msgid "Failed to find image %(image_id)s to delete"
msgstr "Image %(image_id)s à supprimer introuvable"

#, python-format
msgid "Failed to find image to delete: %s"
msgstr "Image à supprimer introuvable : %s"

#, python-format
msgid "Failed to find image to update: %s"
msgstr "Image à mettre à jour introuvable : %s"

#, python-format
msgid "Failed to find resource type %(resourcetype)s to delete"
msgstr "Type de ressource %(resourcetype)s à supprimer introuvable"

#, python-format
msgid "Failed to initialize the image cache database. Got error: %s"
msgstr ""
"Impossible d'initialiser la base de données de caches d'image. Erreur "
"obtenue : %s"

#, python-format
msgid "Failed to read %s from config"
msgstr "Echec de la lecture de %s à partir de la config"

#, python-format
msgid "Failed to reserve image. Got error: %s"
msgstr "Impossible de réserver l'image. Erreur obtenue : %s"

#, python-format
msgid "Failed to update image metadata. Got error: %s"
msgstr ""
"Impossible de mettre à jour les métadonnées d'image. Erreur obtenue : %s"

#, python-format
msgid "Failed to upload image %s"
msgstr "Impossible de charger l'image %s"

#, python-format
msgid ""
"Failed to upload image data for image %(image_id)s due to HTTP error: "
"%(error)s"
msgstr ""
"Echec de téléchargement des données image pour l'image %(image_id)s en "
"raison d'une erreur HTTP : %(error)s"

#, python-format
msgid ""
"Failed to upload image data for image %(image_id)s due to internal error: "
"%(error)s"
msgstr ""
"Echec de téléchargement des données image pour l'image %(image_id)s en "
"raison d'une erreur interne : %(error)s"

#, python-format
msgid "File %(path)s has invalid backing file %(bfile)s, aborting."
msgstr ""
"Le fichier %(path)s dispose d'un fichier de sauvegarde non valide : "
"%(bfile)s. L'opération est abandonnée."

msgid ""
"File based imports are not allowed. Please use a non-local source of image "
"data."
msgstr ""
"Les importations à partir de fichiers sont interdites. Utilisez une source "
"non locale de données image."

msgid "Forbidden image access"
msgstr "Accès interdit à l'image"

#, python-format
msgid "Forbidden to delete a %s image."
msgstr "Interdiction de supprimer une image %s."

#, python-format
msgid "Forbidden to delete image: %s"
msgstr "Interdiction de supprimer l'image : %s"

#, python-format
msgid "Forbidden to modify '%(key)s' of %(status)s image."
msgstr "Interdiction de modifier '%(key)s' de l'image %(status)s."

#, python-format
msgid "Forbidden to modify '%s' of image."
msgstr "Interdiction de modifier l'élément '%s' de l'image."

msgid "Forbidden to reserve image."
msgstr "Interdiction de réserver une image."

msgid "Forbidden to update deleted image."
msgstr "Interdiction de mettre à jour l'image supprimée."

#, python-format
msgid "Forbidden to update image: %s"
msgstr "Interdiction de mettre à jour l'image : %s"

#, python-format
msgid "Forbidden upload attempt: %s"
msgstr "Tentative de téléchargement interdite : %s"

#, python-format
msgid "Forbidding request, metadata definition namespace=%s is not visible."
msgstr ""
"Interdiction de la demande, l'espace de nom %s de la définition de "
"métadonnées n'est pas visible."

#, python-format
msgid "Forbidding request, task %s is not visible"
msgstr "Interdiction de la demande, la tâche %s n'est pas visible"

msgid "Format of the container"
msgstr "Format du conteneur"

msgid "Format of the disk"
msgstr "Format du disque"

#, python-format
msgid "Host \"%s\" is not valid."
msgstr "L'hôte \"%s\" n'est pas valide."

#, python-format
msgid "Host and port \"%s\" is not valid."
msgstr "L'hôte et le port \"%s\" ne sont pas valides."
msgid ""
"Human-readable informative message only included when appropriate (usually "
"on failure)"
msgstr ""
"Message d'information lisible par l'homme inclus uniquement si approprié "
"(habituellement en cas d'échec)"

msgid "If true, image will not be deletable."
msgstr "Si true, l'image ne pourra pas être supprimée."

msgid "If true, namespace will not be deletable."
msgstr "Si true, l'espace de nom ne pourra pas être supprimé."

#, python-format
msgid "Image %(id)s could not be deleted because it is in use: %(exc)s"
msgstr ""
"L'image %(id)s n'a pas pu être supprimée car elle est utilisée : %(exc)s"

#, python-format
msgid "Image %(id)s not found"
msgstr "Image %(id)s introuvable"

#, python-format
msgid ""
"Image %(image_id)s could not be found after upload. The image may have been "
"deleted during the upload: %(error)s"
msgstr ""
"Image %(image_id)s introuvable après le téléchargement. Elle a peut-être "
"été supprimée au cours du téléchargement : %(error)s"

#, python-format
msgid "Image %(image_id)s is protected and cannot be deleted."
msgstr "L'image %(image_id)s est protégée et ne peut pas être supprimée."

#, python-format
msgid ""
"Image %s could not be found after upload. The image may have been deleted "
"during the upload, cleaning up the chunks uploaded."
msgstr ""
"L'image %s n'a pas été trouvée après le téléchargement. Elle a peut-être "
"été supprimée pendant le téléchargement. Nettoyage des blocs téléchargés."

#, python-format
msgid ""
"Image %s could not be found after upload. The image may have been deleted "
"during the upload."
msgstr ""
"L'image %s est introuvable après le chargement. L'image a peut-être été "
"supprimée lors du chargement."

#, python-format
msgid "Image %s is deactivated"
msgstr "L'image %s est désactivée"

#, python-format
msgid "Image %s is not active"
msgstr "L'image %s n'est pas active"

#, python-format
msgid "Image %s not found."
msgstr "Image %s introuvable."
#, python-format
msgid "Image exceeds the storage quota: %s"
msgstr "L'image dépasse le quota de stockage : %s"

msgid "Image id is required."
msgstr "L'ID de l'image est requis."

msgid "Image is protected"
msgstr "L'image est protégée"

#, python-format
msgid "Image member limit exceeded for image %(id)s: %(e)s:"
msgstr "Le nombre maximal de membres est dépassé pour l'image %(id)s : %(e)s :"

#, python-format
msgid "Image name too long: %d"
msgstr "Nom de l'image trop long : %d"

msgid "Image operation conflicts"
msgstr "Conflits d'opération d'image"

#, python-format
msgid ""
"Image status transition from %(cur_status)s to %(new_status)s is not allowed"
msgstr ""
"La transition du statut de l'image de %(cur_status)s vers %(new_status)s "
"n'est pas autorisée"

#, python-format
msgid "Image storage media is full: %s"
msgstr "Le support de stockage d'image est saturé : %s"

#, python-format
msgid "Image tag limit exceeded for image %(id)s: %(e)s:"
msgstr "Le nombre maximal de balises est dépassé pour l'image %(id)s : %(e)s :"

#, python-format
msgid "Image upload problem: %s"
msgstr "Problème de téléchargement de l'image : %s"

#, python-format
msgid "Image with identifier %s already exists!"
msgstr "L'image avec l'identificateur %s existe déjà !"

#, python-format
msgid "Image with identifier %s has been deleted."
msgstr "L'image avec l'identificateur %s a été supprimée."

#, python-format
msgid "Image with identifier %s not found"
msgstr "L'image portant l'ID %s est introuvable"

#, python-format
msgid "Image with the given id %(image_id)s was not found"
msgstr "L'image avec l'ID %(image_id)s indiqué est introuvable"

#, python-format
msgid ""
"Incorrect auth strategy, expected \"%(expected)s\" but received "
"\"%(received)s\""
msgstr ""
"Stratégie d'authentification incorrecte, valeur attendue \"%(expected)s\" "
"mais valeur obtenue \"%(received)s\""

#, python-format
msgid "Incorrect request: %s"
msgstr "Requête incorrecte : %s"

#, python-format
msgid "Input does not contain '%(key)s' field"
msgstr "L'entrée ne contient pas le champ '%(key)s'"

#, python-format
msgid "Insufficient permissions on image storage media: %s"
msgstr "Droits insuffisants sur le support de stockage d'image : %s"

#, python-format
msgid "Invalid JSON pointer for this resource: '/%s'"
msgstr "Pointeur JSON non valide pour cette ressource : '/%s'"

#, python-format
msgid "Invalid checksum '%s': can't exceed 32 characters"
msgstr ""
"Total de contrôle '%s' non valide : il ne doit pas comporter plus de 32 "
"caractères"

msgid "Invalid configuration in glance-swift conf file."
msgstr ""
"Configuration non valide dans le fichier de configuration glance-swift."

msgid "Invalid configuration in property protection file."
msgstr ""
"Configuration non valide dans le fichier de protection des propriétés."

#, python-format
msgid "Invalid container format '%s' for image."
msgstr "Format de conteneur '%s' non valide pour l'image."

#, python-format
msgid "Invalid content type %(content_type)s"
msgstr "Type de contenu non valide %(content_type)s"

#, python-format
msgid "Invalid disk format '%s' for image."
msgstr "Format de disque '%s' non valide pour l'image."

#, python-format
msgid "Invalid filter value %s. The quote is not closed."
msgstr "Valeur de filtre %s non valide. Les guillemets ne sont pas fermés."

#, python-format
msgid ""
"Invalid filter value %s. There is no comma after closing quotation mark."
msgstr ""
"Valeur de filtre %s non valide. Il n'y a pas de virgule après la fermeture "
"des guillemets."

#, python-format
msgid ""
"Invalid filter value %s. There is no comma before opening quotation mark."
msgstr ""
"Valeur de filtre %s non valide. Il n'y a pas de virgule avant l'ouverture "
"des guillemets."

msgid "Invalid image id format"
msgstr "Format d'ID image non valide"

msgid "Invalid location"
msgstr "Emplacement non valide"

#, python-format
msgid "Invalid location %s"
msgstr "Emplacement %s non valide"

#, python-format
msgid "Invalid location: %s"
msgstr "Emplacement non valide : %s"

#, python-format
msgid ""
"Invalid location_strategy option: %(name)s. The valid strategy option(s) "
"is(are): %(strategies)s"
msgstr ""
"Option location_strategy non valide : %(name)s. La ou les options de "
"stratégie valides sont : %(strategies)s"

msgid "Invalid locations"
msgstr "Emplacements non valides"

#, python-format
msgid "Invalid locations: %s"
msgstr "Emplacements non valides : %s"

msgid "Invalid marker format"
msgstr "Format de marqueur non valide"

msgid "Invalid marker. Image could not be found."
msgstr "Marqueur non valide. Image introuvable."

#, python-format
msgid "Invalid membership association: %s"
msgstr "Association d'appartenance non valide : %s"

msgid ""
"Invalid mix of disk and container formats. When setting a disk or container "
"format to one of 'aki', 'ari', or 'ami', the container and disk formats must "
"match."
msgstr ""
"Combinaison non valide de formats de disque et de conteneur. Si vous "
"définissez un disque ou un conteneur au format 'aki', 'ari' ou 'ami', les "
"formats du disque et du conteneur doivent correspondre."

#, python-format
msgid ""
"Invalid operation: `%(op)s`. It must be one of the following: %(available)s."
msgstr ""
"Opération non valide : `%(op)s`. Doit être l'une des suivantes : "
"%(available)s."

msgid "Invalid position for adding a location."
msgstr "Position non valide pour l'ajout d'un emplacement."

msgid "Invalid position for removing a location."
msgstr "Position non valide pour la suppression d'un emplacement."

msgid "Invalid service catalog json."
msgstr "JSON de catalogue de service non valide."
#, python-format
msgid "Invalid sort direction: %s"
msgstr "Sens de tri non valide : %s"

#, python-format
msgid ""
"Invalid sort key: %(sort_key)s. It must be one of the following: "
"%(available)s."
msgstr ""
"Clé de tri non valide : %(sort_key)s. Doit être l'une des valeurs "
"suivantes : %(available)s."

#, python-format
msgid "Invalid status value: %s"
msgstr "Valeur de statut non valide : %s"

#, python-format
msgid "Invalid status: %s"
msgstr "Statut non valide : %s"

#, python-format
msgid "Invalid time format for %s."
msgstr "Format d'heure non valide pour %s."

#, python-format
msgid "Invalid type value: %s"
msgstr "Valeur de type non valide : %s"

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition namespace "
"with the same name of %s"
msgstr ""
"Mise à jour non valide. Elle créerait un espace de nom de définition de "
"métadonnées en double avec le nom %s"

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition object "
"with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr ""
"Mise à jour non valide. Elle créerait un objet de définition de métadonnées "
"en double avec le nom %(name)s dans l'espace de nom %(namespace_name)s."

#, python-format
msgid ""
"Invalid update. It would result in a duplicate metadata definition property "
"with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr ""
"Mise à jour non valide. Elle créerait une propriété de définition de "
"métadonnées en double avec le nom %(name)s dans l'espace de nom "
"%(namespace_name)s."
#, python-format
msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s"
msgstr ""
"Valeur non valide '%(value)s' pour le paramètre '%(param)s' : %(extra_msg)s"

#, python-format
msgid "Invalid value for option %(option)s: %(value)s"
msgstr "Valeur non valide pour l'option %(option)s : %(value)s"

#, python-format
msgid "Invalid visibility value: %s"
msgstr "Valeur de visibilité non valide : %s"

msgid "It's invalid to provide multiple image sources."
msgstr "Il n'est pas valide de fournir plusieurs sources d'image."

msgid "It's not allowed to add locations if locations are invisible."
msgstr ""
"L'ajout des emplacements n'est pas autorisé si les emplacements sont "
"invisibles."

msgid "It's not allowed to remove locations if locations are invisible."
msgstr ""
"La suppression des emplacements n'est pas autorisée si les emplacements sont "
"invisibles."

msgid "It's not allowed to update locations if locations are invisible."
msgstr ""
"La mise à jour des emplacements n'est pas autorisée si les emplacements sont "
"invisibles."

msgid "List of strings related to the image"
msgstr "Liste des chaînes associées à l'image"

msgid "Malformed JSON in request body."
msgstr "JSON incorrect dans le corps de demande."

msgid "Maximal age is count of days since epoch."
msgstr "L'ancienneté maximale est le nombre de jours depuis l'epoch."

#, python-format
msgid "Maximum redirects (%(redirects)s) was exceeded."
msgstr "Le nombre maximum de redirections (%(redirects)s) a été dépassé."

#, python-format
msgid "Member %(member_id)s is duplicated for image %(image_id)s"
msgstr "Le membre %(member_id)s est en double pour l'image %(image_id)s"

msgid "Member can't be empty"
msgstr "Le membre ne peut pas être vide"

msgid "Member to be added not specified"
msgstr "Membre à ajouter non spécifié"

msgid "Membership could not be found."
msgstr "Appartenance introuvable."

#, python-format
msgid ""
"Metadata definition namespace %(namespace)s is protected and cannot be "
"deleted."
msgstr ""
"L'espace de nom %(namespace)s de la définition de métadonnées est protégé et "
"ne peut pas être supprimé."

#, python-format
msgid "Metadata definition namespace not found for id=%s"
msgstr ""
"L'espace de nom de définition de métadonnées est introuvable pour l'ID %s"

#, python-format
msgid ""
"Metadata definition object %(object_name)s is protected and cannot be "
"deleted."
msgstr ""
"L'objet %(object_name)s de la définition de métadonnées est protégé et ne "
"peut pas être supprimé."

#, python-format
msgid "Metadata definition object not found for id=%s"
msgstr "L'objet de définition de métadonnées est introuvable pour l'ID %s"

#, python-format
msgid ""
"Metadata definition property %(property_name)s is protected and cannot be "
"deleted."
msgstr ""
"La propriété %(property_name)s de la définition de métadonnées est protégée "
"et ne peut pas être supprimée."

#, python-format
msgid "Metadata definition property not found for id=%s"
msgstr "La propriété de définition de métadonnées est introuvable pour l'ID %s"

#, python-format
msgid ""
"Metadata definition resource-type %(resource_type_name)s is a seeded-system "
"type and cannot be deleted."
msgstr ""
"Le type de ressource %(resource_type_name)s de la définition de métadonnées "
"est un type système prédéfini et ne peut pas être supprimé."

#, python-format
msgid ""
"Metadata definition resource-type-association %(resource_type)s is protected "
"and cannot be deleted."
msgstr ""
"L'association de type de ressource %(resource_type)s de la définition de "
"métadonnées est protégée et ne peut pas être supprimée."

#, python-format
msgid ""
"Metadata definition tag %(tag_name)s is protected and cannot be deleted."
msgstr ""
"La balise de définition de métadonnées %(tag_name)s est protégée et ne peut "
"pas être supprimée."

#, python-format
msgid "Metadata definition tag not found for id=%s"
msgstr "La balise de définition de métadonnées est introuvable pour l'ID %s"

msgid "Minimal rows limit is 1."
msgstr "Le nombre minimal de lignes est 1."

#, python-format
msgid "Missing required credential: %(required)s"
msgstr "Données d'identification obligatoires manquantes : %(required)s"

#, python-format
msgid ""
"Multiple 'image' service matches for region %(region)s. This generally means "
"that a region is required and you have not supplied one."
msgstr ""
"Plusieurs correspondances de service 'image' pour la région %(region)s. En "
"général, cela signifie qu'une région est requise et que vous n'en avez pas "
"indiqué."

msgid "No authenticated user"
msgstr "Aucun utilisateur authentifié"

#, python-format
msgid "No image found with ID %s"
msgstr "Aucune image trouvée avec l'ID %s"

#, python-format
msgid "No location found with ID %(loc)s from image %(img)s"
msgstr "Aucun emplacement trouvé avec l'ID %(loc)s dans l'image %(img)s"

msgid "No permission to share that image"
msgstr "Aucun droit de partage de cette image"

#, python-format
msgid "Not allowed to create members for image %s."
msgstr "Non autorisé à créer des membres pour l'image %s."

#, python-format
msgid "Not allowed to deactivate image in status '%s'"
msgstr "Non autorisé à désactiver l'image dans l'état '%s'"

#, python-format
msgid "Not allowed to delete members for image %s."
msgstr "Non autorisé à supprimer des membres de l'image %s."

#, python-format
msgid "Not allowed to delete tags for image %s."
msgstr "Non autorisé à supprimer des balises de l'image %s."

#, python-format
msgid "Not allowed to list members for image %s."
msgstr "Non autorisé à répertorier les membres de l'image %s."

#, python-format
msgid "Not allowed to reactivate image in status '%s'"
msgstr "Non autorisé à réactiver l'image dans l'état '%s'"

#, python-format
msgid "Not allowed to update members for image %s."
msgstr "Non autorisé à mettre à jour les membres de l'image %s."

#, python-format
msgid "Not allowed to update tags for image %s."
msgstr "Non autorisé à mettre à jour des balises de l'image %s."
#, python-format
msgid "Not allowed to upload image data for image %(image_id)s: %(error)s"
msgstr ""
"Non autorisé à télécharger des données image pour l'image %(image_id)s : "
"%(error)s"

msgid "Number of sort dirs does not match the number of sort keys"
msgstr "Le nombre de sens de tri ne correspond pas au nombre de clés de tri"

msgid "OVA extract is limited to admin"
msgstr "L'extraction de fichiers OVA est réservée à l'administrateur"

msgid "Old and new sorting syntax cannot be combined"
msgstr ""
"L'ancienne et la nouvelle syntaxe de tri ne peuvent pas être combinées"

#, python-format
msgid "Operation \"%s\" requires a member named \"value\"."
msgstr "L'opération \"%s\" requiert un membre nommé \"value\"."

msgid ""
"Operation objects must contain exactly one member named \"add\", \"remove\", "
"or \"replace\"."
msgstr ""
"Les objets d'opération doivent contenir exactement un membre nommé "
"\"add\", \"remove\" ou \"replace\"."

msgid ""
"Operation objects must contain only one member named \"add\", \"remove\", or "
"\"replace\"."
msgstr ""
"Les objets d'opération doivent contenir un seul membre nommé \"add\", "
"\"remove\" ou \"replace\"."

msgid "Operations must be JSON objects."
msgstr "Les opérations doivent être des objets JSON."

#, python-format
msgid "Original locations is not empty: %s"
msgstr "Les emplacements d'origine ne sont pas vides : %s"

msgid "Owner can't be updated by non admin."
msgstr "Le propriétaire ne peut être mis à jour que par un administrateur."

msgid "Owner must be specified to create a tag."
msgstr "Le propriétaire doit être indiqué pour créer une balise."

msgid "Owner of the image"
msgstr "Propriétaire de l'image"

msgid "Owner of the namespace."
msgstr "Propriétaire de l'espace de nom."

msgid "Param values can't contain 4 byte unicode."
msgstr ""
"Les valeurs de paramètre ne peuvent pas contenir de caractère Unicode de 4 "
"octets."

#, python-format
msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence."
msgstr ""
"Le pointeur `%s` contient \"~\" qui ne fait pas partie d'une séquence "
"d'échappement reconnue."

#, python-format
msgid "Pointer `%s` contains adjacent \"/\"."
msgstr "Le pointeur `%s` contient des \"/\" adjacents."

#, python-format
msgid "Pointer `%s` does not contains valid token."
msgstr "Le pointeur `%s` ne contient pas de jeton valide."

#, python-format
msgid "Pointer `%s` does not start with \"/\"."
msgstr "Le pointeur `%s` ne commence pas par \"/\"."

#, python-format
msgid "Pointer `%s` end with \"/\"."
msgstr "Le pointeur `%s` se termine par \"/\"."

#, python-format
msgid "Port \"%s\" is not valid."
msgstr "Le port \"%s\" n'est pas valide."

#, python-format
msgid "Process %d not running"
msgstr "Le processus %d n'est pas en cours d'exécution"

#, python-format
msgid "Properties %s must be set prior to saving data."
msgstr ""
"Les propriétés %s doivent être définies avant de sauvegarder les données."

#, python-format
msgid ""
"Property %(property_name)s does not start with the expected resource type "
"association prefix of '%(prefix)s'."
msgstr ""
"La propriété %(property_name)s ne commence pas par le préfixe d'association "
"de type de ressource attendu : '%(prefix)s'."

#, python-format
msgid "Property %s already present."
msgstr "Propriété %s déjà présente."

#, python-format
msgid "Property %s does not exist."
msgstr "La propriété %s n'existe pas."

#, python-format
msgid "Property %s may not be removed."
msgstr "La propriété %s ne peut pas être supprimée."

#, python-format
msgid "Property %s must be set prior to saving data."
msgstr "La propriété %s doit être définie avant de sauvegarder les données."

#, python-format
msgid "Property '%s' is protected"
msgstr "La propriété '%s' est protégée"

msgid "Property names can't contain 4 byte unicode."
msgstr ""
"Les noms de propriété ne peuvent pas contenir de caractère Unicode de 4 "
"octets."

#, python-format
msgid ""
"Provided image size must match the stored image size. (provided size: "
"%(ps)d, stored size: %(ss)d)"
msgstr ""
"La taille de l'image fournie doit correspondre à la taille de l'image "
"stockée. (taille fournie : %(ps)d, taille stockée : %(ss)d)"

#, python-format
msgid "Provided object does not match schema '%(schema)s': %(reason)s"
msgstr "L'objet fourni ne correspond pas au schéma '%(schema)s' : %(reason)s"

#, python-format
msgid "Provided status of task is unsupported: %(status)s"
msgstr "Le statut fourni de la tâche n'est pas pris en charge : %(status)s"

#, python-format
msgid "Provided type of task is unsupported: %(type)s"
msgstr "Le type de tâche fourni n'est pas pris en charge : %(type)s"

msgid "Provides a user friendly description of the namespace."
msgstr "Fournit une description conviviale de l'espace de nom."

msgid "Received invalid HTTP redirect."
msgstr "Redirection HTTP non valide reçue."

#, python-format
msgid "Redirecting to %(uri)s for authorization."
msgstr "Redirection vers %(uri)s pour autorisation."

#, python-format
msgid "Registry service can't use %s"
msgstr "Le service de registre ne peut pas utiliser %s"

#, python-format
msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
msgstr ""
"Le registre n'a pas été configuré correctement sur le serveur d'API. Cause : "
"%(reason)s"

#, python-format
msgid "Reload of %(serv)s not supported"
msgstr "Rechargement de %(serv)s non pris en charge"

#, python-format
msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)"
msgstr "Rechargement de %(serv)s (pid %(pid)s) avec le signal (%(sig)s)"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Suppression du fichier PID périmé %s"

msgid "Request body must be a JSON array of operation objects."
msgstr ""
"Le corps de la demande doit être un tableau JSON d'objets d'opération."
msgid "Request must be a list of commands" msgstr "La demande doit être une liste de commandes" #, python-format msgid "Required store %s is invalid" msgstr "Le magasin requis %s n'est pas valide" msgid "" "Resource type names should be aligned with Heat resource types whenever " "possible: http://docs.openstack.org/developer/heat/template_guide/openstack." "html" msgstr "" "Les noms de type de ressource doivent être alignés avec les types de " "ressource Heat dans la mesure du possible : http://docs.openstack.org/" "developer/heat/template_guide/openstack.html" msgid "Response from Keystone does not contain a Glance endpoint." msgstr "La réponse de Keystone ne contient pas un noeud final Glance." msgid "Scope of image accessibility" msgstr "Périmètre d'accessibilité de l'image" msgid "Scope of namespace accessibility." msgstr "Périmètre de l'accessibilité de l'espace de nom." #, python-format msgid "Server %(serv)s is stopped" msgstr "Le serveur %(serv)s est arrêté" #, python-format msgid "Server worker creation failed: %(reason)s." msgstr "Echec de la création de travailleur de serveur : %(reason)s." msgid "Signature verification failed" msgstr "La vérification de la signature a échoué" msgid "Size of image file in bytes" msgstr "Taille du fichier image en octets" msgid "" "Some resource types allow more than one key / value pair per instance. For " "example, Cinder allows user and image metadata on volumes. Only the image " "properties metadata is evaluated by Nova (scheduling or drivers). This " "property allows a namespace target to remove the ambiguity." msgstr "" "Certains types de ressource autorisent plusieurs paires clé-valeur par " "instance. Par exemple, Cinder autorise les métadonnées d'utilisateur et " "d'image sur les volumes. Seules les métadonnées de propriétés d'image sont " "évaluées par Nova (planification ou pilotes). Cette propriété autorise une " "cible d'espace de nom pour lever l'ambiguïté." msgid "Sort direction supplied was not valid." 
msgstr "Le sens de tri fourni n'était pas valide." msgid "Sort key supplied was not valid." msgstr "La clé de tri fournie n'était pas valide." msgid "" "Specifies the prefix to use for the given resource type. Any properties in " "the namespace should be prefixed with this prefix when being applied to the " "specified resource type. Must include prefix separator (e.g. a colon :)." msgstr "" "Spécifie le préfixe à utiliser pour le type de ressource donné. Toutes les " "propriétés de l'espace de nom doivent être précédées de ce préfixe " "lorsqu'elles s'appliquent au type de ressource spécifié. Vous devez inclure " "un séparateur de préfixe (par exemple, le signe deux-points :)." msgid "Status must be \"pending\", \"accepted\" or \"rejected\"." msgstr "L'état doit être \"en attente\", \"accepté\" ou \"rejeté\"." msgid "Status not specified" msgstr "Statut non spécifié" msgid "Status of the image" msgstr "Statut de l'image" #, python-format msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "La transition de statut de %(cur_status)s vers %(new_status)s n'est pas " "autorisée" #, python-format msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "Arrêt de %(serv)s (pid %(pid)s) avec le signal (%(sig)s)" #, python-format msgid "Store for image_id not found: %s" msgstr "Magasin de l'image_id non trouvé : %s" #, python-format msgid "Store for scheme %s not found" msgstr "Magasin du schéma %s non trouvé" #, python-format msgid "" "Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image status to 'killed'." msgstr "" "%(attr)s (%(supplied)s) fournis et %(attr)s générés depuis l'image " "téléchargée (%(actual)s) ne correspondent pas. Définition du statut de " "l'image sur 'arrêté'." 
msgid "Supported values for the 'container_format' image attribute" msgstr "Valeurs prises en charge pour l'attribut d'image 'container_format'" msgid "Supported values for the 'disk_format' image attribute" msgstr "Valeurs prises en charge pour l'attribut d'image 'disk_format'" #, python-format msgid "Suppressed respawn as %(serv)s was %(rsn)s." msgstr "La relance supprimée en tant que %(serv)s était %(rsn)s." msgid "System SIGHUP signal received." msgstr "Signal SIGHUP du système reçu." #, python-format msgid "Task '%s' is required" msgstr "La tâche '%s' est obligatoire" msgid "Task does not exist" msgstr "La tâche n'existe pas" msgid "Task failed due to Internal Error" msgstr "Echec de la tâche en raison d'une erreur interne" msgid "Task was not configured properly" msgstr "La tâche n'a pas été configurée correctement" #, python-format msgid "Task with the given id %(task_id)s was not found" msgstr "La tâche avec l'identificateur donné %(task_id)s est introuvable" msgid "The \"changes-since\" filter is no longer available on v2." msgstr "Le filtre \"changes-since\" n'est plus disponible sur la version 2." #, python-format msgid "The CA file you specified %s does not exist" msgstr "" "Le fichier d'autorité de certification que vous avez spécifié %s n'existe pas" #, python-format msgid "" "The Image %(image_id)s object being created by this task %(task_id)s, is no " "longer in valid status for further processing." msgstr "" "L'objet image %(image_id)s créé par la tâche %(task_id)s n'est plus dans un " "statut valide pour un traitement ultérieur." msgid "The Store URI was malformed." msgstr "L'URI de magasin était incorrect." msgid "" "The URL to the keystone service. If \"use_user_token\" is not in effect and " "using keystone auth, then URL of keystone can be specified." msgstr "" "URL du service keystone. Si \"use_user_token\" n'est pas en vigueur et si " "vous utilisez l'authentification keystone, l'URL de keystone peut être " "spécifiée." 
msgid "" "The administrators password. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "Mot de passe de l'administrateur. Si \"use_user_token\" n'est pas en " "vigueur, les données d'identification de l'administrateur peuvent être " "spécifiées." msgid "" "The administrators user name. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "Nom d'utilisateur administrateur. Si \"use_user_token\" n'est pas en " "vigueur, les données d'identification de l'administrateur peuvent être " "spécifiées." #, python-format msgid "The cert file you specified %s does not exist" msgstr "Le fichier de certificats que vous avez spécifié %s n'existe pas" msgid "The current status of this task" msgstr "Le statut actuel de cette tâche" #, python-format msgid "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." msgstr "" "L'unité hébergeant le répertoire de cache d'image %(image_cache_dir)s ne " "prend pas en charge xattr. Vous devez probablement éditer votre fstab et " "ajouter l'option user_xattr sur la ligne appropriée de l'unité hébergeant le " "répertoire de cache." #, python-format msgid "" "The given uri is not valid. Please specify a valid uri from the following " "list of supported uri %(supported)s" msgstr "" "L'identificateur URI fourni n'est pas valide. 
Indiquez un identificateur URI " "valide sélectionné dans la liste des identificateurs URI pris en charge : " "%(supported)s" #, python-format msgid "The incoming image is too large: %s" msgstr "L'image entrante est trop grande : %s" #, python-format msgid "The key file you specified %s does not exist" msgstr "Le fichier de clés que vous avez spécifié %s n'existe pas" #, python-format msgid "" "The limit has been exceeded on the number of allowed image locations. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "La limite a été dépassée sur le nombre d'emplacements d'image autorisés. " "Tentatives : %(attempted)s, Maximum : %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "La limite a été dépassée sur le nombre de membres d'image autorisés pour " "cette image. Tentatives : %(attempted)s, Maximum : %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "La limite a été dépassée sur le nombre de propriétés d'image autorisées. " "Tentatives : %(attempted)s, Maximum : %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(num)s, Maximum: %(quota)s" msgstr "" "La limite a été dépassée sur le nombre de propriétés d'image autorisées. " "Tentatives : %(num)s, Maximum : %(quota)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image tags. Attempted: " "%(attempted)s, Maximum: %(maximum)s" msgstr "" "La limite a été dépassée sur le nombre de balises d'image autorisées. 
" "Tentatives : %(attempted)s, Maximum : %(maximum)s" #, python-format msgid "The location %(location)s already exists" msgstr "L'emplacement %(location)s existe déjà" #, python-format msgid "The location data has an invalid ID: %d" msgstr "Les données d'emplacement possèdent un ID non valide : %d" #, python-format msgid "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. Other records still refer to it." msgstr "" "La définition de métadonnées %(record_type)s avec le nom %(record_name)s n'a " "pas été supprimée. Elle est encore associée à d'autres enregistrements." #, python-format msgid "The metadata definition namespace=%(namespace_name)s already exists." msgstr "" "L'espace de nom %(namespace_name)s de la définition de métadonnées existe " "déjà." #, python-format msgid "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." msgstr "" "L'objet %(object_name)s de la définition de métadonnées est introuvable dans " "l'espace de nom %(namespace_name)s." #, python-format msgid "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." msgstr "" "La propriété %(property_name)s de la définition de métadonnées est " "introuvable dans l'espace de nom %(namespace_name)s." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." msgstr "" "L'association de type de ressource de la définition de métadonnées entre " "letype de ressource %(resource_type_name)s et l'espace de nom " "%(namespace_name)s existe déjà." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." 
msgstr "" "L'association de type de ressource de la définition de métadonnées entre " "letype de ressource %(resource_type_name)s et l'espace de nom " "%(namespace_name)s est introuvable." #, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "" "Le type de ressource %(resource_type_name)s de la définition de métadonnées " "est introuvable." #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "La balise de définition de métadonnées nommée %(name)s est introuvable dans " "l'espace de nom %(namespace_name)s." msgid "The parameters required by task, JSON blob" msgstr "Les paramètres requis par la tâche, blob JSON" msgid "The provided image is too large." msgstr "L'image fournie est trop volumineuse." msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." msgstr "" "Région du service d'authentification. Si \"use_user_token\" n'est pas en " "vigueur et si vous utilisez l'authentification keystone, le nom de région " "peut être spécifié." msgid "The request returned 500 Internal Server Error." msgstr "La demande a renvoyé le message 500 Internal Server Error." msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." msgstr "" "La demande a renvoyé le message 503 Service Unavailable. Cela se produit " "généralement lors d'une surcharge de service ou de tout autre coupure " "transitoire." #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "La demande a renvoyé un message 302 Multiple Choices. 
Cela signifie " "généralement que vous n'avez pas inclus d'indicateur de version dans l'URI " "de demande.\n" "\n" "Le corps de la réponse a renvoyé :\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "La demande a renvoyé un message 413 Request Entity Too Large. Cela signifie " "généralement que le taux limite ou le seuil de quota a été dépassé.\n" "\n" "Corps de la réponse :\n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "La demande a renvoyé un statut inattendu : %(status)s.\n" "\n" "Corps de la réponse :\n" "%(body)s" msgid "" "The requested image has been deactivated. Image data download is forbidden." msgstr "" "L'image demandée a été désactivée. Le téléchargement des données image est " "interdit." msgid "The result of current task, JSON blob" msgstr "Le résultat de la tâche en cours, blob JSON" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." msgstr "" "La taille des données %(image_size)s dépassera la limite. %(remaining)s " "octets restants." 
#, python-format msgid "The specified member %s could not be found" msgstr "Le membre spécifié %s est introuvable" #, python-format msgid "The specified metadata object %s could not be found" msgstr "L'objet métadonnées spécifié %s est introuvable" #, python-format msgid "The specified metadata tag %s could not be found" msgstr "La balise de métadonnées %s est introuvable" #, python-format msgid "The specified namespace %s could not be found" msgstr "L'espace de nom spécifié %s est introuvable" #, python-format msgid "The specified property %s could not be found" msgstr "La propriété spécifiée %s est introuvable" #, python-format msgid "The specified resource type %s could not be found " msgstr "Le type de ressource spécifié %s est introuvable " msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgstr "" "L'état de l'emplacement de l'image supprimée ne peut être réglé que sur " "'pending_delete' ou 'deleted'" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgstr "" "L'état de l'emplacement de l'image supprimée ne peut être réglé que sur " "'pending_delete' ou 'deleted'." msgid "The status of this image member" msgstr "Statut de ce membre d'image" msgid "" "The strategy to use for authentication. If \"use_user_token\" is not in " "effect, then auth strategy can be specified." msgstr "" "Stratégie à utiliser pour l'authentification. Si \"use_user_token\" n'est " "pas en vigueur, la stratégie d'authentification peut être spécifiée." #, python-format msgid "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgstr "Le membre cible %(member_id)s est déjà associé à l'image %(image_id)s." msgid "" "The tenant name of the administrative user. If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgstr "" "Nom de locataire de l'utilisateur administrateur. 
Si \"use_user_token\" " "n'est pas en vigueur, le nom de locataire de l'administrateur peut être " "spécifié." msgid "The type of task represented by this content" msgstr "Le type de tâche représenté par ce contenu" msgid "The unique namespace text." msgstr "Texte unique de l'espace de nom." msgid "The user friendly name for the namespace. Used by UI if available." msgstr "" "Nom convivial de l'espace de nom. Utilisé par l'interface utilisateur si " "disponible." #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "Problème lié à votre %(error_key_name)s %(error_filename)s. Veuillez " "vérifier. Erreur : %(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "Problème lié à votre %(error_key_name)s %(error_filename)s. Veuillez " "vérifier. Erreur OpenSSL : %(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "Il y a un problème avec votre paire de clés. Vérifiez que le certificat " "%(cert_file)s et la clé %(key_file)s correspondent. Erreur OpenSSL %(ce)s" msgid "There was an error configuring the client." msgstr "Une erreur s'est produite lors de la configuration du client." msgid "There was an error connecting to a server" msgstr "Une erreur s'est produite lors de la connexion à un serveur." msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "Cette opération n'est actuellement pas autorisée sur les tâches Glance. " "Elles sont supprimées automatiquement après avoir atteint l'heure définie " "par la propriété expires_at." msgid "This operation is currently not permitted on Glance images details." 
msgstr "" "Cette opération n'est pas actuellement autorisée sur des détails d'images " "Glance." msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "Durée de vie en heures d'une tâche suite à une réussite ou à un échec" msgid "Too few arguments." msgstr "Trop peu d'arguments." msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "L'URI ne peut pas contenir plusieurs occurrences d'un schéma. Si vous avez " "spécifié un URI tel que swift://user:pass@http://authurl.com/v1/container/" "obj, vous devez le modifier pour utiliser le schéma swift+http://, par " "exemple : swift+http://user:pass@authurl.com/v1/container/obj" msgid "URL to access the image file kept in external store" msgstr "" "URL permettant d'accéder au fichier image conservé dans le magasin externe" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "Impossible de créer le fichier PID %(pid)s. Exécution en tant que non " "root ?\n" "Rétablissement vers un fichier temporaire. Vous pouvez arrêter le service " "%(service)s avec :\n" " %(file)s %(server)s stop --pid-file %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "Filtrage impossible avec l'opérateur inconnu '%s'." msgid "Unable to filter on a range with a non-numeric value." msgstr "Impossible de filtrer sur une plage avec une valeur non numérique." msgid "Unable to filter on a unknown operator." msgstr "Filtrage impossible avec un opérateur inconnu." msgid "Unable to filter using the specified operator." msgstr "Filtrage impossible à l'aide de l'opérateur spécifié." 
msgid "Unable to filter using the specified range." msgstr "Impossible de filtrer à l'aide de la plage spécifiée." #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "Impossible de trouver '%s' dans la modification du schéma JSON" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "" "Impossible de localiser `op` dans la modification du schéma JSON. Doit être " "l'une des valeurs suivantes : %(available)s." msgid "Unable to increase file descriptor limit. Running as non-root?" msgstr "" "Impossible d'augmenter la limite de descripteur de fichier. Exécution en " "tant que non root ?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "Impossible de charger %(app_name)s depuis le fichier de configuration " "%(conf_file)s.\n" "Résultat : %(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "Impossible de charger le schéma : %(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "" "Impossible de localiser le fichier de configuration du collage pour %s." #, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "" "Impossible de télécharger des données image en double pour l'image " "%(image_id)s : %(error)s" msgid "Unauthorized image access" msgstr "Accès à l'image non autorisé" msgid "Unexpected body type. Expected list/dict." msgstr "Type de corps inattendu. Type attendu : list/dict." 
#, python-format msgid "Unexpected response: %s" msgstr "Réponse inattendue : %s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "Stratégie d'autorisation inconnue '%s'" #, python-format msgid "Unknown command: %s" msgstr "commande %s inconnue" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Sens de tri inconnu, doit être 'desc' ou 'asc'" msgid "Unrecognized JSON Schema draft version" msgstr "Version brouillon du schéma JSON non reconnue" msgid "Unrecognized changes-since value" msgstr "Valeur changes-since non reconnue" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "sort_dir non pris en charge. Valeurs acceptables : %s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "sort_key non pris en charge. Valeurs acceptables : %s" msgid "Virtual size of image in bytes" msgstr "Taille virtuelle de l'image en octets" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "" "Attente de la fin du pid %(pid)s (%(file)s) pendant 15 secondes ; abandon en " "cours" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Lors de l'exécution du serveur en mode SSL, vous devez spécifier une valeur " "d'option cert_file et key_file dans votre fichier de configuration" msgid "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." msgstr "" "Transmettre le jeton utilisateur lors de demandes au registre. 
Pour éviter " "des échecs dus à l'expiration du jeton lors du téléchargement de fichiers " "volumineux, il est recommandé de définir ce paramètre à 'False'. Si " "\"use_user_token\" n'est pas activé, des données d'identification " "administrateur peuvent être spécifiées." #, python-format msgid "Wrong command structure: %s" msgstr "Structure de commande erronée : %s" msgid "You are not authenticated." msgstr "Vous n'êtes pas authentifié." msgid "You are not authorized to complete this action." msgstr "Vous n'êtes pas autorisé à effectuer cette action." #, python-format msgid "You are not authorized to lookup image %s." msgstr "Vous n'êtes pas autorisé à rechercher l'image %s." #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "Vous n'êtes pas autorisé à rechercher les membres de l'image %s." #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "" "Vous n'êtes pas autorisé à créer une balise dans l'espace de nom détenu par " "'%s'" msgid "You are not permitted to create image members for the image." msgstr "Vous n'êtes pas autorisé à créer des membres image pour l'image." #, python-format msgid "You are not permitted to create images owned by '%s'." msgstr "Vous n'êtes pas autorisé à créer des images détenues par '%s'." 
#, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "Vous n'êtes pas autorisé à créer un espace de nom détenu par '%s'" #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "Vous n'êtes pas autorisé à créer un objet détenu par '%s'" #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "Vous n'êtes pas autorisé à créer une propriété détenue par '%s'" #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "" "Vous n'êtes pas autorisé à créer des types de ressource détenus par '%s'" #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "" "Vous n'êtes pas autorisé à créer cette tâche avec comme propriétaire : %s" msgid "You are not permitted to deactivate this image." msgstr "Vous n'êtes pas autorisé à désactiver cette image." msgid "You are not permitted to delete this image." msgstr "Vous n'êtes pas autorisé à supprimer cette image." msgid "You are not permitted to delete this meta_resource_type." msgstr "Vous n'êtes pas autorisé à supprimer le paramètre meta_resource_type." msgid "You are not permitted to delete this namespace." msgstr "Vous n'êtes pas autorisé à supprimer cet espace de nom." msgid "You are not permitted to delete this object." msgstr "Vous n'êtes pas autorisé à supprimer cet objet." msgid "You are not permitted to delete this property." msgstr "Vous n'êtes pas autorisé à supprimer cette propriété." msgid "You are not permitted to delete this tag." msgstr "Vous n'êtes pas autorisé à supprimer cette balise." #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "Vous n'êtes pas autorisé à modifier '%(attr)s' sur cette %(resource)s." #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "Vous n'êtes pas autorisé à modifier '%s' sur cette image." 
msgid "You are not permitted to modify locations for this image." msgstr "Vous n'êtes pas autorisé à modifier les emplacements pour cette image." msgid "You are not permitted to modify tags on this image." msgstr "Vous n'êtes pas autorisé à modifier les balises pour cette image." msgid "You are not permitted to modify this image." msgstr "Vous n'êtes pas autorisé à modifier cette image." msgid "You are not permitted to reactivate this image." msgstr "Vous n'êtes pas autorisé à réactiver cette image." msgid "You are not permitted to set status on this task." msgstr "Vous n'êtes pas autorisé à définir le statut pour cette tâche." msgid "You are not permitted to update this namespace." msgstr "Vous n'êtes pas autorisé à mettre à jour cet espace de nom." msgid "You are not permitted to update this object." msgstr "Vous n'êtes pas autorisé à mettre à jour cet objet." msgid "You are not permitted to update this property." msgstr "Vous n'êtes pas autorisé à mettre à jour cette propriété." msgid "You are not permitted to update this tag." msgstr "Vous n'êtes pas autorisé à mettre à jour cette balise." msgid "You are not permitted to upload data for this image." msgstr "Vous n'êtes pas autorisé à télécharger des données pour cette image." 
#, python-format msgid "You cannot add image member for %s" msgstr "Vous ne pouvez pas ajouter le membre image pour %s" #, python-format msgid "You cannot delete image member for %s" msgstr "Vous ne pouvez pas supprimer le membre image pour %s" #, python-format msgid "You cannot get image member for %s" msgstr "Vous ne pouvez pas obtenir le membre image pour %s" #, python-format msgid "You cannot update image member %s" msgstr "Vous ne pouvez pas mettre à jour le membre image pour %s" msgid "You do not own this image" msgstr "Vous n'êtes pas propriétaire de cette image" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "Vous avez choisi d'utiliser SSL pour la connexion et avez fourni un " "certificat, cependant, vous n'avez pas fourni de paramètre key_file ou " "n'avez pas défini la variable d'environnement GLANCE_CLIENT_KEY_FILE" msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "Vous avez choisi d'utiliser SSL pour la connexion et avez fourni une clé, " "cependant, vous n'avez pas fourni de paramètre cert_file ou n'avez pas " "défini la variable d'environnement GLANCE_CLIENT_CERT_FILE" msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__() a récupéré un argument de mot clé '%s' inattendu" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "" "impossible d'effectuer la transition depuis %(current)s vers %(next)s dans " "la mise à 
jour (voulu : from_state=%(from)s)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "" "propriétés personnalisées (%(props)s) en conflit avec les propriétés de base" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "" "Les concentrateurs Eventlet 'poll' et 'selects' sont indisponibles sur cette " "plateforme" msgid "is_public must be None, True, or False" msgstr "is_public doit être None, True ou False" msgid "limit param must be an integer" msgstr "le paramètre limit doit être un entier" msgid "limit param must be positive" msgstr "le paramètre limit doit être positif" msgid "md5 hash of image contents." msgstr "Hachage md5 du contenu d'image." #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image() a récupéré des mots-clés %s inattendus" msgid "protected must be True, or False" msgstr "protected doit être True ou False" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "impossible de lancer %(serv)s. Erreur : %(e)s" #, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "x-openstack-request-id est trop long, sa taille maximale est de %s" glance-16.0.0/glance/locale/ru/0000775000175100017510000000000013245511661016175 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/ru/LC_MESSAGES/0000775000175100017510000000000013245511661017762 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/ru/LC_MESSAGES/glance.po0000666000175100017510000025257513245511421021567 0ustar zuulzuul00000000000000# Andreas Jaeger , 2016. 
#zanata
msgid ""
msgstr ""
"Project-Id-Version: glance 15.0.0.0b3.dev29\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-06-23 20:54+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-12 05:21+0000\n"
"Last-Translator: Copied by Zanata \n"
"Language: ru\n"
"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n"
"%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.9.6\n"
"Language-Team: Russian\n"

#, python-format
msgid "\t%s"
msgstr "\t%s"

#, python-format
msgid "%(cls)s exception was raised in the last rpc call: %(val)s"
msgstr "В последнем вызове rpc возникла исключительная ситуация %(cls)s: %(val)s"

#, python-format
msgid "%(m_id)s not found in the member list of the image %(i_id)s."
msgstr "%(m_id)s не найден в списке элементов образа %(i_id)s."

#, python-format
msgid "%(serv)s (pid %(pid)s) is running..."
msgstr "%(serv)s (pid %(pid)s) работает..."

#, python-format
msgid "%(serv)s appears to already be running: %(pid)s"
msgstr "%(serv)s уже запущен: %(pid)s"

#, python-format
msgid "%(strategy)s is registered as a module twice. %(module)s is not being used."
msgstr "%(strategy)s зарегистрирована как модуль дважды. %(module)s не используется."

#, python-format
msgid "%(task_id)s of %(task_type)s not configured properly. Could not load the filesystem store"
msgstr "Служба %(task_id)s типа %(task_type)s настроена неправильно. Не удалось загрузить хранилище в файловой системе"

#, python-format
msgid "%(task_id)s of %(task_type)s not configured properly. Missing work dir: %(work_dir)s"
msgstr "Служба %(task_id)s типа %(task_type)s настроена неправильно. Отсутствует рабочий каталог: %(work_dir)s"

#, python-format
msgid "%(verb)sing %(serv)s"
msgstr "%(verb)s на %(serv)s"

#, python-format
msgid "%(verb)sing %(serv)s with %(conf)s"
msgstr "%(verb)s %(serv)s с %(conf)s"

#, python-format
msgid "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets separately from the port (i.e., \"[fe80::a:b:c]:9876\")."
msgstr "%s Укажите пару host:port, где host - это адрес IPv4, адрес IPv6, имя хоста или FQDN. При указании адреса IPv6 заключите его в квадратные скобки отдельно от порта (например, \"[fe80::a:b:c]:9876\")."

#, python-format
msgid "%s can't contain 4 byte unicode characters."
msgstr "%s не может содержать символы в кодировке 4-байтового unicode."

#, python-format
msgid "%s is already stopped"
msgstr "%s уже остановлен"

#, python-format
msgid "%s is stopped"
msgstr "%s остановлен"

msgid "--os_auth_url option or OS_AUTH_URL environment variable required when keystone authentication strategy is enabled\n"
msgstr "Опция --os_auth_url или переменная среды OS_AUTH_URL требуется, если включена стратегия идентификации Keystone\n"

msgid "A body is not expected with this request."
msgstr "В этом запросе не должно быть тела."

#, python-format
msgid "A metadata definition object with name=%(object_name)s already exists in namespace=%(namespace_name)s."
msgstr "Объект определения метаданных с именем %(object_name)s уже существует в пространстве имен %(namespace_name)s."

#, python-format
msgid "A metadata definition property with name=%(property_name)s already exists in namespace=%(namespace_name)s."
msgstr "Свойство определения метаданных с именем %(property_name)s уже существует в пространстве имен %(namespace_name)s."

#, python-format
msgid "A metadata definition resource-type with name=%(resource_type_name)s already exists."
msgstr "Тип ресурса определения метаданных с именем %(resource_type_name)s уже существует."

msgid "A set of URLs to access the image file kept in external store"
msgstr "Набор URL для доступа к файлу образа, находящемуся во внешнем хранилище"

msgid "Amount of disk space (in GB) required to boot image."
msgstr "Объем дисковой памяти (в ГБ), необходимой для загрузки образа."

msgid "Amount of ram (in MB) required to boot image."
msgstr "Объем оперативной памяти (в МБ), необходимой для загрузки образа."

msgid "An identifier for the image"
msgstr "Идентификатор образа"

msgid "An identifier for the image member (tenantId)"
msgstr "Идентификатор участника образа (tenantId)"

msgid "An identifier for the owner of this task"
msgstr "Идентификатор владельца задачи"

msgid "An identifier for the task"
msgstr "Идентификатор задачи"

msgid "An image file url"
msgstr "url файла образа"

msgid "An image schema url"
msgstr "url схемы образа"

msgid "An image self url"
msgstr "Собственный url образа"

#, python-format
msgid "An image with identifier %s already exists"
msgstr "Образ с идентификатором %s уже существует"

msgid "An import task exception occurred"
msgstr "Исключительная ситуация в задаче импорта"

msgid "An object with the same identifier already exists."
msgstr "Объект с таким идентификатором уже существует."

msgid "An object with the same identifier is currently being operated on."
msgstr "Объект с таким идентификатором занят в текущей операции."

msgid "An object with the specified identifier was not found."
msgstr "Объект с указанным идентификатором не найден."

msgid "An unknown exception occurred"
msgstr "Возникла неизвестная исключительная ситуация"

msgid "An unknown task exception occurred"
msgstr "Непредвиденная исключительная ситуация"

#, python-format
msgid "Attempt to upload duplicate image: %s"
msgstr "Попытка загрузить дубликат образа: %s"

msgid "Attempted to update Location field for an image not in queued status."
msgstr "Предпринята попытка обновить поле Расположение для образа, не находящегося в очереди."

#, python-format
msgid "Attribute '%(property)s' is read-only."
msgstr "Атрибут '%(property)s' предназначен только для чтения."

#, python-format
msgid "Attribute '%(property)s' is reserved."
msgstr "Атрибут '%(property)s' зарезервирован."

#, python-format
msgid "Attribute '%s' is read-only."
msgstr "Атрибут '%s' предназначен только для чтения."

#, python-format
msgid "Attribute '%s' is reserved."
msgstr "Атрибут '%s' зарезервирован."

msgid "Attribute container_format can be only replaced for a queued image."
msgstr "container_format атрибута может быть заменен только для образа, находящегося в очереди."

msgid "Attribute disk_format can be only replaced for a queued image."
msgstr "disk_format атрибута может быть заменен только для образа, находящегося в очереди."

#, python-format
msgid "Auth service at URL %(url)s not found."
msgstr "Служба идентификации с URL %(url)s не найдена."

#, python-format
msgid "Authentication error - the token may have expired during file upload. Deleting image data for %s."
msgstr "Ошибка идентификации. Возможно, время действия маркера истекло во время загрузки файла. Данные образа для %s будут удалены."

msgid "Authorization failed."
msgstr "Доступ не предоставлен."

msgid "Available categories:"
msgstr "Доступные категории:"

#, python-format
msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation."
msgstr "Недопустимый формат фильтра запроса \"%s\". Используйте нотацию DateTime ISO 8601."

#, python-format
msgid "Bad Command: %s"
msgstr "Неправильная команда: %s"

#, python-format
msgid "Bad header: %(header_name)s"
msgstr "Неправильный заголовок: %(header_name)s"

#, python-format
msgid "Bad value passed to filter %(filter)s got %(val)s"
msgstr "Фильтру %(filter)s передано неверное значение, получено %(val)s"

#, python-format
msgid "Badly formed S3 URI: %(uri)s"
msgstr "Неправильно сформированный URI S3: %(uri)s"

#, python-format
msgid "Badly formed credentials '%(creds)s' in Swift URI"
msgstr "Неправильно сформированные идентификационные данные '%(creds)s' в URI Swift"

msgid "Badly formed credentials in Swift URI."
msgstr "Неправильно сформированные идентификационные данные в URI Swift."

msgid "Body expected in request."
msgstr "В запросе ожидалось тело."

msgid "Cannot be a negative value"
msgstr "Значение не может быть отрицательным"

msgid "Cannot be a negative value."
msgstr "Не может быть отрицательным значением."

#, python-format
msgid "Cannot convert image %(key)s '%(value)s' to an integer."
msgstr "Не удается преобразовать %(key)s '%(value)s' в целое число."

msgid "Cannot remove last location in the image."
msgstr "Нельзя удалять последнее расположение из образа."

#, python-format
msgid "Cannot save data for image %(image_id)s: %(error)s"
msgstr "Не удается сохранить данные для образа %(image_id)s: %(error)s"

msgid "Cannot set locations to empty list."
msgstr "Список расположений не может быть пустым."

msgid "Cannot upload to an unqueued image"
msgstr "Невозможно загрузить в образ, не находящийся в очереди"

#, python-format
msgid "Checksum verification failed. Aborted caching of image '%s'."
msgstr "Проверка контрольной суммой не выполнена. Кэширование образа '%s' прервано."
msgid "Client disconnected before sending all data to backend"
msgstr "Клиент отключился, отправив не все данные в базовую систему"

msgid "Command not found"
msgstr "Команда не найдена"

msgid "Configuration option was not valid"
msgstr "Недопустимая опция конфигурации"

#, python-format
msgid "Connect error/bad request to Auth service at URL %(url)s."
msgstr "Ошибка соединения или неправильный запрос к службе идентификации с URL %(url)s."

#, python-format
msgid "Constructed URL: %s"
msgstr "Сформированный URL: %s"

msgid "Container format is not specified."
msgstr "Не указан формат контейнера."

msgid "Content-Type must be application/octet-stream"
msgstr "Content-Type должен быть задан в формате приложение/октет-поток"

#, python-format
msgid "Corrupt image download for image %(image_id)s"
msgstr "Образ %(image_id)s скачан поврежденным"

#, python-format
msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds"
msgstr "Не удалось выполнить связывание с %(host)s:%(port)s в течение 30 секунд"

msgid "Could not find OVF file in OVA archive file."
msgstr "Не найден файл OVF в файле архива OVA."

#, python-format
msgid "Could not find metadata object %s"
msgstr "Не найден объект метаданных %s"

#, python-format
msgid "Could not find metadata tag %s"
msgstr "Не удалось найти тег метаданных %s"

#, python-format
msgid "Could not find namespace %s"
msgstr "Не найдено пространство имен %s"

#, python-format
msgid "Could not find property %s"
msgstr "Не найдено свойство %s"

msgid "Could not find required configuration option"
msgstr "Обязательная опция конфигурации не найдена"

#, python-format
msgid "Could not find task %s"
msgstr "Задача %s не найдена"

#, python-format
msgid "Could not update image: %s"
msgstr "Не удалось изменить образ: %s"

msgid "Currently, OVA packages containing multiple disk are not supported."
msgstr "В настоящее время пакеты OVA с несколькими дисками не поддерживаются."

#, python-format
msgid "Data for image_id not found: %s"
msgstr "Не найдены данные для image_id: %s"

msgid "Data supplied was not valid."
msgstr "Предоставленные данные недопустимы."

msgid "Date and time of image member creation"
msgstr "Дата и время создания участника образа"

msgid "Date and time of image registration"
msgstr "Дата и время регистрации образа"

msgid "Date and time of last modification of image member"
msgstr "Дата и время последней модификации участника образа"

msgid "Date and time of namespace creation"
msgstr "Дата и время создания пространства имен"

msgid "Date and time of object creation"
msgstr "Дата и время создания объекта"

msgid "Date and time of resource type association"
msgstr "Дата и время связывания типа ресурса"

msgid "Date and time of tag creation"
msgstr "Дата и время создания тега"

msgid "Date and time of the last image modification"
msgstr "Дата и время последнего изменения образа"

msgid "Date and time of the last namespace modification"
msgstr "Дата и время последнего изменения пространства имен"

msgid "Date and time of the last object modification"
msgstr "Дата и время последнего изменения объекта"

msgid "Date and time of the last resource type association modification"
msgstr "Дата и время последнего изменения связи типа ресурса"

msgid "Date and time of the last tag modification"
msgstr "Дата и время последнего изменения тега"

msgid "Datetime when this resource was created"
msgstr "Дата и время создания ресурса"

msgid "Datetime when this resource was updated"
msgstr "Дата и время обновления ресурса"

msgid "Datetime when this resource would be subject to removal"
msgstr "Дата и время планового удаления ресурса"

#, python-format
msgid "Denying attempt to upload image because it exceeds the quota: %s"
msgstr "Попытка загрузить образ с превышением квоты отклонена: %s"

#, python-format
msgid "Denying attempt to upload image larger than %d bytes."
msgstr "Попытка загрузить образ размером более %d байт отклонена."

msgid "Descriptive name for the image"
msgstr "Описательное имя образа"

msgid "Disk format is not specified."
msgstr "Не указан формат диска."

#, python-format
msgid "Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s"
msgstr "Драйвер %(driver_name)s не удалось правильно настроить. Причина: %(reason)s"

msgid "Error decoding your request. Either the URL or the request body contained characters that could not be decoded by Glance"
msgstr "Ошибка при декодировании запроса. URL или тело запроса содержат символы, которые Glance не способен декодировать"

#, python-format
msgid "Error fetching members of image %(image_id)s: %(inner_msg)s"
msgstr "Ошибка при выборке элементов образа %(image_id)s: %(inner_msg)s"

msgid "Error in store configuration. Adding images to store is disabled."
msgstr "Ошибка в конфигурации хранилища. Добавление образов в хранилище отключено."

msgid "Expected a member in the form: {\"member\": \"image_id\"}"
msgstr "Элемент должен быть задан в формате: {\"member\": \"image_id\"}"

msgid "Expected a status in the form: {\"status\": \"status\"}"
msgstr "Состояние должно быть указано в формате: {\"status\": \"status\"}"

msgid "External source should not be empty"
msgstr "Внешний источник не должен быть пустым"

#, python-format
msgid "External sources are not supported: '%s'"
msgstr "Внешние ресурсы не поддерживаются: %s"

#, python-format
msgid "Failed to activate image. Got error: %s"
msgstr "Активировать образ не удалось. Ошибка: %s"

#, python-format
msgid "Failed to add image metadata. Got error: %s"
msgstr "Добавить метаданные образа не удалось. Ошибка: %s"

#, python-format
msgid "Failed to find image %(image_id)s to delete"
msgstr "Найти образ для удаления %(image_id)s не удалось"

#, python-format
msgid "Failed to find image to delete: %s"
msgstr "Найти образ для удаления не удалось: %s"

#, python-format
msgid "Failed to find image to update: %s"
msgstr "Найти образ для обновления не удалось: %s"

#, python-format
msgid "Failed to find resource type %(resourcetype)s to delete"
msgstr "Не удалось найти тип ресурса %(resourcetype)s для удаления"

#, python-format
msgid "Failed to initialize the image cache database. Got error: %s"
msgstr "Инициализировать базу данных кэша образов не удалось. Ошибка: %s"

#, python-format
msgid "Failed to read %s from config"
msgstr "Прочесть %s из конфигурации не удалось"

#, python-format
msgid "Failed to reserve image. Got error: %s"
msgstr "Зарезервировать образ не удалось. Ошибка: %s"

#, python-format
msgid "Failed to update image metadata. Got error: %s"
msgstr "Обновить метаданные образа не удалось. Ошибка: %s"

#, python-format
msgid "Failed to upload image %s"
msgstr "Загрузить образ %s не удалось"

#, python-format
msgid "Failed to upload image data for image %(image_id)s due to HTTP error: %(error)s"
msgstr "Загрузить данные образа %(image_id)s не удалось из-за ошибки HTTP: %(error)s"

#, python-format
msgid "Failed to upload image data for image %(image_id)s due to internal error: %(error)s"
msgstr "Загрузить данные образа %(image_id)s не удалось из-за внутренней ошибки: %(error)s"

#, python-format
msgid "File %(path)s has invalid backing file %(bfile)s, aborting."
msgstr "Файл %(path)s содержит недопустимый базовый файл %(bfile)s, принудительное завершение."

msgid "File based imports are not allowed. Please use a non-local source of image data."
msgstr "Импорты на основе файлов не разрешены. Используйте нелокальный источник данных образа."

msgid "Forbidden image access"
msgstr "Доступ к образу запрещен"

#, python-format
msgid "Forbidden to delete a %s image."
msgstr "Удалять образ %s запрещено."

#, python-format
msgid "Forbidden to delete image: %s"
msgstr "Удалять образ запрещено: %s"

#, python-format
msgid "Forbidden to modify '%(key)s' of %(status)s image."
msgstr "Запрещено изменять '%(key)s' образа %(status)s."

#, python-format
msgid "Forbidden to modify '%s' of image."
msgstr "Изменять '%s' образа запрещено."

msgid "Forbidden to reserve image."
msgstr "Резервировать образ запрещено."

msgid "Forbidden to update deleted image."
msgstr "Обновлять удаленный образ запрещено."

#, python-format
msgid "Forbidden to update image: %s"
msgstr "Обновлять образ запрещено: %s"

#, python-format
msgid "Forbidden upload attempt: %s"
msgstr "Запрещенная попытка загрузки: %s"

#, python-format
msgid "Forbidding request, metadata definition namespace=%s is not visible."
msgstr "Запрещенный запрос: пространство имен %s определения метаданных невидимое."

#, python-format
msgid "Forbidding request, task %s is not visible"
msgstr "Запрос запрещается, задача %s невидима"

msgid "Format of the container"
msgstr "Формат контейнера"

msgid "Format of the disk"
msgstr "Формат диска"

#, python-format
msgid "Host \"%s\" is not valid."
msgstr "Хост \"%s\" недопустим."

#, python-format
msgid "Host and port \"%s\" is not valid."
msgstr "Хост и порт \"%s\" недопустимы."

msgid "Human-readable informative message only included when appropriate (usually on failure)"
msgstr "Информационное сообщение для пользователя добавляется только в соответствующих случаях (обычно в случае ошибки)"

msgid "If true, image will not be deletable."
msgstr "Если значение равно true, то образ нельзя будет удалить."

msgid "If true, namespace will not be deletable."
msgstr "Если true, пространство имен будет неудаляемым."
#, python-format
msgid "Image %(id)s could not be deleted because it is in use: %(exc)s"
msgstr "Не удается удалить образ %(id)s, так как он используется: %(exc)s"

#, python-format
msgid "Image %(id)s not found"
msgstr "Образ %(id)s не найден"

#, python-format
msgid "Image %(image_id)s could not be found after upload. The image may have been deleted during the upload: %(error)s"
msgstr "Образ %(image_id)s не найден после загрузки. Возможно, он удален во время загрузки: %(error)s"

#, python-format
msgid "Image %(image_id)s is protected and cannot be deleted."
msgstr "Образ %(image_id)s защищен и не может быть удален."

#, python-format
msgid "Image %s could not be found after upload. The image may have been deleted during the upload, cleaning up the chunks uploaded."
msgstr "Образ %s не найден после загрузки. Возможно, он был удален во время передачи, выполняется очистка переданных фрагментов."

#, python-format
msgid "Image %s could not be found after upload. The image may have been deleted during the upload."
msgstr "Образ %s не найден после загрузки. Возможно, он удален во время загрузки."

#, python-format
msgid "Image %s is deactivated"
msgstr "Образ %s деактивирован"

#, python-format
msgid "Image %s is not active"
msgstr "Образ %s неактивен"

#, python-format
msgid "Image %s not found."
msgstr "Образ %s не найден."

#, python-format
msgid "Image exceeds the storage quota: %s"
msgstr "Размер образа превышает квоту хранилища: %s"

msgid "Image id is required."
msgstr "Требуется ИД образа."

msgid "Image is protected"
msgstr "Образ защищен"

#, python-format
msgid "Image member limit exceeded for image %(id)s: %(e)s:"
msgstr "Превышено предельно допустимое число участников для образа %(id)s: %(e)s:"

#, python-format
msgid "Image name too long: %d"
msgstr "Имя образа слишком длинное: %d"

msgid "Image operation conflicts"
msgstr "Конфликт операций с образом"

#, python-format
msgid "Image status transition from %(cur_status)s to %(new_status)s is not allowed"
msgstr "Изменять состояние %(cur_status)s образа на %(new_status)s не разрешается"

#, python-format
msgid "Image storage media is full: %s"
msgstr "Носитель образов переполнен: %s"

#, python-format
msgid "Image tag limit exceeded for image %(id)s: %(e)s:"
msgstr "Превышено предельно допустимое число тегов для образа %(id)s: %(e)s:"

#, python-format
msgid "Image upload problem: %s"
msgstr "Неполадка при передаче образа: %s"

#, python-format
msgid "Image with identifier %s already exists!"
msgstr "Образ с идентификатором %s уже существует!"

#, python-format
msgid "Image with identifier %s has been deleted."
msgstr "Образ с идентификатором %s удален."

#, python-format
msgid "Image with identifier %s not found"
msgstr "Образ с идентификатором %s не найден"

#, python-format
msgid "Image with the given id %(image_id)s was not found"
msgstr "Не найден образ с заданным ИД %(image_id)s"

#, python-format
msgid "Incorrect auth strategy, expected \"%(expected)s\" but received \"%(received)s\""
msgstr "Неправильная стратегия идентификации, ожидалось \"%(expected)s\", но получено \"%(received)s\""

#, python-format
msgid "Incorrect request: %s"
msgstr "Неправильный запрос: %s"

#, python-format
msgid "Input does not contain '%(key)s' field"
msgstr "Ввод не содержит поле %(key)s"

#, python-format
msgid "Insufficient permissions on image storage media: %s"
msgstr "Недостаточные права для доступа к носителю образов: %s"

#, python-format
msgid "Invalid JSON pointer for this resource: '/%s'"
msgstr "Недопустимый указатель JSON для этого ресурса: '%s'"

#, python-format
msgid "Invalid checksum '%s': can't exceed 32 characters"
msgstr "Недопустимая контрольная сумма '%s': длина не может превышать 32 символа"

msgid "Invalid configuration in glance-swift conf file."
msgstr "Недопустимая конфигурация в файле конфигурации glance-swift."

msgid "Invalid configuration in property protection file."
msgstr "Недопустимая конфигурация в файле защиты свойств."

#, python-format
msgid "Invalid container format '%s' for image."
msgstr "Неверный формат контейнера '%s' для образа."

#, python-format
msgid "Invalid content type %(content_type)s"
msgstr "Недопустимый тип содержимого: %(content_type)s"

#, python-format
msgid "Invalid disk format '%s' for image."
msgstr "Неверный формат диска '%s' для образа."

#, python-format
msgid "Invalid filter value %s. The quote is not closed."
msgstr "Недопустимое значение фильтра %s. Нет закрывающей кавычки."

#, python-format
msgid "Invalid filter value %s. There is no comma after closing quotation mark."
msgstr "Недопустимое значение фильтра %s. Нет запятой после закрывающей кавычки."

#, python-format
msgid "Invalid filter value %s. There is no comma before opening quotation mark."
msgstr "Недопустимое значение фильтра %s. Нет запятой перед открывающей кавычкой."

msgid "Invalid image id format"
msgstr "Недопустимый формат ИД образа"

msgid "Invalid location"
msgstr "Недопустимое расположение"

#, python-format
msgid "Invalid location %s"
msgstr "Неверное расположение %s"

#, python-format
msgid "Invalid location: %s"
msgstr "Недопустимое расположение: %s"

#, python-format
msgid "Invalid location_strategy option: %(name)s. The valid strategy option(s) is(are): %(strategies)s"
msgstr "Неверный параметр location_strategy: %(name)s. Верные параметры стратегии: %(strategies)s"

msgid "Invalid locations"
msgstr "Недопустимые расположения"

#, python-format
msgid "Invalid locations: %s"
msgstr "Недопустимые расположения: %s"

msgid "Invalid marker format"
msgstr "Недопустимый формат маркера"

msgid "Invalid marker. Image could not be found."
msgstr "Недопустимый маркер. Образ не найден."

#, python-format
msgid "Invalid membership association: %s"
msgstr "Недопустимая ассоциация членства: %s"

msgid "Invalid mix of disk and container formats. When setting a disk or container format to one of 'aki', 'ari', or 'ami', the container and disk formats must match."
msgstr "Недопустимое сочетание форматов диска и контейнера. При задании формата диска или контейнера равным 'aki', 'ari' или 'ami' форматы контейнера и диска должны совпадать."

#, python-format
msgid "Invalid operation: `%(op)s`. It must be one of the following: %(available)s."
msgstr "Недопустимая операция: `%(op)s`. Допускается одна из следующих операций: %(available)s."

msgid "Invalid position for adding a location."
msgstr "Недопустимая позиция для добавления расположения."

msgid "Invalid position for removing a location."
msgstr "Недопустимая позиция для удаления расположения."

msgid "Invalid service catalog json."
msgstr "Недопустимый json каталога службы."

#, python-format
msgid "Invalid sort direction: %s"
msgstr "Недопустимое направление сортировки: %s"

#, python-format
msgid "Invalid sort key: %(sort_key)s. It must be one of the following: %(available)s."
msgstr "Недопустимый ключ сортировки %(sort_key)s. Допускается один из следующих ключей: %(available)s."

#, python-format
msgid "Invalid status value: %s"
msgstr "Недопустимое значение состояния: %s"

#, python-format
msgid "Invalid status: %s"
msgstr "Недопустимое состояние: %s"

#, python-format
msgid "Invalid time format for %s."
msgstr "Недопустимый формат времени для %s."

#, python-format
msgid "Invalid type value: %s"
msgstr "Недопустимое значение типа: %s"

#, python-format
msgid "Invalid update. It would result in a duplicate metadata definition namespace with the same name of %s"
msgstr "Недопустимое обновление. Оно создает пространство имен определения метаданных с таким же именем, как у пространства имен %s"

#, python-format
msgid "Invalid update. It would result in a duplicate metadata definition object with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr "Недопустимое обновление. Оно создает объект определения метаданных с таким же именем, как у объекта %(name)s в пространстве имен %(namespace_name)s."

#, python-format
msgid "Invalid update. It would result in a duplicate metadata definition property with the same name=%(name)s in namespace=%(namespace_name)s."
msgstr "Недопустимое обновление. Оно создает пространство имен определения метаданных с таким же именем, как у свойства %(name)s в пространстве имен %(namespace_name)s."

#, python-format
msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s"
msgstr "Неверное значение '%(value)s' параметра '%(param)s': %(extra_msg)s"

#, python-format
msgid "Invalid value for option %(option)s: %(value)s"
msgstr "Недопустимое значение для опции %(option)s: %(value)s"

#, python-format
msgid "Invalid visibility value: %s"
msgstr "Недопустимое значение видимости: %s"

msgid "It's invalid to provide multiple image sources."
msgstr "Указывать несколько источников образов нельзя."

msgid "It's not allowed to add locations if locations are invisible."
msgstr "Не разрешено добавлять расположения, если они невидимы."

msgid "It's not allowed to remove locations if locations are invisible."
msgstr "Не разрешено удалять расположения, если они невидимы."

msgid "It's not allowed to update locations if locations are invisible."
msgstr "Не разрешено обновлять расположения, если они невидимы."

msgid "List of strings related to the image"
msgstr "Список строк, относящихся к образу"

msgid "Malformed JSON in request body."
msgstr "Неправильно сформированный JSON в теле запроса."

msgid "Maximal age is count of days since epoch."
msgstr "Максимальный возраст - число дней с начала эпохи."

#, python-format
msgid "Maximum redirects (%(redirects)s) was exceeded."
msgstr "Превышено максимальное количество перенаправлений (%(redirects)s)."
#, python-format msgid "Member %(member_id)s is duplicated for image %(image_id)s" msgstr "Обнаружена ÐºÐ¾Ð¿Ð¸Ñ ÑƒÑ‡Ð°Ñтника %(member_id)s Ð´Ð»Ñ Ð¾Ð±Ñ€Ð°Ð·Ð° %(image_id)s" msgid "Member can't be empty" msgstr "УчаÑтник не может быть пуÑтым" msgid "Member to be added not specified" msgstr "ДобавлÑемый учаÑтник не указан" msgid "Membership could not be found." msgstr "ЧленÑтво не найдено." #, python-format msgid "" "Metadata definition namespace %(namespace)s is protected and cannot be " "deleted." msgstr "" "ПроÑтранÑтво имен %(namespace)s Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… защищено и не может " "быть удален." #, python-format msgid "Metadata definition namespace not found for id=%s" msgstr "Ðе найдено проÑтранÑтво имен Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… Ð´Ð»Ñ Ð˜Ð” %s" #, python-format msgid "" "Metadata definition object %(object_name)s is protected and cannot be " "deleted." msgstr "" "Объект %(object_name)s Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… защищен и не может быть удален." #, python-format msgid "Metadata definition object not found for id=%s" msgstr "Ðе найден объект Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… Ð´Ð»Ñ Ð˜Ð” %s" #, python-format msgid "" "Metadata definition property %(property_name)s is protected and cannot be " "deleted." msgstr "" "СвойÑтво %(property_name)s Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… защищено и не может быть " "удалено." #, python-format msgid "Metadata definition property not found for id=%s" msgstr "Ðе найдено ÑвойÑтво Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… Ð´Ð»Ñ Ð˜Ð” %s" #, python-format msgid "" "Metadata definition resource-type %(resource_type_name)s is a seeded-system " "type and cannot be deleted." msgstr "" "Тип реÑурÑа %(resource_type_name)s Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… ÑвлÑетÑÑÑиÑтемным " "типом и не может быть удален." #, python-format msgid "" "Metadata definition resource-type-association %(resource_type)s is protected " "and cannot be deleted." 
msgstr "" "СвÑзь типа реÑурÑа %(resource_type)s Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… защищена и не " "может быть удалена." #, python-format msgid "" "Metadata definition tag %(tag_name)s is protected and cannot be deleted." msgstr "" "Тег %(tag_name)s Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… защищен и не может быть удален." #, python-format msgid "Metadata definition tag not found for id=%s" msgstr "Ðе найден тег Ð¾Ð¿Ñ€ÐµÐ´ÐµÐ»ÐµÐ½Ð¸Ñ Ð¼ÐµÑ‚Ð°Ð´Ð°Ð½Ð½Ñ‹Ñ… Ð´Ð»Ñ Ð˜Ð” %s" msgid "Minimal rows limit is 1." msgstr "Минимальное чиÑло Ñтрок равно 1." #, python-format msgid "Missing required credential: %(required)s" msgstr "ОтÑутÑтвуют обÑзательные идентификационные данные: %(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "ÐеÑколько ÑоответÑтвий Ñлужбы 'image' Ð´Ð»Ñ Ñ€ÐµÐ³Ð¸Ð¾Ð½Ð° %(region)s. Обычно Ñто " "означает, что регион обÑзателен, но вы его не указали." msgid "No authenticated user" msgstr "Ðет идентифицированного пользователÑ" #, python-format msgid "No image found with ID %s" msgstr "Образ Ñ Ð˜Ð” %s не найден" #, python-format msgid "No location found with ID %(loc)s from image %(img)s" msgstr "РаÑположение Ñ Ð˜Ð” %(loc)s из образа %(img)s не найдено" msgid "No permission to share that image" msgstr "Ðет прав на ÑовмеÑтное иÑпользование Ñтого образа" #, python-format msgid "Not allowed to create members for image %s." msgstr "Ðе разрешено Ñоздавать учаÑтников Ð´Ð»Ñ Ð¾Ð±Ñ€Ð°Ð·Ð° %s." #, python-format msgid "Not allowed to deactivate image in status '%s'" msgstr "Запрещено деактивировать образ в ÑоÑтоÑнии %s" #, python-format msgid "Not allowed to delete members for image %s." msgstr "Ðе разрешено удалÑть учаÑтников Ð´Ð»Ñ Ð¾Ð±Ñ€Ð°Ð·Ð° %s." #, python-format msgid "Not allowed to delete tags for image %s." msgstr "Ðе разрешено удалÑть теги Ð´Ð»Ñ Ð¾Ð±Ñ€Ð°Ð·Ð° %s." 
#, python-format msgid "Not allowed to list members for image %s." msgstr "Ðе разрешено выводить ÑпиÑок учаÑтников Ð´Ð»Ñ Ð¾Ð±Ñ€Ð°Ð·Ð° %s." #, python-format msgid "Not allowed to reactivate image in status '%s'" msgstr "Запрещено повторно активировать образ в ÑоÑтоÑнии %s" #, python-format msgid "Not allowed to update members for image %s." msgstr "Ðе разрешено изменÑть учаÑтников Ð´Ð»Ñ Ð¾Ð±Ñ€Ð°Ð·Ð° %s." #, python-format msgid "Not allowed to update tags for image %s." msgstr "Ðе разрешено изменÑть теги Ð´Ð»Ñ Ð¾Ð±Ñ€Ð°Ð·Ð° %s." #, python-format msgid "Not allowed to upload image data for image %(image_id)s: %(error)s" msgstr "Загружать данные Ð´Ð»Ñ Ð¾Ð±Ñ€Ð°Ð·Ð° %(image_id)s не разрешено: %(error)s" msgid "Number of sort dirs does not match the number of sort keys" msgstr "ЧиÑло направлений Ñортировки не Ñовпадает Ñ Ñ‡Ð¸Ñлом ключей Ñортировки" msgid "OVA extract is limited to admin" msgstr "РаÑпаковку OVA может выполнить только админиÑтратор" msgid "Old and new sorting syntax cannot be combined" msgstr "Прежний и новый ÑинтакÑиÑÑ‹ Ñортировки Ð½ÐµÐ»ÑŒÐ·Ñ Ñмешивать" #, python-format msgid "Operation \"%s\" requires a member named \"value\"." msgstr "Операции \"%s\" требуетÑÑ ÑƒÑ‡Ð°Ñтник Ñ Ð¸Ð¼ÐµÐ½ÐµÐ¼ \"value\"." msgid "" "Operation objects must contain exactly one member named \"add\", \"remove\", " "or \"replace\"." msgstr "" "Объекты операции должны Ñодержать в точноÑти один учаÑтник Ñ Ð¸Ð¼ÐµÐ½ÐµÐ¼ \"add\", " "\"remove\" или \"replace\"." msgid "" "Operation objects must contain only one member named \"add\", \"remove\", or " "\"replace\"." msgstr "" "Объекты операции должны Ñодержать только один учаÑтник Ñ Ð¸Ð¼ÐµÐ½ÐµÐ¼ \"add\", " "\"remove\" или \"replace\"." msgid "Operations must be JSON objects." msgstr "Операции должны быть объектами JSON." #, python-format msgid "Original locations is not empty: %s" msgstr "ИÑходные раÑÐ¿Ð¾Ð»Ð¾Ð¶ÐµÐ½Ð¸Ñ Ð½Ðµ пуÑты: %s" msgid "Owner can't be updated by non admin." msgstr "Обычный пользователь не может изменить владельца." 
msgid "Owner must be specified to create a tag."
msgstr "Для создания тега необходимо указать владельца."

msgid "Owner of the image"
msgstr "Владелец образа"

msgid "Owner of the namespace."
msgstr "Владелец пространства имен."

msgid "Param values can't contain 4 byte unicode."
msgstr ""
"Значения параметров не могут содержать символы в кодировке 4-байтового "
"unicode."

#, python-format
msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence."
msgstr ""
"Указатель `%s` содержит символ \"~\", не входящий в распознаваемую Esc-"
"последовательность."

#, python-format
msgid "Pointer `%s` contains adjacent \"/\"."
msgstr "Указатель `%s` содержит смежный \"/\"."

#, python-format
msgid "Pointer `%s` does not contains valid token."
msgstr "Указатель `%s` не содержит допустимого маркера."

#, python-format
msgid "Pointer `%s` does not start with \"/\"."
msgstr "Указатель `%s` не начинается с \"/\"."

#, python-format
msgid "Pointer `%s` end with \"/\"."
msgstr "Указатель `%s` оканчивается на \"/\"."

#, python-format
msgid "Port \"%s\" is not valid."
msgstr "Порт \"%s\" недопустим."

#, python-format
msgid "Process %d not running"
msgstr "Процесс %d не выполняется"

#, python-format
msgid "Properties %s must be set prior to saving data."
msgstr "Свойства %s должны быть заданы до сохранения данных."

#, python-format
msgid ""
"Property %(property_name)s does not start with the expected resource type "
"association prefix of '%(prefix)s'."
msgstr ""
"Свойство %(property_name)s не начинается с ожидаемого префикса связи типа "
"ресурса '%(prefix)s'."

#, python-format
msgid "Property %s already present."
msgstr "Свойство %s уже существует."

#, python-format
msgid "Property %s does not exist."
msgstr "Свойство %s не существует."

#, python-format
msgid "Property %s may not be removed."
msgstr "Свойство %s нельзя удалить."

#, python-format
msgid "Property %s must be set prior to saving data."
msgstr "Свойство %s должно быть задано до сохранения данных."

#, python-format
msgid "Property '%s' is protected"
msgstr "Свойство '%s' защищено"

msgid "Property names can't contain 4 byte unicode."
msgstr ""
"Имена свойств не могут содержать символы в кодировке 4-байтового unicode."

#, python-format
msgid ""
"Provided image size must match the stored image size. (provided size: "
"%(ps)d, stored size: %(ss)d)"
msgstr ""
"Указанный размер образа должен быть равен сохраненному размеру образа. "
"(Указанный размер: %(ps)d, сохраненный размер: %(ss)d)"

#, python-format
msgid "Provided object does not match schema '%(schema)s': %(reason)s"
msgstr "Предоставленный объект не соответствует схеме '%(schema)s': %(reason)s"

#, python-format
msgid "Provided status of task is unsupported: %(status)s"
msgstr "Указано неподдерживаемое состояние задачи: %(status)s"

#, python-format
msgid "Provided type of task is unsupported: %(type)s"
msgstr "Указан неподдерживаемый тип задачи: %(type)s"

msgid "Provides a user friendly description of the namespace."
msgstr "Описание пространства имен для пользователя."

msgid "Received invalid HTTP redirect."
msgstr "Получено недопустимое перенаправление HTTP."

#, python-format
msgid "Redirecting to %(uri)s for authorization."
msgstr "Перенаправляется на %(uri)s для предоставления доступа."

#, python-format
msgid "Registry service can't use %s"
msgstr "Служба реестра не может использовать %s"

#, python-format
msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
msgstr "Реестр настроен неправильно на сервере API. "
"Причина: %(reason)s"

#, python-format
msgid "Reload of %(serv)s not supported"
msgstr "Перезагрузка %(serv)s не поддерживается"

#, python-format
msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)"
msgstr "Перезагрузка %(serv)s (pid %(pid)s) с сигналом (%(sig)s)"

#, python-format
msgid "Removing stale pid file %s"
msgstr "Удаление устаревшего файла pid %s"

msgid "Request body must be a JSON array of operation objects."
msgstr "Тело запроса должно быть массивом JSON объектов операции."

msgid "Request must be a list of commands"
msgstr "Запрос должен быть списком команд"

#, python-format
msgid "Required store %s is invalid"
msgstr "Необходимое хранилище %s недопустимо"

msgid ""
"Resource type names should be aligned with Heat resource types whenever "
"possible: http://docs.openstack.org/developer/heat/template_guide/openstack."
"html"
msgstr ""
"Имена типов ресурсов должны быть согласованы с типами ресурсов Heat, когда "
"это возможно: http://docs.openstack.org/developer/heat/template_guide/"
"openstack.html"

msgid "Response from Keystone does not contain a Glance endpoint."
msgstr "Ответ от Keystone не содержит конечной точки Glance."

msgid "Scope of image accessibility"
msgstr "Область доступности образа"

msgid "Scope of namespace accessibility."
msgstr "Область доступности пространства имен."

#, python-format
msgid "Server %(serv)s is stopped"
msgstr "Сервер %(serv)s остановлен"

#, python-format
msgid "Server worker creation failed: %(reason)s."
msgstr "Создать исполнитель сервера не удалось: %(reason)s."

msgid "Signature verification failed"
msgstr "Проверка подписи не выполнена."

msgid "Size of image file in bytes"
msgstr "Размер файла образа в байтах"

msgid ""
"Some resource types allow more than one key / value pair per instance. For "
"example, Cinder allows user and image metadata on volumes. Only the image "
"properties metadata is evaluated by Nova (scheduling or drivers). "
"This property allows a namespace target to remove the ambiguity."
msgstr ""
"Некоторые типы ресурсов допускают более одной пары ключ-значение на "
"экземпляр. Например, в Cinder разрешены метаданные пользователей и образов "
"для томов. Только метаданные свойств образа обрабатываются Nova "
"(планирование или драйверы). Это свойство позволяет целевому объекту "
"пространства имен устранить неоднозначность."

msgid "Sort direction supplied was not valid."
msgstr "Указано недопустимое направление сортировки."

msgid "Sort key supplied was not valid."
msgstr "Задан недопустимый ключ сортировки."

msgid ""
"Specifies the prefix to use for the given resource type. Any properties in "
"the namespace should be prefixed with this prefix when being applied to the "
"specified resource type. Must include prefix separator (e.g. a colon :)."
msgstr ""
"Задает префикс для данного типа ресурсов. Все свойства в пространстве имен "
"должны иметь этот префикс при применении к указанному типу ресурсов. Должен "
"использоваться разделитель префикса (например, двоеточие :)."

msgid "Status must be \"pending\", \"accepted\" or \"rejected\"."
msgstr "Состояние должно быть \"pending\", \"accepted\" или \"rejected\"."
msgid "Status not specified"
msgstr "Состояние не указано"

msgid "Status of the image"
msgstr "Состояние образа"

#, python-format
msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed"
msgstr "Изменять состояние %(cur_status)s на %(new_status)s не разрешается"

#, python-format
msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)"
msgstr "Остановка %(serv)s (pid %(pid)s) с сигналом (%(sig)s)"

#, python-format
msgid "Store for image_id not found: %s"
msgstr "Хранилище для image_id не найдено: %s"

#, python-format
msgid "Store for scheme %s not found"
msgstr "Хранилище для схемы %s не найдено"

#, python-format
msgid ""
"Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image "
"(%(actual)s) did not match. Setting image status to 'killed'."
msgstr ""
"Предоставленный %(attr)s (%(supplied)s) и %(attr)s, сгенерированный из "
"загруженного образа (%(actual)s), не совпадают. Образ переводится в "
"состояние 'killed'."

msgid "Supported values for the 'container_format' image attribute"
msgstr "Поддерживаемые значения атрибута образа 'container_format'"

msgid "Supported values for the 'disk_format' image attribute"
msgstr "Поддерживаемые значения атрибута образа 'disk_format'"

#, python-format
msgid "Suppressed respawn as %(serv)s was %(rsn)s."
msgstr "Повторное порождение подавлено, поскольку %(serv)s был %(rsn)s."

msgid "System SIGHUP signal received."
msgstr "Получен системный сигнал SIGHUP."
#, python-format
msgid "Task '%s' is required"
msgstr "Требуется задача '%s'"

msgid "Task does not exist"
msgstr "Задача не существует"

msgid "Task failed due to Internal Error"
msgstr "Задача не выполнена из-за внутренней ошибки"

msgid "Task was not configured properly"
msgstr "Задача неправильно настроена"

#, python-format
msgid "Task with the given id %(task_id)s was not found"
msgstr "Задача с указанным ИД %(task_id)s не найдена"

msgid "The \"changes-since\" filter is no longer available on v2."
msgstr "Фильтр \"changes-since\" больше недоступен в v2."

#, python-format
msgid "The CA file you specified %s does not exist"
msgstr "Указанный файл CA %s не существует"

#, python-format
msgid ""
"The Image %(image_id)s object being created by this task %(task_id)s, is no "
"longer in valid status for further processing."
msgstr ""
"Объект образа %(image_id)s, создаваемый с помощью задачи %(task_id)s, больше "
"не находится в допустимом состоянии для дальнейшей обработки."

msgid "The Store URI was malformed."
msgstr "URI хранилища неправильно сформирован."

msgid ""
"The URL to the keystone service. If \"use_user_token\" is not in effect and "
"using keystone auth, then URL of keystone can be specified."
msgstr ""
"URL службы Keystone. Если \"use_user_token\" не действует и используется "
"идентификация Keystone, можно указать URL Keystone."

msgid ""
"The administrators password. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."
msgstr ""
"Пароль администратора. Если \"use_user_token\" не действует, могут быть "
"указаны идентификационные данные администратора."

msgid ""
"The administrators user name. If \"use_user_token\" is not in effect, then "
"admin credentials can be specified."
msgstr ""
"Имя администратора. Если \"use_user_token\" не действует, могут быть указаны "
"идентификационные данные администратора."
#, python-format
msgid "The cert file you specified %s does not exist"
msgstr "Указанный файл сертификата %s не существует"

msgid "The current status of this task"
msgstr "Текущее состояние задачи"

#, python-format
msgid ""
"The device housing the image cache directory %(image_cache_dir)s does not "
"support xattr. It is likely you need to edit your fstab and add the "
"user_xattr option to the appropriate line for the device housing the cache "
"directory."
msgstr ""
"Устройство, на котором размещен каталог %(image_cache_dir)s кэша образов, не "
"поддерживает xattr. По-видимому, вам нужно отредактировать fstab, добавив "
"опцию user_xattr в соответствующую строку для устройства, на котором "
"размещен каталог кэша."

#, python-format
msgid ""
"The given uri is not valid. Please specify a valid uri from the following "
"list of supported uri %(supported)s"
msgstr ""
"Заданный uri недопустим. Укажите допустимый uri из следующего списка "
"поддерживаемых uri %(supported)s"

#, python-format
msgid "The incoming image is too large: %s"
msgstr "Чересчур большой размер входящего образа: %s"

#, python-format
msgid "The key file you specified %s does not exist"
msgstr "Указанный файл ключа %s не существует"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image locations. "
"Attempted: %(attempted)s, Maximum: %(maximum)s"
msgstr ""
"Превышено ограничение по числу разрешенных расположений образа. Указанное "
"число: %(attempted)s, максимальное число: %(maximum)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image members for this "
"image. Attempted: %(attempted)s, Maximum: %(maximum)s"
msgstr ""
"Превышено ограничение по числу разрешенных участников данного образа. "
"Указанное число: %(attempted)s, максимальное число: %(maximum)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image properties. "
"Attempted: %(attempted)s, Maximum: %(maximum)s"
msgstr ""
"Превышено ограничение по числу разрешенных свойств образа. Указанное число: "
"%(attempted)s, максимальное число: %(maximum)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image properties. "
"Attempted: %(num)s, Maximum: %(quota)s"
msgstr ""
"Превышено ограничение по числу разрешенных свойств образа. Указанное число: "
"%(num)s, максимальное число: %(quota)s"

#, python-format
msgid ""
"The limit has been exceeded on the number of allowed image tags. Attempted: "
"%(attempted)s, Maximum: %(maximum)s"
msgstr ""
"Превышено ограничение по числу разрешенных тегов образа. Указанное число: "
"%(attempted)s, максимальное число: %(maximum)s"

#, python-format
msgid "The location %(location)s already exists"
msgstr "Расположение %(location)s уже существует"

#, python-format
msgid "The location data has an invalid ID: %d"
msgstr "Данные о расположении содержат недопустимый ИД: %d"

#, python-format
msgid ""
"The metadata definition %(record_type)s with name=%(record_name)s not "
"deleted. Other records still refer to it."
msgstr ""
"Определение метаданных %(record_type)s с именем %(record_name)s не удалено. "
"Другие записи все еще ссылаются на него."

#, python-format
msgid "The metadata definition namespace=%(namespace_name)s already exists."
msgstr ""
"Пространство имен %(namespace_name)s определения метаданных уже существует."

#, python-format
msgid ""
"The metadata definition object with name=%(object_name)s was not found in "
"namespace=%(namespace_name)s."
msgstr ""
"Объект определения метаданных с именем %(object_name)s не найден в "
"пространстве имен %(namespace_name)s."

#, python-format
msgid ""
"The metadata definition property with name=%(property_name)s was not found "
"in namespace=%(namespace_name)s."
msgstr ""
"Свойство определения метаданных с именем %(property_name)s не найдено в "
"пространстве имен %(namespace_name)s."

#, python-format
msgid ""
"The metadata definition resource-type association of resource-type="
"%(resource_type_name)s to namespace=%(namespace_name)s already exists."
msgstr ""
"Связь типа ресурса определения метаданных для типа ресурса "
"%(resource_type_name)s и пространства имен %(namespace_name)s уже существует."

#, python-format
msgid ""
"The metadata definition resource-type association of resource-type="
"%(resource_type_name)s to namespace=%(namespace_name)s, was not found."
msgstr ""
"Связь типа ресурса определения метаданных для типа ресурса "
"%(resource_type_name)s и пространства имен %(namespace_name)s не найдена."

#, python-format
msgid ""
"The metadata definition resource-type with name=%(resource_type_name)s, was "
"not found."
msgstr ""
"Тип ресурса определения метаданных с именем %(resource_type_name)s не найден."

#, python-format
msgid ""
"The metadata definition tag with name=%(name)s was not found in namespace="
"%(namespace_name)s."
msgstr ""
"Тег определения метаданных с именем %(name)s не найден в пространстве имен "
"%(namespace_name)s."

msgid "The parameters required by task, JSON blob"
msgstr "Параметры, обязательные для задачи, JSON blob"

msgid "The provided image is too large."
msgstr "Предоставленный образ слишком велик."

msgid ""
"The region for the authentication service. If \"use_user_token\" is not in "
"effect and using keystone auth, then region name can be specified."
msgstr ""
"Регион службы идентификации. Если \"use_user_token\" не действует и "
"используется идентификация Keystone, можно указать имя региона."

msgid "The request returned 500 Internal Server Error."
msgstr "Запрос возвратил ошибку 500 - Внутренняя ошибка сервера."
msgid ""
"The request returned 503 Service Unavailable. This generally occurs on "
"service overload or other transient outage."
msgstr ""
"Запрос возвратил ошибку 503 - Служба недоступна. Как правило, это происходит "
"при перегруженности службы или другом временном сбое."

#, python-format
msgid ""
"The request returned a 302 Multiple Choices. This generally means that you "
"have not included a version indicator in a request URI.\n"
"\n"
"The body of response returned:\n"
"%(body)s"
msgstr ""
"Запрос возвратил ошибку 302 - Множественный выбор. Как правило, это "
"означает, что вы не включили индикатор версии в URI запроса.\n"
"\n"
"Возвращенное тело ответа:\n"
"%(body)s"

#, python-format
msgid ""
"The request returned a 413 Request Entity Too Large. This generally means "
"that rate limiting or a quota threshold was breached.\n"
"\n"
"The response body:\n"
"%(body)s"
msgstr ""
"Запрос возвратил ошибку 413 - Сущность запроса слишком велика. Как правило, "
"это означает, что нарушено ограничение на скорость или порог квоты.\n"
"\n"
"Тело ответа:\n"
"%(body)s"

#, python-format
msgid ""
"The request returned an unexpected status: %(status)s.\n"
"\n"
"The response body:\n"
"%(body)s"
msgstr ""
"Запрос возвратил непредвиденное состояние: %(status)s.\n"
"\n"
"Тело ответа:\n"
"%(body)s"

msgid ""
"The requested image has been deactivated. Image data download is forbidden."
msgstr "Запрошенный образ деактивирован. Загрузка данных образа запрещена."

msgid "The result of current task, JSON blob"
msgstr "Результат текущей задачи, JSON blob"

#, python-format
msgid ""
"The size of the data %(image_size)s will exceed the limit. %(remaining)s "
"bytes remaining."
msgstr ""
"Объем данных %(image_size)s превышает допустимый максимум. Остаток: "
"%(remaining)s байт."
#, python-format
msgid "The specified member %s could not be found"
msgstr "Указанный участник %s не найден"

#, python-format
msgid "The specified metadata object %s could not be found"
msgstr "Указанный объект метаданных %s не найден"

#, python-format
msgid "The specified metadata tag %s could not be found"
msgstr "Не удалось найти указанный тег метаданных %s"

#, python-format
msgid "The specified namespace %s could not be found"
msgstr "Указанное пространство имен %s не найдено"

#, python-format
msgid "The specified property %s could not be found"
msgstr "Указанное свойство %s не найдено"

#, python-format
msgid "The specified resource type %s could not be found "
msgstr "Указанный тип ресурса %s не найден "

msgid ""
"The status of deleted image location can only be set to 'pending_delete' or "
"'deleted'"
msgstr ""
"Состояние расположения удаленного образа может быть равно только "
"'pending_delete' или 'deleted'"

msgid ""
"The status of deleted image location can only be set to 'pending_delete' or "
"'deleted'."
msgstr ""
"Состояние расположения удаленного образа может быть равно только "
"'pending_delete' или 'deleted'."

msgid "The status of this image member"
msgstr "Состояние этого участника образа"

msgid ""
"The strategy to use for authentication. If \"use_user_token\" is not in "
"effect, then auth strategy can be specified."
msgstr ""
"Стратегия идентификации. Если \"use_user_token\" не действует, можно указать "
"стратегию идентификации."

#, python-format
msgid ""
"The target member %(member_id)s is already associated with image "
"%(image_id)s."
msgstr "Целевой участник %(member_id)s уже связан с образом %(image_id)s."

msgid ""
"The tenant name of the administrative user. If \"use_user_token\" is not in "
"effect, then admin tenant name can be specified."
msgstr ""
"Имя арендатора администратора. "
"Если \"use_user_token\" не действует, можно указать имя арендатора "
"администратора."

msgid "The type of task represented by this content"
msgstr "Тип задачи, представленной этим содержимым"

msgid "The unique namespace text."
msgstr "Уникальный текст пространства имен."

msgid "The user friendly name for the namespace. Used by UI if available."
msgstr ""
"Имя пространства имен для пользователя. Используется в пользовательском "
"интерфейсе."

#, python-format
msgid ""
"There is a problem with your %(error_key_name)s %(error_filename)s. Please "
"verify it. Error: %(ioe)s"
msgstr ""
"Ошибка в %(error_key_name)s %(error_filename)s. Проверьте. Ошибка: %(ioe)s"

#, python-format
msgid ""
"There is a problem with your %(error_key_name)s %(error_filename)s. Please "
"verify it. OpenSSL error: %(ce)s"
msgstr ""
"Ошибка в %(error_key_name)s %(error_filename)s. Проверьте. Ошибка OpenSSL: "
"%(ce)s"

#, python-format
msgid ""
"There is a problem with your key pair. Please verify that cert "
"%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s"
msgstr ""
"Неправильная пара ключей. Убедитесь, что сертификат %(cert_file)s и ключ "
"%(key_file)s соответствуют друг другу. Ошибка OpenSSL: %(ce)s"

msgid "There was an error configuring the client."
msgstr "При настройке клиента произошла ошибка."

msgid "There was an error connecting to a server"
msgstr "При подключении к серверу произошла ошибка"

msgid ""
"This operation is currently not permitted on Glance Tasks. They are auto "
"deleted after reaching the time based on their expires_at property."
msgstr ""
"Эта операция в настоящее время не разрешена для задач Glance. Они "
"автоматически удаляются после достижения срока, указанного в их свойстве "
"expires_at."

msgid "This operation is currently not permitted on Glance images details."
msgstr ""
"Эта операция в настоящее время не разрешена для сведений об образах Glance."

msgid ""
"Time in hours for which a task lives after, either succeeding or failing"
msgstr ""
"Время (ч) существования задачи после успешного выполнения или завершения с "
"ошибкой"

msgid "Too few arguments."
msgstr "Недостаточно аргументов."

msgid ""
"URI cannot contain more than one occurrence of a scheme.If you have "
"specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, "
"you need to change it to use the swift+http:// scheme, like so: swift+http://"
"user:pass@authurl.com/v1/container/obj"
msgstr ""
"URI не может содержать больше одного вхождения схемы. Если вы указали URI "
"вида swift://user:pass@http://authurl.com/v1/container/obj, то вам нужно "
"изменить его так, чтобы использовалась схема swift+http://, например: swift"
"+http://user:pass@authurl.com/v1/container/obj"

msgid "URL to access the image file kept in external store"
msgstr "URL для доступа к файлу образа, находящемуся во внешнем хранилище"

#, python-format
msgid ""
"Unable to create pid file %(pid)s. Running as non-root?\n"
"Falling back to a temp file, you can stop %(service)s service using:\n"
" %(file)s %(server)s stop --pid-file %(fb)s"
msgstr ""
"Не удается создать файл pid %(pid)s. Запущен без прав доступа root?\n"
"Выполняется возврат к временному файлу; остановить службу %(service)s можно "
"командой:\n"
" %(file)s %(server)s stop --pid-file %(fb)s"

#, python-format
msgid "Unable to filter by unknown operator '%s'."
msgstr "Не удается отфильтровать с использованием неизвестного оператора '%s'."

msgid "Unable to filter on a range with a non-numeric value."
msgstr "Отфильтровать по диапазону с нечисловым значением невозможно."

msgid "Unable to filter on a unknown operator."
msgstr "Не удается отфильтровать с использованием неизвестного оператора."
msgid "Unable to filter using the specified operator."
msgstr "Не удается отфильтровать с использованием указанного оператора."

msgid "Unable to filter using the specified range."
msgstr "Отфильтровать согласно указанному диапазону невозможно."

#, python-format
msgid "Unable to find '%s' in JSON Schema change"
msgstr "'%s' не найден в изменении схемы JSON"

#, python-format
msgid ""
"Unable to find `op` in JSON Schema change. It must be one of the following: "
"%(available)s."
msgstr ""
"Не удалось найти `op` в изменении схемы JSON. Допускается одно из следующих "
"значений: %(available)s."

msgid "Unable to increase file descriptor limit. Running as non-root?"
msgstr ""
"Не удается увеличить предельное значение для дескриптора файлов. Запущен без "
"прав доступа root?"

#, python-format
msgid ""
"Unable to load %(app_name)s from configuration file %(conf_file)s.\n"
"Got: %(e)r"
msgstr ""
"Невозможно загрузить %(app_name)s из файла конфигурации %(conf_file)s.\n"
"Ошибка: %(e)r"

#, python-format
msgid "Unable to load schema: %(reason)s"
msgstr "Не удалось загрузить схему: %(reason)s"

#, python-format
msgid "Unable to locate paste config file for %s."
msgstr "Не удается найти файл конфигурации paste для %s."

#, python-format
msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s"
msgstr ""
"Не удается загрузить данные для дубликата образа %(image_id)s: %(error)s"

msgid "Unauthorized image access"
msgstr "Нет прав на доступ к образу"

msgid "Unexpected body type. Expected list/dict."
msgstr "Непредвиденный тип тела. Ожидался список или словарь."
#, python-format
msgid "Unexpected response: %s"
msgstr "Непредвиденный ответ: %s"

#, python-format
msgid "Unknown auth strategy '%s'"
msgstr "Неизвестная стратегия идентификации: '%s'"

#, python-format
msgid "Unknown command: %s"
msgstr "Неизвестная команда: %s"

msgid "Unknown sort direction, must be 'desc' or 'asc'"
msgstr "Неизвестное направление сортировки, должно быть 'desc' или 'asc'"

msgid "Unrecognized JSON Schema draft version"
msgstr "Нераспознанная версия черновика схемы JSON"

msgid "Unrecognized changes-since value"
msgstr "Нераспознанное значение изменений за период"

#, python-format
msgid "Unsupported sort_dir. Acceptable values: %s"
msgstr "Неподдерживаемый sort_dir. Допустимые значения: %s"

#, python-format
msgid "Unsupported sort_key. Acceptable values: %s"
msgstr "Неподдерживаемый sort_key. Допустимые значения: %s"

msgid "Virtual size of image in bytes"
msgstr "Виртуальный размер образа в байтах"

#, python-format
msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up"
msgstr ""
"Система ожидала завершения pid %(pid)s (%(file)s) в течение 15 секунд; "
"освобождение"

msgid ""
"When running server in SSL mode, you must specify both a cert_file and "
"key_file option value in your configuration file"
msgstr ""
"При работе сервера в режиме SSL необходимо указать cert_file и key_file в "
"файле конфигурации"

msgid ""
"Whether to pass through the user token when making requests to the registry. "
"To prevent failures with token expiration during big files upload, it is "
"recommended to set this parameter to False.If \"use_user_token\" is not in "
"effect, then admin credentials can be specified."
msgstr ""
"Осуществлять ли сквозную передачу пользовательского маркера при создании "
"запросов в реестр. Для предотвращения сбоев, связанных с истечением срока "
"действия маркера во время передачи больших данных, рекомендуется присваивать "
"этому параметру значение False. Если \"use_user_token\" не используется, "
"можно указать идентификационные данные администратора."

#, python-format
msgid "Wrong command structure: %s"
msgstr "Неверная структура команды: %s"

msgid "You are not authenticated."
msgstr "Вы не прошли идентификацию."

msgid "You are not authorized to complete this action."
msgstr "У вас нет прав на выполнение этого действия."

#, python-format
msgid "You are not authorized to lookup image %s."
msgstr "У вас нет прав доступа для поиска образа %s."

#, python-format
msgid "You are not authorized to lookup the members of the image %s."
msgstr "У вас нет прав доступа для поиска элементов образа %s."

#, python-format
msgid "You are not permitted to create a tag in the namespace owned by '%s'"
msgstr ""
"У вас нет прав доступа для создания тега в пространстве имен, владельцем "
"которого является '%s'"

msgid "You are not permitted to create image members for the image."
msgstr "Вам не разрешено создавать участники образов для данного образа."

#, python-format
msgid "You are not permitted to create images owned by '%s'."
msgstr "Вам не разрешено создавать образы, принадлежащие '%s'."

#, python-format
msgid "You are not permitted to create namespace owned by '%s'"
msgstr "Нет прав доступа на создание пространства имен, принадлежащего %s."

#, python-format
msgid "You are not permitted to create object owned by '%s'"
msgstr "Нет прав доступа на создание объекта, принадлежащего %s."

#, python-format
msgid "You are not permitted to create property owned by '%s'"
msgstr "Нет прав доступа на создание свойства, принадлежащего %s."
#, python-format
msgid "You are not permitted to create resource_type owned by '%s'"
msgstr "Нет прав доступа на создание resource_type, принадлежащего %s."

#, python-format
msgid "You are not permitted to create this task with owner as: %s"
msgstr "Вам не разрешено создавать эту задачу с владельцем: %s"

msgid "You are not permitted to deactivate this image."
msgstr "Вам не разрешено деактивировать этот образ."

msgid "You are not permitted to delete this image."
msgstr "Вам не разрешено удалять этот образ."

msgid "You are not permitted to delete this meta_resource_type."
msgstr "Нет прав доступа на удаление этого meta_resource_type."

msgid "You are not permitted to delete this namespace."
msgstr "Нет прав доступа на удаление этого пространства имен."

msgid "You are not permitted to delete this object."
msgstr "Нет прав доступа на удаление этого объекта."

msgid "You are not permitted to delete this property."
msgstr "Нет прав доступа на удаление этого свойства."

msgid "You are not permitted to delete this tag."
msgstr "У вас нет прав доступа для удаления этого тега."

#, python-format
msgid "You are not permitted to modify '%(attr)s' on this %(resource)s."
msgstr "Вам не разрешено изменять '%(attr)s' в этом %(resource)s."

#, python-format
msgid "You are not permitted to modify '%s' on this image."
msgstr "Вам не разрешено изменять '%s' в этом образе."

msgid "You are not permitted to modify locations for this image."
msgstr "Вам не разрешено изменять расположения этого образа."

msgid "You are not permitted to modify tags on this image."
msgstr "Вам не разрешено изменять теги этого образа."

msgid "You are not permitted to modify this image."
msgstr "Вам не разрешено изменять этот образ."

msgid "You are not permitted to reactivate this image."
msgstr "Вам не разрешено повторно активировать этот образ."

msgid "You are not permitted to set status on this task."
msgstr "Вам не разрешено указывать состояние этой задачи."
msgid "You are not permitted to update this namespace."
msgstr "Нет прав доступа на обновление этого пространства имен."

msgid "You are not permitted to update this object."
msgstr "Нет прав доступа на обновление этого объекта."

msgid "You are not permitted to update this property."
msgstr "Нет прав доступа на обновление этого свойства."

msgid "You are not permitted to update this tag."
msgstr "У вас нет прав доступа для изменения этого тега."

msgid "You are not permitted to upload data for this image."
msgstr "Вам не разрешено загружать данные для этого образа."

#, python-format
msgid "You cannot add image member for %s"
msgstr "Невозможно добавить участник образа для %s"

#, python-format
msgid "You cannot delete image member for %s"
msgstr "Невозможно удалить участник образа для %s"

#, python-format
msgid "You cannot get image member for %s"
msgstr "Невозможно получить участник образа для %s"

#, python-format
msgid "You cannot update image member %s"
msgstr "Невозможно обновить участник образа %s"

msgid "You do not own this image"
msgstr "Этот образ вам не принадлежит"

msgid ""
"You have selected to use SSL in connecting, and you have supplied a cert, "
"however you have failed to supply either a key_file parameter or set the "
"GLANCE_CLIENT_KEY_FILE environ variable"
msgstr ""
"Вы выбрали применение SSL в соединении и предоставили сертификат, однако вам "
"не удалось ни предоставить параметр key_file, ни задать переменную среды "
"GLANCE_CLIENT_KEY_FILE"

msgid ""
"You have selected to use SSL in connecting, and you have supplied a key, "
"however you have failed to supply either a cert_file parameter or set the "
"GLANCE_CLIENT_CERT_FILE environ variable"
msgstr ""
"Вы выбрали применение SSL в соединении и предоставили ключ, однако вам не "
"удалось ни предоставить параметр cert_file, ни задать переменную среды "
"GLANCE_CLIENT_CERT_FILE"

msgid ""
"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-"
"fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "В __init__() получен непредвиденный именованный аргумент '%s'" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "" "не удается выполнить переход от %(current)s к %(next)s при обновлении " "(требуется from_state=%(from)s)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "настраиваемые свойства (%(props)s) конфликтуют с базовыми свойствами" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "" "Для этой платформы отсутствуют центры обработки событий poll и selects " "библиотеки eventlet" msgid "is_public must be None, True, or False" msgstr "Параметр is_public должен быть равен None, True или False" msgid "limit param must be an integer" msgstr "Параметр limit должен быть целым числом" msgid "limit param must be positive" msgstr "Параметр limit должен быть положительным" msgid "md5 hash of image contents." msgstr "Хэш md5 содержимого образа." #, python-format msgid "new_image() got unexpected keywords %s" msgstr "В new_image() получены непредвиденные ключевые слова %s" msgid "protected must be True, or False" msgstr "Параметр protected должен быть равен True или False" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "не удается запустить %(serv)s. Ошибка: %(e)s" 
#, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "Слишком большая длина x-openstack-request-id, максимальная длина: %s" glance-16.0.0/glance/locale/es/LC_MESSAGES/glance.po # Translations template for glance. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the glance project. # # Translators: # Adriana Chisco Landazábal , 2015 # Alfredo Matas , 2015 # Marian Tort , 2015 # Pablo Sanchez , 2015 # Andreas Jaeger , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: glance 15.0.0.0b3.dev29\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-06-23 20:54+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:20+0000\n" "Last-Translator: Copied by Zanata \n" "Language: es\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Spanish\n" #, python-format msgid "\t%s" msgstr "\t%s" #, python-format msgid "%(cls)s exception was raised in the last rpc call: %(val)s" msgstr "Ocurrió excepción %(cls)s en la última llamada a rpc: %(val)s" #, python-format msgid "%(m_id)s not found in the member list of the image %(i_id)s." msgstr "" "No se ha encontrado %(m_id)s en la lista de miembros de la imagen %(i_id)s." #, python-format msgid "%(serv)s (pid %(pid)s) is running..." msgstr "Se está ejecutando %(serv)s (pid %(pid)s) ..." 
#, python-format msgid "%(serv)s appears to already be running: %(pid)s" msgstr "Parece que %(serv)s ya se está ejecutando: %(pid)s" #, python-format msgid "" "%(strategy)s is registered as a module twice. %(module)s is not being used." msgstr "" "%(strategy)s está registrado como módulo dos veces. %(module)s no se " "encuentra en uso." #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Could not load the " "filesystem store" msgstr "" "%(task_id)s de %(task_type)s no se ha configurado correctamente. No se pudo " "cargar el almacén de sistema de ficheros" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "%(task_id)s de %(task_type)s no se ha configurado adecuadamente. Hace falta " "work dir: %(work_dir)s" #, python-format msgid "%(verb)sing %(serv)s" msgstr "%(verb)s ing %(serv)s" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "%(verb)s ing %(serv)s con %(conf)s" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s Por favor especifique el par host:puerto, en donde el host es una " "dirección IPv4, IPv6, nombre de host o FQDN. Si utiliza una dirección IPv6 " "enciérrela entre corchetes separados del puerto (por ejemplo \"[fe80::a:b:" "c]:9876\")." #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s no puede contener caracteres Unicode de 4 bytes." 
#, python-format msgid "%s is already stopped" msgstr "%s ya se detuvo" #, python-format msgid "%s is stopped" msgstr "%s se ha detenido" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "Se necesita la opción --os_auth_url o la variable de entorno OS_AUTH_URL " "cuando la estrategia de autenticación keystone está habilitada\n" msgid "A body is not expected with this request." msgstr "No se espera un cuerpo en esta solicitud." #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Ya existe el objeto para definición de metadatos de nombre=%(object_name)s " "en espacio de nombre=%(namespace_name)s." #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Ya existe la propiedad para definición de metadatos de nombre=" "%(property_name)s en espacio de nombre=%(namespace_name)s." #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." msgstr "" "Ya existe el tipo de recurso para definición de metadatos=" "%(resource_type_name)s" msgid "A set of URLs to access the image file kept in external store" msgstr "" "Conjunto de URLs para acceder al archivo de imagen que se mantiene en un " "almacén externo" msgid "Amount of disk space (in GB) required to boot image." msgstr "" "Cantidad de espacio de disco (en GB) necesario para la imagen de arranque." msgid "Amount of ram (in MB) required to boot image." msgstr "Cantidad de RAM (en MB) necesaria para la imagen de arranque." 
msgid "An identifier for the image" msgstr "Un identificador para la imagen" msgid "An identifier for the image member (tenantId)" msgstr "Un identificador para el miembro de la imagen (tenantId)" msgid "An identifier for the owner of this task" msgstr "Un identificador para el propietario de esta tarea" msgid "An identifier for the task" msgstr "Un identificador para la tarea" msgid "An image file url" msgstr "La URL de un archivo de imagen" msgid "An image schema url" msgstr "La URL de un esquema de imagen" msgid "An image self url" msgstr "La URL propia de una imagen" #, python-format msgid "An image with identifier %s already exists" msgstr "Ya existe una imagen con el identificador %s" msgid "An import task exception occurred" msgstr "Se ha producido una excepción en una tarea de importación" msgid "An object with the same identifier already exists." msgstr "Ya existe un objeto con el mismo identificador." msgid "An object with the same identifier is currently being operated on." msgstr "Ya se está operando un objeto con el mismo identificador." msgid "An object with the specified identifier was not found." msgstr "No se ha encontrado un objeto con el identificador especificado." msgid "An unknown exception occurred" msgstr "Se ha producido una excepción desconocida " msgid "An unknown task exception occurred" msgstr "Se ha producido una excepción de tarea desconocida" #, python-format msgid "Attempt to upload duplicate image: %s" msgstr "Se ha intentado subir imagen duplicada: %s" msgid "Attempted to update Location field for an image not in queued status." msgstr "" "Se ha intentado actualizar el campo de ubicación para una imagen que no está " "en estado de cola." #, python-format msgid "Attribute '%(property)s' is read-only." msgstr "El atributo '%(property)s' es de sólo lectura." #, python-format msgid "Attribute '%(property)s' is reserved." msgstr "El atributo '%(property)s' está reservado." #, python-format msgid "Attribute '%s' is read-only." 
msgstr "El atributo '%s' es de solo lectura." #, python-format msgid "Attribute '%s' is reserved." msgstr "El atributo '%s' está reservado." msgid "Attribute container_format can be only replaced for a queued image." msgstr "" "El atributo container_format solo se puede reemplazar para una imagen en " "cola." msgid "Attribute disk_format can be only replaced for a queued image." msgstr "" "El atributo disk_format solo se puede reemplazar para una imagen en cola." #, python-format msgid "Auth service at URL %(url)s not found." msgstr "No se ha encontrado el servicio de autorización en el URL %(url)s." #, python-format msgid "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgstr "" "Error de autenticación: es posible que el token haya caducado durante la " "carga de archivos. Borrando los datos de imagen de %s." msgid "Authorization failed." msgstr "Ha fallado la autorización." msgid "Available categories:" msgstr "Categorías disponibles:" #, python-format msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." msgstr "" "Formato de filtro de consulta \"%s\" incorrecto. Utilice la notación de " "DateTime de la ISO 8601." #, python-format msgid "Bad Command: %s" msgstr "Comando incorrecto: %s" #, python-format msgid "Bad header: %(header_name)s" msgstr "Cabecera incorrecta: %(header_name)s" #, python-format msgid "Bad value passed to filter %(filter)s got %(val)s" msgstr "Valores incorrectos pasaron al filtro %(filter)s se obtuvo %(val)s" #, python-format msgid "Badly formed S3 URI: %(uri)s" msgstr "La URI S3 se realizó de manera incorrecta: %(uri)s" #, python-format msgid "Badly formed credentials '%(creds)s' in Swift URI" msgstr "Credenciales formadas incorrectamente '%(creds)s' en URI de Swift" msgid "Badly formed credentials in Swift URI." msgstr "Credenciales con formato incorrecto en URI de Swift." msgid "Body expected in request." msgstr "Se esperaba un cuerpo en la solicitud." 
msgid "Cannot be a negative value" msgstr "No puede ser un valor negativo" msgid "Cannot be a negative value." msgstr "No puede ser un valor negativo." #, python-format msgid "Cannot convert image %(key)s '%(value)s' to an integer." msgstr "No se puede convertir imagen %(key)s '%(value)s' en un entero." msgid "Cannot remove last location in the image." msgstr "No se puede eliminar la última ubicación de la imagen." #, python-format msgid "Cannot save data for image %(image_id)s: %(error)s" msgstr "No se pueden guardar los datos para la imagen %(image_id)s: %(error)s" msgid "Cannot set locations to empty list." msgstr "No se puede definir ubicaciones como una lista vacía." msgid "Cannot upload to an unqueued image" msgstr "No se puede subir a una imagen que no está en cola" #, python-format msgid "Checksum verification failed. Aborted caching of image '%s'." msgstr "" "Se ha encontrado un error en la verificación de la suma de comprobación. Se " "ha abortado el almacenamiento en memoria caché de la imagen '%s'." msgid "Client disconnected before sending all data to backend" msgstr "El cliente se desconectó antes de enviar todos los datos al backend" msgid "Command not found" msgstr "Comando no encontrado" msgid "Configuration option was not valid" msgstr "La opción de configuración no era válida " #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "" "Solicitud incorrecta/error de conexión a servicio de autorización en el URL " "%(url)s." #, python-format msgid "Constructed URL: %s" msgstr "URL construida: %s" msgid "Container format is not specified." msgstr "No se especificó el formato de contenedor." 
msgid "Content-Type must be application/octet-stream" msgstr "El tipo de contenido debe ser application/octet-stream" #, python-format msgid "Corrupt image download for image %(image_id)s" msgstr "Descarga de imagen corrupta para imagen %(image_id)s " #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgstr "" "No se ha podido enlazar con %(host)s:%(port)s después de intentarlo durante " "30 segundos" msgid "Could not find OVF file in OVA archive file." msgstr "No se ha podido encontrar el archivo OVF en el archivo archivador OVA." #, python-format msgid "Could not find metadata object %s" msgstr "No se pudo encontrar el objeto de metadatos %s" #, python-format msgid "Could not find metadata tag %s" msgstr "No se pudo encontrar la etiqueta de metadatos %s" #, python-format msgid "Could not find namespace %s" msgstr "No se ha podido encontrar el espacio de nombre %s" #, python-format msgid "Could not find property %s" msgstr "No se pudo encontrar propiedad %s" msgid "Could not find required configuration option" msgstr "No se ha podido encontrar la opción de configuración necesaria " #, python-format msgid "Could not find task %s" msgstr "No se encontró tarea %s" #, python-format msgid "Could not update image: %s" msgstr "No se ha podido actualizar la imagen: %s" msgid "Currently, OVA packages containing multiple disk are not supported." msgstr "" "Actualmente no se da soporte a los paquetes OVA que contengan múltiples " "discos." #, python-format msgid "Data for image_id not found: %s" msgstr "No se encuentran los datos de image_id: %s" msgid "Data supplied was not valid." msgstr "Los datos proporcionados no son válidos." 
msgid "Date and time of image member creation" msgstr "Fecha y hora de creación del miembro de la imagen" msgid "Date and time of image registration" msgstr "Fecha y hora del registro de la imagen" msgid "Date and time of last modification of image member" msgstr "Fecha y hora de la última modificación del miembro de la imagen" msgid "Date and time of namespace creation" msgstr "Fecha y hora de creación del espacio de nombre" msgid "Date and time of object creation" msgstr "Fecha y hora de creación del objeto" msgid "Date and time of resource type association" msgstr "Fecha y hora de asociación del tipo de recurso" msgid "Date and time of tag creation" msgstr "Fecha y hora de creación de la etiqueta" msgid "Date and time of the last image modification" msgstr "Fecha y hora de la última modificación de la imagen" msgid "Date and time of the last namespace modification" msgstr "Fecha y hora de la última modificación de espacio de nombre" msgid "Date and time of the last object modification" msgstr "Fecha y hora de la última modificación del objeto" msgid "Date and time of the last resource type association modification" msgstr "" "Fecha y hora de la última modificación de la asociación del tipo de recurso" msgid "Date and time of the last tag modification" msgstr "Fecha y hora de la última modificación de la etiqueta" msgid "Datetime when this resource was created" msgstr "Fecha en la cual se creó este recurso" msgid "Datetime when this resource was updated" msgstr "Fecha en la cual se actualizó este recurso" msgid "Datetime when this resource would be subject to removal" msgstr "Fecha en la cual este recurso estará sujeto a eliminación" #, python-format msgid "Denying attempt to upload image because it exceeds the quota: %s" msgstr "Denegando intento de carga de imagen porque excede la cuota: %s" #, python-format msgid "Denying attempt to upload image larger than %d bytes." msgstr "Denegando intento de cargar una imagen mayor que %d bytes." 
msgid "Descriptive name for the image" msgstr "Nombre descriptivo para la imagen" msgid "Disk format is not specified." msgstr "No se especificó el formato del disco." #, python-format msgid "" "Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s" msgstr "" "El controlador %(driver_name)s no se ha podido configurar correctamente. " "Razón: %(reason)s" msgid "" "Error decoding your request. Either the URL or the request body contained " "characters that could not be decoded by Glance" msgstr "" "Error al descodificar la solicitud. La URL o el cuerpo de la solicitud " "contenían caracteres que Glance no ha podido descodificar" #, python-format msgid "Error fetching members of image %(image_id)s: %(inner_msg)s" msgstr "Error al captar los miembros de la imagen %(image_id)s: %(inner_msg)s" msgid "Error in store configuration. Adding images to store is disabled." msgstr "" "Error en la configuración del almacén. Se ha inhabilitado la adición de " "imágenes al almacén." msgid "Expected a member in the form: {\"member\": \"image_id\"}" msgstr "Se esperaba un miembro con el formato: {\"member\": \"image_id\"}" msgid "Expected a status in the form: {\"status\": \"status\"}" msgstr "Se esperaba un estado con el formato: {\"status\": \"status\"}" msgid "External source should not be empty" msgstr "El origen externo no puede estar vacío" #, python-format msgid "External sources are not supported: '%s'" msgstr "No se soportan fuentes externas: '%s'" #, python-format msgid "Failed to activate image. Got error: %s" msgstr "Error al activar imagen. Se ha obtenido error: %s" #, python-format msgid "Failed to add image metadata. Got error: %s" msgstr "Error al agregar metadatos de imagen. Se obtuvo error: %s" 
#, python-format msgid "Failed to find image %(image_id)s to delete" msgstr "No se pudo encontrar imagen %(image_id)s para eliminar" #, python-format msgid "Failed to find image to delete: %s" msgstr "No se ha encontrado la imagen para eliminar: %s" #, python-format msgid "Failed to find image to update: %s" msgstr "No se encontró imagen para actualizar: %s" #, python-format msgid "Failed to find resource type %(resourcetype)s to delete" msgstr "No se encontró tipo de recurso %(resourcetype)s para eliminar" #, python-format msgid "Failed to initialize the image cache database. Got error: %s" msgstr "" "No se ha podido inicializar la base de datos de memoria caché de imagen. Se " "ha obtenido error: %s" #, python-format msgid "Failed to read %s from config" msgstr "No se ha podido leer %s de la configuración" #, python-format msgid "Failed to reserve image. Got error: %s" msgstr "Error al reservar imagen. Se ha obtenido error: %s" #, python-format msgid "Failed to update image metadata. Got error: %s" msgstr "" "No se han podido actualizar metadatos de imagen. Se ha obtenido error: %s" #, python-format msgid "Failed to upload image %s" msgstr "No se pudo cargar la imagen %s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to HTTP error: " "%(error)s" msgstr "" "No se han podido cargar datos de imagen para imagen %(image_id)s a causa de " "un error HTTP: %(error)s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to internal error: " "%(error)s" msgstr "" "Error al cargar datos de imagen para imagen %(image_id)s a causa de un error " "interno: %(error)s" #, python-format msgid "File %(path)s has invalid backing file %(bfile)s, aborting." msgstr "" "El archivo %(path)s tiene un archivo de respaldo %(bfile)s no válido, " "terminando de forma anormal." msgid "" "File based imports are not allowed. Please use a non-local source of image " "data." 
msgstr "" "No se permiten las importaciones basadas en ficheros. Por favor use una " "fuente no-local de datos de imagen." msgid "Forbidden image access" msgstr "Acceso prohibido a la imagen" #, python-format msgid "Forbidden to delete a %s image." msgstr "Se prohíbe eliminar una imagen %s." #, python-format msgid "Forbidden to delete image: %s" msgstr "Está prohibido eliminar imagen: %s" #, python-format msgid "Forbidden to modify '%(key)s' of %(status)s image." msgstr "Prohibido modificar '%(key)s' de la imagen en estado %(status)s." #, python-format msgid "Forbidden to modify '%s' of image." msgstr "Prohibido modificar '%s' de la imagen." msgid "Forbidden to reserve image." msgstr "La reserva de imagen está prohibida." msgid "Forbidden to update deleted image." msgstr "La actualización de una imagen suprimida está prohibida." #, python-format msgid "Forbidden to update image: %s" msgstr "Se prohíbe actualizar imagen: %s" #, python-format msgid "Forbidden upload attempt: %s" msgstr "Intento de carga prohibido: %s" #, python-format msgid "Forbidding request, metadata definition namespace=%s is not visible." msgstr "" "Solicitud no permitida, el espacio de nombre para la definición de metadatos=" "%s no es visible" #, python-format msgid "Forbidding request, task %s is not visible" msgstr "Solicitud no permitida, la tarea %s no es visible" msgid "Format of the container" msgstr "Formato del contenedor" msgid "Format of the disk" msgstr "Formato del disco" #, python-format msgid "Host \"%s\" is not valid." msgstr "Host \"%s\" no es válido." #, python-format msgid "Host and port \"%s\" is not valid." msgstr "Host y puerto \"%s\" no es válido." msgid "" "Human-readable informative message only included when appropriate (usually " "on failure)" msgstr "" "Solo se incluye mensaje informativo legible para humanos cuando sea " "apropiado (usualmente en error)" msgid "If true, image will not be deletable." msgstr "Si es true, la imagen no se podrá suprimir." 
msgid "If true, namespace will not be deletable." msgstr "Si es true, no se podrá eliminar el espacio de nombre." #, python-format msgid "Image %(id)s could not be deleted because it is in use: %(exc)s" msgstr "No se pudo eliminar imagen %(id)s porque está en uso: %(exc)s" #, python-format msgid "Image %(id)s not found" msgstr "No se ha encontrado la imagen %(id)s" #, python-format msgid "" "Image %(image_id)s could not be found after upload. The image may have been " "deleted during the upload: %(error)s" msgstr "" "No se pudo encontrar imagen %(image_id)s después de subirla. Es posible que " "la imagen haya sido eliminada durante la carga: %(error)s" #, python-format msgid "Image %(image_id)s is protected and cannot be deleted." msgstr "La imagen %(image_id)s está protegida y no se puede suprimir." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload, cleaning up the chunks uploaded." msgstr "" "No se pudo encontrar la imagen %s después de subirla. Es posible que la " "imagen haya sido eliminada durante la carga, limpiando los fragmentos " "cargados." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload." msgstr "" "No se puede encontrar la imagen %s después de la carga. Es posible que la " "imagen se haya eliminado durante la carga." #, python-format msgid "Image %s is deactivated" msgstr "Se ha desactivado la imagen %s" #, python-format msgid "Image %s is not active" msgstr "La imagen %s no está activa" #, python-format msgid "Image %s not found." msgstr "No se encontró imagen %s." #, python-format msgid "Image exceeds the storage quota: %s" msgstr "La imagen excede la capacidad de almacenamiento: %s" msgid "Image id is required." 
msgstr "Se necesita id de imagen" msgid "Image is protected" msgstr "La imagen está protegida " #, python-format msgid "Image member limit exceeded for image %(id)s: %(e)s:" msgstr "" "Se ha excedido el límite de miembro de imagen para imagen %(id)s: %(e)s:" #, python-format msgid "Image name too long: %d" msgstr "Nombre de imagen demasiado largo: %d" msgid "Image operation conflicts" msgstr "Conflictos de operación de imagen" #, python-format msgid "" "Image status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "No se permite la transición de estado %(cur_status)s a %(new_status)s" #, python-format msgid "Image storage media is full: %s" msgstr "El soporte de almacenamiento de imagen está lleno: %s" #, python-format msgid "Image tag limit exceeded for image %(id)s: %(e)s:" msgstr "" "Se ha excedido el límite de etiqueta de imagen para imagen %(id)s: %(e)s:" #, python-format msgid "Image upload problem: %s" msgstr "Problema al cargar la imagen: %s" #, python-format msgid "Image with identifier %s already exists!" msgstr "¡Ya existe una imagen con el identificador %s!" #, python-format msgid "Image with identifier %s has been deleted." msgstr "Se ha eliminado imagen identificada como %s." 
#, python-format msgid "Image with identifier %s not found" msgstr "No se ha encontrado la imagen con el identificador %s" #, python-format msgid "Image with the given id %(image_id)s was not found" msgstr "No se ha podido encontrar la imagen con ID %(image_id)s" #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "" "Estrategia de autorización incorrecta, se esperaba \"%(expected)s\" pero se " "ha recibido \"%(received)s\"" #, python-format msgid "Incorrect request: %s" msgstr "Solicitud incorrecta: %s" #, python-format msgid "Input does not contain '%(key)s' field" msgstr "La entrada no contiene el campo '%(key)s'" #, python-format msgid "Insufficient permissions on image storage media: %s" msgstr "Permisos insuficientes en el soporte de almacenamiento de imagen: %s " #, python-format msgid "Invalid JSON pointer for this resource: '/%s'" msgstr "Puntero JSON no válido para este recurso: '/%s'" #, python-format msgid "Invalid checksum '%s': can't exceed 32 characters" msgstr "" "Suma de comprobación no válida '%s': no puede exceder los 32 caracteres" msgid "Invalid configuration in glance-swift conf file." msgstr "Configuración en el fichero glance-swift no válida." msgid "Invalid configuration in property protection file." msgstr "Configuración en fichero de protección de propiedad no válida." #, python-format msgid "Invalid container format '%s' for image." msgstr "Formato de contenedor '%s' no válido para imagen." #, python-format msgid "Invalid content type %(content_type)s" msgstr "Tipo de contenido no válido %(content_type)s" #, python-format msgid "Invalid disk format '%s' for image." msgstr "Formato de disco '%s' no válido para imagen." #, python-format msgid "Invalid filter value %s. The quote is not closed." msgstr "Valor de filtro no válido %s. No se han cerrado comillas." #, python-format msgid "" "Invalid filter value %s. There is no comma after closing quotation mark." 
msgstr "" "Valor de filtro no válido %s. No hay una coma después de cerrar comillas." #, python-format msgid "" "Invalid filter value %s. There is no comma before opening quotation mark." msgstr "Valor de filtro no válido %s. No hay una coma antes de abrir comillas." msgid "Invalid image id format" msgstr "Formato de id de imagen no válido" msgid "Invalid location" msgstr "Ubicación no válida" #, python-format msgid "Invalid location %s" msgstr "Ubicación %s no válida" #, python-format msgid "Invalid location: %s" msgstr "Ubicación no válida: %s" #, python-format msgid "" "Invalid location_strategy option: %(name)s. The valid strategy option(s) " "is(are): %(strategies)s" msgstr "" "Opción location_strategy no válida: %(name)s. La opción(es) válida(s) es/" "son: %(strategies)s" msgid "Invalid locations" msgstr "Ubicaciones no válidas" #, python-format msgid "Invalid locations: %s" msgstr "Ubicaciones no válidas: %s" msgid "Invalid marker format" msgstr "Formato de marcador no válido" msgid "Invalid marker. Image could not be found." msgstr "Marcador no válido. No se ha podido encontrar la imagen. " #, python-format msgid "Invalid membership association: %s" msgstr "Asociación de pertenencia no válida: %s " msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Mezcla no válida de formatos de disco y contenedor. Al definir un formato de " "disco o de contenedor como 'aki', 'ari' o 'ami', los formatos de contenedor " "y de disco deben coincidir." #, python-format msgid "" "Invalid operation: `%(op)s`. It must be one of the following: %(available)s." msgstr "" "Operación: `%(op)s` no válida. Debe ser una de las siguientes: %(available)s." msgid "Invalid position for adding a location." msgstr "Posición no válida para agregar ubicación." msgid "Invalid position for removing a location." 
msgstr "Posición no válida para eliminar ubicación." msgid "Invalid service catalog json." msgstr "JSON de catálogo de servicios no válido." #, python-format msgid "Invalid sort direction: %s" msgstr "Dirección de ordenación no válida: %s" #, python-format msgid "" "Invalid sort key: %(sort_key)s. It must be one of the following: " "%(available)s." msgstr "" "Clave de ordenación no válida: %(sort_key)s. Debe ser una de las siguientes: " "%(available)s." #, python-format msgid "Invalid status value: %s" msgstr "Valor de estado no válido: %s" #, python-format msgid "Invalid status: %s" msgstr "Estado no válido: %s" #, python-format msgid "Invalid time format for %s." msgstr "Formato de hora no válido para %s." #, python-format msgid "Invalid type value: %s" msgstr "Valor de tipo no válido: %s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition namespace " "with the same name of %s" msgstr "" "Actualización no válida. Como resultado será un espacio de nombre para la " "definición de metadatos duplicado con el mismo nombre de %s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Actualización no válida. El resultado será un objeto para la definición de " "metadatos duplicado con el mismo nombre de=%(name)s en el espacio de nombre=" "%(namespace_name)s." #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition property " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Actualización no válida. El resultado será una propiedad para la definición " "de metadatos duplicada con el mismo nombre de=%(name)s en espacio de nombre=" "%(namespace_name)s." 
#, python-format msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s" msgstr "Valor no válido '%(value)s' para parámetro '%(param)s': %(extra_msg)s" #, python-format msgid "Invalid value for option %(option)s: %(value)s" msgstr "Valor no válido para opción %(option)s: %(value)s" #, python-format msgid "Invalid visibility value: %s" msgstr "Valor de visibilidad no válido: %s" msgid "It's invalid to provide multiple image sources." msgstr "Proporcionar múltiples fuentes para la imagen no es válido." msgid "It's not allowed to add locations if locations are invisible." msgstr "No se permite añadir ubicaciones si son invisibles." msgid "It's not allowed to remove locations if locations are invisible." msgstr "No se permite eliminar ubicaciones si son invisibles." msgid "It's not allowed to update locations if locations are invisible." msgstr "No se permite actualizar las ubicaciones si son invisibles." msgid "List of strings related to the image" msgstr "Lista de series relacionadas con la imagen" msgid "Malformed JSON in request body." msgstr "JSON con formato incorrecto en el cuerpo de la solicitud." msgid "Maximal age is count of days since epoch." msgstr "La edad máxima es el recuento de días desde epoch." #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "Se ha superado el máximo de redirecciones (%(redirects)s)." #, python-format msgid "Member %(member_id)s is duplicated for image %(image_id)s" msgstr "Se ha duplicado miembro %(member_id)s para imagen %(image_id)s" msgid "Member can't be empty" msgstr "El miembro no puede estar vacío" msgid "Member to be added not specified" msgstr "No se ha especificado el miembro que añadir" msgid "Membership could not be found." msgstr "La pertenencia no se ha podido encontrar." 
#, python-format msgid "" "Metadata definition namespace %(namespace)s is protected and cannot be " "deleted." msgstr "" "El espacio de nombre %(namespace)s de definición de metadatos está " "protegido y no puede eliminarse." #, python-format msgid "Metadata definition namespace not found for id=%s" msgstr "" "No se encontró espacio de nombre para la definición de metadatos para id=%s" #, python-format msgid "" "Metadata definition object %(object_name)s is protected and cannot be " "deleted." msgstr "" "El objeto %(object_name)s de definición de metadatos está protegido y no " "puede eliminarse." #, python-format msgid "Metadata definition object not found for id=%s" msgstr "No se encontró el objeto para la definición de metadatos para id=%s" #, python-format msgid "" "Metadata definition property %(property_name)s is protected and cannot be " "deleted." msgstr "" "La propiedad %(property_name)s de definición de metadatos está protegida y " "no puede eliminarse." #, python-format msgid "Metadata definition property not found for id=%s" msgstr "No se encontró propiedad para la definición de metadatos para id=%s" #, python-format msgid "" "Metadata definition resource-type %(resource_type_name)s is a seeded-system " "type and cannot be deleted." msgstr "" "El tipo de recurso para la definición de metadatos %(resource_type_name)s es " "un tipo de sistema seeded y no puede eliminarse." #, python-format msgid "" "Metadata definition resource-type-association %(resource_type)s is protected " "and cannot be deleted." msgstr "" "La asociación de tipo de recurso %(resource_type)s de definición de " "metadatos está protegida y no puede eliminarse." #, python-format msgid "" "Metadata definition tag %(tag_name)s is protected and cannot be deleted." msgstr "" "Etiqueta de definición de metadatos %(tag_name)s está protegida y no puede " "eliminarse." 
#, python-format msgid "Metadata definition tag not found for id=%s" msgstr "No se encontró etiqueta para la definición de metadatos para id=%s" msgid "Minimal rows limit is 1." msgstr "El número mínimo de filas es 1." #, python-format msgid "Missing required credential: %(required)s" msgstr "Falta la credencial necesaria: %(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "Varias coincidencias de servicio 'image' para la región %(region)s. Esto " "generalmente significa que es necesaria una región y que no se ha " "proporcionado ninguna." msgid "No authenticated user" msgstr "Ningún usuario autenticado" #, python-format msgid "No image found with ID %s" msgstr "No se encontró imagen con ID %s" #, python-format msgid "No location found with ID %(loc)s from image %(img)s" msgstr "No se encontró ubicación con ID %(loc)s de imagen %(img)s" msgid "No permission to share that image" msgstr "No existe permiso para compartir esa imagen" #, python-format msgid "Not allowed to create members for image %s." msgstr "No se permite crear miembros para imagen %s." #, python-format msgid "Not allowed to deactivate image in status '%s'" msgstr "No está permitido desactivar la imagen en estado '%s'" #, python-format msgid "Not allowed to delete members for image %s." msgstr "No se permite eliminar miembros para imagen %s." #, python-format msgid "Not allowed to delete tags for image %s." msgstr "No se permite eliminar etiquetas para imagen %s." #, python-format msgid "Not allowed to list members for image %s." msgstr "No se permite listar miembros para imagen %s." #, python-format msgid "Not allowed to reactivate image in status '%s'" msgstr "No está permitido reactivar la imagen en estado '%s'" #, python-format msgid "Not allowed to update members for image %s." msgstr "No se permite actualizar miembros para imagen %s." 
#, python-format msgid "Not allowed to update tags for image %s." msgstr "No se permite actualizar etiquetas para imagen %s." #, python-format msgid "Not allowed to upload image data for image %(image_id)s: %(error)s" msgstr "" "No se permite cargar datos de imagen para imagen %(image_id)s: %(error)s" msgid "Number of sort dirs does not match the number of sort keys" msgstr "" "El número de dirs de ordenación no coincide con el número de claves de " "ordenación" msgid "OVA extract is limited to admin" msgstr "La extracción de OVA está limitada al administrador" msgid "Old and new sorting syntax cannot be combined" msgstr "No se puede combinar la antigua y nueva sintaxis de ordenación" #, python-format msgid "Operation \"%s\" requires a member named \"value\"." msgstr "La operación \"%s\" requiere un miembro llamado \"value\"." msgid "" "Operation objects must contain exactly one member named \"add\", \"remove\", " "or \"replace\"." msgstr "" "Los objetos de operación deben contener exactamente un miembro llamado \"add" "\", \"remove\" o \"replace\"." msgid "" "Operation objects must contain only one member named \"add\", \"remove\", or " "\"replace\"." msgstr "" "Los objetos de operación deben contener solo un miembro llamado \"add\", " "\"remove\" o \"replace\"." msgid "Operations must be JSON objects." msgstr "Las operaciones deben ser objetos JSON." #, python-format msgid "Original locations is not empty: %s" msgstr "Las ubicaciones originales no están vacías: %s" msgid "Owner can't be updated by non admin." msgstr "Un usuario no admin no puede actualizar al propietario." msgid "Owner must be specified to create a tag." msgstr "Se debe especificar el propietario para crear una etiqueta." msgid "Owner of the image" msgstr "Propietario de la imagen" msgid "Owner of the namespace." msgstr "Propietario del espacio de nombre." msgid "Param values can't contain 4 byte unicode." msgstr "Los valores de parámetro no pueden contener unicode de 4 bytes." 
#, python-format msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence." msgstr "" "El puntero `%s` contiene un \"~\" que no forma parte de una secuencia de " "escape reconocida." #, python-format msgid "Pointer `%s` contains adjacent \"/\"." msgstr "El puntero `%s` contiene \"/\" adyacentes." #, python-format msgid "Pointer `%s` does not contains valid token." msgstr "El puntero `%s` no contiene un token válido." #, python-format msgid "Pointer `%s` does not start with \"/\"." msgstr "El puntero `%s` no empieza por \"/\"." #, python-format msgid "Pointer `%s` end with \"/\"." msgstr "El puntero `%s` termina en \"/\"." #, python-format msgid "Port \"%s\" is not valid." msgstr "El puerto \"%s\" no es válido." #, python-format msgid "Process %d not running" msgstr "El proceso %d no se está ejecutando" #, python-format msgid "Properties %s must be set prior to saving data." msgstr "Las propiedades %s deben definirse antes de guardar datos." #, python-format msgid "" "Property %(property_name)s does not start with the expected resource type " "association prefix of '%(prefix)s'." msgstr "" "La propiedad %(property_name)s no comienza con el prefijo de asociación del " "tipo de recurso esperado de '%(prefix)s'." #, python-format msgid "Property %s already present." msgstr "La propiedad %s ya está presente." #, python-format msgid "Property %s does not exist." msgstr "La propiedad %s no existe." #, python-format msgid "Property %s may not be removed." msgstr "La propiedad %s no se puede eliminar." #, python-format msgid "Property %s must be set prior to saving data." msgstr "La propiedad %s debe definirse antes de guardar datos." #, python-format msgid "Property '%s' is protected" msgstr "La propiedad '%s' está protegida" msgid "Property names can't contain 4 byte unicode." msgstr "Los nombres de propiedad no pueden contener unicode de 4 bytes." #, python-format msgid "" "Provided image size must match the stored image size. 
(provided size: " "%(ps)d, stored size: %(ss)d)" msgstr "" "El tamaño de imagen proporcionado debe coincidir con el tamaño de la imagen " "almacenada. (tamaño proporcionado: %(ps)d, tamaño almacenado: %(ss)d)" #, python-format msgid "Provided object does not match schema '%(schema)s': %(reason)s" msgstr "" "El objeto proporcionado no coincide con el esquema '%(schema)s': %(reason)s" #, python-format msgid "Provided status of task is unsupported: %(status)s" msgstr "No se soporta el estado de tarea proporcionado: %(status)s" #, python-format msgid "Provided type of task is unsupported: %(type)s" msgstr "No se soporta el tipo de tarea proporcionado: %(type)s" msgid "Provides a user friendly description of the namespace." msgstr "Proporciona una descripción sencilla del espacio de nombre." msgid "Received invalid HTTP redirect." msgstr "Se ha recibido una redirección HTTP no válida." #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "Redirigiendo a %(uri)s para la autorización." #, python-format msgid "Registry service can't use %s" msgstr "El servicio de registro no puede usar %s" #, python-format msgid "Registry was not configured correctly on API server. Reason: %(reason)s" msgstr "" "El registro no se ha configurado correctamente en el servidor de API. Razón: " "%(reason)s" #, python-format msgid "Reload of %(serv)s not supported" msgstr "No se soporta la recarga de %(serv)s" #, python-format msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "Recargando %(serv)s (pid %(pid)s) con señal (%(sig)s)" #, python-format msgid "Removing stale pid file %s" msgstr "Eliminando fichero pid obsoleto %s" msgid "Request body must be a JSON array of operation objects." msgstr "" "El cuerpo de la solicitud debe ser una matriz JSON de objetos de operación." msgid "Request must be a list of commands" msgstr "La solicitud debe ser una lista de comandos." 
#, python-format msgid "Required store %s is invalid" msgstr "El almacén requerido %s no es válido" msgid "" "Resource type names should be aligned with Heat resource types whenever " "possible: http://docs.openstack.org/developer/heat/template_guide/openstack." "html" msgstr "" "Los nombres de tipo de recurso deben alinearse con los tipos de recurso Heat " "siempre que sea posible: http://docs.openstack.org/developer/heat/" "template_guide/openstack.html" msgid "Response from Keystone does not contain a Glance endpoint." msgstr "La respuesta de Keystone no contiene un punto final de Glance." msgid "Scope of image accessibility" msgstr "Ámbito de accesibilidad de la imagen" msgid "Scope of namespace accessibility." msgstr "Alcance de accesibilidad del espacio de nombre." #, python-format msgid "Server %(serv)s is stopped" msgstr "El servidor %(serv)s se ha detenido" #, python-format msgid "Server worker creation failed: %(reason)s." msgstr "" "Se ha encontrado un error en la creación del trabajador de servidor: " "%(reason)s." msgid "Signature verification failed" msgstr "Ha fallado la verificación de firma" msgid "Size of image file in bytes" msgstr "Tamaño del archivo de imagen en bytes" msgid "" "Some resource types allow more than one key / value pair per instance. For " "example, Cinder allows user and image metadata on volumes. Only the image " "properties metadata is evaluated by Nova (scheduling or drivers). This " "property allows a namespace target to remove the ambiguity." msgstr "" "Algunos tipos de recurso aceptan más de un par clave/valor por instancia. " "Por ejemplo, Cinder permite metadatos de usuario e imagen en volúmenes. " "Nova solo evalúa los metadatos de propiedades de imagen (planificación o " "controladores). Esta propiedad permite que un destino de espacio de nombre " "elimine la ambigüedad." msgid "Sort direction supplied was not valid." msgstr "La dirección de ordenación proporcionada no es válida." msgid "Sort key supplied was not valid." 
msgstr "La clave de clasificación proporcionada no es válida." msgid "" "Specifies the prefix to use for the given resource type. Any properties in " "the namespace should be prefixed with this prefix when being applied to the " "specified resource type. Must include prefix separator (e.g. a colon :)." msgstr "" "Especifica el prefijo que se usará para el tipo de recurso dado. Cualquier " "propiedad en el espacio de nombre debe tener este prefijo cuando se aplica " "al tipo de recurso especificado. Debe incluir un separador de prefijo (por " "ejemplo, dos puntos :)." msgid "Status must be \"pending\", \"accepted\" or \"rejected\"." msgstr "El estado debe ser \"pending\", \"accepted\" o \"rejected\"." msgid "Status not specified" msgstr "Estado no especificado" msgid "Status of the image" msgstr "Estado de la imagen" #, python-format msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "No se permite la transición de estado de %(cur_status)s a %(new_status)s" #, python-format msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "Deteniendo %(serv)s (pid %(pid)s) con señal (%(sig)s)" #, python-format msgid "Store for image_id not found: %s" msgstr "No se ha encontrado el almacenamiento para image_id: %s" #, python-format msgid "Store for scheme %s not found" msgstr "El almacén para el esquema %s no se ha encontrado" #, python-format msgid "" "Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image status to 'killed'." msgstr "" "El %(attr)s proporcionado (%(supplied)s) y el %(attr)s generado a partir de " "la imagen cargada (%(actual)s) no coinciden. Definiendo el estado de la " "imagen como 'killed'." 
msgid "Supported values for the 'container_format' image attribute" msgstr "Valores soportados para el atributo de imagen 'container_format'" msgid "Supported values for the 'disk_format' image attribute" msgstr "Valores soportados para el atributo de imagen 'disk_format'" #, python-format msgid "Suppressed respawn as %(serv)s was %(rsn)s." msgstr "Se ha suprimido el respawn ya que %(serv)s estaba %(rsn)s." msgid "System SIGHUP signal received." msgstr "Se ha recibido señal de sistema SIGHUP." #, python-format msgid "Task '%s' is required" msgstr "Se necesita la tarea '%s'" msgid "Task does not exist" msgstr "La tarea no existe" msgid "Task failed due to Internal Error" msgstr "La tarea ha fallado a causa de un Error Interno" msgid "Task was not configured properly" msgstr "La tarea no se configuró correctamente" #, python-format msgid "Task with the given id %(task_id)s was not found" msgstr "No se encontró la tarea con el id %(task_id)s proporcionado" msgid "The \"changes-since\" filter is no longer available on v2." msgstr "El filtro \"changes-since\" ya no está disponible en v2." #, python-format msgid "The CA file you specified %s does not exist" msgstr "El archivo CA %s que ha especificado no existe" #, python-format msgid "" "The Image %(image_id)s object being created by this task %(task_id)s, is no " "longer in valid status for further processing." msgstr "" "El objeto de imagen %(image_id)s que la tarea %(task_id)s está creando ya " "no tiene un estado válido para seguir procesándose." msgid "The Store URI was malformed." msgstr "El URI del almacén tenía un formato incorrecto." msgid "" "The URL to the keystone service. If \"use_user_token\" is not in effect and " "using keystone auth, then URL of keystone can be specified." msgstr "" "La URL al servicio de keystone. Si \"use_user_token\" no tiene efecto y usa " "keystone auth, entonces se puede especificar la URL de keystone." msgid "" "The administrators password. 
If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "La contraseña del administrador. Si \"use_user_token\" no tiene " "efecto, entonces se pueden especificar las credenciales del administrador." msgid "" "The administrators user name. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "El nombre de usuario del administrador. Si \"use_user_token\" no tiene " "efecto, entonces se pueden especificar las credenciales del administrador." #, python-format msgid "The cert file you specified %s does not exist" msgstr "El archivo de certificado que ha especificado %s no existe" msgid "The current status of this task" msgstr "El estado actual de esta tarea" #, python-format msgid "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." msgstr "" "El dispositivo que aloja el directorio de caché de imágenes " "%(image_cache_dir)s no soporta xattr. Es probable que tenga que editar fstab " "y añadir la opción user_xattr en la línea adecuada para el dispositivo que " "aloja el directorio de caché." #, python-format msgid "" "The given uri is not valid. Please specify a valid uri from the following " "list of supported uri %(supported)s" msgstr "" "El URI proporcionado no es válido. Por favor especifique un URI válido de la " "siguiente lista de URI soportados %(supported)s" #, python-format msgid "The incoming image is too large: %s" msgstr "La imagen de entrada es demasiado grande: %s" #, python-format msgid "The key file you specified %s does not exist" msgstr "El archivo de claves que ha especificado %s no existe" #, python-format msgid "" "The limit has been exceeded on the number of allowed image locations. 
" "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Se ha excedido el límite en el número permitido para ubicaciones de imagen. " "Intentos: %(attempted)s, Máximo: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Se ha excedido el límite en el número de miembros de imagen para esta " "imagen. Intentos: %(attempted)s, Máximo: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Se ha excedido el límite en el número permitido para propiedades de imagen. " "Intentos: %(attempted)s, Máximo: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(num)s, Maximum: %(quota)s" msgstr "" "Se ha excedido el límite en el número de propiedades de imagen permitidas. " "Intentos: %(num)s, Máximo: %(quota)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image tags. Attempted: " "%(attempted)s, Maximum: %(maximum)s" msgstr "" "Se ha excedido el límite en el número permitido para etiquetas de imagen. " "Intentos: %(attempted)s, Máximo: %(maximum)s" #, python-format msgid "The location %(location)s already exists" msgstr "Ya existe la ubicación %(location)s" #, python-format msgid "The location data has an invalid ID: %d" msgstr "Los datos de ubicación contienen un ID no válido: %d" #, python-format msgid "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. Other records still refer to it." msgstr "" "No se borró la definición de metadatos %(record_type)s con nombre=" "%(record_name)s. Otros registros todavía hacen referencia a ella." #, python-format msgid "The metadata definition namespace=%(namespace_name)s already exists." 
msgstr "" "Ya existe el espacio de nombre para definición de metadatos=" "%(namespace_name)s." #, python-format msgid "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." msgstr "" "No se encontró el objeto para definición de metadatos de nombre=" "%(object_name)s en espacio de nombre=%(namespace_name)s." #, python-format msgid "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." msgstr "" "No se encontró la propiedad para definición de metadatos de nombre=" "%(property_name)s en espacio de nombre=%(namespace_name)s." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." msgstr "" "Ya existe la asociación de tipo de recurso del tipo de recurso=" "%(resource_type_name)s para el espacio de nombre=%(namespace_name)s." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." msgstr "" "No se encontró la asociación de tipo de recurso del tipo de recurso para " "definición de metadatos=%(resource_type_name)s para el espacio de nombre=" "%(namespace_name)s." #, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "" "No se encontró el tipo de recurso para definición de metadatos de nombre=" "%(resource_type_name)s." #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "No se encontró la etiqueta para definición de metadatos de nombre=%(name)s " "en el espacio de nombre=%(namespace_name)s." msgid "The parameters required by task, JSON blob" msgstr "Los parámetros requeridos por la tarea, objeto JSON" msgid "The provided image is too large." 
msgstr "La imagen proporcionada es demasiado grande." msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." msgstr "" "La región para el servicio de autenticación. Si \"use_user_token\" no tiene " "efecto y utiliza keystone auth, entonces se puede especificar el nombre de " "la región." msgid "The request returned 500 Internal Server Error." msgstr "La solicitud ha devuelto el mensaje 500 Error interno del servidor." msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." msgstr "" "La solicitud ha devuelto un error 503 Servicio no disponible. Esto sucede " "generalmente por una sobrecarga del servicio o una interrupción transitoria." #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "La solicitud ha devuelto un 302 Múltiples opciones. Generalmente esto " "significa que no se ha incluido un indicador de versión en un URI de " "solicitud.\n" "\n" "El cuerpo de la respuesta devuelta:\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "La solicitud ha devuelto un error 413 Entidad de solicitud demasiado grande. " "Esto generalmente significa que se ha infringido el límite de índice o un " "umbral de cuota.\n" "\n" "El cuerpo de la respuesta:\n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "La solicitud ha devuelto un estado inesperado: %(status)s.\n" "\n" "El cuerpo de la respuesta:\n" "%(body)s" msgid "" "The requested image has been deactivated. 
Image data download is forbidden." msgstr "" "Se ha desactivado la imagen solicitada. Se prohíbe la descarga de datos de " "imagen." msgid "The result of current task, JSON blob" msgstr "El resultado de la tarea, objeto JSON actual" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." msgstr "" "El tamaño de los datos %(image_size)s excederá el límite. Quedan " "%(remaining)s bytes" #, python-format msgid "The specified member %s could not be found" msgstr "No se pudo encontrar el miembro %s especificado" #, python-format msgid "The specified metadata object %s could not be found" msgstr "No se pudo encontrar el objeto de metadatos %s especificado" #, python-format msgid "The specified metadata tag %s could not be found" msgstr "No se pudo encontrar la etiqueta de metadatos %s especificada" #, python-format msgid "The specified namespace %s could not be found" msgstr "No se ha podido encontrar el espacio de nombre %s especificado" #, python-format msgid "The specified property %s could not be found" msgstr "No se pudo encontrar la propiedad %s especificada" #, python-format msgid "The specified resource type %s could not be found " msgstr "No se pudo encontrar el tipo de recurso %s especificado" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgstr "" "El estado de la ubicación de la imagen eliminada solo se puede establecer " "como 'pending_delete' o 'deleted'." msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgstr "" "El estado de la ubicación de imagen eliminada solo se puede establecer como " "'pending_delete' o 'deleted'." msgid "The status of this image member" msgstr "El estado de este miembro de la imagen" msgid "" "The strategy to use for authentication. If \"use_user_token\" is not in " "effect, then auth strategy can be specified." 
msgstr "" "La estrategia a usar para la autenticación. SI \"use_user_token\" no tiene " "efecto, entonces, se puede especificar la estrategia auth." #, python-format msgid "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgstr "" "El miembro meta %(member_id)s ya está asociado con la imagen %(image_id)s." msgid "" "The tenant name of the administrative user. If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgstr "" "El nombre global del usuario de administrador. Si \"use_user_token\" no " "tiene efecto, entonces se puede especificar el nombre global del " "administrador." msgid "The type of task represented by this content" msgstr "El tipo de tarea representada por este contenido" msgid "The unique namespace text." msgstr "EL único texto de espacio de nombre." msgid "The user friendly name for the namespace. Used by UI if available." msgstr "" "El nombre fácil de usar para el espacio de nombre. Utilizado por UI si está " "disponible." #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "Hay un problema con %(error_key_name)s %(error_filename)s. Por favor " "verifique. Error: %(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "Hay un problema con %(error_key_name)s %(error_filename)s. Por favor " "verifique. Error OpenSSL: %(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "Hay un problema con el par de claves. Por favor verifique que el certificado " "%(cert_file)s y clave %(key_file)s deben estar juntas. Error OpenSSL %(ce)s" msgid "There was an error configuring the client." msgstr "Se ha producido un error al configurar el cliente. 
" msgid "There was an error connecting to a server" msgstr "Se ha producido un error al conectar a un servidor " msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "Actualmente no se permite esta operación en las tareas Glance. Se eliminarán " "automáticamente después de alcanzar el tiempo con base en expires_at " "property." msgid "This operation is currently not permitted on Glance images details." msgstr "" "Actualmente no se permite la operación en los detalles de imagen de Glance." msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "Tiempo de vida en horas para la tarea, así tenga éxito o fracase" msgid "Too few arguments." msgstr "Muy pocos argumentos." msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "El URI no puede contener más de una aparición de un esquema. Si ha " "especificado un URI como swift://user:pass@http://authurl.com/v1/container/" "obj, tiene que cambiarlo para que utilice el esquema swift+http://, como: " "swift+http://user:pass@authurl.com/v1/container/obj" msgid "URL to access the image file kept in external store" msgstr "" "La URL para acceder al archivo de imagen se encuentra en un almacén externo" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "No se puede crear fichero pid %(pid)s. 
¿Ejecutar como non-root?\n" "Retrocediendo a fichero temporal, puede detener el uso de servicio " "%(service)s:\n" " %(file)s %(server)s detener--fichero-pid %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "No se puede filtrar con el operador desconocido '%s'." msgid "Unable to filter on a range with a non-numeric value." msgstr "No se ha podido filtrar en un rango con un valor no numérico." msgid "Unable to filter on a unknown operator." msgstr "No se puede filtrar con un operador desconocido." msgid "Unable to filter using the specified operator." msgstr "No se ha podido filtrar utilizando el operador especificado." msgid "Unable to filter using the specified range." msgstr "No se ha podido filtrar mediante el rango especificado." #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "No se ha podido encontrar '%s' en el cambio del esquema JSON" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "" "No es posible encontrar `op` en cambio de JSON Schema. Debe ser uno de los " "siguientes: %(available)s. " msgid "Unable to increase file descriptor limit. Running as non-root?" msgstr "" "No se puede aumentar el límite de descripción de fichero ¿Desea ejecutar " "como non-root?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "No se ha podido cargar %(app_name)s desde el archivo de configuración " "%(conf_file)s.\n" "Se ha obtenido: %(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "No se ha podido cargar el esquema: %(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "No se puede ubicar el fichero de configuración de pegado para %s." 
#, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "No se puede cargar datos de imagen duplicada %(image_id)s: %(error)s" msgid "Unauthorized image access" msgstr "Acceso a imagen no autorizado" msgid "Unexpected body type. Expected list/dict." msgstr "Tipo de cuerpo inesperado. Se esperaba list/dict." #, python-format msgid "Unexpected response: %s" msgstr "Respuesta inesperada : %s " #, python-format msgid "Unknown auth strategy '%s'" msgstr "Estrategia de autenticación desconocida '%s' " #, python-format msgid "Unknown command: %s" msgstr "Comando desconocido %s" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Dirección de clasificación desconocida, debe ser 'desc' o ' asc'" msgid "Unrecognized JSON Schema draft version" msgstr "Versión de borrador de esquema JSON no reconocida" msgid "Unrecognized changes-since value" msgstr "Valor de changes-since no reconocido" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "sort_dir no soportado. Valores aceptables: %s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "sort_key no soportado. Valores aceptables: %s" msgid "Virtual size of image in bytes" msgstr "Tamaño virtual de la imagen en bytes" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "" "Se esperó 15 segundos para que pid %(pid)s (%(file)s) muriera; desistiendo" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Al ejecutar el servidor en modalidad SSL, debe especificar un valor para las " "opciones cert_file y key_file en el archivo de configuración" msgid "" "Whether to pass through the user token when making requests to the registry. 
" "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." msgstr "" "Si se debe o no pasar a través del token del usuario cuando se hacen " "solicitudes al registro. Para prevenir fallas con la expiración del token " "durante la carga de ficheros grandes, se recomienda configurar este " "parámetro en False. Si \"use_user_token\" no tiene efecto, entonces se " "pueden especificar credenciales de administración." #, python-format msgid "Wrong command structure: %s" msgstr "Estructura de comando incorrecta: %s" msgid "You are not authenticated." msgstr "No está autenticado." msgid "You are not authorized to complete this action." msgstr "No está autorizado a completar esta acción." #, python-format msgid "You are not authorized to lookup image %s." msgstr "No tiene autorización para buscar la imagen %s." #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "No tiene autorización para buscar los miembros de la imagen %s." #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "" "No tiene permiso para crear etiqueta en el espacio de nombre propiedad de " "'%s'" msgid "You are not permitted to create image members for the image." msgstr "No tiene permiso para crear miembros de imagen para la imagen." #, python-format msgid "You are not permitted to create images owned by '%s'." msgstr "No tiene permiso para crear imágenes propiedad de '%s'." 
#, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "No tiene permiso para crear espacio de nombre propiedad de '%s'" #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "No tiene permiso para crear objeto propiedad de '%s'" #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "No tiene permiso para crear propiedad perteneciente a '%s'" #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "No tiene permiso para crear resource_type propiedad de '%s'" #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "No tiene permiso para crear esta tarea como propiedad de: %s" msgid "You are not permitted to deactivate this image." msgstr "No tiene permiso para desactivar esta imagen." msgid "You are not permitted to delete this image." msgstr "No tiene permiso para suprimir esta imagen." msgid "You are not permitted to delete this meta_resource_type." msgstr "No tiene permiso para eliminar este meta_resource_type." msgid "You are not permitted to delete this namespace." msgstr "No tiene permiso para eliminar este espacio de nombre." msgid "You are not permitted to delete this object." msgstr "No tiene permiso para eliminar este objeto." msgid "You are not permitted to delete this property." msgstr "No tiene permiso para eliminar esta propiedad." msgid "You are not permitted to delete this tag." msgstr "No tiene permiso para eliminar esta etiqueta." #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "No tiene permiso para modificar '%(attr)s' en este %(resource)s." #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "No tiene permiso para modificar '%s' en esta imagen." msgid "You are not permitted to modify locations for this image." msgstr "No tiene permiso para modificar ubicaciones para esta imagen."
msgid "You are not permitted to modify tags on this image." msgstr "No tiene permiso para modificar etiquetas en esta imagen." msgid "You are not permitted to modify this image." msgstr "No tiene permiso para modificar esta imagen." msgid "You are not permitted to reactivate this image." msgstr "No tiene permiso para reactivar esta imagen." msgid "You are not permitted to set status on this task." msgstr "No tiene permiso para configurar estado en esta tarea." msgid "You are not permitted to update this namespace." msgstr "No tiene permiso para actualizar este espacio de nombre." msgid "You are not permitted to update this object." msgstr "No tiene permiso para actualizar este objeto." msgid "You are not permitted to update this property." msgstr "No tiene permiso para actualizar esta propiedad." msgid "You are not permitted to update this tag." msgstr "No tiene permiso para actualizar esta etiqueta." msgid "You are not permitted to upload data for this image." msgstr "No tiene permiso para cargar datos para esta imagen." 
#, python-format msgid "You cannot add image member for %s" msgstr "No se puede añadir el miembro de la imagen para %s" #, python-format msgid "You cannot delete image member for %s" msgstr "No se puede suprimir el miembro de la imagen para %s" #, python-format msgid "You cannot get image member for %s" msgstr "No se puede obtener el miembro de la imagen para %s" #, python-format msgid "You cannot update image member %s" msgstr "No se puede actualizar el miembro de la imagen %s" msgid "You do not own this image" msgstr "No es propietario de esta imagen" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "Ha seleccionado utilizar SSL en la conexión y ha proporcionado un " "certificado, pero no ha proporcionado un parámetro key_file ni ha definido " "la variable de entorno GLANCE_CLIENT_KEY_FILE" msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "Ha seleccionado utilizar SSL en la conexión y ha proporcionado una clave, " "pero no ha proporcionado un parámetro cert_file ni ha definido la variable " "de entorno GLANCE_CLIENT_CERT_FILE" msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__() obtuvo un argumento de palabra clave inesperado '%s'" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "" "No se puede pasar de %(current)s a %(next)s en la actualización (se desea " "from_state=%(from)s)" #, python-format msgid "custom properties
(%(props)s) conflict with base properties" msgstr "" "las propiedades personalizadas (%(props)s) están en conflicto con las " "propiedades base" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "" "Los concentradores 'poll' y 'selects' de eventlet no están disponibles en " "esta plataforma" msgid "is_public must be None, True, or False" msgstr "is_public debe ser None, True o False" msgid "limit param must be an integer" msgstr "el parámetro de límite debe ser un entero" msgid "limit param must be positive" msgstr "el parámetro de límite debe ser positivo" msgid "md5 hash of image contents." msgstr "Hash md5 del contenido de la imagen." #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image() obtuvo argumentos de palabra clave inesperados %s" msgid "protected must be True, or False" msgstr "protected debe ser True o False" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "No se puede iniciar %(serv)s. Se ha obtenido error: %(e)s" #, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "x-openstack-request-id es demasiado largo, el tamaño máximo es %s" glance-16.0.0/glance/locale/zh_TW/LC_MESSAGES/glance.po # Translations template for glance. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the glance project. # # Translators: # Andreas Jaeger , 2016.
#zanata msgid "" msgstr "" "Project-Id-Version: glance 15.0.0.0b3.dev29\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-06-23 20:54+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:23+0000\n" "Last-Translator: Copied by Zanata \n" "Language: zh-TW\n" "Plural-Forms: nplurals=1; plural=0;\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Chinese (Taiwan)\n" #, python-format msgid "\t%s" msgstr "\t%s" #, python-format msgid "%(cls)s exception was raised in the last rpc call: %(val)s" msgstr "å‰ä¸€å€‹ RPC 呼å«å·²ç™¼å‡º %(cls)s 異常狀æ³ï¼š%(val)s" #, python-format msgid "%(m_id)s not found in the member list of the image %(i_id)s." msgstr "åœ¨æ˜ åƒæª” %(i_id)s çš„æˆå“¡æ¸…單中找ä¸åˆ° %(m_id)s。" #, python-format msgid "%(serv)s (pid %(pid)s) is running..." msgstr "%(serv)s (pid %(pid)s) 正在執行中..." #, python-format msgid "%(serv)s appears to already be running: %(pid)s" msgstr "%(serv)s 似乎已在執行中:%(pid)s" #, python-format msgid "" "%(strategy)s is registered as a module twice. %(module)s is not being used." msgstr "%(strategy)s 已登錄作為模組兩次。%(module)s 未使用。" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Could not load the " "filesystem store" msgstr "" "未é©ç•¶åœ°é…ç½® %(task_id)s(類型為 %(task_type)s)。無法載入檔案系統儲存庫" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "未é©ç•¶åœ°é…ç½® %(task_id)s(類型為 %(task_type)sï¼‰ã€‚éºæ¼å·¥ä½œç›®éŒ„:%(work_dir)s" #, python-format msgid "%(verb)sing %(serv)s" msgstr "æ­£åœ¨å° %(serv)s 執行 %(verb)s 作業" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "é€éŽ %(conf)sï¼Œæ­£åœ¨å° %(serv)s 執行 %(verb)s 作業" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. 
If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s 請指定 host:port 組,其中 host 是 IPv4 ä½å€ã€IPv6 ä½å€ã€ä¸»æ©Ÿå稱或 FQDN。" "如果使用 IPv6 ä½å€ï¼Œè«‹å°‡å…¶å–®ç¨æ‹¬åœ¨æ–¹æ‹¬å¼§å…§ï¼Œä»¥èˆ‡åŸ å€åˆ¥é–‹ï¼ˆä¾‹å¦‚ \"[fe80::a:b:" "c]:9876\")。" #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s ä¸èƒ½åŒ…å« 4 ä½å…ƒçµ„ Unicode 字元。" #, python-format msgid "%s is already stopped" msgstr "å·²åœæ­¢ %s" #, python-format msgid "%s is stopped" msgstr "%s å·²åœæ­¢" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "--os_auth_url é¸é …或 OS_AUTH_URL 環境變數(啟用 Keystone 鑑別策略時需è¦ï¼‰\n" msgid "A body is not expected with this request." msgstr "æ­¤è¦æ±‚é æœŸä¸å«å…§æ–‡ã€‚" #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "å稱為 %(object_name)s çš„ meta 資料定義物件已經存在於å稱空間 " "%(namespace_name)s 中。" #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "å稱為 %(property_name)s çš„ meta 資料定義內容已經存在於å稱空間 " "%(namespace_name)s 中。" #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." msgstr "å稱為 %(resource_type_name)s çš„ meta 資料定義資æºé¡žåž‹å·²å­˜åœ¨ã€‚" msgid "A set of URLs to access the image file kept in external store" msgstr "用來存å–外部儲存庫中所ä¿ç•™æ˜ åƒæª”çš„ URL 集" msgid "Amount of disk space (in GB) required to boot image." msgstr "å•Ÿå‹•æ˜ åƒæª”所需的ç£ç¢Ÿç©ºé–“數é‡ï¼ˆä»¥ GB 為單ä½ï¼‰ã€‚" msgid "Amount of ram (in MB) required to boot image." 
msgstr "å•Ÿå‹•æ˜ åƒæª”所需的 RAM 數é‡ï¼ˆä»¥ MB 為單ä½ï¼‰ã€‚" msgid "An identifier for the image" msgstr "æ˜ åƒæª”çš„ ID" msgid "An identifier for the image member (tenantId)" msgstr "æ˜ åƒæª”æˆå“¡çš„ ID (tenantId)" msgid "An identifier for the owner of this task" msgstr "æ­¤ä½œæ¥­çš„æ“æœ‰è€… ID" msgid "An identifier for the task" msgstr "作業的 ID" msgid "An image file url" msgstr "æ˜ åƒæª” URL" msgid "An image schema url" msgstr "æ˜ åƒæª”綱目 URL" msgid "An image self url" msgstr "æ˜ åƒæª”自身 URL" #, python-format msgid "An image with identifier %s already exists" msgstr "ID 為 %s çš„æ˜ åƒæª”已存在" msgid "An import task exception occurred" msgstr "發生匯入作業異常狀æ³" msgid "An object with the same identifier already exists." msgstr "å·²å­˜åœ¨å…·æœ‰ç›¸åŒ ID 的物件。" msgid "An object with the same identifier is currently being operated on." msgstr "ç›®å‰æ­£åœ¨å°å…·æœ‰ç›¸åŒ ID 的物件執行作業。" msgid "An object with the specified identifier was not found." msgstr "找ä¸åˆ°å…·æœ‰æ‰€æŒ‡å®š ID 的物件。" msgid "An unknown exception occurred" msgstr "ç™¼ç”Ÿä¸æ˜Žç•°å¸¸ç‹€æ³" msgid "An unknown task exception occurred" msgstr "ç™¼ç”Ÿä¸æ˜Žçš„作業異常狀æ³" #, python-format msgid "Attempt to upload duplicate image: %s" msgstr "嘗試上傳é‡è¤‡çš„æ˜ åƒæª”:%s" msgid "Attempted to update Location field for an image not in queued status." msgstr "å·²å˜—è©¦æ›´æ–°è™•æ–¼æœªæŽ’å…¥ä½‡åˆ—ç‹€æ…‹ä¹‹æ˜ åƒæª”的「ä½ç½®ã€æ¬„ä½ã€‚" #, python-format msgid "Attribute '%(property)s' is read-only." msgstr "屬性 '%(property)s' 是唯讀的。" #, python-format msgid "Attribute '%(property)s' is reserved." msgstr "屬性 '%(property)s' å·²ä¿ç•™ã€‚" #, python-format msgid "Attribute '%s' is read-only." msgstr "屬性 '%s' 是唯讀的。" #, python-format msgid "Attribute '%s' is reserved." msgstr "屬性 '%s' å·²ä¿ç•™ã€‚" msgid "Attribute container_format can be only replaced for a queued image." msgstr "åƒ…å·²æŽ’å…¥ä½‡åˆ—çš„æ˜ åƒæª”å¯ä»¥å–代屬性 container_format。" msgid "Attribute disk_format can be only replaced for a queued image." 
msgstr "åƒ…å·²æŽ’å…¥ä½‡åˆ—çš„æ˜ åƒæª”å¯ä»¥å–代屬性 disk_format。" #, python-format msgid "Auth service at URL %(url)s not found." msgstr "在 URL %(url)s 處找ä¸åˆ°é‘‘別æœå‹™ã€‚" #, python-format msgid "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgstr "鑑別錯誤 - 在檔案上傳期間,記號å¯èƒ½å·²éŽæœŸã€‚正在刪除 %s çš„æ˜ åƒæª”資料。" msgid "Authorization failed." msgstr "授權失敗。" msgid "Available categories:" msgstr "å¯ç”¨çš„種類:" #, python-format msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." msgstr "\"%s\" æŸ¥è©¢éŽæ¿¾å™¨æ ¼å¼éŒ¯èª¤ã€‚請使用 ISO 8601 日期時間表示法。" #, python-format msgid "Bad Command: %s" msgstr "錯誤的指令:%s" #, python-format msgid "Bad header: %(header_name)s" msgstr "錯誤的標頭:%(header_name)s" #, python-format msgid "Bad value passed to filter %(filter)s got %(val)s" msgstr "傳éžçµ¦éŽæ¿¾å™¨ %(filter)s çš„å€¼ä¸æ­£ç¢ºï¼Œå–å¾— %(val)s" #, python-format msgid "Badly formed S3 URI: %(uri)s" msgstr "S3 URI 的格å¼ä¸æ­£ç¢ºï¼š%(uri)s" #, python-format msgid "Badly formed credentials '%(creds)s' in Swift URI" msgstr "Swift URI 中èªè­‰ '%(creds)s' 的格å¼ä¸æ­£ç¢º" msgid "Badly formed credentials in Swift URI." msgstr "Swift URI 中èªè­‰çš„æ ¼å¼ä¸æ­£ç¢ºã€‚" msgid "Body expected in request." msgstr "è¦æ±‚中需è¦å…§æ–‡ã€‚" msgid "Cannot be a negative value" msgstr "ä¸èƒ½æ˜¯è² æ•¸å€¼" msgid "Cannot be a negative value." msgstr "ä¸èƒ½æ˜¯è² æ•¸å€¼ã€‚" #, python-format msgid "Cannot convert image %(key)s '%(value)s' to an integer." msgstr "ç„¡æ³•å°‡æ˜ åƒæª” %(key)s '%(value)s' 轉æ›ç‚ºæ•´æ•¸ã€‚" msgid "Cannot remove last location in the image." msgstr "ç„¡æ³•ç§»é™¤æ˜ åƒæª”中的最後ä½ç½®ã€‚" #, python-format msgid "Cannot save data for image %(image_id)s: %(error)s" msgstr "ç„¡æ³•å„²å­˜æ˜ åƒæª” %(image_id)s 的資料:%(error)s" msgid "Cannot set locations to empty list." msgstr "無法將ä½ç½®è¨­ç‚ºç©ºç™½æ¸…單。" msgid "Cannot upload to an unqueued image" msgstr "ç„¡æ³•ä¸Šå‚³è‡³æœªæŽ’å…¥ä½‡åˆ—çš„æ˜ åƒæª”" #, python-format msgid "Checksum verification failed. 
Aborted caching of image '%s'." msgstr "ç¸½å’Œæª¢æŸ¥é©—è­‰å¤±æ•—ã€‚å·²ä¸­æ­¢å¿«å–æ˜ åƒæª” '%s'。" msgid "Client disconnected before sending all data to backend" msgstr "用戶端已在將所有資料傳é€è‡³å¾Œç«¯ä¹‹å‰æ–·ç·š" msgid "Command not found" msgstr "找ä¸åˆ°æŒ‡ä»¤" msgid "Configuration option was not valid" msgstr "é…ç½®é¸é …無效" #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "將錯誤/ä¸ç•¶çš„è¦æ±‚連接至 URL %(url)s 處的鑑別æœå‹™ã€‚" #, python-format msgid "Constructed URL: %s" msgstr "已建構 URL:%s" msgid "Container format is not specified." msgstr "未指定儲存器格å¼ã€‚" msgid "Content-Type must be application/octet-stream" msgstr "內容類型必須是 application/octet-stream" #, python-format msgid "Corrupt image download for image %(image_id)s" msgstr "æ˜ åƒæª” %(image_id)s çš„æ˜ åƒæª”下載已毀æ" #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgstr "嘗試 30 ç§’é˜å¾Œä»ç„¡æ³•連çµè‡³ %(host)s:%(port)s" msgid "Could not find OVF file in OVA archive file." msgstr "在 OVA ä¿å­˜æª”中找ä¸åˆ° OVF 檔。" #, python-format msgid "Could not find metadata object %s" msgstr "找ä¸åˆ° meta 資料物件 %s" #, python-format msgid "Could not find metadata tag %s" msgstr "找ä¸åˆ° meta 資料標籤 %s" #, python-format msgid "Could not find namespace %s" msgstr "找ä¸åˆ°å稱空間 %s" #, python-format msgid "Could not find property %s" msgstr "找ä¸åˆ°å…§å®¹ %s" msgid "Could not find required configuration option" msgstr "找ä¸åˆ°å¿…è¦é…ç½®é¸é …" #, python-format msgid "Could not find task %s" msgstr "找ä¸åˆ°ä½œæ¥­ %s" #, python-format msgid "Could not update image: %s" msgstr "ç„¡æ³•æ›´æ–°æ˜ åƒæª”:%s" msgid "Currently, OVA packages containing multiple disk are not supported." msgstr "ç›®å‰ï¼Œä¸æ”¯æ´åŒ…å«å¤šå€‹ç£ç¢Ÿçš„ OVA 套件。" #, python-format msgid "Data for image_id not found: %s" msgstr "找ä¸åˆ° image_id 的資料:%s" msgid "Data supplied was not valid." 
msgstr "æä¾›çš„資料無效。" msgid "Date and time of image member creation" msgstr "æ˜ åƒæª”æˆå“¡çš„建立日期和時間" msgid "Date and time of image registration" msgstr "æ˜ åƒæª”登錄的日期和時間" msgid "Date and time of last modification of image member" msgstr "æ˜ åƒæª”æˆå“¡çš„剿¬¡ä¿®æ”¹æ—¥æœŸå’Œæ™‚é–“" msgid "Date and time of namespace creation" msgstr "å稱空間的建立日期和時間" msgid "Date and time of object creation" msgstr "物件的建立日期和時間" msgid "Date and time of resource type association" msgstr "資æºé¡žåž‹é—œè¯çš„æ—¥æœŸå’Œæ™‚é–“" msgid "Date and time of tag creation" msgstr "標記的建立日期和時間" msgid "Date and time of the last image modification" msgstr "æ˜ åƒæª”çš„å‰æ¬¡ä¿®æ”¹æ—¥æœŸå’Œæ™‚é–“" msgid "Date and time of the last namespace modification" msgstr "åç¨±ç©ºé–“çš„å‰æ¬¡ä¿®æ”¹æ—¥æœŸå’Œæ™‚é–“" msgid "Date and time of the last object modification" msgstr "ç‰©ä»¶çš„å‰æ¬¡ä¿®æ”¹æ—¥æœŸå’Œæ™‚é–“" msgid "Date and time of the last resource type association modification" msgstr "資æºé¡žåž‹é—œè¯çš„剿¬¡ä¿®æ”¹æ—¥æœŸå’Œæ™‚é–“" msgid "Date and time of the last tag modification" msgstr "æ¨™è¨˜çš„å‰æ¬¡ä¿®æ”¹æ—¥æœŸå’Œæ™‚é–“" msgid "Datetime when this resource was created" msgstr "此資æºçš„建立日期時間" msgid "Datetime when this resource was updated" msgstr "此資æºçš„æ›´æ–°æ—¥æœŸæ™‚é–“" msgid "Datetime when this resource would be subject to removal" msgstr "å¯èƒ½æœƒç§»é™¤æ­¤è³‡æºçš„æ—¥æœŸæ™‚é–“" #, python-format msgid "Denying attempt to upload image because it exceeds the quota: %s" msgstr "æ­£åœ¨æ‹’çµ•å˜—è©¦ä¸Šå‚³æ˜ åƒæª”,因為它已超出é…é¡ï¼š%s" #, python-format msgid "Denying attempt to upload image larger than %d bytes." msgstr "正在拒絕嘗試上傳大於 %d 個ä½å…ƒçµ„çš„æ˜ åƒæª”。" msgid "Descriptive name for the image" msgstr "æ˜ åƒæª”的敘述性å稱" msgid "Disk format is not specified." msgstr "未指定ç£ç¢Ÿæ ¼å¼ã€‚" #, python-format msgid "" "Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s" msgstr "無法正確地é…ç½®é©…å‹•ç¨‹å¼ %(driver_name)s。原因:%(reason)s" msgid "" "Error decoding your request. 
Either the URL or the request body contained " "characters that could not be decoded by Glance" msgstr "" "å°‡æ‚¨çš„è¦æ±‚進行解碼時發生錯誤。URL æˆ–è¦æ±‚內文包å«ç„¡æ³•ç”± Glance 進行解碼的字元" #, python-format msgid "Error fetching members of image %(image_id)s: %(inner_msg)s" msgstr "æå–æ˜ åƒæª” %(image_id)s çš„æˆå“¡æ™‚發生錯誤:%(inner_msg)s" msgid "Error in store configuration. Adding images to store is disabled." msgstr "儲存庫é…置發生錯誤。已åœç”¨æ–°å¢žæ˜ åƒæª”至儲存庫。" msgid "Expected a member in the form: {\"member\": \"image_id\"}" msgstr "é æœŸæˆå“¡çš„æ ¼å¼ç‚ºï¼š{\"member\": \"image_id\"}" msgid "Expected a status in the form: {\"status\": \"status\"}" msgstr "é æœŸç‹€æ…‹çš„æ ¼å¼ç‚ºï¼š{\"status\": \"status\"}" msgid "External source should not be empty" msgstr "外部來æºä¸æ‡‰æ˜¯ç©ºçš„" #, python-format msgid "External sources are not supported: '%s'" msgstr "䏿”¯æ´å¤–部來æºï¼š'%s'" #, python-format msgid "Failed to activate image. Got error: %s" msgstr "ç„¡æ³•å•Ÿå‹•æ˜ åƒæª”。發生錯誤:%s" #, python-format msgid "Failed to add image metadata. Got error: %s" msgstr "ç„¡æ³•æ–°å¢žæ˜ åƒæª” meta 資料。發生錯誤:%s" #, python-format msgid "Failed to find image %(image_id)s to delete" msgstr "找ä¸åˆ°è¦åˆªé™¤çš„æ˜ åƒæª” %(image_id)s" #, python-format msgid "Failed to find image to delete: %s" msgstr "找ä¸åˆ°è¦åˆªé™¤çš„æ˜ åƒæª”:%s" #, python-format msgid "Failed to find image to update: %s" msgstr "找ä¸åˆ°è¦æ›´æ–°çš„æ˜ åƒæª”:%s" #, python-format msgid "Failed to find resource type %(resourcetype)s to delete" msgstr "找ä¸åˆ°è¦åˆªé™¤çš„資æºé¡žåž‹ %(resourcetype)s" #, python-format msgid "Failed to initialize the image cache database. Got error: %s" msgstr "ç„¡æ³•èµ·å§‹è¨­å®šæ˜ åƒæª”å¿«å–資料庫。發生錯誤:%s" #, python-format msgid "Failed to read %s from config" msgstr "無法從é…ç½®ä¸­è®€å– %s" #, python-format msgid "Failed to reserve image. Got error: %s" msgstr "無法ä¿ç•™æ˜ åƒæª”。發生錯誤:%s" #, python-format msgid "Failed to update image metadata. 
Got error: %s" msgstr "ç„¡æ³•æ›´æ–°æ˜ åƒæª” meta 資料。發生錯誤:%s" #, python-format msgid "Failed to upload image %s" msgstr "ç„¡æ³•ä¸Šå‚³æ˜ åƒæª” %s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to HTTP error: " "%(error)s" msgstr "由於 HTTP éŒ¯èª¤è€Œç„¡æ³•ä¸Šå‚³æ˜ åƒæª” %(image_id)s çš„æ˜ åƒæª”資料:%(error)s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to internal error: " "%(error)s" msgstr "ç”±æ–¼å…§éƒ¨éŒ¯èª¤è€Œç„¡æ³•ä¸Šå‚³æ˜ åƒæª” %(image_id)s çš„æ˜ åƒæª”資料:%(error)s" #, python-format msgid "File %(path)s has invalid backing file %(bfile)s, aborting." msgstr "檔案 %(path)s å…·æœ‰ç„¡æ•ˆçš„æ”¯æ´æª”案 %(bfile)s,正在中斷。" msgid "" "File based imports are not allowed. Please use a non-local source of image " "data." msgstr "ä¸å®¹è¨±æª”æ¡ˆåž‹åŒ¯å…¥ã€‚è«‹ä½¿ç”¨æ˜ åƒæª”è³‡æ–™çš„éžæœ¬ç«¯ä¾†æºã€‚" msgid "Forbidden image access" msgstr "å·²ç¦æ­¢æ˜ åƒæª”å­˜å–" #, python-format msgid "Forbidden to delete a %s image." msgstr "å·²ç¦æ­¢åˆªé™¤ %s æ˜ åƒæª”。" #, python-format msgid "Forbidden to delete image: %s" msgstr "å·²ç¦æ­¢åˆªé™¤æ˜ åƒæª”:%s" #, python-format msgid "Forbidden to modify '%(key)s' of %(status)s image." msgstr "å·²ç¦æ­¢ä¿®æ”¹ %(status)s æ˜ åƒæª”çš„ '%(key)s'。" #, python-format msgid "Forbidden to modify '%s' of image." msgstr "ç¦æ­¢ä¿®æ”¹æ˜ åƒæª”çš„ '%s'。" msgid "Forbidden to reserve image." msgstr "å·²ç¦æ­¢ä¿ç•™æ˜ åƒæª”。" msgid "Forbidden to update deleted image." msgstr "å·²ç¦æ­¢æ›´æ–°æ‰€åˆªé™¤çš„æ˜ åƒæª”。" #, python-format msgid "Forbidden to update image: %s" msgstr "å·²ç¦æ­¢æ›´æ–°æ˜ åƒæª”:%s" #, python-format msgid "Forbidden upload attempt: %s" msgstr "å·²ç¦æ­¢çš„上傳嘗試:%s" #, python-format msgid "Forbidding request, metadata definition namespace=%s is not visible." 
msgstr "æ­£åœ¨ç¦æ­¢è¦æ±‚,meta 資料定義å稱空間 %s ä¸å¯è¦‹ã€‚" #, python-format msgid "Forbidding request, task %s is not visible" msgstr "æ­£åœ¨ç¦æ­¢è¦æ±‚,作業 %s ä¸å¯è¦‹" msgid "Format of the container" msgstr "儲存器的格å¼" msgid "Format of the disk" msgstr "ç£ç¢Ÿçš„æ ¼å¼" #, python-format msgid "Host \"%s\" is not valid." msgstr "主機 \"%s\" 無效。" #, python-format msgid "Host and port \"%s\" is not valid." msgstr "主機和埠 \"%s\" 無效。" msgid "" "Human-readable informative message only included when appropriate (usually " "on failure)" msgstr "é©ç•¶çš„æ™‚候(通常是失敗時)僅併入人類å¯è®€çš„åƒè€ƒè¨Šæ¯" msgid "If true, image will not be deletable." msgstr "如果為 trueï¼Œå‰‡æ˜ åƒæª”ä¸å¯åˆªé™¤ã€‚" msgid "If true, namespace will not be deletable." msgstr "如果為 True,則å稱空間將ä¸å¯åˆªé™¤ã€‚" #, python-format msgid "Image %(id)s could not be deleted because it is in use: %(exc)s" msgstr "ç„¡æ³•åˆªé™¤æ˜ åƒæª” %(id)s,因為它在使用中:%(exc)s" #, python-format msgid "Image %(id)s not found" msgstr "找ä¸åˆ°æ˜ åƒæª” %(id)s" #, python-format msgid "" "Image %(image_id)s could not be found after upload. The image may have been " "deleted during the upload: %(error)s" msgstr "" "上傳之後找ä¸åˆ°æ˜ åƒæª” %(image_id)s。å¯èƒ½å·²åœ¨ä¸Šå‚³æœŸé–“åˆªé™¤è©²æ˜ åƒæª”:%(error)s" #, python-format msgid "Image %(image_id)s is protected and cannot be deleted." msgstr "æ˜ åƒæª” %(image_id)s å·²å—ä¿è­·ï¼Œç„¡æ³•刪除。" #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload, cleaning up the chunks uploaded." msgstr "" "上傳之後找ä¸åˆ°æ˜ åƒæª” %s。å¯èƒ½å·²åœ¨ä¸Šå‚³æœŸé–“åˆªé™¤è©²æ˜ åƒæª”,正在清除已上傳的å€å¡Šã€‚" #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload." msgstr "在上傳之後,找ä¸åˆ°æ˜ åƒæª” %s。在上傳期間,å¯èƒ½å·²åˆªé™¤è©²æ˜ åƒæª”。" #, python-format msgid "Image %s is deactivated" msgstr "已喿¶ˆå•Ÿå‹•æ˜ åƒæª” %s" #, python-format msgid "Image %s is not active" msgstr "æ˜ åƒæª” %s ä¸åœ¨ä½œç”¨ä¸­" #, python-format msgid "Image %s not found." 
msgstr "找ä¸åˆ°æ˜ åƒæª” %s。" #, python-format msgid "Image exceeds the storage quota: %s" msgstr "æ˜ åƒæª”超出儲存體é…é¡ï¼š%s" msgid "Image id is required." msgstr "æ˜ åƒæª” ID 是必è¦çš„。" msgid "Image is protected" msgstr "æ˜ åƒæª”是å—ä¿è­·çš„" #, python-format msgid "Image member limit exceeded for image %(id)s: %(e)s:" msgstr "å·²è¶…å‡ºæ˜ åƒæª” %(id)s çš„æ˜ åƒæª”æˆå“¡é™åˆ¶ï¼š%(e)s:" #, python-format msgid "Image name too long: %d" msgstr "æ˜ åƒæª”å稱太長:%d" msgid "Image operation conflicts" msgstr "æ˜ åƒæª”作業è¡çª" #, python-format msgid "" "Image status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "ä¸å®¹è¨±æ˜ åƒæª”狀態從 %(cur_status)s 轉移至 %(new_status)s" #, python-format msgid "Image storage media is full: %s" msgstr "æ˜ åƒæª”儲存媒體已滿:%s" #, python-format msgid "Image tag limit exceeded for image %(id)s: %(e)s:" msgstr "å·²è¶…å‡ºæ˜ åƒæª” %(id)s çš„æ˜ åƒæª”標籤é™åˆ¶ï¼š%(e)s:" #, python-format msgid "Image upload problem: %s" msgstr "æ˜ åƒæª”上傳å•題:%s" #, python-format msgid "Image with identifier %s already exists!" msgstr "ID 為 %s çš„æ˜ åƒæª”已存在ï¼" #, python-format msgid "Image with identifier %s has been deleted." 
msgstr "已刪除 ID 為 %s çš„æ˜ åƒæª”。" #, python-format msgid "Image with identifier %s not found" msgstr "找ä¸åˆ° ID 為 %s çš„æ˜ åƒæª”" #, python-format msgid "Image with the given id %(image_id)s was not found" msgstr "找ä¸åˆ°å…·æœ‰çµ¦å®š ID %(image_id)s çš„æ˜ åƒæª”" #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "䏿­£ç¢ºçš„é‘‘åˆ¥ç­–ç•¥ï¼Œéœ€è¦ \"%(expected)s\",但收到 \"%(received)s\"" #, python-format msgid "Incorrect request: %s" msgstr "䏿­£ç¢ºçš„è¦æ±‚:%s" #, python-format msgid "Input does not contain '%(key)s' field" msgstr "輸入ä¸åŒ…å« '%(key)s' 欄ä½" #, python-format msgid "Insufficient permissions on image storage media: %s" msgstr "å°æ˜ åƒæª”å„²å­˜åª’é«”çš„è¨±å¯æ¬Šä¸è¶³ï¼š%s" #, python-format msgid "Invalid JSON pointer for this resource: '/%s'" msgstr "此資æºçš„ JSON 指標無效:'/%s'" #, python-format msgid "Invalid checksum '%s': can't exceed 32 characters" msgstr "無效的總和檢查 '%s':ä¸èƒ½è¶…éŽ 32 個字元" msgid "Invalid configuration in glance-swift conf file." msgstr "glance-swift é…置檔中的é…置無效。" msgid "Invalid configuration in property protection file." msgstr "內容ä¿è­·æª”案中的é…置無效。" #, python-format msgid "Invalid container format '%s' for image." msgstr "æ˜ åƒæª”çš„å„²å­˜å™¨æ ¼å¼ '%s' 無效。" #, python-format msgid "Invalid content type %(content_type)s" msgstr "無效的內容類型 %(content_type)s" #, python-format msgid "Invalid disk format '%s' for image." msgstr "æ˜ åƒæª”çš„ç£ç¢Ÿæ ¼å¼ '%s' 無效。" #, python-format msgid "Invalid filter value %s. The quote is not closed." msgstr "ç„¡æ•ˆçš„éŽæ¿¾å™¨å€¼ %sã€‚éºæ¼å³å¼•號。" #, python-format msgid "" "Invalid filter value %s. There is no comma after closing quotation mark." msgstr "ç„¡æ•ˆçš„éŽæ¿¾å™¨å€¼ %s。å³å¼•è™Ÿå¾Œé¢æ²’有逗點。" #, python-format msgid "" "Invalid filter value %s. There is no comma before opening quotation mark." 
msgstr "ç„¡æ•ˆçš„éŽæ¿¾å™¨å€¼ %s。左引號å‰é¢æ²’有逗點。" msgid "Invalid image id format" msgstr "ç„¡æ•ˆçš„æ˜ åƒæª” ID æ ¼å¼" msgid "Invalid location" msgstr "無效的ä½ç½®" #, python-format msgid "Invalid location %s" msgstr "無效的ä½ç½® %s" #, python-format msgid "Invalid location: %s" msgstr "無效的ä½ç½®ï¼š%s" #, python-format msgid "" "Invalid location_strategy option: %(name)s. The valid strategy option(s) " "is(are): %(strategies)s" msgstr "" "無效的 location_strategy é¸é …:%(name)s。有效的策略é¸é …為:%(strategies)s" msgid "Invalid locations" msgstr "無效的ä½ç½®" #, python-format msgid "Invalid locations: %s" msgstr "無效的ä½ç½®ï¼š%s" msgid "Invalid marker format" msgstr "無效的標記格å¼" msgid "Invalid marker. Image could not be found." msgstr "無效的標記。找ä¸åˆ°æ˜ åƒæª”。" #, python-format msgid "Invalid membership association: %s" msgstr "無效的æˆå“¡è³‡æ ¼é—œè¯ï¼š%s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "ç£ç¢Ÿæ ¼å¼åŠå„²å­˜å™¨æ ¼å¼çš„æ··åˆç„¡æ•ˆã€‚å°‡ç£ç¢Ÿæ ¼å¼æˆ–儲存器格å¼è¨­ç‚º 'aki'ã€'ari' 或 " "'ami' 其中之一時,儲存器格å¼åŠç£ç¢Ÿæ ¼å¼å¿…須相符。" #, python-format msgid "" "Invalid operation: `%(op)s`. It must be one of the following: %(available)s." msgstr "無效作業:`%(op)s`。它必須是下列其中一項:%(available)s。" msgid "Invalid position for adding a location." msgstr "用於新增ä½ç½®çš„ä½ç½®ç„¡æ•ˆã€‚" msgid "Invalid position for removing a location." msgstr "用於移除ä½ç½®çš„ä½ç½®ç„¡æ•ˆã€‚" msgid "Invalid service catalog json." msgstr "無效的æœå‹™åž‹éŒ„ JSON。" #, python-format msgid "Invalid sort direction: %s" msgstr "ç„¡æ•ˆçš„æŽ’åºæ–¹å‘:%s" #, python-format msgid "" "Invalid sort key: %(sort_key)s. It must be one of the following: " "%(available)s." msgstr "排åºéµ %(sort_key)s 無效。它必須為下列其中一項:%(available)s。" #, python-format msgid "Invalid status value: %s" msgstr "無效的狀態值:%s" #, python-format msgid "Invalid status: %s" msgstr "無效的狀態:%s" #, python-format msgid "Invalid time format for %s." 
msgstr "%s 的時間格å¼ç„¡æ•ˆã€‚" #, python-format msgid "Invalid type value: %s" msgstr "無效的類型值:%s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition namespace " "with the same name of %s" msgstr "更新無效。它會導致產生具有相åŒå稱 %s çš„é‡è¤‡ meta 資料定義å稱空間。" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "無效的更新。此更新將導致下列å稱空間中存在具有相åŒå稱%(name)s çš„é‡è¤‡ meta 資" "料定義物件:%(namespace_name)s。" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "無效的更新。此更新將導致下列å稱空間中存在具有相åŒå稱%(name)s çš„é‡è¤‡ meta 資" "料定義物件:%(namespace_name)s。" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition property " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "更新無效。它會導致在下列å稱空間中產生具有相åŒå稱 %(name)s çš„é‡è¤‡ meta 資料" "定義內容:%(namespace_name)s。" #, python-format msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s" msgstr "åƒæ•¸ '%(param)s' 的值 '%(value)s' 無效:%(extra_msg)s" #, python-format msgid "Invalid value for option %(option)s: %(value)s" msgstr "é¸é … %(option)s 的值 %(value)s 無效" #, python-format msgid "Invalid visibility value: %s" msgstr "無效的å¯è¦‹æ€§å€¼ï¼š%s" msgid "It's invalid to provide multiple image sources." msgstr "æä¾›å¤šå€‹æ˜ åƒæª”ä¾†æºæ˜¯ç„¡æ•ˆçš„åšæ³•。" msgid "It's not allowed to add locations if locations are invisible." msgstr "如果ä½ç½®æ˜¯éš±è—的,則ä¸å®¹è¨±æ–°å¢žä½ç½®ã€‚" msgid "It's not allowed to remove locations if locations are invisible." msgstr "如果ä½ç½®æ˜¯éš±è—的,則ä¸å®¹è¨±ç§»é™¤ä½ç½®ã€‚" msgid "It's not allowed to update locations if locations are invisible." msgstr "如果ä½ç½®æ˜¯éš±è—的,則ä¸å®¹è¨±æ›´æ–°ä½ç½®ã€‚" msgid "List of strings related to the image" msgstr "èˆ‡æ˜ åƒæª”相關的字串清單" msgid "Malformed JSON in request body." 
msgstr "è¦æ±‚內文中 JSON 的格å¼ä¸æ­£ç¢ºã€‚" msgid "Maximal age is count of days since epoch." msgstr "ç¶“æ­·æ™‚é–“ä¸Šé™æ˜¯è‡ªæ–°ç´€å…ƒä»¥ä¾†çš„天數。" #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "å·²è¶…å‡ºé‡æ–°å°Žå‘數目上é™ï¼ˆ%(redirects)s 個)。" #, python-format msgid "Member %(member_id)s is duplicated for image %(image_id)s" msgstr "é‡å°æ˜ åƒæª” %(image_id)s,æˆå“¡ %(member_id)s é‡è¤‡" msgid "Member can't be empty" msgstr "æˆå“¡ä¸èƒ½æ˜¯ç©ºçš„" msgid "Member to be added not specified" msgstr "æœªæŒ‡å®šè¦æ–°å¢žçš„æˆå“¡" msgid "Membership could not be found." msgstr "找ä¸åˆ°æˆå“¡è³‡æ ¼ã€‚" #, python-format msgid "" "Metadata definition namespace %(namespace)s is protected and cannot be " "deleted." msgstr "Meta 資料定義å稱空間 %(namespace)s å—ä¿è­·ï¼Œç„¡æ³•將其刪除。" #, python-format msgid "Metadata definition namespace not found for id=%s" msgstr "找ä¸åˆ° ID 為 %s çš„ meta 資料定義å稱空間" #, python-format msgid "" "Metadata definition object %(object_name)s is protected and cannot be " "deleted." msgstr "Meta 資料定義物件 %(object_name)s å—ä¿è­·ï¼Œç„¡æ³•將其刪除。" #, python-format msgid "Metadata definition object not found for id=%s" msgstr "找ä¸åˆ° ID 為 %s çš„ meta 資料定義物件" #, python-format msgid "" "Metadata definition property %(property_name)s is protected and cannot be " "deleted." msgstr "Meta 資料定義內容 %(property_name)s å—ä¿è­·ï¼Œç„¡æ³•將其刪除。" #, python-format msgid "Metadata definition property not found for id=%s" msgstr "找ä¸åˆ° ID 為 %s çš„ meta 資料定義內容" #, python-format msgid "" "Metadata definition resource-type %(resource_type_name)s is a seeded-system " "type and cannot be deleted." msgstr "" "Meta 資料定義資æºé¡žåž‹ %(resource_type_name)s 是種å­ç³»çµ±é¡žåž‹ï¼Œç„¡æ³•將其刪除。" #, python-format msgid "" "Metadata definition resource-type-association %(resource_type)s is protected " "and cannot be deleted." 
msgstr "Meta 資料定義資æºé¡žåž‹é—œè¯ %(resource_type)s å·²å—ä¿è­·ï¼Œç„¡æ³•將其刪除。" #, python-format msgid "" "Metadata definition tag %(tag_name)s is protected and cannot be deleted." msgstr "meta 資料定義標籤 %(tag_name)s å—ä¿è­·ï¼Œç„¡æ³•將其刪除。" #, python-format msgid "Metadata definition tag not found for id=%s" msgstr "找ä¸åˆ° ID 為 %s çš„ meta 資料定義標籤" msgid "Minimal rows limit is 1." msgstr "列數下é™é™åˆ¶ç‚º 1。" #, python-format msgid "Missing required credential: %(required)s" msgstr "éºæ¼äº†å¿…è¦èªè­‰ï¼š%(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "å€åŸŸ %(region)s æœ‰å¤šå€‹ã€Œæ˜ åƒæª”ã€æœå‹™ç›¸ç¬¦é …。這通常表示需è¦ä¸€å€‹å€åŸŸï¼Œä½†æ‚¨å°šæœª" "æä¾›ã€‚" msgid "No authenticated user" msgstr "沒有已鑑別使用者" #, python-format msgid "No image found with ID %s" msgstr "找ä¸åˆ° ID 為 %s çš„æ˜ åƒæª”" #, python-format msgid "No location found with ID %(loc)s from image %(img)s" msgstr "å¾žæ˜ åƒæª” %(img)s 中找ä¸åˆ° ID 為 %(loc)s çš„ä½ç½®" msgid "No permission to share that image" msgstr "æ²’æœ‰å…±ç”¨è©²æ˜ åƒæª”çš„è¨±å¯æ¬Š" #, python-format msgid "Not allowed to create members for image %s." msgstr "ä¸å®¹è¨±å»ºç«‹æ˜ åƒæª” %s çš„æˆå“¡ã€‚" #, python-format msgid "Not allowed to deactivate image in status '%s'" msgstr "ä¸å®¹è¨±å–消啟動處於狀態 '%s' çš„æ˜ åƒæª”" #, python-format msgid "Not allowed to delete members for image %s." msgstr "ä¸å®¹è¨±åˆªé™¤æ˜ åƒæª” %s çš„æˆå“¡ã€‚" #, python-format msgid "Not allowed to delete tags for image %s." msgstr "ä¸å®¹è¨±åˆªé™¤æ˜ åƒæª” %s 的標籤。" #, python-format msgid "Not allowed to list members for image %s." msgstr "ä¸å®¹è¨±åˆ—å‡ºæ˜ åƒæª” %s çš„æˆå“¡ã€‚" #, python-format msgid "Not allowed to reactivate image in status '%s'" msgstr "ä¸å®¹è¨±é‡æ–°å•Ÿå‹•處於狀態 '%s' çš„æ˜ åƒæª”" #, python-format msgid "Not allowed to update members for image %s." 
msgstr "ä¸å®¹è¨±æ›´æ–°æ˜ åƒæª” %s çš„æˆå“¡ã€‚" #, python-format msgid "Not allowed to update tags for image %s." msgstr "ä¸å®¹è¨±æ›´æ–°æ˜ åƒæª” %s 的標籤。" #, python-format msgid "Not allowed to upload image data for image %(image_id)s: %(error)s" msgstr "ä¸å®¹è¨±ä¸Šå‚³æ˜ åƒæª” %(image_id)s çš„æ˜ åƒæª”資料:%(error)s" msgid "Number of sort dirs does not match the number of sort keys" msgstr "æŽ’åºæ–¹å‘數目與排åºéµæ•¸ç›®ä¸ç¬¦" msgid "OVA extract is limited to admin" msgstr "OVA æ“·å–å·²é™åˆ¶ç‚ºç®¡ç†è€…" msgid "Old and new sorting syntax cannot be combined" msgstr "無法çµåˆæ–°èˆŠæŽ’åºèªžæ³•" #, python-format msgid "Operation \"%s\" requires a member named \"value\"." msgstr "作業 \"%s\" 需è¦å稱為 \"value\" çš„æˆå“¡ã€‚" msgid "" "Operation objects must contain exactly one member named \"add\", \"remove\", " "or \"replace\"." msgstr "" "作業物件必須正好包å«ä¸€å€‹å稱為 \"add\"ã€\"remove\" 或 \"replace\" çš„æˆå“¡ã€‚" msgid "" "Operation objects must contain only one member named \"add\", \"remove\", or " "\"replace\"." msgstr "作業物件åªèƒ½åŒ…å«ä¸€å€‹å稱為 \"add\"ã€\"remove\" 或 \"replace\" çš„æˆå“¡ã€‚" msgid "Operations must be JSON objects." msgstr "作業必須是 JSON 物件。" #, python-format msgid "Original locations is not empty: %s" msgstr "原始ä½ç½®ä¸æ˜¯ç©ºçš„:%s" msgid "Owner can't be updated by non admin." msgstr "æ“æœ‰è€…無法由éžç®¡ç†è€…進行更新。" msgid "Owner must be specified to create a tag." msgstr "å¿…é ˆæŒ‡å®šæ“æœ‰è€…æ‰èƒ½å»ºç«‹æ¨™ç±¤ã€‚" msgid "Owner of the image" msgstr "æ˜ åƒæª”çš„æ“æœ‰è€…" msgid "Owner of the namespace." msgstr "åç¨±ç©ºé–“çš„æ“æœ‰è€…。" msgid "Param values can't contain 4 byte unicode." msgstr "åƒæ•¸å€¼ä¸èƒ½åŒ…å« 4 ä½å…ƒçµ„ Unicode。" #, python-format msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence." msgstr "指標 `%s` 包å«ä¸å±¬æ–¼å¯è¾¨è­˜ ESC åºåˆ—çš„ \"~\"。" #, python-format msgid "Pointer `%s` contains adjacent \"/\"." msgstr "指標 `%s` 包å«ç›¸é„°çš„ \"/\"。" #, python-format msgid "Pointer `%s` does not contains valid token." 
msgstr "指標 `%s` ä¸åŒ…嫿œ‰æ•ˆçš„記號。" #, python-format msgid "Pointer `%s` does not start with \"/\"." msgstr "指標 `%s` çš„é–‹é ­ä¸æ˜¯ \"/\"。" #, python-format msgid "Pointer `%s` end with \"/\"." msgstr "指標 `%s` çš„çµå°¾æ˜¯ \"/\"。" #, python-format msgid "Port \"%s\" is not valid." msgstr "埠 \"%s\" 無效。" #, python-format msgid "Process %d not running" msgstr "ç¨‹åº %d ä¸åœ¨åŸ·è¡Œä¸­" #, python-format msgid "Properties %s must be set prior to saving data." msgstr "儲存資料之å‰å¿…須設定內容 %s。" #, python-format msgid "" "Property %(property_name)s does not start with the expected resource type " "association prefix of '%(prefix)s'." msgstr "內容 %(property_name)s çš„é–‹é ­ä¸æ˜¯é æœŸçš„資æºé¡žåž‹é—œè¯å­—首 '%(prefix)s'。" #, python-format msgid "Property %s already present." msgstr "內容 %s 已存在。" #, python-format msgid "Property %s does not exist." msgstr "內容 %s ä¸å­˜åœ¨ã€‚" #, python-format msgid "Property %s may not be removed." msgstr "å¯èƒ½ç„¡æ³•移除內容 %s。" #, python-format msgid "Property %s must be set prior to saving data." msgstr "儲存資料之å‰å¿…須設定內容 %s。" #, python-format msgid "Property '%s' is protected" msgstr "內容 '%s' å—ä¿è­·" msgid "Property names can't contain 4 byte unicode." msgstr "內容å稱ä¸èƒ½åŒ…å« 4 ä½å…ƒçµ„ Unicode。" #, python-format msgid "" "Provided image size must match the stored image size. (provided size: " "%(ps)d, stored size: %(ss)d)" msgstr "" "æä¾›çš„æ˜ åƒæª”大å°å¿…須符åˆå„²å­˜çš„æ˜ åƒæª”大å°ã€‚(æä¾›çš„大å°ï¼š%(ps)d,儲存的大å°ï¼š" "%(ss)d)" #, python-format msgid "Provided object does not match schema '%(schema)s': %(reason)s" msgstr "所æä¾›çš„物件與綱目 '%(schema)s' ä¸ç¬¦ï¼š%(reason)s" #, python-format msgid "Provided status of task is unsupported: %(status)s" msgstr "æä¾›çš„作業狀態 %(status)s ä¸å—支æ´" #, python-format msgid "Provided type of task is unsupported: %(type)s" msgstr "æä¾›çš„作業類型 %(type)s ä¸å—支æ´" msgid "Provides a user friendly description of the namespace." msgstr "æä¾›å°ä½¿ç”¨è€…更為å‹å–„çš„å稱空間說明。" msgid "Received invalid HTTP redirect." 
msgstr "收到無效的 HTTP 釿–°å°Žå‘。" #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "æ­£åœ¨é‡æ–°å°Žå‘至 %(uri)s 以進行授權。" #, python-format msgid "Registry service can't use %s" msgstr "登錄æœå‹™ç„¡æ³•使用 %s" #, python-format msgid "Registry was not configured correctly on API server. Reason: %(reason)s" msgstr "API 伺æœå™¨ä¸Šæœªæ­£ç¢ºåœ°é…置登錄。原因:%(reason)s" #, python-format msgid "Reload of %(serv)s not supported" msgstr "䏿”¯æ´é‡æ–°è¼‰å…¥ %(serv)s" #, python-format msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "正在使用信號 (%(sig)s) 來釿–°è¼‰å…¥ %(serv)s (pid %(pid)s)" #, python-format msgid "Removing stale pid file %s" msgstr "æ­£åœ¨ç§»é™¤éŽæ™‚ PID 檔案 %s" msgid "Request body must be a JSON array of operation objects." msgstr "è¦æ±‚內文必須是作業物件的 JSON 陣列。" msgid "Request must be a list of commands" msgstr "è¦æ±‚必須是指令清單" #, python-format msgid "Required store %s is invalid" msgstr "需è¦çš„儲存庫 %s 無效" msgid "" "Resource type names should be aligned with Heat resource types whenever " "possible: http://docs.openstack.org/developer/heat/template_guide/openstack." "html" msgstr "" "資æºé¡žåž‹å稱應該儘å¯èƒ½èˆ‡ Heat 資æºé¡žåž‹ä¸€è‡´ï¼šhttp://docs.openstack.org/" "developer/heat/template_guide/openstack.html" msgid "Response from Keystone does not contain a Glance endpoint." msgstr "Keystone 的回應ä¸åŒ…å« Glance 端點。" msgid "Scope of image accessibility" msgstr "æ˜ åƒæª”çš„å¯å­˜å–性範åœ" msgid "Scope of namespace accessibility." msgstr "å稱空間的å¯å­˜å–性範åœã€‚" #, python-format msgid "Server %(serv)s is stopped" msgstr "伺æœå™¨ %(serv)s å·²åœæ­¢" #, python-format msgid "Server worker creation failed: %(reason)s." msgstr "建立伺æœå™¨å·¥ä½œç¨‹å¼å¤±æ•—:%(reason)s。" msgid "Signature verification failed" msgstr "簽章驗證失敗" msgid "Size of image file in bytes" msgstr "æ˜ åƒæª”的大å°ï¼ˆä»¥ä½å…ƒçµ„為單ä½ï¼‰" msgid "" "Some resource types allow more than one key / value pair per instance. For " "example, Cinder allows user and image metadata on volumes. 
Only the image " "properties metadata is evaluated by Nova (scheduling or drivers). This " "property allows a namespace target to remove the ambiguity." msgstr "" "部分資æºé¡žåž‹å®¹è¨±æ¯å€‹å¯¦ä¾‹å…·æœ‰å¤šå€‹éµå€¼çµ„。例如,Cinder å®¹è¨±ä½¿ç”¨è€…åŠæ˜ åƒæª” meta " "資料存在於多個ç£å€ä¸Šã€‚Nova åªè©•ä¼°æ˜ åƒæª”內容 meta 資料(正在排程或驅動程å¼ï¼‰ã€‚" "此內容容許åç¨±ç©ºé–“ç›®æ¨™æ¶ˆé™¤æ­¤èªžç¾©ä¸æ˜Žç¢ºæƒ…æ³ã€‚" msgid "Sort direction supplied was not valid." msgstr "æä¾›çš„æŽ’åºæ–¹å‘無效。" msgid "Sort key supplied was not valid." msgstr "æä¾›çš„æŽ’åºéµç„¡æ•ˆã€‚" msgid "" "Specifies the prefix to use for the given resource type. Any properties in " "the namespace should be prefixed with this prefix when being applied to the " "specified resource type. Must include prefix separator (e.g. a colon :)." msgstr "" "指定è¦ç”¨æ–¼çµ¦å®šè³‡æºé¡žåž‹çš„字首。將å稱空間內的任何內容套用至指定的資æºé¡žåž‹æ™‚," "都應該為該內容新增此字首。必須包括字首分隔字元(例如,冒號 :)。" msgid "Status must be \"pending\", \"accepted\" or \"rejected\"." msgstr "狀態必須是 \"pending\"ã€\"accepted\" 或 \"rejected\"。" msgid "Status not specified" msgstr "未指定狀態" msgid "Status of the image" msgstr "æ˜ åƒæª”的狀態" #, python-format msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "ä¸å®¹è¨±ç‹€æ…‹å¾ž %(cur_status)s 轉移至 %(new_status)s" #, python-format msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "正在使用信號 (%(sig)s) ä¾†åœæ­¢ %(serv)s (pid %(pid)s)" #, python-format msgid "Store for image_id not found: %s" msgstr "找ä¸åˆ° image_id 的儲存庫:%s" #, python-format msgid "Store for scheme %s not found" msgstr "找ä¸åˆ°æž¶æ§‹ %s 的儲存庫" #, python-format msgid "" "Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image status to 'killed'." 
msgstr "" "æä¾›çš„ %(attr)s (%(supplied)s)ï¼Œèˆ‡å¾žæ‰€ä¸Šå‚³æ˜ åƒæª” (%(actual)s) 產生的 " "%(attr)s ä¸ç¬¦ã€‚æ­£åœ¨å°‡æ˜ åƒæª”ç‹€æ…‹è¨­ç‚ºã€Œå·²çµæŸã€ã€‚" msgid "Supported values for the 'container_format' image attribute" msgstr "'container_format' æ˜ åƒæª”屬性的支æ´å€¼" msgid "Supported values for the 'disk_format' image attribute" msgstr "'disk_format' æ˜ åƒæª”屬性的支æ´å€¼" #, python-format msgid "Suppressed respawn as %(serv)s was %(rsn)s." msgstr "已暫åœé‡æ–°å¤§é‡ç”¢ç”Ÿï¼Œå› ç‚º %(serv)s 是 %(rsn)s。" msgid "System SIGHUP signal received." msgstr "接收到系統 SIGHUP 信號。" #, python-format msgid "Task '%s' is required" msgstr "需è¦ä½œæ¥­ '%s'" msgid "Task does not exist" msgstr "作業ä¸å­˜åœ¨" msgid "Task failed due to Internal Error" msgstr "由於內部錯誤,作業失敗" msgid "Task was not configured properly" msgstr "作業未é©ç•¶åœ°é…ç½®" #, python-format msgid "Task with the given id %(task_id)s was not found" msgstr "找ä¸åˆ°å…·æœ‰çµ¦å®š ID %(task_id)s 的作業" msgid "The \"changes-since\" filter is no longer available on v2." msgstr "在第 2 版上,已無法å†ä½¿ç”¨ \"changes-since\" éŽæ¿¾å™¨ã€‚" #, python-format msgid "The CA file you specified %s does not exist" msgstr "指定的 CA 檔 %s ä¸å­˜åœ¨" #, python-format msgid "" "The Image %(image_id)s object being created by this task %(task_id)s, is no " "longer in valid status for further processing." msgstr "" "此作業 %(task_id)s æ‰€å»ºç«‹çš„æ˜ åƒæª” %(image_id)s 物件ä¸å†è™•於有效狀態,無法進一" "步處ç†ã€‚" msgid "The Store URI was malformed." msgstr "儲存庫 URI 的格å¼ä¸æ­£ç¢ºã€‚" msgid "" "The URL to the keystone service. If \"use_user_token\" is not in effect and " "using keystone auth, then URL of keystone can be specified." msgstr "" "Keystone æœå‹™çš„ URL。如果 \"use_user_token\" 未生效並且使用了 Keystone 鑑別," "則å¯ä»¥æŒ‡å®š Keystone çš„ URL。" msgid "" "The administrators password. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "管ç†è€…密碼。如果 \"use_user_token\" 未生效,則å¯ä»¥æŒ‡å®šç®¡ç†èªè­‰ã€‚" msgid "" "The administrators user name. 
If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "管ç†è€…使用者å稱。如果 \"use_user_token\" 未生效,則å¯ä»¥æŒ‡å®šç®¡ç†èªè­‰ã€‚" #, python-format msgid "The cert file you specified %s does not exist" msgstr "指定的憑證檔 %s ä¸å­˜åœ¨" msgid "The current status of this task" msgstr "此作業的ç¾è¡Œç‹€æ…‹" #, python-format msgid "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." msgstr "" "å­˜æ”¾æ˜ åƒæª”å¿«å–目錄 %(image_cache_dir)s çš„è£ç½®ä¸æ”¯æ´ xattr。您å¯èƒ½éœ€è¦ç·¨è¼¯ " "fstab 並將 user_xattr é¸é …新增至存放快å–目錄之è£ç½®çš„é©ç•¶è¡Œã€‚" #, python-format msgid "" "The given uri is not valid. Please specify a valid uri from the following " "list of supported uri %(supported)s" msgstr "" "給定的 URI ç„¡æ•ˆã€‚è«‹å¾žä¸‹åˆ—å—æ”¯æ´çš„ URI %(supported)s 清單中指定有效的 URI" #, python-format msgid "The incoming image is too large: %s" msgstr "é€å…¥çš„æ˜ åƒæª”太大:%s" #, python-format msgid "The key file you specified %s does not exist" msgstr "指定的金鑰檔 %s ä¸å­˜åœ¨" #, python-format msgid "" "The limit has been exceeded on the number of allowed image locations. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "å®¹è¨±çš„æ˜ åƒæª”ä½ç½®æ•¸ç›®å·²è¶…出此é™åˆ¶ã€‚已嘗試:%(attempted)s,上é™ï¼š%(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "æ­¤æ˜ åƒæª”å®¹è¨±çš„æ˜ åƒæª”æˆå“¡æ•¸ç›®å·²è¶…出此é™åˆ¶ã€‚已嘗試:%(attempted)s,上é™ï¼š" "%(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "å®¹è¨±çš„æ˜ åƒæª”內容數目已超出此é™åˆ¶ã€‚已嘗試:%(attempted)s,上é™ï¼š%(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. 
" "Attempted: %(num)s, Maximum: %(quota)s" msgstr "å®¹è¨±çš„æ˜ åƒæª”內容數目已超出此é™åˆ¶ã€‚已嘗試:%(num)s,上é™ï¼š%(quota)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image tags. Attempted: " "%(attempted)s, Maximum: %(maximum)s" msgstr "" "å®¹è¨±çš„æ˜ åƒæª”標籤數目已超出此é™åˆ¶ã€‚已嘗試:%(attempted)s,上é™ï¼š%(maximum)s" #, python-format msgid "The location %(location)s already exists" msgstr "ä½ç½® %(location)s 已存在" #, python-format msgid "The location data has an invalid ID: %d" msgstr "ä½ç½®è³‡æ–™çš„ ID 無效:%d" #, python-format msgid "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. Other records still refer to it." msgstr "" "未刪除å稱為 %(record_name)s çš„ meta 資料定義 %(record_type)s。其他記錄ä»åƒç…§" "æ­¤ meta 資料定義。" #, python-format msgid "The metadata definition namespace=%(namespace_name)s already exists." msgstr "Meta 資料定義å稱空間 %(namespace_name)s 已經存在。" #, python-format msgid "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." msgstr "" "在下列å稱空間中,找ä¸åˆ°å稱為 %(object_name)s çš„ meta 資料定義物件:" "%(namespace_name)s。" #, python-format msgid "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." msgstr "" "在下列å稱空間中,找ä¸åˆ°å稱為 %(property_name)s çš„ meta 資料定義內容:" "%(namespace_name)s。" #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." msgstr "" "資æºé¡žåž‹ %(resource_type_name)s 與å稱空間 %(namespace_name)s çš„meta 資料定義" "資æºé¡žåž‹é—œè¯å·²å­˜åœ¨ã€‚" #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." 
msgstr "" "找ä¸åˆ°è³‡æºé¡žåž‹ %(resource_type_name)s 與å稱空間 %(namespace_name)s çš„meta 資" "料定義資æºé¡žåž‹é—œè¯ã€‚" #, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "找ä¸åˆ°å稱為 %(resource_type_name)s çš„ meta 資料定義資æºé¡žåž‹ã€‚" #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "在下列å稱空間中,找ä¸åˆ°å稱為 %(name)s çš„ meta 資料定義標籤:" "%(namespace_name)s。" msgid "The parameters required by task, JSON blob" msgstr "ä½œæ¥­æ‰€éœ€çš„åƒæ•¸ï¼šJSON 二進ä½å¤§åž‹ç‰©ä»¶" msgid "The provided image is too large." msgstr "所æä¾›çš„æ˜ åƒæª”太大。" msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." msgstr "" "鑑別æœå‹™çš„å€åŸŸã€‚如果 \"use_user_token\" 未生效並且使用了 Keystone 鑑別,則å¯" "以指定å€åŸŸå稱。" msgid "The request returned 500 Internal Server Error." msgstr "è¦æ±‚傳回了「500 內部伺æœå™¨éŒ¯èª¤ã€ã€‚" msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." msgstr "" "è¦æ±‚傳回了「503 無法使用æœå‹™ã€ã€‚通常,在æœå‹™è¶…載或其他暫時性æœå‹™ä¸­æ–·æ™‚發生。" #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "è¦æ±‚傳回了「302 多é‡é¸æ“‡ã€ã€‚é€™é€šå¸¸è¡¨ç¤ºè¦æ±‚ URI 中尚ä¸åŒ…å«ç‰ˆæœ¬æŒ‡ç¤ºç¬¦ã€‚\n" "\n" "傳回了回應內文:\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. 
This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "è¦æ±‚傳回了「413 è¦æ±‚實體太大ã€ã€‚這通常表示已é•å評比é™åˆ¶æˆ–é…é¡è‡¨ç•Œå€¼ã€‚\n" "\n" "回應內文:\n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "è¦æ±‚傳回了éžé æœŸçš„狀態:%(status)s。\n" "\n" "回應內文:\n" "%(body)s" msgid "" "The requested image has been deactivated. Image data download is forbidden." msgstr "已喿¶ˆå•Ÿå‹•æ‰€è¦æ±‚çš„æ˜ åƒæª”ã€‚å·²ç¦æ­¢ä¸‹è¼‰æ˜ åƒæª”資料。" msgid "The result of current task, JSON blob" msgstr "ç¾è¡Œä½œæ¥­çš„çµæžœï¼šJSON 二進ä½å¤§åž‹ç‰©ä»¶" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." msgstr "è³‡æ–™çš„å¤§å° %(image_size)s 將超出該é™åˆ¶ã€‚剩餘 %(remaining)s 個ä½å…ƒçµ„。" #, python-format msgid "The specified member %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„æˆå“¡ %s" #, python-format msgid "The specified metadata object %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„ meta 資料物件 %s" #, python-format msgid "The specified metadata tag %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„ meta 資料標籤 %s" #, python-format msgid "The specified namespace %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„å稱空間 %s" #, python-format msgid "The specified property %s could not be found" msgstr "找ä¸åˆ°æŒ‡å®šçš„內容 %s" #, python-format msgid "The specified resource type %s could not be found " msgstr "找ä¸åˆ°æŒ‡å®šçš„資æºé¡žåž‹ %s" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgstr "åªèƒ½å°‡å·²åˆªé™¤æ˜ åƒæª”ä½ç½®çš„狀態設為 'pending_delete' 或'deleted'" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgstr "åªèƒ½å°‡å·²åˆªé™¤æ˜ åƒæª”ä½ç½®çš„狀態設為 'pending_delete' 或'deleted'。" msgid "The status of this image member" msgstr "æ­¤æ˜ åƒæª”æˆå“¡çš„狀態" msgid "" "The strategy to use for authentication. 
If \"use_user_token\" is not in " "effect, then auth strategy can be specified." msgstr "" "用於進行鑑別的策略。如果 \"use_user_token\" 未生效,則å¯ä»¥æŒ‡å®šé‘‘別策略。" #, python-format msgid "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgstr "目標æˆå“¡ %(member_id)s å·²ç¶“èˆ‡æ˜ åƒæª”%(image_id)s 相關è¯ã€‚" msgid "" "The tenant name of the administrative user. If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgstr "" "管ç†ä½¿ç”¨è€…的承租人å稱。如果 \"use_user_token\" 未生效,則å¯ä»¥æŒ‡å®šç®¡ç†æ‰¿ç§Ÿäºº" "å稱。" msgid "The type of task represented by this content" msgstr "此內容所表示的作業類型" msgid "The unique namespace text." msgstr "唯一的å稱空間文字。" msgid "The user friendly name for the namespace. Used by UI if available." msgstr "å°ä½¿ç”¨è€…更為å‹å–„çš„å稱空間å稱。如果有的話,則由使用者介é¢ä½¿ç”¨ã€‚" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "%(error_key_name)s %(error_filename)s 有å•題。請驗證å•題。錯誤:%(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "%(error_key_name)s %(error_filename)s 有å•題。請驗證å•題。OpenSSL 錯誤:" "%(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "金鑰組有å•é¡Œã€‚è«‹ç¢ºèªæ†‘è­‰ %(cert_file)s åŠé‡‘é‘° %(key_file)s 是é…å°çš„。OpenSSL " "錯誤 %(ce)s" msgid "There was an error configuring the client." msgstr "é…置用戶端時發生錯誤。" msgid "There was an error connecting to a server" msgstr "連接至伺æœå™¨æ™‚發生錯誤" msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "ç›®å‰ä¸å…è¨±å° Glance 作業執行這項作業。根據它們的 expires_at內容,將在é”到時間" "之後自動刪除它們。" msgid "This operation is currently not permitted on Glance images details." 
msgstr "ç›®å‰ä¸å…è¨±å° Glance æ˜ åƒæª”詳細資料執行這項作業。" msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "作業在æˆåŠŸæˆ–å¤±æ•—å¾Œå­˜æ´»çš„æ™‚é–“ï¼ˆå°æ™‚)" msgid "Too few arguments." msgstr "引數太少。" msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "URI 中ä¸èƒ½å¤šæ¬¡å‡ºç¾æŸä¸€æž¶æ§‹ã€‚如果所指定的 URI 類似於 swift://user:pass@http://" "authurl.com/v1/container/obj,則需è¦å°‡å…¶è®Šæ›´æˆä½¿ç”¨ swift+http:// 架構,例如:" "swift+http://user:pass@authurl.com/v1/container/obj" msgid "URL to access the image file kept in external store" msgstr "用來存å–外部儲存庫中所ä¿ç•™ä¹‹æ˜ åƒæª”çš„ URL" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "無法建立 PID 檔案 %(pid)s。è¦ä»¥éž root 使用者身分執行嗎?\n" "正在撤回而使用暫存檔,您å¯ä»¥ä½¿ç”¨ä¸‹åˆ—æŒ‡ä»¤ä¾†åœæ­¢ %(service)s æœå‹™ï¼š\n" " %(file)s %(server)s stop --pid-file %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "無法ä¾ä¸æ˜Žé‹ç®—å­ '%s' é€²è¡ŒéŽæ¿¾ã€‚" msgid "Unable to filter on a range with a non-numeric value." msgstr "無法å°åŒ…å«éžæ•¸å€¼çš„範åœé€²è¡ŒéŽæ¿¾ã€‚" msgid "Unable to filter on a unknown operator." msgstr "無法ä¾ä¸æ˜Žé‹ç®—å­é€²è¡ŒéŽæ¿¾ã€‚" msgid "Unable to filter using the specified operator." msgstr "無法使用指定的é‹ç®—å­é€²è¡ŒéŽæ¿¾ã€‚" msgid "Unable to filter using the specified range." msgstr "無法使用指定的範åœé€²è¡ŒéŽæ¿¾ã€‚" #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "在「JSON 綱目ã€è®Šæ›´ä¸­æ‰¾ä¸åˆ° '%s'" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." 
msgstr "在 JSON 綱目變更中找ä¸åˆ° `op`。它必須是下列其中一項:%(available)s。" msgid "Unable to increase file descriptor limit. Running as non-root?" msgstr "無法增加檔案æè¿°å­é™åˆ¶ã€‚è¦ä»¥éž root 使用者身分執行嗎?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "無法從é…置檔 %(conf_file)s 載入 %(app_name)s。\n" "發生錯誤:%(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "無法載入綱目:%(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "找ä¸åˆ° %s çš„ paste é…置檔。" #, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "ç„¡æ³•ä¸Šå‚³æ˜ åƒæª” %(image_id)s çš„é‡è¤‡æ˜ åƒæª”資料:%(error)s" msgid "Unauthorized image access" msgstr "æœªç²æŽˆæ¬Šçš„æ˜ åƒæª”å­˜å–" msgid "Unexpected body type. Expected list/dict." msgstr "éžé æœŸçš„å…§æ–‡é¡žåž‹ã€‚é æœŸç‚ºæ¸…å–®/字典。" #, python-format msgid "Unexpected response: %s" msgstr "éžé æœŸçš„回應:%s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "䏿˜Žçš„鑑別策略 '%s'" #, python-format msgid "Unknown command: %s" msgstr "䏿˜ŽæŒ‡ä»¤ï¼š%s" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "䏿˜Žçš„æŽ’åºæ–¹å‘,必須為 'desc' 或 'asc'" msgid "Unrecognized JSON Schema draft version" msgstr "無法辨識的「JSON 綱目ã€è‰ç¨¿ç‰ˆæœ¬" msgid "Unrecognized changes-since value" msgstr "無法辨識 changes-since 值" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "䏿”¯æ´çš„ sort_dirã€‚å¯æŽ¥å—的值:%s" #, python-format msgid "Unsupported sort_key. 
Acceptable values: %s" msgstr "䏿”¯æ´çš„ sort_keyã€‚å¯æŽ¥å—的值:%s" msgid "Virtual size of image in bytes" msgstr "æ˜ åƒæª”的虛擬大å°ï¼ˆä»¥ä½å…ƒçµ„為單ä½ï¼‰" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "等待 PID %(pid)s (%(file)s) 當掉已é”到 15 秒;正在放棄" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "在 SSL 模å¼ä¸‹åŸ·è¡Œä¼ºæœå™¨æ™‚,必須在é…置檔中指定 cert_file åŠ key_file é¸é …值" msgid "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." msgstr "" "是å¦è¦åœ¨å‘ç™»éŒ„ç™¼å‡ºè¦æ±‚時é€éŽä½¿ç”¨è€…記號來傳éžã€‚如果è¦åœ¨ä¸Šå‚³å¤§åž‹æª”案期間防止與" "記號有效期é™ç›¸é—œçš„å¤±æ•—ï¼Œå»ºè­°å°‡æ­¤åƒæ•¸è¨­å®šç‚º False。如果 \"use_user_token\" 未" "生效,則å¯ä»¥æŒ‡å®šç®¡ç†èªè­‰ã€‚" #, python-format msgid "Wrong command structure: %s" msgstr "éŒ¯èª¤çš„æŒ‡ä»¤çµæ§‹ï¼š%s" msgid "You are not authenticated." msgstr "您沒有進行鑑別。" msgid "You are not authorized to complete this action." msgstr "æ‚¨æœªç²æŽˆæ¬Šä¾†å®Œæˆæ­¤å‹•作。" #, python-format msgid "You are not authorized to lookup image %s." msgstr "æ‚¨æœªç²æŽˆæ¬Šä¾†æŸ¥é–±æ˜ åƒæª” %s。" #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "æ‚¨æœªç²æŽˆæ¬Šä¾†æŸ¥é–±æ˜ åƒæª” %s çš„æˆå“¡ã€‚" #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "ä¸å…許您在 '%s' æ“æœ‰çš„å稱空間中建立標籤" msgid "You are not permitted to create image members for the image." msgstr "ä¸å…è¨±æ‚¨çµ¦æ˜ åƒæª”å»ºç«‹æ˜ åƒæª”æˆå“¡ã€‚" #, python-format msgid "You are not permitted to create images owned by '%s'." 
msgstr "ä¸å…è¨±æ‚¨å»ºç«‹æ“æœ‰è€…為 '%s' çš„æ˜ åƒæª”。" #, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "ä¸å…è¨±æ‚¨å»ºç«‹æ“æœ‰è€…為 '%s' çš„å稱空間" #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "ä¸å…è¨±æ‚¨å»ºç«‹æ“æœ‰è€…為 '%s' 的物件" #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "ä¸å…è¨±æ‚¨å»ºç«‹æ“æœ‰è€…為 '%s' 的內容" #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "ä¸å…è¨±æ‚¨å»ºç«‹æ“æœ‰è€…為 '%s' çš„ resource_type" #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "ä¸å…è¨±æ‚¨ä»¥æ“æœ‰è€…身分來建立此作業:%s" msgid "You are not permitted to deactivate this image." msgstr "ä¸å…è¨±æ‚¨å–æ¶ˆå•Ÿå‹•æ­¤æ˜ åƒæª”。" msgid "You are not permitted to delete this image." msgstr "ä¸å…è¨±æ‚¨åˆªé™¤æ­¤æ˜ åƒæª”。" msgid "You are not permitted to delete this meta_resource_type." msgstr "ä¸å…許您刪除此 meta_resource_type。" msgid "You are not permitted to delete this namespace." msgstr "ä¸å…許您刪除此å稱空間。" msgid "You are not permitted to delete this object." msgstr "ä¸å…許您刪除此物件。" msgid "You are not permitted to delete this property." msgstr "ä¸å…許您刪除此內容。" msgid "You are not permitted to delete this tag." msgstr "ä¸å…許您刪除此標籤。" #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "ä¸å…許您修改此 %(resource)s 上的 '%(attr)s'。" #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "ä¸å…è¨±æ‚¨ä¿®æ”¹æ­¤æ˜ åƒæª”上的 '%s'。" msgid "You are not permitted to modify locations for this image." msgstr "ä¸å…è¨±æ‚¨ä¿®æ”¹æ­¤æ˜ åƒæª”çš„ä½ç½®ã€‚" msgid "You are not permitted to modify tags on this image." msgstr "ä¸å…è¨±æ‚¨ä¿®æ”¹æ­¤æ˜ åƒæª”上的標籤。" msgid "You are not permitted to modify this image." msgstr "ä¸å…è¨±æ‚¨ä¿®æ”¹æ­¤æ˜ åƒæª”。" msgid "You are not permitted to reactivate this image." 
msgstr "ä¸å…è¨±æ‚¨é‡æ–°å•Ÿå‹•æ­¤æ˜ åƒæª”。" msgid "You are not permitted to set status on this task." msgstr "ä¸å…許您在此作業上設定狀態。" msgid "You are not permitted to update this namespace." msgstr "ä¸å…許您更新此å稱空間。" msgid "You are not permitted to update this object." msgstr "ä¸å…許您更新此物件。" msgid "You are not permitted to update this property." msgstr "ä¸å…許您更新此內容。" msgid "You are not permitted to update this tag." msgstr "ä¸å…許您更新此標籤。" msgid "You are not permitted to upload data for this image." msgstr "ä¸å…è¨±æ‚¨çµ¦æ­¤æ˜ åƒæª”上傳資料。" #, python-format msgid "You cannot add image member for %s" msgstr "無法給 %s æ–°å¢žæ˜ åƒæª”æˆå“¡" #, python-format msgid "You cannot delete image member for %s" msgstr "無法刪除 %s çš„æ˜ åƒæª”æˆå“¡" #, python-format msgid "You cannot get image member for %s" msgstr "無法å–å¾— %s çš„æ˜ åƒæª”æˆå“¡" #, python-format msgid "You cannot update image member %s" msgstr "ç„¡æ³•æ›´æ–°æ˜ åƒæª”æˆå“¡ %s" msgid "You do not own this image" msgstr "æ‚¨ä¸æ˜¯æ­¤æ˜ åƒæª”çš„æ“æœ‰è€…" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "您已é¸å–在連接時使用 SSL,並且æä¾›äº†æ†‘證,但未æä¾› key_file åƒæ•¸ï¼Œä¹Ÿæ²’有設定 " "GLANCE_CLIENT_KEY_FILE 環境變數" msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "您已é¸å–在連接時使用 SSL,並且æä¾›äº†é‡‘鑰,但未æä¾› cert_file åƒæ•¸ï¼Œä¹Ÿæ²’有設" "定 GLANCE_CLIENT_CERT_FILE 環境變數" msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__() å–å¾—éžé æœŸçš„é—œéµå­—引數 '%s'" #, python-format msgid "" "cannot transition from 
%(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "更新時無法從 %(current)s 轉移至 %(next)sï¼ˆéœ€è¦ from_state = %(from)s)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "自訂內容 (%(props)s) 與基本內容相è¡çª" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "此平å°ä¸Šç„¡æ³•使用 eventlet 'poll' åŠ 'selects' 中心。" msgid "is_public must be None, True, or False" msgstr "is_public 必須是 Noneã€True 或 False" msgid "limit param must be an integer" msgstr "é™åˆ¶åƒæ•¸å¿…須是整數" msgid "limit param must be positive" msgstr "é™åˆ¶åƒæ•¸å¿…須是正數" msgid "md5 hash of image contents." msgstr "æ˜ åƒæª”內容的 md5 雜湊值。" #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image() å–å¾—éžé æœŸçš„é—œéµå­— %s" msgid "protected must be True, or False" msgstr "protected 必須是 True 或 False" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "無法啟動 %(serv)s。å–得錯誤:%(e)s" #, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "x-openstack-request-id 太長,大å°ä¸Šé™ç‚º %s" glance-16.0.0/glance/locale/de/0000775000175100017510000000000013245511661016137 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/de/LC_MESSAGES/0000775000175100017510000000000013245511661017724 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/de/LC_MESSAGES/glance.po0000666000175100017510000021642113245511421021517 0ustar zuulzuul00000000000000# Translations template for glance. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the glance project. # # Translators: # Carsten Duch , 2014 # Ettore Atalan , 2014 # Laera Loris , 2013 # Robert Simai, 2014 # Andreas Jaeger , 2016. #zanata # Robert Simai , 2016. 
#zanata msgid "" msgstr "" "Project-Id-Version: glance 15.0.0.0b3.dev29\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-06-23 20:54+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-12-06 08:09+0000\n" "Last-Translator: Robert Simai \n" "Language: de\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: German\n" #, python-format msgid "\t%s" msgstr "\t%s" #, python-format msgid "%(cls)s exception was raised in the last rpc call: %(val)s" msgstr "Eine %(cls)s-Ausnahme ist im letzten RPC-Aufruf aufgetreten: %(val)s" #, python-format msgid "%(m_id)s not found in the member list of the image %(i_id)s." msgstr "%(m_id)s in der Mitgliedsliste des Abbild %(i_id)s nicht gefunden." #, python-format msgid "%(serv)s (pid %(pid)s) is running..." msgstr "%(serv)s (pid %(pid)s) läuft..." #, python-format msgid "%(serv)s appears to already be running: %(pid)s" msgstr "%(serv)s scheint bereits aktiv zu sein: %(pid)s" #, python-format msgid "" "%(strategy)s is registered as a module twice. %(module)s is not being used." msgstr "" "%(strategy)s ist als Modul doppelt registriert. %(module)s wird nicht " "verwendet." #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Could not load the " "filesystem store" msgstr "" "%(task_id)s von %(task_type)s sind nicht ordnungsgemäß konfiguriert. Laden " "des Dateisystemspeichers nicht möglich" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "%(task_id)s von %(task_type)s sind nicht ordnungsgemäß konfiguriert. 
" "Fehlendes Arbeitsverzeichnis: %(work_dir)s" #, python-format msgid "%(verb)sing %(serv)s" msgstr "%(verb)sing %(serv)s" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "%(serv)s mit %(conf)s %(verb)s" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s Geben Sie ein Host:Port-Paar an, wobei 'Host' eine IPv4-Adresse, eine " "IPv6-Adresse, ein Hostname oder ein vollständig qualifizierter Domänenname " "ist. Bei Verwendung einer IPv6-Adresse schließen Sie diese in Klammern ein, " "damit sie vom Port getrennt ist (d. h. \"[fe80::a:b:c]:9876\")." #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s darf keine 4-Byte-Unicode-Zeichen enthalten." #, python-format msgid "%s is already stopped" msgstr "%s ist bereits gestoppt" #, python-format msgid "%s is stopped" msgstr "%s ist gestoppt" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "Option --os_auth_url oder Umgebungsvariable OS_AUTH_URL erforderlich, wenn " "die Keystone-Authentifizierungsstrategie aktiviert ist\n" msgid "A body is not expected with this request." msgstr "Es wird kein Body bei dieser Anforderung erwartet. " #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Ein Metadatendefinitionsobjekt namens %(object_name)s ist bereits in " "Namensbereich %(namespace_name)s vorhanden." #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Eine Metadatendefinitionseigenschaft namens %(property_name)s ist bereits in " "Namensbereich %(namespace_name)s vorhanden. 
" #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." msgstr "" "Ein Ressourcentyp %(resource_type_name)s der Metadatendefinition ist bereits " "vorhanden. " msgid "A set of URLs to access the image file kept in external store" msgstr "URLs für den Zugriff auf die Abbilddatei im externen Speicher" msgid "Amount of disk space (in GB) required to boot image." msgstr "" "Menge an Plattenspeicher (in GB), die zum Booten des Abbildes erforderlich " "ist." msgid "Amount of ram (in MB) required to boot image." msgstr "" "Menge an Arbeitsspeicher (in MB), die zum Booten des Abbildes erforderlich " "ist." msgid "An identifier for the image" msgstr "Eine ID für das Abbild" msgid "An identifier for the image member (tenantId)" msgstr "Eine ID für das Abbildelement (tenantId)" msgid "An identifier for the owner of this task" msgstr "Eine ID für den Eigentümer dieses Tasks" msgid "An identifier for the task" msgstr "Eine ID für die Task" msgid "An image file url" msgstr "URL der Abbilddatei" msgid "An image schema url" msgstr "URL des Abbildschemas" msgid "An image self url" msgstr "'self'-URL für Abbild" #, python-format msgid "An image with identifier %s already exists" msgstr "Ein Abbild mit ID %s ist bereits vorhanden" msgid "An import task exception occurred" msgstr "Es ist eine Ausnahme bei einer Importtask eingetreten." msgid "An object with the same identifier already exists." msgstr "Ein Objekt mit der gleichen ID ist bereits vorhanden." msgid "An object with the same identifier is currently being operated on." msgstr "An einem Objekt mit dieser ID wird derzeit eine Operation ausgeführt. " msgid "An object with the specified identifier was not found." msgstr "Ein Objekt mit der angegebenen ID wurde nicht gefunden."
msgid "An unknown exception occurred" msgstr "Eine unbekannte Ausnahme ist aufgetreten" msgid "An unknown task exception occurred" msgstr "Eine unbekannte Taskausnahme ist aufgetreten" #, python-format msgid "Attempt to upload duplicate image: %s" msgstr "Versuch doppeltes Abbild hochzuladen: %s" msgid "Attempted to update Location field for an image not in queued status." msgstr "" "Versuch, Adressfeld für ein Abbild zu aktualisieren, das sich nicht im " "Warteschlangenmodus befindet." #, python-format msgid "Attribute '%(property)s' is read-only." msgstr "Attribut '%(property)s' ist schreibgeschützt." #, python-format msgid "Attribute '%(property)s' is reserved." msgstr "Attribut '%(property)s' ist reserviert." #, python-format msgid "Attribute '%s' is read-only." msgstr "Attribut '%s' ist schreibgeschützt." #, python-format msgid "Attribute '%s' is reserved." msgstr "Attribut '%s' ist reserviert." msgid "Attribute container_format can be only replaced for a queued image." msgstr "" "Attribut 'container_format' kann nur für ein Abbild in der Warteschlange " "ersetzt werden. " msgid "Attribute disk_format can be only replaced for a queued image." msgstr "" "Attribut 'disk_format' kann nur für ein Abbild in der Warteschlange " "ersetzt werden. " #, python-format msgid "Auth service at URL %(url)s not found." msgstr "Authentifizierungsservice unter URL %(url)s nicht gefunden." #, python-format msgid "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgstr "" "Authentifizierungsfehler: Das Token ist möglicherweise beim Hochladen der " "Datei abgelaufen. Die Abbilddaten für %s werden gelöscht." msgid "Authorization failed." msgstr "Autorisierung fehlgeschlagen." msgid "Available categories:" msgstr "Verfügbare Kategorien:" #, python-format msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." msgstr "" "Falsches \"%s\"-Abfragefilterformat. Verwenden Sie die ISO 8601 DateTime-" "Notation." 
#, python-format msgid "Bad Command: %s" msgstr "Fehlerhaftes Kommando: %s" #, python-format msgid "Bad header: %(header_name)s" msgstr "Fehlerhafter Header: %(header_name)s" #, python-format msgid "Bad value passed to filter %(filter)s got %(val)s" msgstr "Falscher an Filter %(filter)s übergebener Wert hat %(val)s abgerufen" #, python-format msgid "Badly formed S3 URI: %(uri)s" msgstr "Falsches Format der S3 URI: %(uri)s" #, python-format msgid "Badly formed credentials '%(creds)s' in Swift URI" msgstr "Fehlerhafter Berechtigungsnachweis '%(creds)s' in Swift-URI" msgid "Badly formed credentials in Swift URI." msgstr "Fehlerhafter Berechtigungsnachweis in Swift-URI." msgid "Body expected in request." msgstr "Text in Anforderung erwartet." msgid "Cannot be a negative value" msgstr "Darf kein negativer Wert sein" msgid "Cannot be a negative value." msgstr "Darf kein negativer Wert sein." #, python-format msgid "Cannot convert image %(key)s '%(value)s' to an integer." msgstr "" "Abbild %(key)s '%(value)s' kann nicht in eine Ganzzahl konvertiert werden. " msgid "Cannot remove last location in the image." msgstr "Die letzte Position im Abbild kann nicht entfernt werden. " #, python-format msgid "Cannot save data for image %(image_id)s: %(error)s" msgstr "" "Daten für Abbild %(image_id)s können nicht gespeichert werden: %(error)s" msgid "Cannot set locations to empty list." msgstr "Positionen können nicht auf leere Liste gesetzt werden. " msgid "Cannot upload to an unqueued image" msgstr "" "Hochladen auf Abbild, das sich nicht in Warteschlange befindet, nicht möglich" #, python-format msgid "Checksum verification failed. Aborted caching of image '%s'." msgstr "" "Verifizierung von Kontrollsumme fehlgeschlagen. Zwischenspeichern von Image " "'%s' abgebrochen." 
msgid "Client disconnected before sending all data to backend" msgstr "" "Die Verbindung zum Client wurde beendet, bevor alle Daten zum Backend " "geschickt wurden" msgid "Command not found" msgstr "Kommando nicht gefunden" msgid "Configuration option was not valid" msgstr "Konfigurationsoption war nicht gültig" #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "" "Verbindungsfehler/fehlerhafte Anforderung an Authentifizierungsservice unter " "URL %(url)s." #, python-format msgid "Constructed URL: %s" msgstr "Erstellte URL: %s" msgid "Container format is not specified." msgstr "Containerformat wurde nicht angegeben." msgid "Content-Type must be application/octet-stream" msgstr "Inhaltstyp muss Anwendungs-/Oktet-Stream sein" #, python-format msgid "Corrupt image download for image %(image_id)s" msgstr "Fehlerhafter Abbild-Download für Abbild %(image_id)s" #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgstr "" "Keine Bindung an %(host)s:%(port)s möglich nach Versuch über 30 Sekunden" msgid "Could not find OVF file in OVA archive file." msgstr "Es wurde keine OVF-Datei in der OVA-Archivdatei gefunden. 
" #, python-format msgid "Could not find metadata object %s" msgstr "Metadatenobjekt %s konnte nicht gefunden werden" #, python-format msgid "Could not find metadata tag %s" msgstr "Metadatenschlagwort %s konnte nicht gefunden werden" #, python-format msgid "Could not find namespace %s" msgstr "Namensbereich %s konnte nicht gefunden werden" #, python-format msgid "Could not find property %s" msgstr "Eigenschaft %s konnte nicht gefunden werden" msgid "Could not find required configuration option" msgstr "Erforderliche Konfigurationsoption konnte nicht gefunden werden" #, python-format msgid "Could not find task %s" msgstr "Task %s konnte nicht gefunden werden" #, python-format msgid "Could not update image: %s" msgstr "Abbild konnte nicht aktualisiert werden: %s" msgid "Currently, OVA packages containing multiple disk are not supported." msgstr "Zurzeit werden OVA-Pakete mit mehreren Platten nicht unterstützt. " #, python-format msgid "Data for image_id not found: %s" msgstr "Daten für image_id nicht gefunden: %s" msgid "Data supplied was not valid." msgstr "Angegebene Daten waren nicht gültig." 
msgid "Date and time of image member creation" msgstr "Datum und Uhrzeit der Erstellung des Abbildelements" msgid "Date and time of image registration" msgstr "Datum und Uhrzeit der Abbildregistrierung " msgid "Date and time of last modification of image member" msgstr "Datum und Uhrzeit der letzten Änderung des Abbildelements" msgid "Date and time of namespace creation" msgstr "Datum und Uhrzeit der Erstellung des Namensbereichs" msgid "Date and time of object creation" msgstr "Datum und Uhrzeit der Objekterstellung" msgid "Date and time of resource type association" msgstr "Datum und Uhrzeit der Ressourcentypzuordnung" msgid "Date and time of tag creation" msgstr "Datum und Uhrzeit der Erstellung des Schlagwortes" msgid "Date and time of the last image modification" msgstr "Datum und Uhrzeit der letzten Abbildänderung" msgid "Date and time of the last namespace modification" msgstr "Datum und Uhrzeit der letzten Änderung des Namensbereichs" msgid "Date and time of the last object modification" msgstr "Datum und Uhrzeit der letzten Objektänderung" msgid "Date and time of the last resource type association modification" msgstr "Datum und Uhrzeit der letzten Änderung der Ressourcentypzuordnung" msgid "Date and time of the last tag modification" msgstr "Datum und Uhrzeit der letzten Schlagwortänderung" msgid "Datetime when this resource was created" msgstr "Datum/Uhrzeit der Erstellung dieser Ressource" msgid "Datetime when this resource was updated" msgstr "Datum/Uhrzeit der Aktualisierung dieser Ressource" msgid "Datetime when this resource would be subject to removal" msgstr "Datum/Uhrzeit, zu dem/der diese Ressource entfernt werden würde" #, python-format msgid "Denying attempt to upload image because it exceeds the quota: %s" msgstr "" "Versuch, das Abbild hochzuladen, wird verweigert, weil es das Kontingent " "überschreitet: %s" #, python-format msgid "Denying attempt to upload image larger than %d bytes." 
msgstr "" "Versuch, Abbild hochzuladen, das größer ist als %d Bytes, wird nicht " "zugelassen." msgid "Descriptive name for the image" msgstr "Beschreibender Name für das Abbild" msgid "Disk format is not specified." msgstr "Plattenformat wurde nicht angegeben." #, python-format msgid "" "Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s" msgstr "" "Treiber %(driver_name)s konnte nicht ordnungsgemäß konfiguriert werden. " "Grund: %(reason)s" msgid "" "Error decoding your request. Either the URL or the request body contained " "characters that could not be decoded by Glance" msgstr "" "Fehler beim Entschlüsseln Ihrer Anforderung. Entweder die URL oder der " "angeforderte Body enthalten Zeichen, die von Glance nicht entschlüsselt " "werden konnten. " #, python-format msgid "Error fetching members of image %(image_id)s: %(inner_msg)s" msgstr "" "Fehler beim Abrufen der Mitglieder von Abbild %(image_id)s: %(inner_msg)s" msgid "Error in store configuration. Adding images to store is disabled." msgstr "" "Fehler in Speicherkonfiguration. Hinzufügen von Abbildern zu Speicher ist " "inaktiviert." msgid "Expected a member in the form: {\"member\": \"image_id\"}" msgstr "" "Mitglied mit Angabe im folgenden Format erwartet: {\"member\": \"image_id\"}" msgid "Expected a status in the form: {\"status\": \"status\"}" msgstr "" "Status mit Angabe im folgenden Format erwartet: {\"status\": \"status\"}" msgid "External source should not be empty" msgstr "Externe Quelle darf nicht leer sein." #, python-format msgid "External sources are not supported: '%s'" msgstr "Externe Quellen werden nicht unterstützt: '%s'" #, python-format msgid "Failed to activate image. Got error: %s" msgstr "Abbild wurde nicht aktiviert. Fehler: %s" #, python-format msgid "Failed to add image metadata. Got error: %s" msgstr "Abbildmetadaten wurden nicht hinzugefügt. 
Fehler: %s" #, python-format msgid "Failed to find image %(image_id)s to delete" msgstr "Zu löschendes Abbild %(image_id)s wurde nicht gefunden" #, python-format msgid "Failed to find image to delete: %s" msgstr "Zu löschendes Abbild wurde nicht gefunden: %s" #, python-format msgid "Failed to find image to update: %s" msgstr "Zu aktualisierendes Abbild wurde nicht gefunden: %s" #, python-format msgid "Failed to find resource type %(resourcetype)s to delete" msgstr "Zu löschender Ressourcentyp %(resourcetype)s wurde nicht gefunden" #, python-format msgid "Failed to initialize the image cache database. Got error: %s" msgstr "" "Die Image-Zwischenspeicherdatenbank wurde nicht initialisiert. Fehler: %s" #, python-format msgid "Failed to read %s from config" msgstr "Fehler beim Lesen von %s aus Konfiguration" #, python-format msgid "Failed to reserve image. Got error: %s" msgstr "Abbild wurde nicht reserviert. Fehler: %s" #, python-format msgid "Failed to update image metadata. Got error: %s" msgstr "Abbildmetadaten wurden nicht aktualisiert. Fehler: %s" #, python-format msgid "Failed to upload image %s" msgstr "Fehler beim Hochladen des Abbildes %s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to HTTP error: " "%(error)s" msgstr "" "Fehler beim Hochladen von Abbilddaten für Abbild %(image_id)s wegen HTTP-" "Fehler: %(error)s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to internal error: " "%(error)s" msgstr "" "Fehler beim Hochladen der Abbilddaten für das Abbild %(image_id)s auf Grund " "eines internen Fehlers: %(error)s" #, python-format msgid "File %(path)s has invalid backing file %(bfile)s, aborting." msgstr "Datei %(path)s hat ungültige Sicherungsdatei %(bfile)s. Abbruch." msgid "" "File based imports are not allowed. Please use a non-local source of image " "data." msgstr "" "Dateibasierte Importe sind nicht zulässig. Verwenden Sie eine " "Imagedatenquelle, die nicht lokal ist." 
msgid "Forbidden image access" msgstr "Unzulässiger Zugriff auf Abbild" #, python-format msgid "Forbidden to delete a %s image." msgstr "Es ist nicht erlaubt, ein %s Abbild zu löschen." #, python-format msgid "Forbidden to delete image: %s" msgstr "Löschen von Abbild nicht erlaubt: %s" #, python-format msgid "Forbidden to modify '%(key)s' of %(status)s image." msgstr "Es ist nicht erlaubt, '%(key)s' des %(status)s-Abbilds zu ändern." #, python-format msgid "Forbidden to modify '%s' of image." msgstr "Ändern von '%s' eines Abbilds nicht erlaubt." msgid "Forbidden to reserve image." msgstr "Reservieren von Abbild nicht erlaubt." msgid "Forbidden to update deleted image." msgstr "Aktualisieren von gelöschtem Abbild nicht erlaubt." #, python-format msgid "Forbidden to update image: %s" msgstr "Aktualisieren von Abbild nicht erlaubt: %s" #, python-format msgid "Forbidden upload attempt: %s" msgstr "Unerlaubter Uploadversuch: %s" #, python-format msgid "Forbidding request, metadata definition namespace=%s is not visible." msgstr "" "Anforderung wird verboten, Metadatendefinitionsnamensbereich %s ist nicht " "sichtbar. " #, python-format msgid "Forbidding request, task %s is not visible" msgstr "Anforderung wird nicht zugelassen, Task %s ist nicht sichtbar" msgid "Format of the container" msgstr "Format des Containers" msgid "Format of the disk" msgstr "Format der Festplatte" #, python-format msgid "Host \"%s\" is not valid." msgstr "Host \"%s\" ist nicht gültig." #, python-format msgid "Host and port \"%s\" is not valid." msgstr "Host und Port \"%s\" ist nicht gültig." msgid "" "Human-readable informative message only included when appropriate (usually " "on failure)" msgstr "" "Informationsnachricht in Klarschrift nur eingeschlossen, wenn zweckdienlich " "(in der Regel bei einem Fehler)" msgid "If true, image will not be deletable." msgstr "Bei 'true' kann das Abbild nicht gelöscht werden." msgid "If true, namespace will not be deletable."
msgstr "Bei 'true' kann der Namensbereich nicht gelöscht werden." #, python-format msgid "Image %(id)s could not be deleted because it is in use: %(exc)s" msgstr "" "Abbild %(id)s konnte nicht gelöscht werden, da es verwendet wird: %(exc)s" #, python-format msgid "Image %(id)s not found" msgstr "Abbild %(id)s nicht gefunden" #, python-format msgid "" "Image %(image_id)s could not be found after upload. The image may have been " "deleted during the upload: %(error)s" msgstr "" "Abbild %(image_id)s wurde nach dem Upload nicht gefunden. Das Abbild wurde " "möglicherweise während des Uploads gelöscht: %(error)s" #, python-format msgid "Image %(image_id)s is protected and cannot be deleted." msgstr "Abbild %(image_id)s ist geschützt und kann nicht gelöscht werden." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload, cleaning up the chunks uploaded." msgstr "" "Abbild %s konnte nach dem Upload nicht gefunden werden. Das Abbild wurde " "möglicherweise beim Upload gelöscht. Die hochgeladenen Blöcke werden " "bereinigt." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload." msgstr "" "Abbild %s konnte nach dem Hochladen nicht gefunden werden. Das Abbild ist " "möglicherweise beim Hochladen gelöscht worden." #, python-format msgid "Image %s is deactivated" msgstr "Abbild %s ist deaktiviert" #, python-format msgid "Image %s is not active" msgstr "Abbild %s ist nicht aktiv" #, python-format msgid "Image %s not found." msgstr "Abbild %s nicht gefunden." #, python-format msgid "Image exceeds the storage quota: %s" msgstr "Das Abbild übersteigt das vorhandene Speicherkontingent: %s" msgid "Image id is required." msgstr "Abbild-ID ist erforderlich." 
msgid "Image is protected" msgstr "Abbild ist geschützt" #, python-format msgid "Image member limit exceeded for image %(id)s: %(e)s:" msgstr "Grenzwert für Abbildmitglieder für Abbild %(id)s überschritten: %(e)s:" #, python-format msgid "Image name too long: %d" msgstr "Abbildname zu lang: %d" msgid "Image operation conflicts" msgstr "Abbildoperationskonflikte" #, python-format msgid "" "Image status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "Abbild-Statusänderung von %(cur_status)s nach %(new_status)s ist nicht " "erlaubt" #, python-format msgid "Image storage media is full: %s" msgstr "Datenträger zum Speichern des Abbildes ist voll: %s" #, python-format msgid "Image tag limit exceeded for image %(id)s: %(e)s:" msgstr "Grenzwert für Abbildschlagwort für Abbild %(id)s überschritten: %(e)s:" #, python-format msgid "Image upload problem: %s" msgstr "Problem beim Abbildupload: %s" #, python-format msgid "Image with identifier %s already exists!" msgstr "Abbild mit ID %s ist bereits vorhanden!" #, python-format msgid "Image with identifier %s has been deleted." msgstr "Abbild mit ID %s wurde gelöscht. " #, python-format msgid "Image with identifier %s not found" msgstr "Abbild mit ID %s nicht gefunden" #, python-format msgid "Image with the given id %(image_id)s was not found" msgstr "Abbild mit der angegebenen ID %(image_id)s wurde nicht gefunden" #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "" "Falsche Authentifizierungsstrategie. 
Erwartet wurde \"%(expected)s\", " "empfangen wurde jedoch \"%(received)s\"" #, python-format msgid "Incorrect request: %s" msgstr "Falsche Anforderung: %s" #, python-format msgid "Input does not contain '%(key)s' field" msgstr "Eingabe enthält nicht das Feld '%(key)s' " #, python-format msgid "Insufficient permissions on image storage media: %s" msgstr "Nicht ausreichende Berechtigungen auf Abbildspeichermedien: %s" #, python-format msgid "Invalid JSON pointer for this resource: '/%s'" msgstr "Ungültiger JSON Zeiger für diese Ressource: : '/%s'" #, python-format msgid "Invalid checksum '%s': can't exceed 32 characters" msgstr "Ungültige Kontrollsumme '%s': Darf 32 Zeichen nicht überschreiten" msgid "Invalid configuration in glance-swift conf file." msgstr "Ungültige Konfiguration in der Glance-Swift-Konfigurationsdatei." msgid "Invalid configuration in property protection file." msgstr "Ungültige Konfiguration in Eigenschaftsschutzdatei. " #, python-format msgid "Invalid container format '%s' for image." msgstr "Ungültiges Containerformat '%s' für Abbild." #, python-format msgid "Invalid content type %(content_type)s" msgstr "Ungültiger Inhaltstyp %(content_type)s" #, python-format msgid "Invalid disk format '%s' for image." msgstr "Ungültiges Plattenformat '%s' für Abbild." #, python-format msgid "Invalid filter value %s. The quote is not closed." msgstr "Ungültiger Filterwert %s. Das schließende Anführungszeichen fehlt." #, python-format msgid "" "Invalid filter value %s. There is no comma after closing quotation mark." msgstr "" "Ungültiger Filterwert %s. Vor dem schließenden Anführungszeichen ist kein " "Komma." #, python-format msgid "" "Invalid filter value %s. There is no comma before opening quotation mark." msgstr "" "Ungültiger Filterwert %s. Vor dem öffnenden Anführungszeichen ist kein Komma." 
msgid "Invalid image id format" msgstr "Ungültiges Abbild-ID-Format" msgid "Invalid location" msgstr "Ungültige Position" #, python-format msgid "Invalid location %s" msgstr "Ungültige Position %s" #, python-format msgid "Invalid location: %s" msgstr "Ungültiger Ort: %s" #, python-format msgid "" "Invalid location_strategy option: %(name)s. The valid strategy option(s) " "is(are): %(strategies)s" msgstr "" "Ungültige location_strategy Option: %(name)s. Gültige Optionen sind: " "%(strategies)s" msgid "Invalid locations" msgstr "Ungültige Positionen" #, python-format msgid "Invalid locations: %s" msgstr "Ungültige Stellen: %s" msgid "Invalid marker format" msgstr "Ungültiges Markerformat" msgid "Invalid marker. Image could not be found." msgstr "Ungültiger Marker. Abbild konnte nicht gefunden werden." #, python-format msgid "Invalid membership association: %s" msgstr "Ungültige Mitgliedschaftszuordnung: %s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Ungültige Kombination von Platten- und Containerformaten. Beim Festlegen " "eines Platten- oder Containerformats auf 'aki', 'ari' oder 'ami' müssen die " "Container- und Plattenformate übereinstimmen." #, python-format msgid "" "Invalid operation: `%(op)s`. It must be one of the following: %(available)s." msgstr "" "Ungültige Operation: '%(op)s'. Es muss eine der folgenden Optionen verwendet " "werden: %(available)s." msgid "Invalid position for adding a location." msgstr "Ungültige Position zum Hinzufügen einer Position." msgid "Invalid position for removing a location." msgstr "Ungültige Stelle zum Entfernen einer Position." msgid "Invalid service catalog json." msgstr "Ungültige Servicekatalog-JSON." #, python-format msgid "Invalid sort direction: %s" msgstr "Ungültige Sortierrichtung: %s" #, python-format msgid "" "Invalid sort key: %(sort_key)s. 
It must be one of the following: " "%(available)s." msgstr "" "Ungültiger Sortierschlüssel: %(sort_key)s. Es muss einer der folgenden sein: " "%(available)s." #, python-format msgid "Invalid status value: %s" msgstr "Ungültiger Statuswert: %s" #, python-format msgid "Invalid status: %s" msgstr "Ungültiger Status: %s" #, python-format msgid "Invalid time format for %s." msgstr "Ungültiges Zeitformat für %s." #, python-format msgid "Invalid type value: %s" msgstr "Ungültiger Wert für Typ: %s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition namespace " "with the same name of %s" msgstr "" "Ungültige Aktualisierung. Sie würde zu einem doppelten " "Metadatendefinitionsnamensbereich mit demselben Namen wie %s führen" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Ungültige Aktualisierung. Sie würde zu einem doppelten " "Metadatendefinitionsobjekt mit demselben Namen %(name)s im Namensbereich " "%(namespace_name)s führen." #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Ungültige Aktualisierung. Sie würde zu einem doppelten " "Metadatendefinitionsobjekt mit demselben Namen %(name)s im Namensbereich " "%(namespace_name)s führen." #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition property " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Ungültige Aktualisierung. Sie würde zu einer doppelten " "Metadatendefinitionseigenschaft mit demselben Namen %(name)s im " "Namensbereich %(namespace_name)s führen. 
" #, python-format msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s" msgstr "Ungültiger Wert '%(value)s' für Parameter '%(param)s': %(extra_msg)s" #, python-format msgid "Invalid value for option %(option)s: %(value)s" msgstr "Ungültiger Wert für Option %(option)s: %(value)s" #, python-format msgid "Invalid visibility value: %s" msgstr "Ungültiger Sichtbarkeitswert: %s" msgid "It's invalid to provide multiple image sources." msgstr "Die Angabe von mehreren Abbildquellen ist ungültig." msgid "It's not allowed to add locations if locations are invisible." msgstr "" "Es ist nicht zulässig, Positionen hinzuzufügen, wenn die Positionen nicht " "sichtbar sind. " msgid "It's not allowed to remove locations if locations are invisible." msgstr "" "Es ist nicht zulässig, Positionen zu entfernen, wenn die Positionen nicht " "sichtbar sind. " msgid "It's not allowed to update locations if locations are invisible." msgstr "" "Es ist nicht zulässig, Positionen zu aktualisieren, wenn die Positionen " "nicht sichtbar sind. " msgid "List of strings related to the image" msgstr "Liste mit dem Abbild zugehörigen Zeichenketten" msgid "Malformed JSON in request body." msgstr "Fehlerhafte JSON in Anforderungshauptteil." msgid "Maximal age is count of days since epoch." msgstr "Das maximale Alter entspricht der Anzahl von Tagen seit der Epoche." #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "Das Maximum an Umleitungen (%(redirects)s) wurde überschritten." #, python-format msgid "Member %(member_id)s is duplicated for image %(image_id)s" msgstr "Mitglied %(member_id)s ist für Abbild %(image_id)s doppelt vorhanden" msgid "Member can't be empty" msgstr "Mitglied darf nicht leer sein" msgid "Member to be added not specified" msgstr "Hinzuzufügendes Element nicht angegeben" msgid "Membership could not be found." msgstr "Mitgliedschaft konnte nicht gefunden werden." 
#, python-format msgid "" "Metadata definition namespace %(namespace)s is protected and cannot be " "deleted." msgstr "" "Der Metadatendefinitionsnamensbereich %(namespace)s ist geschützt und kann " "nicht gelöscht werden." #, python-format msgid "Metadata definition namespace not found for id=%s" msgstr "Metadatendefinitionsnamensbereich für id=%s nicht gefunden" #, python-format msgid "" "Metadata definition object %(object_name)s is protected and cannot be " "deleted." msgstr "" "Das Metadatendefinitionsobjekt %(object_name)s ist geschützt und kann nicht " "gelöscht werden." #, python-format msgid "Metadata definition object not found for id=%s" msgstr "Metadatendefinitionsobjekt für id=%s nicht gefunden" #, python-format msgid "" "Metadata definition property %(property_name)s is protected and cannot be " "deleted." msgstr "" "Die Metadatendefinitionseigenschaft %(property_name)s ist geschützt und kann " "nicht gelöscht werden. " #, python-format msgid "Metadata definition property not found for id=%s" msgstr "Metadatendefinitionseigenschaft für id=%s nicht gefunden" #, python-format msgid "" "Metadata definition resource-type %(resource_type_name)s is a seeded-system " "type and cannot be deleted." msgstr "" "Der Ressourcentyp %(resource_type_name)s der Metadatendefinition ist ein " "Basisdaten-Systemtyp und kann nicht gelöscht werden. " #, python-format msgid "" "Metadata definition resource-type-association %(resource_type)s is protected " "and cannot be deleted." msgstr "" "Die Ressourcentypzuordnung %(resource_type)s der Metadatendefinition ist " "geschützt und kann nicht gelöscht werden." #, python-format msgid "" "Metadata definition tag %(tag_name)s is protected and cannot be deleted." msgstr "" "Der Metadatendefinitionstag %(tag_name)s ist geschützt und kann nicht " "gelöscht werden." #, python-format msgid "Metadata definition tag not found for id=%s" msgstr "Metadatendefinitionstag für id=%s nicht gefunden" msgid "Minimal rows limit is 1." 
msgstr "Der Wert für die Mindestzeilenanzahl ist 1." #, python-format msgid "Missing required credential: %(required)s" msgstr "Erforderlicher Berechtigungsnachweis fehlt: %(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "Mehrere 'image'-Serviceübereinstimmungen für Region %(region)s. Dies weist " "im Allgemeinen darauf hin, dass eine Region erforderlich ist und dass Sie " "keine angegeben haben." msgid "No authenticated user" msgstr "Kein authentifizierter Benutzer" #, python-format msgid "No image found with ID %s" msgstr "Es wurde kein Abbild mit der ID %s gefunden" #, python-format msgid "No location found with ID %(loc)s from image %(img)s" msgstr "Keine Position mit ID %(loc)s von Abbild %(img)s gefunden" msgid "No permission to share that image" msgstr "Keine Berechtigung, dieses Abbild freizugeben" #, python-format msgid "Not allowed to create members for image %s." msgstr "Es ist nicht zulässig, Mitglieder für Abbild %s zu erstellen." #, python-format msgid "Not allowed to deactivate image in status '%s'" msgstr "Deaktivieren des Abbildes im Status '%s' nicht zulässig" #, python-format msgid "Not allowed to delete members for image %s." msgstr "Es ist nicht zulässig, Mitglieder für Abbild %s zu löschen." #, python-format msgid "Not allowed to delete tags for image %s." msgstr "Es ist nicht zulässig, Schlagwörter für Abbild %s zu löschen." #, python-format msgid "Not allowed to list members for image %s." msgstr "Es ist nicht zulässig, Mitglieder für Abbild %s aufzulisten." #, python-format msgid "Not allowed to reactivate image in status '%s'" msgstr "Erneutes Aktivieren des Abbildes im Status '%s' nicht zulässig" #, python-format msgid "Not allowed to update members for image %s." msgstr "Es ist nicht zulässig, Mitglieder für Abbild %s zu aktualisieren." #, python-format msgid "Not allowed to update tags for image %s." 
msgstr "Es ist nicht zulässig, Schlagwörter für Abbild %s zu aktualisieren." #, python-format msgid "Not allowed to upload image data for image %(image_id)s: %(error)s" msgstr "" "Hochladen von Abbilddaten für Abbild %(image_id)s nicht zulässig: %(error)s" msgid "Number of sort dirs does not match the number of sort keys" msgstr "" "Die Anzahl der Sortierverzeichnisse entspricht nicht der Anzahl der " "Sortierschlüssel" msgid "OVA extract is limited to admin" msgstr "OVA-Extraktion kann nur vom Administrator ausgeführt werden." msgid "Old and new sorting syntax cannot be combined" msgstr "Die alte und die neue Sortiersyntax können nicht kombiniert werden" #, python-format msgid "Operation \"%s\" requires a member named \"value\"." msgstr "Operation \"%s\" erfordert ein Element mit der Bezeichnung \"value\"." msgid "" "Operation objects must contain exactly one member named \"add\", \"remove\", " "or \"replace\"." msgstr "" "Operationsobjekte müssen genau ein Element mit der Bezeichnung \"add\", " "\"remove\" oder \"replace\" enthalten." msgid "" "Operation objects must contain only one member named \"add\", \"remove\", or " "\"replace\"." msgstr "" "Operationsobjekte dürfen nur ein Element mit der Bezeichnung \"add\", " "\"remove\" oder \"replace\" enthalten." msgid "Operations must be JSON objects." msgstr "Operationen müssen JSON-Objekte sein." #, python-format msgid "Original locations is not empty: %s" msgstr "Originalpositionen sind nicht leer: %s" msgid "Owner can't be updated by non admin." msgstr "" "Eigner kann durch einen Benutzer, der kein Administrator ist, nicht " "aktualisiert werden." msgid "Owner must be specified to create a tag." msgstr "Der Eigentümer muss zum Erstellen eines Schlagwortes angegeben werden." msgid "Owner of the image" msgstr "Eigentümer des Abbildes" msgid "Owner of the namespace." msgstr "Eigentümer des Namensbereichs. " msgid "Param values can't contain 4 byte unicode." 
msgstr "Parameterwerte dürfen kein 4-Byte-Unicode enthalten." #, python-format msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence." msgstr "" "Zeiger `%s` enthält \"~\", das nicht Teil einer erkannten Escapezeichenfolge " "ist." #, python-format msgid "Pointer `%s` contains adjacent \"/\"." msgstr "Der Zeiger `%s` enthält ein angrenzendes \"/\"." #, python-format msgid "Pointer `%s` does not contains valid token." msgstr "Der Zeiger `%s` enthält kein gültiges Token." #, python-format msgid "Pointer `%s` does not start with \"/\"." msgstr "Zeiger `%s` beginnt nicht mit \"/\"." #, python-format msgid "Pointer `%s` end with \"/\"." msgstr "Der Zeiger `%s` endet mit einem \"/\"." #, python-format msgid "Port \"%s\" is not valid." msgstr "Port \"%s\" ist nicht gültig." #, python-format msgid "Process %d not running" msgstr "Prozess %d wird nicht ausgeführt" #, python-format msgid "Properties %s must be set prior to saving data." msgstr "Eigenschaften %s müssen vor dem Speichern von Daten festgelegt werden." #, python-format msgid "" "Property %(property_name)s does not start with the expected resource type " "association prefix of '%(prefix)s'." msgstr "" "Eigenschaft %(property_name)s beginnt nicht mit dem erwarteten " "Zuordnungspräfix für Ressourcentypen '%(prefix)s'." #, python-format msgid "Property %s already present." msgstr "Eigenschaft %s ist bereits vorhanden." #, python-format msgid "Property %s does not exist." msgstr "Eigenschaft %s ist nicht vorhanden." #, python-format msgid "Property %s may not be removed." msgstr "Eigenschaft %s darf nicht entfernt werden." #, python-format msgid "Property %s must be set prior to saving data." msgstr "Eigenschaft %s muss vor dem Speichern von Daten festgelegt werden." #, python-format msgid "Property '%s' is protected" msgstr "Eigenschaft '%s' ist geschützt" msgid "Property names can't contain 4 byte unicode." msgstr "Eigenschaftsnamen dürfen kein 4-Byte-Unicode enthalten." 
#, python-format msgid "" "Provided image size must match the stored image size. (provided size: " "%(ps)d, stored size: %(ss)d)" msgstr "" "Die angegebene Abbildgröße muss der gespeicherten Abbildgröße entsprechen. " "(angegebene Größe: %(ps)d, gespeicherte Größe: %(ss)d)" #, python-format msgid "Provided object does not match schema '%(schema)s': %(reason)s" msgstr "Angegebenes Objekt passt nicht zu Schema '%(schema)s': %(reason)s" #, python-format msgid "Provided status of task is unsupported: %(status)s" msgstr "Der angegebene Status der Task wird nicht unterstützt: %(status)s" #, python-format msgid "Provided type of task is unsupported: %(type)s" msgstr "Der angegebene Typ der Task wird nicht unterstützt: %(type)s" msgid "Provides a user friendly description of the namespace." msgstr "" "Stellt eine benutzerfreundliche Beschreibung des Namensbereichs bereit. " msgid "Received invalid HTTP redirect." msgstr "Ungültige HTTP-Umleitung erhalten." #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "Umleitung auf %(uri)s für Autorisierung." #, python-format msgid "Registry service can't use %s" msgstr "Registrierungsdienst kann %s nicht verwenden" #, python-format msgid "Registry was not configured correctly on API server. Reason: %(reason)s" msgstr "" "Registrierungsdatenbank wurde nicht ordnungsgemäß auf einem API-Server " "konfiguriert. Grund: %(reason)s" #, python-format msgid "Reload of %(serv)s not supported" msgstr "Erneutes Laden von %(serv)s nicht unterstützt" #, python-format msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "%(serv)s (PID %(pid)s) wird mit Signal (%(sig)s) erneut geladen" #, python-format msgid "Removing stale pid file %s" msgstr "Veraltete PID-Datei %s wird entfernt" msgid "Request body must be a JSON array of operation objects." msgstr "" "Anforderungshauptteil muss eine JSON-Array mit Operationsobjekten sein." 
msgid "Request must be a list of commands" msgstr "Die Anfrage muss eine Liste von Kommandos sein" #, python-format msgid "Required store %s is invalid" msgstr "Der verlangte Speicher %s ist ungültig" msgid "" "Resource type names should be aligned with Heat resource types whenever " "possible: http://docs.openstack.org/developer/heat/template_guide/openstack." "html" msgstr "" "Ressourcentypennamen sollten möglichst immer an den Heat-Ressourcentypen " "ausgerichtet werden: http://docs.openstack.org/developer/heat/template_guide/" "openstack.html" msgid "Response from Keystone does not contain a Glance endpoint." msgstr "Antwort von Keystone enthält keinen Glance-Endpunkt." msgid "Scope of image accessibility" msgstr "Umfang der Abbildzugänglichkeit" msgid "Scope of namespace accessibility." msgstr "Umfang der Zugänglichkeit des Namensbereichs. " #, python-format msgid "Server %(serv)s is stopped" msgstr "Server %(serv)s wurde gestoppt" #, python-format msgid "Server worker creation failed: %(reason)s." msgstr "Erstellung von Server-Worker fehlgeschlagen: %(reason)s." msgid "Signature verification failed" msgstr "Signaturverifizierung fehlgeschlagen" msgid "Size of image file in bytes" msgstr "Größe der Abbilddatei in Byte " msgid "" "Some resource types allow more than one key / value pair per instance. For " "example, Cinder allows user and image metadata on volumes. Only the image " "properties metadata is evaluated by Nova (scheduling or drivers). This " "property allows a namespace target to remove the ambiguity." msgstr "" "Bei manchen Ressourcentypen sind mehrere Schlüssel/Wert-Paare pro Instanz " "zulässig. Cinder lässt z. B. Benutzer- und Abbildmetadaten für Datenträger " "zu. Nur die Metadaten der Imageeigenschaften werden von Nova ausgewertet " "(Planung oder Treiber). Diese Eigenschaft lässt zu, dass ein " "Namensbereichsziel die Mehrdeutigkeit entfernt. " msgid "Sort direction supplied was not valid." 
msgstr "Die angegebene Sortierrichtung war nicht gültig. " msgid "Sort key supplied was not valid." msgstr "Der angegebene Sortierschlüssel war nicht gültig. " msgid "" "Specifies the prefix to use for the given resource type. Any properties in " "the namespace should be prefixed with this prefix when being applied to the " "specified resource type. Must include prefix separator (e.g. a colon :)." msgstr "" "Gibt das Präfix an, das für den angegebenen Ressourcentyp zu verwenden ist. " "Alle Eigenschaften im Namensbereich sollten dieses Präfix aufweisen, wenn " "sie auf den angegebenen Ressourcentyp angewendet werden. Muss " "Präfixtrennzeichen aufweisen (z. B. einen Doppelpunkt :)." msgid "Status must be \"pending\", \"accepted\" or \"rejected\"." msgstr "Status muss \"pending\", \"accepted\" oder \"rejected\" sein." msgid "Status not specified" msgstr "Status nicht angegeben" msgid "Status of the image" msgstr "Status des Abbildes" #, python-format msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "Der Statusübergang von %(cur_status)s zu %(new_status)s ist nicht zulässig" #, python-format msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "%(serv)s (PID %(pid)s) wird mit Signal (%(sig)s) gestoppt" #, python-format msgid "Store for image_id not found: %s" msgstr "Speicher für image_id nicht gefunden: %s" #, python-format msgid "Store for scheme %s not found" msgstr "Speicher für Schema %s nicht gefunden" #, python-format msgid "" "Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image status to 'killed'." msgstr "" "Angaben für %(attr)s (%(supplied)s) und %(attr)s, die aus dem hochgeladenen " "Abbild (%(actual)s) generiert wurden, stimmten nicht überein. Abbildstatus " "wird auf 'killed' gesetzt." 
msgid "Supported values for the 'container_format' image attribute" msgstr "Unterstützte Werte für das 'container_format' Abbild-Attribut" msgid "Supported values for the 'disk_format' image attribute" msgstr "Unterstützte Werte für das Abbildattribut 'disk_format'" #, python-format msgid "Suppressed respawn as %(serv)s was %(rsn)s." msgstr "Erneute Generierung wurde unterdrückt, da %(serv)s %(rsn)s war." msgid "System SIGHUP signal received." msgstr "System-SIGHUP-Signal empfangen. " #, python-format msgid "Task '%s' is required" msgstr "Task '%s' ist erforderlich" msgid "Task does not exist" msgstr "Task ist nicht vorhanden" msgid "Task failed due to Internal Error" msgstr "Task fehlgeschlagen. Grund: Interner Fehler" msgid "Task was not configured properly" msgstr "Die Task war nicht ordnungsgemäß konfiguriert" #, python-format msgid "Task with the given id %(task_id)s was not found" msgstr "Die Task mit der angegebenen ID %(task_id)s wurde nicht gefunden" msgid "The \"changes-since\" filter is no longer available on v2." msgstr "Der Filter \"changes-since\" ist bei Version 2 nicht mehr verfügbar." #, python-format msgid "The CA file you specified %s does not exist" msgstr "" "Die von Ihnen angegebene Zertifizierungsstellendatei %s ist nicht vorhanden" #, python-format msgid "" "The Image %(image_id)s object being created by this task %(task_id)s, is no " "longer in valid status for further processing." msgstr "" "Das Objekt von Abbild %(image_id)s, das von Task %(task_id)s erstellt wurde, " "befindet sich nicht mehr in einem gültigen Status zur weiteren Verarbeitung." msgid "The Store URI was malformed." msgstr "Die Speicher-URI war fehlerhaft." msgid "" "The URL to the keystone service. If \"use_user_token\" is not in effect and " "using keystone auth, then URL of keystone can be specified." msgstr "" "Die URL des Keystone-Service. 
Wenn \"use_user_token\" nicht wirksam ist und " "die Keystone-Authentifizierung verwendet wird, kann die Keystone-URL " "angegeben werden. " msgid "" "The administrators password. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "Das Administratorkennwort. Wenn \"use_user_token\" nicht wirksam ist, können " "Berechtigungsnachweise für den Administrator angegeben werden. " msgid "" "The administrators user name. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "Der Administratorname. Wenn \"use_user_token\" nicht wirksam ist, können " "Berechtigungsnachweise für den Administrator angegeben werden. " #, python-format msgid "The cert file you specified %s does not exist" msgstr "Die von Ihnen angegebene Zertifizierungsdatei %s ist nicht vorhanden" msgid "The current status of this task" msgstr "Der aktuelle Status dieser Task" #, python-format msgid "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." msgstr "" "Das Gerät, auf dem sich das Abbild-Zwischenspeicherverzeichnis " "%(image_cache_dir)s befindet, unterstützt xattr nicht. Wahrscheinlich müssen " "Sie fstab bearbeiten und die Option user_xattr zur entsprechenden Zeile für " "das Gerät, auf dem sich das Zwischenspeicherverzeichnis befindet, hinzufügen." #, python-format msgid "" "The given uri is not valid. Please specify a valid uri from the following " "list of supported uri %(supported)s" msgstr "" "Der angegebene URI ist ungültig. Geben Sie einen gültigen URI aus der " "folgenden Liste mit unterstützten URIs %(supported)s an." 
#, python-format msgid "The incoming image is too large: %s" msgstr "Das eingehende Abbild ist zu groß: %s" #, python-format msgid "The key file you specified %s does not exist" msgstr "Die von Ihnen angegebene Schlüsseldatei %s ist nicht vorhanden" #, python-format msgid "" "The limit has been exceeded on the number of allowed image locations. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Der Grenzwert für die zulässige Anzahl an Abbildpositionen wurde " "überschritten. Versucht: %(attempted)s, Maximum: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Der Grenzwert für die zulässige Anzahl an Abbildmitgliedern wurde für dieses " "Abbild überschritten. Versucht: %(attempted)s, Maximum: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Der Grenzwert für die zulässige Anzahl an Abbildeigenschaften wurde " "überschritten. Versucht: %(attempted)s, Maximum: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(num)s, Maximum: %(quota)s" msgstr "" "Der Grenzwert für die zulässige Anzahl an Abbildeigenschaften wurde " "überschritten. Versucht: %(num)s, Maximum: %(quota)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image tags. Attempted: " "%(attempted)s, Maximum: %(maximum)s" msgstr "" "Der Grenzwert für die zulässige Anzahl an Abbildschlagwörtern wurde " "überschritten. 
Versucht: %(attempted)s, Maximum: %(maximum)s" #, python-format msgid "The location %(location)s already exists" msgstr "Die Position %(location)s ist bereits vorhanden" #, python-format msgid "The location data has an invalid ID: %d" msgstr "Die Position weist eine ungültige ID auf: %d" #, python-format msgid "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. Other records still refer to it." msgstr "" "Die Metadatendefinition %(record_type)s namens %(record_name)s wurde nicht " "gelöscht. Andere Datensätze verweisen noch darauf. " #, python-format msgid "The metadata definition namespace=%(namespace_name)s already exists." msgstr "" "Der Metadatendefinitionsnamensbereich %(namespace_name)s ist bereits " "vorhanden. " #, python-format msgid "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." msgstr "" "Das Metadatendefinitionsobjekt namens %(object_name)s wurde in Namensbereich " "%(namespace_name)s nicht gefunden. " #, python-format msgid "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." msgstr "" "Die Metadatendefinitionseigenschaft namens %(property_name)s wurde nicht in " "Namensbereich %(namespace_name)s gefunden. " #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." msgstr "" "Die Ressourcentypzuordnung der Metadatendefinition zwischen Ressourcentyp " "%(resource_type_name)s und Namensbereich %(namespace_name)s ist bereits " "vorhanden." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." msgstr "" "Die Ressourcentypzuordnung der Metadatendefinition zwischen Ressourcentyp " "%(resource_type_name)s und Namensbereich %(namespace_name)s wurde nicht " "gefunden." 
#, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "" "Der Ressourcentyp %(resource_type_name)s der Metadatendefinition wurde nicht " "gefunden. " #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "Der Metadatendefinitionstag namens %(name)s wurde in Namensbereich " "%(namespace_name)s nicht gefunden." msgid "The parameters required by task, JSON blob" msgstr "Die für die Task erforderlichen Parameter, JSON-Blob-Objekt" msgid "The provided image is too large." msgstr "Das angegebene Abbild ist zu groß." msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." msgstr "" "Die Region für den Authentifizierungsservice. Wenn \"use_user_token\" nicht " "wirksam ist und die Keystone-Authentifizierung verwendet wird, kann der " "Regionsname angegeben werden. " msgid "The request returned 500 Internal Server Error." msgstr "" "Die Anforderung hat eine Nachricht vom Typ '500 - interner Serverfehler' " "zurückgegeben." msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." msgstr "" "Die Anforderung hat eine Nachricht vom Typ '503 - Service nicht verfügbar' " "zurückgegeben. Dies geschieht im Allgemeinen bei einer Serviceüberbelastung " "oder einem anderen temporären Ausfall." #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "Die Anforderung hat eine Nachricht vom Typ '302 - Mehrere Möglichkeiten' " "zurückgegeben. 
Dies weist im Allgemeinen darauf hin, dass Sie bei einem " "Anfrage-URI keinen Versionsindikator angegeben haben.\n" "\n" "Nachrichtentext der zurückgegebenen Antwort:\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "Die Anforderung hat eine Nachricht vom Typ '413 - Anforderungsentität zu " "groß' zurückgegeben. Dies weist im Allgemeinen darauf hin, dass die " "Geschwindigkeitsbegrenzung oder ein Kontingentschwellenwert überschritten " "wurde.\n" "\n" "Der Antworttext:\n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "Die Anforderung hat einen unerwarteten Status zurückgegeben: %(status)s.\n" "\n" "Der Antworttext:\n" "%(body)s" msgid "" "The requested image has been deactivated. Image data download is forbidden." msgstr "" "Das angeforderte Abbild wurde deaktiviert. Der Download von Abbilddaten ist " "nicht zulässig. " msgid "The result of current task, JSON blob" msgstr "Das Ergebnis der aktuellen Task, JSON-Blob-Objekt" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." msgstr "" "Die Größe der Daten %(image_size)s wird den Grenzwert überschreiten. " "%(remaining)s Byte verbleiben." 
#, python-format msgid "The specified member %s could not be found" msgstr "Das angegebene Mitglied %s konnte nicht gefunden werden" #, python-format msgid "The specified metadata object %s could not be found" msgstr "Das angegebene Metadatenobjekt %s konnte nicht gefunden werden" #, python-format msgid "The specified metadata tag %s could not be found" msgstr "Das angegebene Metadatenschlagwort %s konnte nicht gefunden werden" #, python-format msgid "The specified namespace %s could not be found" msgstr "Der angegebene Namensbereich %s konnte nicht gefunden werden" #, python-format msgid "The specified property %s could not be found" msgstr "Die angegebene Eigenschaft %s konnte nicht gefunden werden" #, python-format msgid "The specified resource type %s could not be found " msgstr "Der angegebene Ressourcentyp %s konnte nicht gefunden werden" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgstr "" "Der Status der Position des gelöschten Abbildes kann nur auf " "'pending_delete' oder auf 'deleted' gesetzt werden." msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgstr "" "Der Status der Position des gelöschten Abbildes kann nur auf 'pending_delete' " "oder auf 'deleted' gesetzt werden." msgid "The status of this image member" msgstr "Der Status dieses Abbildelements" msgid "" "The strategy to use for authentication. If \"use_user_token\" is not in " "effect, then auth strategy can be specified." msgstr "" "Die für die Authentifizierung zu verwendende Strategie. Wenn \"use_user_token" "\" nicht wirksam ist, kann die Authentifizierungsstrategie angegeben werden. " #, python-format msgid "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgstr "" "Das Zielmitglied %(member_id)s ist dem Abbild %(image_id)s bereits " "zugeordnet." msgid "" "The tenant name of the administrative user. 
If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgstr "" "Der Nutzername des Benutzers mit Verwaltungsaufgaben. Wenn \"use_user_token" "\" nicht wirksam ist, kann der Administratornutzername angegeben werden. " msgid "The type of task represented by this content" msgstr "Der Typ der durch diesen Inhalt dargestellten Task" msgid "The unique namespace text." msgstr "Der eindeutige Text für den Namensbereich. " msgid "The user friendly name for the namespace. Used by UI if available." msgstr "" "Der benutzerfreundliche Name für den Namensbereich. Wird von der " "Benutzerschnittstelle verwendet, falls verfügbar. " #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "Es ist ein Problem bei %(error_key_name)s %(error_filename)s aufgetreten. " "Überprüfen Sie dies. Fehler: %(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "Es ist ein Problem bei %(error_key_name)s %(error_filename)s aufgetreten. " "Überprüfen Sie dies. OpenSSL-Fehler: %(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "Es gibt ein Problem mit Ihrem Schlüsselpaar. Überprüfen Sie, ob das " "Zertifikat %(cert_file)s und der Schlüssel %(key_file)s zusammengehören. " "OpenSSL-Fehler %(ce)s" msgid "There was an error configuring the client." msgstr "Fehler bei Konfiguration des Clients." msgid "There was an error connecting to a server" msgstr "Fehler beim Herstellen einer Verbindung zu einem Server." msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "Diese Operation ist derzeit bei Glance-Tasks nicht zulässig. 
Sie " "werden bei Erreichen der in der Eigenschaft 'expires_at' festgelegten Zeit " "automatisch gelöscht." msgid "This operation is currently not permitted on Glance images details." msgstr "Diese Operation ist derzeit bei Glance-Abbilddetails nicht zulässig." msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "" "Zeit in Stunden, für die eine Task anschließend aktiv bleibt, entweder bei " "Erfolg oder bei Fehlschlag" msgid "Too few arguments." msgstr "Zu wenige Argumente." msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "URI kann nicht mehrere Vorkommen eines Schemas enthalten. Wenn Sie einen URI " "wie swift://user:pass@http://authurl.com/v1/container/obj angegeben haben, " "müssen Sie ihn ändern, um das swift+http://-Schema verwenden zu können. " "Beispiel: swift+http://user:pass@authurl.com/v1/container/obj" msgid "URL to access the image file kept in external store" msgstr "URL für den Zugriff auf Abbilddatei in externem Speicher" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "PID-Datei %(pid)s kann nicht erstellt werden. Wird nicht als Root " "ausgeführt?\n" "Es wird auf eine temporäre Datei zurückgegriffen; Sie können den Dienst " "%(service)s stoppen mithilfe von:\n" " %(file)s %(server)s stop --pid-file %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "Filtern mit dem unbekannten Operator '%s' nicht möglich." msgid "Unable to filter on a range with a non-numeric value." msgstr "Filtern in einem Bereich mit nicht numerischem Wert nicht möglich." 
msgid "Unable to filter on a unknown operator." msgstr "Filtern mit einem unbekannten Operator nicht möglich." msgid "Unable to filter using the specified operator." msgstr "Filtern mit dem angegebenen Operator nicht möglich." msgid "Unable to filter using the specified range." msgstr "Filtern mit dem angegebenen Bereich nicht möglich." #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "'%s' kann in JSON-Schemaänderung nicht gefunden werden" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "" "'op' wurde in JSON-Schemaänderung nicht gefunden. Es muss eine der folgenden " "Optionen verwendet werden: %(available)s." msgid "Unable to increase file descriptor limit. Running as non-root?" msgstr "" "Grenzwert für Dateideskriptoren kann nicht erhöht werden. Wird nicht als " "Root ausgeführt?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "%(app_name)s kann nicht aus Konfigurationsdatei %(conf_file)s geladen " "werden.\n" "Abgerufen: %(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "Schema kann nicht geladen werden: %(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "Konfigurationsdatei zum Einfügen für %s konnte nicht gefunden werden." #, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "" "Hochladen von doppelten Abbilddaten für Abbild %(image_id)s nicht möglich: " "%(error)s" msgid "Unauthorized image access" msgstr "Unautorisierter Abbildzugriff" msgid "Unexpected body type. Expected list/dict." msgstr "Unerwarteter Hauptteiltyp. Erwartet wurde list/dict." 
#, python-format msgid "Unexpected response: %s" msgstr "Unerwartete Antwort: %s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "Unbekannte Authentifizierungsstrategie '%s'" #, python-format msgid "Unknown command: %s" msgstr "Unbekanntes Kommando: %s" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Unbekannte Sortierrichtung; muss 'desc' oder 'asc' sein" msgid "Unrecognized JSON Schema draft version" msgstr "Unerkannte JSON-Schemaentwurfsversion" msgid "Unrecognized changes-since value" msgstr "Unerkannter Wert für 'changes-since'" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "Nicht unterstützter Wert für 'sort_dir'. Zulässige Werte: %s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "Nicht unterstützter Wert für 'sort_key'. Zulässige Werte: %s" msgid "Virtual size of image in bytes" msgstr "Virtuelle Größe des Abbildes in Byte" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "" "Es wurde 15 Sekunden auf den Abbruch von PID %(pid)s (%(file)s) gewartet; " "Abbruch" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Wenn der Server im SSL-Modus läuft, müssen Sie sowohl für die 'cert_file'- " "als auch für die 'key_file'-Option in Ihrer Konfigurationsdatei einen Wert " "angeben" msgid "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." msgstr "" "Gibt an, ob das Benutzertoken durchlaufen werden soll, wenn Anforderungen an " "die Registry gesendet werden. Um Fehler mit dem Ablauf des Tokens beim " "Hochladen von großen Dateien zu verhindern, wird empfohlen, diesen Parameter " "auf False festzulegen. 
Wenn \"use_user_token\" nicht wirksam ist, können " "Berechtigungsnachweise für den Administrator angegeben werden." #, python-format msgid "Wrong command structure: %s" msgstr "Falsche Kommandostruktur: %s" msgid "You are not authenticated." msgstr "Sie sind nicht authentifiziert." msgid "You are not authorized to complete this action." msgstr "Sie sind nicht dazu autorisiert, diese Aktion abzuschließen." #, python-format msgid "You are not authorized to lookup image %s." msgstr "Sie sind nicht berechtigt, Abbild %s zu suchen." #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "Sie sind nicht berechtigt, die Mitglieder des Abbildes %s zu suchen." #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "" "Sie können keine Schlagwörter in Namensbereichen erstellen, die '%s' gehören." msgid "You are not permitted to create image members for the image." msgstr "Sie können keine Abbildelemente für das Abbild erstellen." #, python-format msgid "You are not permitted to create images owned by '%s'." msgstr "Sie können keine Abbilder erstellen, die '%s' gehören." #, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "Sie können keine Namensbereiche erstellen, die '%s' gehören." #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "Sie können keine Objekte erstellen, die '%s' gehören." #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "Sie können keine Eigenschaften erstellen, die '%s' gehören." #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "Sie können keinen resource_type erstellen, der '%s' gehört." #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "Sie können diese Task nicht mit dem Eigentümer %s erstellen" msgid "You are not permitted to deactivate this image." 
msgstr "Sie können dieses Abbild nicht deaktivieren. " msgid "You are not permitted to delete this image." msgstr "Sie können dieses Abbild nicht löschen." msgid "You are not permitted to delete this meta_resource_type." msgstr "Sie können diesen meta_resource_type nicht löschen." msgid "You are not permitted to delete this namespace." msgstr "Sie können diesen Namensbereich nicht löschen." msgid "You are not permitted to delete this object." msgstr "Sie können dieses Objekt nicht löschen." msgid "You are not permitted to delete this property." msgstr "Sie können diese Eigenschaft nicht löschen." msgid "You are not permitted to delete this tag." msgstr "Sie können dieses Schlagwort nicht löschen." #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "" "Sie haben keine Berechtigung, um '%(attr)s' für %(resource)s zu ändern." #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "Sie können '%s' bei diesem Abbild nicht ändern." msgid "You are not permitted to modify locations for this image." msgstr "Sie können Positionen für dieses Abbild nicht ändern." msgid "You are not permitted to modify tags on this image." msgstr "Sie können Schlagwörter bei diesem Abbild nicht ändern." msgid "You are not permitted to modify this image." msgstr "Sie können dieses Abbild nicht ändern." msgid "You are not permitted to reactivate this image." msgstr "Sie können dieses Abbild nicht erneut aktivieren. " msgid "You are not permitted to set status on this task." msgstr "Sie können den Status für diese Task nicht festlegen. " msgid "You are not permitted to update this namespace." msgstr "Sie können diesen Namensbereich nicht aktualisieren. " msgid "You are not permitted to update this object." msgstr "Sie können dieses Objekt nicht aktualisieren. " msgid "You are not permitted to update this property." msgstr "Sie können diese Eigenschaft nicht aktualisieren. " msgid "You are not permitted to update this tag." 
msgstr "Sie können dieses Schlagwort nicht aktualisieren." msgid "You are not permitted to upload data for this image." msgstr "Sie können keine Daten für dieses Abbild hochladen." #, python-format msgid "You cannot add image member for %s" msgstr "Hinzufügen von Abbildelement für %s nicht möglich" #, python-format msgid "You cannot delete image member for %s" msgstr "Löschen von Abbildelement für %s nicht möglich" #, python-format msgid "You cannot get image member for %s" msgstr "Abrufen von Abbildelement für %s nicht möglich" #, python-format msgid "You cannot update image member %s" msgstr "Aktualisieren von Abbildelement %s nicht möglich" msgid "You do not own this image" msgstr "Sie sind nicht Eigner dieses Images" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "Sie haben sich dafür entschieden, SSL für die Verbindung zu verwenden, und " "Sie haben ein Zertifikat angegeben. Allerdings haben Sie weder einen " "key_file-Parameter angegeben noch die GLANCE_CLIENT_KEY_FILE-" "Umgebungsvariable festgelegt" msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "Sie haben sich dafür entschieden, SSL für die Verbindung zu verwenden, und " "Sie haben einen Schlüssel angegeben. 
Allerdings haben Sie weder einen " "cert_file-Parameter angegeben noch die GLANCE_CLIENT_CERT_FILE-" "Umgebungsvariable festgelegt" msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__() hat unerwartetes Schlüsselwortargument '%s' erhalten" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "" "Übergang von %(current)s zu %(next)s in Aktualisierung nicht möglich " "(gewünscht ist from_state=%(from)s)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "" "Benutzerdefinierte Eigenschaften (%(props)s) stehen im Konflikt mit " "Basiseigenschaften" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "" "Hub weder für Eventlet 'poll' noch für 'selects' ist auf dieser Plattform " "verfügbar" msgid "is_public must be None, True, or False" msgstr "'is_public' muss 'None', 'True' oder 'False' sein" msgid "limit param must be an integer" msgstr "'limit'-Parameter muss eine Ganzzahl sein" msgid "limit param must be positive" msgstr "'limit'-Parameter muss positiv sein" msgid "md5 hash of image contents." msgstr "md5-Hashwert von Abbildinhalten. " #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image() hat unerwartete Schlüsselwörter %s erhalten" msgid "protected must be True, or False" msgstr "'protected' muss 'True' oder 'False' sein" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "%(serv)s kann nicht gestartet werden. Fehler: %(e)s" #, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "x-openstack-request-id ist zu lang. Max. 
Größe %s" glance-16.0.0/glance/locale/it/0000775000175100017510000000000013245511661016163 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/it/LC_MESSAGES/0000775000175100017510000000000013245511661017750 5ustar zuulzuul00000000000000glance-16.0.0/glance/locale/it/LC_MESSAGES/glance.po0000666000175100017510000021514113245511421021541 0ustar zuulzuul00000000000000# Translations template for glance. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the glance project. # # Translators: # Andreas Jaeger , 2016. #zanata # KATO Tomoyuki , 2016. #zanata msgid "" msgstr "" "Project-Id-Version: glance 15.0.0.0b3.dev29\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-06-23 20:54+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-06-03 01:43+0000\n" "Last-Translator: KATO Tomoyuki \n" "Language: it\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Italian\n" #, python-format msgid "\t%s" msgstr "\t%s" #, python-format msgid "%(cls)s exception was raised in the last rpc call: %(val)s" msgstr "Eccezione %(cls)s generata nell'ultima chiamata rpc: %(val)s" #, python-format msgid "%(m_id)s not found in the member list of the image %(i_id)s." msgstr "%(m_id)s non trovato nell'elenco di membri dell'immagine %(i_id)s." #, python-format msgid "%(serv)s (pid %(pid)s) is running..." msgstr "%(serv)s (pid %(pid)s) in esecuzione..." #, python-format msgid "%(serv)s appears to already be running: %(pid)s" msgstr "%(serv)s sembra essere già in esecuzione: %(pid)s" #, python-format msgid "" "%(strategy)s is registered as a module twice. %(module)s is not being used." msgstr "" "%(strategy)s è registrato come modulo due volte. %(module)s non viene del " "provider." #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. 
Could not load the " "filesystem store" msgstr "" "%(task_id)s di %(task_type)s non configurato correttamente. Impossibile " "caricare l'archivio filesystem" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "%(task_id)s di %(task_type)s non configurato correttamente. Directory di " "lavoro mancante: %(work_dir)s" #, python-format msgid "%(verb)sing %(serv)s" msgstr "%(verb)sing %(serv)s" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "%(verb)s %(serv)s con %(conf)s" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s Specificare una coppia host:port in cui host è un indirizzo IPv4, un " "indirizzo IPv6, un nome host o un FQDN. Se si utilizza un indirizzo IPv6, " "racchiuderlo in parentesi separatamente dalla porta (ad esempio, \"[fe80::a:" "b:c]:9876\")." #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s non può contenere caratteri unicode a 4 byte." #, python-format msgid "%s is already stopped" msgstr "%s è già stato arrestato" #, python-format msgid "%s is stopped" msgstr "%s è stato arrestato" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "l'opzione --os_auth_url o la variabile d'ambiente OS_AUTH_URL sono " "obbligatorie quando è abilitata la strategia di autenticazione keystone\n" msgid "A body is not expected with this request." msgstr "Un corpo non è previsto con questa richiesta." #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Un oggetto della definizione di metadati con nome=%(object_name)s già " "esiste nello spazio dei nomi=%(namespace_name)s." 
#, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Una proprietà della definizione di metadati con nome=%(property_name)s già " "esiste nello spazio dei nomi=%(namespace_name)s." #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." msgstr "" "Un tipo-risorsa della definizione di metadati con nome=" "%(resource_type_name)s già esiste." msgid "A set of URLs to access the image file kept in external store" msgstr "" "Un insieme di URL per accedere al file di immagini conservato nell'archivio " "esterno" msgid "Amount of disk space (in GB) required to boot image." msgstr "Quantità di spazio su disco (in GB) richiesto per l'immagine di avvio." msgid "Amount of ram (in MB) required to boot image." msgstr "Quantità di ram (in MB) richiesta per l'immagine di avvio." msgid "An identifier for the image" msgstr "Un identificativo per l'immagine" msgid "An identifier for the image member (tenantId)" msgstr "Un identificativo per il membro dell'immagine (tenantId)" msgid "An identifier for the owner of this task" msgstr "Un identificativo del proprietario di questa attività" msgid "An identifier for the task" msgstr "Un identificativo per l'attività" msgid "An image file url" msgstr "Un URL al file di immagini" msgid "An image schema url" msgstr "Un URL allo schema di immagini" msgid "An image self url" msgstr "Un URL personale all'immagine" #, python-format msgid "An image with identifier %s already exists" msgstr "Un'immagine con identificativo %s già esiste" msgid "An import task exception occurred" msgstr "Si è verificata un'eccezione attività di importazione" msgid "An object with the same identifier already exists." msgstr "Già esiste un oggetto con lo stesso identificativo." msgid "An object with the same identifier is currently being operated on." 
msgstr "Un oggetto con lo stesso identificativo è attualmente in uso." msgid "An object with the specified identifier was not found." msgstr "Impossibile trovare un oggetto con l'identificativo specificato." msgid "An unknown exception occurred" msgstr "Si è verificata un'eccezione sconosciuta" msgid "An unknown task exception occurred" msgstr "Si è verificata un'eccezione attività sconosciuta" #, python-format msgid "Attempt to upload duplicate image: %s" msgstr "Tentativo di caricare un duplicato di immagine: %s" msgid "Attempted to update Location field for an image not in queued status." msgstr "" "Si è tentato di aggiornare il campo Ubicazione per un'immagine che non si " "trova nello stato accodato." #, python-format msgid "Attribute '%(property)s' is read-only." msgstr "Attributo '%(property)s' è di sola lettura." #, python-format msgid "Attribute '%(property)s' is reserved." msgstr "L'attributo '%(property)s' è riservato." #, python-format msgid "Attribute '%s' is read-only." msgstr "Attributo '%s' è di sola lettura." #, python-format msgid "Attribute '%s' is reserved." msgstr "L'attributo '%s' è riservato." msgid "Attribute container_format can be only replaced for a queued image." msgstr "" "L'attributo container_format può essere sostituito solo per un'immagine " "nella coda." msgid "Attribute disk_format can be only replaced for a queued image." msgstr "" "L'attributo disk_format può essere sostituito solo per un'immagine nella " "coda." #, python-format msgid "Auth service at URL %(url)s not found." msgstr "Servizio di autenticazione all'URL %(url)s non trovato." #, python-format msgid "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgstr "" "Errore di autenticazione - il token potrebbe essere scaduto durante il " "caricamento del file. Eliminazione dei dati dell'immagine per %s." msgid "Authorization failed." msgstr "Autorizzazione non riuscita." 
msgid "Available categories:" msgstr "Categorie disponibili:" #, python-format msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." msgstr "" "Formato filtro di query \"%s\" errato. Utilizzare la notazione ISO 8601 " "DateTime." #, python-format msgid "Bad Command: %s" msgstr "Comando non corretto: %s" #, python-format msgid "Bad header: %(header_name)s" msgstr "Intestazione non valida: %(header_name)s" #, python-format msgid "Bad value passed to filter %(filter)s got %(val)s" msgstr "Il valore non valido fornito al filtro %(filter)s ha riportato %(val)s" #, python-format msgid "Badly formed S3 URI: %(uri)s" msgstr "URI S3 formato in modo non corretto: %(uri)s" #, python-format msgid "Badly formed credentials '%(creds)s' in Swift URI" msgstr "Credenziali con formato non corretto %(creds)s' nell'URI Swift" msgid "Badly formed credentials in Swift URI." msgstr "Credenziali formate in modo non corretto nell'URI Swift." msgid "Body expected in request." msgstr "Corpo previsto nella richiesta." msgid "Cannot be a negative value" msgstr "Non può essere un valore negativo" msgid "Cannot be a negative value." msgstr "Non può essere un valore negativo." #, python-format msgid "Cannot convert image %(key)s '%(value)s' to an integer." msgstr "" "Impossibile convertire %(key)s dell'immagine '%(value)s' in un numero intero." msgid "Cannot remove last location in the image." msgstr "Impossibile rimuovere l'ultima ubicazione nell'immagine." #, python-format msgid "Cannot save data for image %(image_id)s: %(error)s" msgstr "Impossibile salvare i dati per l'immagine %(image_id)s: %(error)s" msgid "Cannot set locations to empty list." msgstr "Impossibile impostare le ubicazione nell'elenco vuoto." msgid "Cannot upload to an unqueued image" msgstr "Impossibile caricare un'immagine non accodata" #, python-format msgid "Checksum verification failed. Aborted caching of image '%s'." msgstr "" "Verifica checksum non riuscita. 
È stata interrotta la memorizzazione nella " "cache dell'immagine '%s'." msgid "Client disconnected before sending all data to backend" msgstr "Client disconnesso prima di inviare tutti i dati a backend" msgid "Command not found" msgstr "Comando non trovato" msgid "Configuration option was not valid" msgstr "Opzione di configurazione non valida" #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "" "Connetti richiesta/non corretta o in errore per il servizio di " "autenticazione all'URL %(url)s." #, python-format msgid "Constructed URL: %s" msgstr "URL costruita: %s" msgid "Container format is not specified." msgstr "Formato contenitore non specificato. " msgid "Content-Type must be application/octet-stream" msgstr "Tipo-contenuto deve essere application/octet-stream" #, python-format msgid "Corrupt image download for image %(image_id)s" msgstr "" "Esecuzione del download immagine danneggiato per l'immagine %(image_id)s" #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgstr "" "Impossibile collegarsi a %(host)s:%(port)s dopo aver tentato per 30 secondi" msgid "Could not find OVF file in OVA archive file." msgstr "Impossibile trovare il file OVD nel file di archivio OVA." 
#, python-format msgid "Could not find metadata object %s" msgstr "Impossibile trovare l'oggetto di metadati %s" #, python-format msgid "Could not find metadata tag %s" msgstr "Impossibile trovare il tag di metadati %s" #, python-format msgid "Could not find namespace %s" msgstr "Impossibile trovare lo spazio dei nomi %s" #, python-format msgid "Could not find property %s" msgstr "Impossibile trovare la proprietà %s" msgid "Could not find required configuration option" msgstr "Impossibile trovare l'opzione di configurazione richiesta" #, python-format msgid "Could not find task %s" msgstr "Impossibile trovare l'attività %s" #, python-format msgid "Could not update image: %s" msgstr "Impossibile aggiornare l'immagine: %s" msgid "Currently, OVA packages containing multiple disk are not supported." msgstr "" "Attualmente, i pacchetti OVA che contengono più dischi non sono supportati." #, python-format msgid "Data for image_id not found: %s" msgstr "Dati per image_id non trovati: %s" msgid "Data supplied was not valid." msgstr "I dati forniti non erano validi." 
msgid "Date and time of image member creation" msgstr "Data e ora di creazione del membro dell'immagine" msgid "Date and time of image registration" msgstr "Data e ora della registrazione dell'immagine" msgid "Date and time of last modification of image member" msgstr "Data e ora dell'ultima modifica del membro dell'immagine" msgid "Date and time of namespace creation" msgstr "Data ed ora della creazione dello spazio dei nomi" msgid "Date and time of object creation" msgstr "Data ed ora della creazione dell'oggetto" msgid "Date and time of resource type association" msgstr "Data ed ora dell'associazione del tipo di risorsa" msgid "Date and time of tag creation" msgstr "Data ed ora della creazione del tag" msgid "Date and time of the last image modification" msgstr "Data e ora dell'ultima modifica dell'immagine" msgid "Date and time of the last namespace modification" msgstr "Data ed ora dell'ultima modifica allo spazio dei nomi" msgid "Date and time of the last object modification" msgstr "Data ed ora dell'ultima modifica all'oggetto" msgid "Date and time of the last resource type association modification" msgstr "Data ed ora dell'ultima modifica all'associazione del tipo di risorsa" msgid "Date and time of the last tag modification" msgstr "Data ed ora dell'ultima modifica al tag" msgid "Datetime when this resource was created" msgstr "Data e ora in cui questa risorsa è stata creata" msgid "Datetime when this resource was updated" msgstr "Data e ora in cui questa risorsa è stata aggiornata" msgid "Datetime when this resource would be subject to removal" msgstr "Data e ora in cui questa risorsa verrà rimossa" #, python-format msgid "Denying attempt to upload image because it exceeds the quota: %s" msgstr "" "Rifiutato il tentativo di caricare l'immagine perché supera la quota: %s" #, python-format msgid "Denying attempt to upload image larger than %d bytes." msgstr "Divieto del tentativo di caricare un'immagine più grande di %d byte." 
msgid "Descriptive name for the image" msgstr "Nome descrittivo per l'immagine" msgid "Disk format is not specified." msgstr "Formato disco non specificato. " #, python-format msgid "" "Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s" msgstr "" "Impossibile configurare il driver %(driver_name)s correttamente. Motivo: " "%(reason)s" msgid "" "Error decoding your request. Either the URL or the request body contained " "characters that could not be decoded by Glance" msgstr "" "Errore di decodifica della richiesta. L'URL o il corpo della richiesta " "contengono caratteri che non possono essere decodificati da Glance" #, python-format msgid "Error fetching members of image %(image_id)s: %(inner_msg)s" msgstr "" "Errore durante il recupero dei membri immagine %(image_id)s: %(inner_msg)s" msgid "Error in store configuration. Adding images to store is disabled." msgstr "" "Errore nella configurazione dell'archivio. L'aggiunta di immagini a questo " "archivio non è consentita." msgid "Expected a member in the form: {\"member\": \"image_id\"}" msgstr "Previsto un membro nel formato: {\"member\": \"image_id\"}" msgid "Expected a status in the form: {\"status\": \"status\"}" msgstr "Previsto uno stato nel formato: {\"status\": \"status\"}" msgid "External source should not be empty" msgstr "L'origine esterna non deve essere vuota" #, python-format msgid "External sources are not supported: '%s'" msgstr "Le origini esterne non sono supportate: '%s'" #, python-format msgid "Failed to activate image. Got error: %s" msgstr "Attivazione immagine non riuscita. Ricevuto errore: %s" #, python-format msgid "Failed to add image metadata. Got error: %s" msgstr "Impossibile aggiungere metadati all'immagine. 
Ricevuto errore: %s" #, python-format msgid "Failed to find image %(image_id)s to delete" msgstr "Impossibile trovare l'immagine %(image_id)s da eliminare" #, python-format msgid "Failed to find image to delete: %s" msgstr "Impossibile trovare l'immagine da eliminare: %s" #, python-format msgid "Failed to find image to update: %s" msgstr "Impossibile trovare l'immagine da aggiornare: %s" #, python-format msgid "Failed to find resource type %(resourcetype)s to delete" msgstr "Impossibile trovare il tipo di risorsa %(resourcetype)s da eliminare" #, python-format msgid "Failed to initialize the image cache database. Got error: %s" msgstr "" "Impossibile inizializzare il database cache immagini. Errore ricevuto: %s" #, python-format msgid "Failed to read %s from config" msgstr "Impossibile leggere %s dalla configurazione" #, python-format msgid "Failed to reserve image. Got error: %s" msgstr "Impossibile prenotare l'immagine. Errore: %s" #, python-format msgid "Failed to update image metadata. Got error: %s" msgstr "Impossibile aggiornare i metadati immagine. Errore: %s" #, python-format msgid "Failed to upload image %s" msgstr "Caricamento immagine %s non riuscito" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to HTTP error: " "%(error)s" msgstr "" "Impossibile caricare i dati dell'immagine %(image_id)s a causa di un errore " "HTTP: %(error)s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to internal error: " "%(error)s" msgstr "" "Impossibile caricare i dati dell'immagine %(image_id)s a causa di un errore " "interno: %(error)s" #, python-format msgid "File %(path)s has invalid backing file %(bfile)s, aborting." msgstr "" "Il file %(path)s ha un file di backup %(bfile)s non valido, operazione " "interrotta." msgid "" "File based imports are not allowed. Please use a non-local source of image " "data." msgstr "" "Le importazioni basata su file non sono consentite. 
Utilizzare un'origine " "dati dell'immagine non locale." msgid "Forbidden image access" msgstr "Accesso all'immagine vietato" #, python-format msgid "Forbidden to delete a %s image." msgstr "Divieto di eliminare un'immagine %s." #, python-format msgid "Forbidden to delete image: %s" msgstr "Divieto di eliminare l'immagine: %s" #, python-format msgid "Forbidden to modify '%(key)s' of %(status)s image." msgstr "Divieto di modificare '%(key)s' dell'immagine %(status)s." #, python-format msgid "Forbidden to modify '%s' of image." msgstr "Divieto di modificare '%s' dell'immagine." msgid "Forbidden to reserve image." msgstr "Vietato prenotare l'immagine." msgid "Forbidden to update deleted image." msgstr "Divieto di aggiornare l'immagine eliminata." #, python-format msgid "Forbidden to update image: %s" msgstr "Divieto di aggiornare l'immagine: %s" #, python-format msgid "Forbidden upload attempt: %s" msgstr "Vietato tentativo di caricamento: %s" #, python-format msgid "Forbidding request, metadata definition namespace=%s is not visible." msgstr "" "Richiesta vietata, lo spazio dei nomi della definizione di metadati =%s non " "è visibile." #, python-format msgid "Forbidding request, task %s is not visible" msgstr "Richiesta vietata, l'attività %s non è visibile" msgid "Format of the container" msgstr "Formato del contenitore" msgid "Format of the disk" msgstr "Formato del disco" #, python-format msgid "Host \"%s\" is not valid." msgstr "L'host \"%s\" non è valido." #, python-format msgid "Host and port \"%s\" is not valid." msgstr "Host o porta \"%s\" non è valido." msgid "" "Human-readable informative message only included when appropriate (usually " "on failure)" msgstr "" "I messaggi informativi leggibili dall'utente sono inclusi solo se necessario " "(di solito in caso di errore)" msgid "If true, image will not be deletable." msgstr "Se true, l'immagine non sarà eliminabile." msgid "If true, namespace will not be deletable." 
msgstr "Se impostato su true, lo spazio dei nomi non sarà eliminabile." #, python-format msgid "Image %(id)s could not be deleted because it is in use: %(exc)s" msgstr "L'immagine %(id)s non può essere eliminata perché è in uso: %(exc)s" #, python-format msgid "Image %(id)s not found" msgstr "Immagine %(id)s non trovata" #, python-format msgid "" "Image %(image_id)s could not be found after upload. The image may have been " "deleted during the upload: %(error)s" msgstr "" "Impossibile trovare l'immagine %(image_id)s dopo il caricamento. L'immagine " "potrebbe essere stata eliminata durante il caricamento: %(error)s" #, python-format msgid "Image %(image_id)s is protected and cannot be deleted." msgstr "L'immagine %(image_id)s è protetta e non può essere eliminata." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload, cleaning up the chunks uploaded." msgstr "" "Impossibile trovare l'immagine %s dopo il caricamento. L'immagine potrebbe " "essere stata eliminata durante il caricamento. Eliminazione delle porzioni " "caricate." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload." msgstr "" "Impossibile trovare l'immagine %s dopo il caricamento. L'immagine potrebbe " "essere stata eliminata durante il caricamento." #, python-format msgid "Image %s is deactivated" msgstr "L'immagine %s è disattivata" #, python-format msgid "Image %s is not active" msgstr "L'immagine %s non è attiva" #, python-format msgid "Image %s not found." msgstr "Immagine %s non trovata." #, python-format msgid "Image exceeds the storage quota: %s" msgstr "L'immagine supera la quota di memoria: %s" msgid "Image id is required." msgstr "ID immagine obbligatorio." 
msgid "Image is protected" msgstr "L'immagine è protetta" #, python-format msgid "Image member limit exceeded for image %(id)s: %(e)s:" msgstr "" "Superato il limite del membro dell'immagine per l'immagine %(id)s: %(e)s:" #, python-format msgid "Image name too long: %d" msgstr "Il nome dell'immagine è troppo lungo: %d" msgid "Image operation conflicts" msgstr "L'operazione dell'immagine è in conflitto" #, python-format msgid "" "Image status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "Il passaggio di stato dell'immagine da %(cur_status)s a %(new_status)s non è " "consentito" #, python-format msgid "Image storage media is full: %s" msgstr "Il supporto di memorizzazione dell'immagine è pieno: %s" #, python-format msgid "Image tag limit exceeded for image %(id)s: %(e)s:" msgstr "Superato il limite di tag dell'immagine per l'immagine %(id)s: %(e)s:" #, python-format msgid "Image upload problem: %s" msgstr "Problemi nel caricamento dell'immagine: %s" #, python-format msgid "Image with identifier %s already exists!" msgstr "Immagine con identificativo %s già esiste!" #, python-format msgid "Image with identifier %s has been deleted." msgstr "L'immagine con identificativo %s è stata eliminata." 
#, python-format msgid "Image with identifier %s not found" msgstr "Impossibile trovare l'immagine con identificativo %s" #, python-format msgid "Image with the given id %(image_id)s was not found" msgstr "L'immagine con l'id fornito %(image_id)s non è stata trovata" #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "" "Strategia di autenticazione errata, previsto \"%(expected)s\" ma ricevuto " "\"%(received)s\"" #, python-format msgid "Incorrect request: %s" msgstr "Richiesta non corretta: %s" #, python-format msgid "Input does not contain '%(key)s' field" msgstr "L'input non contiene il campo '%(key)s'" #, python-format msgid "Insufficient permissions on image storage media: %s" msgstr "" "Autorizzazioni insufficienti sul supporto di memorizzazione immagini: %s" #, python-format msgid "Invalid JSON pointer for this resource: '/%s'" msgstr "Puntatore JSON non valido per questa risorsa: '/%s'" #, python-format msgid "Invalid checksum '%s': can't exceed 32 characters" msgstr "Checksum non valido '%s': non può superare 32 caratteri " msgid "Invalid configuration in glance-swift conf file." msgstr "Configurazione nel file di configurazione glance-swift non valida." msgid "Invalid configuration in property protection file." msgstr "Configurazione non valida nel file di protezione della proprietà." #, python-format msgid "Invalid container format '%s' for image." msgstr "Formato del contenitore '%s' non valido per l'immagine." #, python-format msgid "Invalid content type %(content_type)s" msgstr "Tipo contenuto non valido %(content_type)s" #, python-format msgid "Invalid disk format '%s' for image." msgstr "Formato del disco '%s' non valido per l'immagine." #, python-format msgid "Invalid filter value %s. The quote is not closed." msgstr "Valore filtro non valido %s. Le virgolette non sono chiuse." #, python-format msgid "" "Invalid filter value %s. There is no comma after closing quotation mark." 
msgstr "" "Valore filtro non valido %s. Non è presente una virgola prima delle " "virgolette di chiusura." #, python-format msgid "" "Invalid filter value %s. There is no comma before opening quotation mark." msgstr "" "Valore filtro non valido %s. Non è presente una virgola prima delle " "virgolette di apertura." msgid "Invalid image id format" msgstr "Formato ID immagine non valido" msgid "Invalid location" msgstr "Ubicazione non valida" #, python-format msgid "Invalid location %s" msgstr "Ubicazione non valida %s" #, python-format msgid "Invalid location: %s" msgstr "Ubicazione non valida: %s" #, python-format msgid "" "Invalid location_strategy option: %(name)s. The valid strategy option(s) " "is(are): %(strategies)s" msgstr "" "Opzione location_strategy non valida: %(name)s. Le opzioni strategia valide " "sono: %(strategies)s" msgid "Invalid locations" msgstr "Ubicazioni non valide" #, python-format msgid "Invalid locations: %s" msgstr "Ubicazioni non valide: %s" msgid "Invalid marker format" msgstr "Formato indicatore non valido" msgid "Invalid marker. Image could not be found." msgstr "Indicatore non valido. Impossibile trovare l'immagine." #, python-format msgid "Invalid membership association: %s" msgstr "Associazione di appartenenza non valida: %s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Combinazione di formati di disco e contenitore non valida. Quando si imposta " "un formato disco o contenitore in uno dei seguenti 'aki', 'ari' o 'ami', i " "formati contenitore e disco devono corrispondere." #, python-format msgid "" "Invalid operation: `%(op)s`. It must be one of the following: %(available)s." msgstr "" "Operazione non valida: `%(op)s`. Deve essere uno dei seguenti: %(available)s." msgid "Invalid position for adding a location." msgstr "Posizione non valida per l'aggiunta di una ubicazione." 
msgid "Invalid position for removing a location." msgstr "Posizione non valida per la rimozione di una ubicazione." msgid "Invalid service catalog json." msgstr "json del catalogo del servizio non è valido." #, python-format msgid "Invalid sort direction: %s" msgstr "Direzione ordinamento non valida: %s" #, python-format msgid "" "Invalid sort key: %(sort_key)s. It must be one of the following: " "%(available)s." msgstr "" "Chiave di ordinamento non valida: %(sort_key)s. Deve essere una delle " "seguenti: %(available)s." #, python-format msgid "Invalid status value: %s" msgstr "Valore di stato non valido: %s" #, python-format msgid "Invalid status: %s" msgstr "Stato non valido: %s" #, python-format msgid "Invalid time format for %s." msgstr "Formato ora non valido per %s." #, python-format msgid "Invalid type value: %s" msgstr "Valore di tipo non valido: %s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition namespace " "with the same name of %s" msgstr "" "Aggiornamento non valido. Potrebbe generare uno spazio dei nomi della " "definizione di metadati duplicato con lo stesso nome di %s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Aggiornamento non valido. Potrebbe generare un oggetto della definizione di " "metadati duplicato con lo stesso nome=%(name)s nello spazio dei nomi" "%(namespace_name)s." #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Aggiornamento non valido. Potrebbe generare un oggetto della definizione di " "metadati duplicato con lo stesso nome=%(name)s nello spazio dei nomi" "%(namespace_name)s." #, python-format msgid "" "Invalid update. 
It would result in a duplicate metadata definition property " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Aggiornamento non valido. Potrebbe generare uno spazio dei nomi della " "definizione di metadati duplicato con lo stesso nome=%(name)s nello spazio " "dei nomi=%(namespace_name)s." #, python-format msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s" msgstr "" "Valore '%(value)s' non valido per il parametro '%(param)s': %(extra_msg)s" #, python-format msgid "Invalid value for option %(option)s: %(value)s" msgstr "Valore non valido per l'opzione %(option)s: %(value)s" #, python-format msgid "Invalid visibility value: %s" msgstr "Valore visibilità non valido: %s" msgid "It's invalid to provide multiple image sources." msgstr "Non è valido per fornire più origini delle immagini." msgid "It's not allowed to add locations if locations are invisible." msgstr "" "Non è consentito aggiungere ubicazione se le ubicazioni sono invisibili." msgid "It's not allowed to remove locations if locations are invisible." msgstr "" "Non è consentito rimuovere ubicazioni se le ubicazioni sono invisibili." msgid "It's not allowed to update locations if locations are invisible." msgstr "Non è consentito caricare ubicazioni se le ubicazioni sono invisibili." msgid "List of strings related to the image" msgstr "Elenco di stringhe relative all'immagine" msgid "Malformed JSON in request body." msgstr "JSON non corretto nel corpo della richiesta." msgid "Maximal age is count of days since epoch." msgstr "L'età massima è il numero di giorni dal periodo." #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "Il numero massimo di rendirizzamenti (%(redirects)s) è stato superato." 
#, python-format msgid "Member %(member_id)s is duplicated for image %(image_id)s" msgstr "Il membro %(member_id)s è il duplicato dell'immagine %(image_id)s" msgid "Member can't be empty" msgstr "Il membro non può essere vuoto" msgid "Member to be added not specified" msgstr "Membro da aggiungere non specificato" msgid "Membership could not be found." msgstr "Impossibile trovare l'appartenenza." #, python-format msgid "" "Metadata definition namespace %(namespace)s is protected and cannot be " "deleted." msgstr "" "Lo spazio dei nomi della definizione di metadati %(namespace)s è protetto e " "non è possibile eliminarlo." #, python-format msgid "Metadata definition namespace not found for id=%s" msgstr "" "Lo spazio dei nomi della definizione dei metadati per l'id=%s non è stato " "trovato" #, python-format msgid "" "Metadata definition object %(object_name)s is protected and cannot be " "deleted." msgstr "" "L'oggetto di definizione di metadati %(object_name)s è protetto e non è " "possibile eliminarlo." #, python-format msgid "Metadata definition object not found for id=%s" msgstr "" "L'oggetto della definizione dei metadati per l'id=%s non è stato trovato" #, python-format msgid "" "Metadata definition property %(property_name)s is protected and cannot be " "deleted." msgstr "" "La proprietà della definizione di metadati %(property_name)s è protetta e " "non è possibile eliminarlo." #, python-format msgid "Metadata definition property not found for id=%s" msgstr "" "La proprietà della definizione dei metadati per l'id=%s non è stata trovata" #, python-format msgid "" "Metadata definition resource-type %(resource_type_name)s is a seeded-system " "type and cannot be deleted." msgstr "" "Il tipo-risorsa della definizione di metadati %(resource_type_name)s è un " "tipo inserito dalsistema e non è possibile eliminarlo." #, python-format msgid "" "Metadata definition resource-type-association %(resource_type)s is protected " "and cannot be deleted." 
msgstr "" "L'associazione-tipo-risorsa della definizione di metadati %(resource_type)s " "è protetta e non può essere eliminata." #, python-format msgid "" "Metadata definition tag %(tag_name)s is protected and cannot be deleted." msgstr "" "Il tag di definizione dei metadati %(tag_name)s è protetto e non può essere " "eliminato." #, python-format msgid "Metadata definition tag not found for id=%s" msgstr "Il tag di definizione dei metadati per l'id=%s non è stato trovato" msgid "Minimal rows limit is 1." msgstr "Il limite di righe minimo è 1." #, python-format msgid "Missing required credential: %(required)s" msgstr "Credenziale richiesta mancante: %(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "Il servizio 'immagine' multipla corrisponde nella regione %(region)s. Questo " "in genere significa che una regione è obbligatoria e non ne è stata fornita " "una." msgid "No authenticated user" msgstr "Nessun utente autenticato" #, python-format msgid "No image found with ID %s" msgstr "Nessuna immagine trovata con ID %s" #, python-format msgid "No location found with ID %(loc)s from image %(img)s" msgstr "" "Non è stata trovata nessuna ubicazione con ID %(loc)s dall'immagine %(img)s" msgid "No permission to share that image" msgstr "Nessuna autorizzazione per condividere tale immagine" #, python-format msgid "Not allowed to create members for image %s." msgstr "Non è consentito creare membri per l'immagine %s." #, python-format msgid "Not allowed to deactivate image in status '%s'" msgstr "Disattivazione dell'immagine in stato '%s' non consentita" #, python-format msgid "Not allowed to delete members for image %s." msgstr "Non è consentito eliminare i membri dell'immagine %s." #, python-format msgid "Not allowed to delete tags for image %s." msgstr "Non è consentito eliminare i tag dell'immagine %s." 
#, python-format msgid "Not allowed to list members for image %s." msgstr "Non è consentito elencare i membri dell'immagine %s." #, python-format msgid "Not allowed to reactivate image in status '%s'" msgstr "Riattivazione dell'immagine in stato '%s' non consentita" #, python-format msgid "Not allowed to update members for image %s." msgstr "Non è consentito aggiornare i membri dell'immagine %s." #, python-format msgid "Not allowed to update tags for image %s." msgstr "Non è consentito aggiornare i tag dell'immagine %s." #, python-format msgid "Not allowed to upload image data for image %(image_id)s: %(error)s" msgstr "" "Non è consentito caricare i dati dell'immagine per l'immagine %(image_id)s: " "%(error)s" msgid "Number of sort dirs does not match the number of sort keys" msgstr "" "Il numero di directory di ordinamento non corrisponde al numero di chiavi di " "ordinamento" msgid "OVA extract is limited to admin" msgstr "L'estrazione OVA è limitata all'amministratore" msgid "Old and new sorting syntax cannot be combined" msgstr "Impossibile combinare la nuova e la precedente sintassi di ordinamento" #, python-format msgid "Operation \"%s\" requires a member named \"value\"." msgstr "L'operazione \"%s\" richiede un membro denominato \"value\"." msgid "" "Operation objects must contain exactly one member named \"add\", \"remove\", " "or \"replace\"." msgstr "" "Gli oggetti dell'operazione devono contenere esattamente un membro " "denominato \"add\", \"remove\" o \"replace\"." msgid "" "Operation objects must contain only one member named \"add\", \"remove\", or " "\"replace\"." msgstr "" "Gli oggetti dell'operazione devono contenere solo un membro denominato \"add" "\", \" remove \" o \"replace\"." msgid "Operations must be JSON objects." msgstr "Le operazioni devono essere oggetti JSON." #, python-format msgid "Original locations is not empty: %s" msgstr "Le ubicazioni originali non sono vuote: %s" msgid "Owner can't be updated by non admin." 
msgstr "Il proprietario non può essere aggiornato da un non admin." msgid "Owner must be specified to create a tag." msgstr "Il proprietario deve specificare per creare un tag." msgid "Owner of the image" msgstr "Proprietario dell'immagine" msgid "Owner of the namespace." msgstr "Proprietario dello spazio dei nomi." msgid "Param values can't contain 4 byte unicode." msgstr "I valori dei parametri non possono contenere 4 byte unicode." #, python-format msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence." msgstr "" "Il puntatore `%s` contiene \"~\" che non fa parte di una sequenza escape " "riconosciuta." #, python-format msgid "Pointer `%s` contains adjacent \"/\"." msgstr "Il puntatore `%s` contiene l'adiacente \"/\"." #, python-format msgid "Pointer `%s` does not contains valid token." msgstr "Il puntatore `%s` non contiene token valido." #, python-format msgid "Pointer `%s` does not start with \"/\"." msgstr "Il puntatore `%s` non inizia con \"/\"." #, python-format msgid "Pointer `%s` end with \"/\"." msgstr "Il puntatore `%s` finisce con \"/\"." #, python-format msgid "Port \"%s\" is not valid." msgstr "La porta \"%s\" non è valida." #, python-format msgid "Process %d not running" msgstr "Il processo %d non è in esecuzione" #, python-format msgid "Properties %s must be set prior to saving data." msgstr "Le proprietà %s devono essere impostate prima di salvare i dati." #, python-format msgid "" "Property %(property_name)s does not start with the expected resource type " "association prefix of '%(prefix)s'." msgstr "" "La proprietà %(property_name)s non inizia con il prefisso di associazione " "del tipo di risorsa previsto '%(prefix)s'." #, python-format msgid "Property %s already present." msgstr "La proprietà %s è già presente." #, python-format msgid "Property %s does not exist." msgstr "La proprietà %s non esiste." #, python-format msgid "Property %s may not be removed." msgstr "La proprietà %s non può essere rimossa." 
#, python-format msgid "Property %s must be set prior to saving data." msgstr "La proprietà %s deve essere impostata prima di salvare i dati." #, python-format msgid "Property '%s' is protected" msgstr "La proprietà '%s' è protetta" msgid "Property names can't contain 4 byte unicode." msgstr "I nomi delle proprietà non possono contenere 4 byte unicode." #, python-format msgid "" "Provided image size must match the stored image size. (provided size: " "%(ps)d, stored size: %(ss)d)" msgstr "" "La dimensione dell'immagine fornita deve corrispondere alla dimensione " "dell'immagine memorizzata. (dimensione fornita: %(ps)d, dimensione " "memorizzata: %(ss)d)" #, python-format msgid "Provided object does not match schema '%(schema)s': %(reason)s" msgstr "L'oggetto fornito non corrisponde allo schema '%(schema)s': %(reason)s" #, python-format msgid "Provided status of task is unsupported: %(status)s" msgstr "Lo stato dell'attività fornito non è supportato: %(status)s" #, python-format msgid "Provided type of task is unsupported: %(type)s" msgstr "Il tipo dell'attività fornito non è supportato: %(type)s" msgid "Provides a user friendly description of the namespace." msgstr "Fornisce una semplice descrizione utente dello spazio dei nomi." msgid "Received invalid HTTP redirect." msgstr "Ricevuto un reindirizzamento HTTP non valido." #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "Reindirizzamento a %(uri)s per l'autorizzazione." #, python-format msgid "Registry service can't use %s" msgstr "Il servizio registro non può utilizzare %s" #, python-format msgid "Registry was not configured correctly on API server. Reason: %(reason)s" msgstr "" "Il registro non è stato configurato correttamente sul server API. 
Motivo: " "%(reason)s" #, python-format msgid "Reload of %(serv)s not supported" msgstr "Ricaricamento di %(serv)s non supportato" #, python-format msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "Ricaricamento %(serv)s (pid %(pid)s) con segnale(%(sig)s)" #, python-format msgid "Removing stale pid file %s" msgstr "Rimozione del file pid %s obsoleto in corso" msgid "Request body must be a JSON array of operation objects." msgstr "" "Il corpo della richiesta deve essere un array JSON degli oggetti " "dell'operazione." msgid "Request must be a list of commands" msgstr "La richiesta deve essere un elenco di comandi" #, python-format msgid "Required store %s is invalid" msgstr "Archivio richiesto %s non valido" msgid "" "Resource type names should be aligned with Heat resource types whenever " "possible: http://docs.openstack.org/developer/heat/template_guide/openstack." "html" msgstr "" "I nomi del tipo di risorsa devono essere allineati con i tipi di risorsa " "Heat quando possibile: http://docs.openstack.org/developer/heat/" "template_guide/openstack.html" msgid "Response from Keystone does not contain a Glance endpoint." msgstr "La risposta dal Keystone non contiene un endpoint Glance." msgid "Scope of image accessibility" msgstr "Ambito di accessibilità dell'immagine" msgid "Scope of namespace accessibility." msgstr "Ambito di accessibilità dello spazio dei nomi." #, python-format msgid "Server %(serv)s is stopped" msgstr "Il server %(serv)s è stato arrestato" #, python-format msgid "Server worker creation failed: %(reason)s." msgstr "Creazione dell'operatore server non riuscita: %(reason)s." msgid "Signature verification failed" msgstr "Verifica firma non riuscita" msgid "Size of image file in bytes" msgstr "Dimensione del file di immagine in byte" msgid "" "Some resource types allow more than one key / value pair per instance. For " "example, Cinder allows user and image metadata on volumes. 
Only the image " "properties metadata is evaluated by Nova (scheduling or drivers). This " "property allows a namespace target to remove the ambiguity." msgstr "" "Alcuni tipi di risorsa consentono più di una coppia chiave / valore per " "istanza. Ad esempio, Cinder consente metadati immagine ed utente sui " "volumi. Solo i metadati delle proprietà dell'immagine vengono valutati da " "Nova (pianificazione o driver). Questa proprietà consente una destinazione " "dello spazio dei nomi per eliminare l'ambiguità." msgid "Sort direction supplied was not valid." msgstr "La direzione di ordinamento fornita non è valida." msgid "Sort key supplied was not valid." msgstr "La chiave di ordinamento fornita non è valida." msgid "" "Specifies the prefix to use for the given resource type. Any properties in " "the namespace should be prefixed with this prefix when being applied to the " "specified resource type. Must include prefix separator (e.g. a colon :)." msgstr "" "Specifica il prefisso da utilizzare per il tipo di risorsa fornito. " "Qualsiasi proprietà nello spazio dei nomi deve essere preceduta da un " "prefisso quando viene applicata ad un tipo di risorsa specificato. Deve " "includere un separatore di prefisso (ad esempio due punti :)." msgid "Status must be \"pending\", \"accepted\" or \"rejected\"." msgstr "Lo stato deve essere \"pending\", \"accepted\" o \"rejected\"." 
msgid "Status not specified" msgstr "Stato non specificato" msgid "Status of the image" msgstr "Stato dell'immagine" #, python-format msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "Il passaggio di stato da %(cur_status)s a %(new_status)s non è consentito" #, python-format msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "Arresto di %(serv)s in corso (pid %(pid)s) con segnale(%(sig)s)" #, python-format msgid "Store for image_id not found: %s" msgstr "Archivio per image_id non trovato: %s" #, python-format msgid "Store for scheme %s not found" msgstr "Archivio per lo schema %s non trovato" #, python-format msgid "" "Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image status to 'killed'." msgstr "" "%(attr)s (%(supplied)s) e %(attr)s fornito e generato dall'immagine caricata " "(%(actual)s) non corrispondevano. Lo stato dell'immagine viene impostato su " "'killed'." msgid "Supported values for the 'container_format' image attribute" msgstr "Valori supportati per l'attributo di immagine 'container_format'" msgid "Supported values for the 'disk_format' image attribute" msgstr "Valori supportati per l'attributo di immagine 'disk_format'" #, python-format msgid "Suppressed respawn as %(serv)s was %(rsn)s." msgstr "Respawn soppresso come %(serv)s era %(rsn)s." msgid "System SIGHUP signal received." msgstr "Ricevuto segnale SIGHUP di sistema." 
#, python-format msgid "Task '%s' is required" msgstr "Attività '%s' obbligatoria" msgid "Task does not exist" msgstr "L'attività non esiste" msgid "Task failed due to Internal Error" msgstr "Attività non riuscita a causa di un errore interno" msgid "Task was not configured properly" msgstr "L'attività non è stata configurata correttamente" #, python-format msgid "Task with the given id %(task_id)s was not found" msgstr "L'attività con l'id fornito %(task_id)s non è stata trovata" msgid "The \"changes-since\" filter is no longer available on v2." msgstr "Il filtro \"changes-since\" non è più disponibile su v2." #, python-format msgid "The CA file you specified %s does not exist" msgstr "Il file CA specificato %s non esiste" #, python-format msgid "" "The Image %(image_id)s object being created by this task %(task_id)s, is no " "longer in valid status for further processing." msgstr "" "L'oggetto immagine %(image_id)s, in fase di creazione da questa attività " "%(task_id)s, non si trova più in uno stato che ne consenta ulteriori " "elaborazioni." msgid "The Store URI was malformed." msgstr "L'URI della memoria non era corretto." msgid "" "The URL to the keystone service. If \"use_user_token\" is not in effect and " "using keystone auth, then URL of keystone can be specified." msgstr "" "L'URL per il servizio keystone. Se \"use_user_token\" non è attiva e si " "utilizza l'auth keystone, è possibile specificare l'URL di keystone." msgid "" "The administrators password. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "La password degli amministratori. Se non è attiva \"use_user_token\", è " "possibile specificare le credenziali admin." msgid "" "The administrators user name. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "Il nome utente degli amministratori. Se non è attiva \"use_user_token\" , è " "possibile specificare le credenziali admin." 
#, python-format msgid "The cert file you specified %s does not exist" msgstr "Il file certificato specificato %s non esiste" msgid "The current status of this task" msgstr "Lo stato corrente di questa attività" #, python-format msgid "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." msgstr "" "L'unità in cui si trova la directory cache dell'immagine %(image_cache_dir)s " "non supporta xattr. Probabilmente è necessario modificare fstab e aggiungere " "l'opzione user_xattr nella riga appropriata per l'unità che ospita la " "directory cache." #, python-format msgid "" "The given uri is not valid. Please specify a valid uri from the following " "list of supported uri %(supported)s" msgstr "" "L'URI fornito non è valido. Specificare un URI valido dal seguente elenco di " "uri supportati %(supported)s" #, python-format msgid "The incoming image is too large: %s" msgstr "L'immagine in entrata è troppo grande: %s" #, python-format msgid "The key file you specified %s does not exist" msgstr "Il file chiave specificato %snon esiste" #, python-format msgid "" "The limit has been exceeded on the number of allowed image locations. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Il limite di ubicazioni immagine consentito è stato superato. Tentato: " "%(attempted)s, Massimo: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Il limite di membri dell'immagine consentito è stato superato in questa " "immagine. Tentato: %(attempted)s, Massimo: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. 
" "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "Il limite di proprietà immagine consentito è stato superato. Tentato: " "%(attempted)s, Massimo: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(num)s, Maximum: %(quota)s" msgstr "" "Il limite di proprietà immagine consentito è stato superato. Tentato: " "%(num)s, Massimo: %(quota)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image tags. Attempted: " "%(attempted)s, Maximum: %(maximum)s" msgstr "" "Il limite di tag immagine consentito è stato superato. Tentato: " "%(attempted)s, Massimo: %(maximum)s" #, python-format msgid "The location %(location)s already exists" msgstr "L'ubicazione %(location)s esiste già" #, python-format msgid "The location data has an invalid ID: %d" msgstr "I dati dell'ubicazione hanno un ID non valido: %d" #, python-format msgid "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. Other records still refer to it." msgstr "" "La definizione di metadati %(record_type)s con nome=%(record_name)s non è " "eliminata. Altri record ancora fanno riferimento a tale definizione." #, python-format msgid "The metadata definition namespace=%(namespace_name)s already exists." msgstr "" "Lo spazio dei nomi della definizione di metadati =%(namespace_name)s già " "esiste." #, python-format msgid "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." msgstr "" "L'oggetto della definizione di metadati con nome=%(object_name)s non è stato " "trovato nello spazio dei nomi=%(namespace_name)s." #, python-format msgid "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." msgstr "" "La proprietà della definizione di metadati con nome=%(property_name)s non è " "stata trovata nello spazio dei nomi=%(namespace_name)s." 
#, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." msgstr "" "L'associazione tipo-risorsa della definizione di metadati del tipo-risorsa=" "%(resource_type_name)s per lo spazio dei nomi=%(namespace_name)s già esiste." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." msgstr "" "L'associazione tipo-risorsa della definizione di metadati del tipo-risorsa=" "%(resource_type_name)s per lo spazio dei nomi=%(namespace_name)s, non è " "stata trovata." #, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "" "Il tipo-risorsa della definizione di metadati con nome=" "%(resource_type_name)s, non è stato trovato." #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "Il tag di definizione dei metadati con nome=%(name)s non è stato trovato " "nello spazio dei nomi=%(namespace_name)s." msgid "The parameters required by task, JSON blob" msgstr "I parametri richiesti dall'attività, blob JSON" msgid "The provided image is too large." msgstr "L'immagine fornita è troppo grande." msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." msgstr "" "la regione per il servizio di autenticazione. Se \"use_user_token\" non è " "attiva e si utilizza l'autenticazione keystone, è possibile specificare il " "nome regione." msgid "The request returned 500 Internal Server Error." msgstr "La richiesta ha restituito 500 Errore interno del server." msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." 
msgstr "" "La richiesta ha restituito 503 Servizio non disponibile 503. Ciò " "generalmente si verifica nel sovraccarico del servizio o altro tipo di " "interruzione temporanea." #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "La richiesta ha restituito 302 scelte multiple. Questo generalmente indica " "che non è stato incluso un indicatore di versione in un URI della " "richiesta.\n" "\n" "Restituito il corpo della risposta:\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "La richiesta ha restituito 413 Entità della richiesta troppo grande. Questo " "generalmente significa che il limite della velocità o la soglia della quota " "sono stati violati.\n" "\n" "Il corpo della risposta \n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "La richiesta ha restituito uno stato imprevisto: %(status)s.\n" "\n" "Il corpo della risposta \n" "%(body)s" msgid "" "The requested image has been deactivated. Image data download is forbidden." msgstr "" "L'immagine richiesta è stata disattivata. Il download dei dati immagine non " "è consentito." msgid "The result of current task, JSON blob" msgstr "Il risultato dell'attività corrente, blob JSON" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." msgstr "" "La dimensione dei dati %(image_size)s supererà il limite. %(remaining)s byte " "rimanenti." 
#, python-format msgid "The specified member %s could not be found" msgstr "Impossibile trovare il membro specificato %s" #, python-format msgid "The specified metadata object %s could not be found" msgstr "Impossibile trovare l'oggetto di metadati %s specificato" #, python-format msgid "The specified metadata tag %s could not be found" msgstr "Impossibile trovare il tag di metadati %s specificato" #, python-format msgid "The specified namespace %s could not be found" msgstr "Impossibile trovare lo spazio dei nomi %s specificato" #, python-format msgid "The specified property %s could not be found" msgstr "Impossibile trovare la proprietà %s specificata" #, python-format msgid "The specified resource type %s could not be found " msgstr "Impossibile trovare il tipo di risorsa %s specificato " msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgstr "" "Lo stato dell'ubicazione immagine eliminata può essere impostata solo su " "'pending_delete' o 'deleted'" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgstr "" "Lo stato dell'ubicazione immagine eliminata può essere impostata solo su " "'pending_delete' o 'deleted'." msgid "The status of this image member" msgstr "Lo stato di questo membro dell'immagine" msgid "" "The strategy to use for authentication. If \"use_user_token\" is not in " "effect, then auth strategy can be specified." msgstr "" "La strategia da utilizzare per l'autenticazione. Se \"use_user_token\" non è " "attiva è possibile specificare la strategia di autenticazione." #, python-format msgid "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgstr "" "Il membro di destinazione %(member_id)s è già associato all'immagine " "%(image_id)s." msgid "" "The tenant name of the administrative user. If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." 
msgstr "" "Il nome tenant dell'utente amministrativo. Se \"use_user_token\" non è " "attiva è possibile specificare il nome tenant admin." msgid "The type of task represented by this content" msgstr "Il tipo di attività rappresentata da questo contenuto" msgid "The unique namespace text." msgstr "Il testo dello spazio dei nomi univoco." msgid "The user friendly name for the namespace. Used by UI if available." msgstr "" "Il nome utente semplice per lo spazio dei nomi. Utilizzato dalla UI se " "disponibile." #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "Si è verificato un problema in %(error_key_name)s %(error_filename)s. " "Verificare. Errore: %(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "Si è verificato un problema in %(error_key_name)s %(error_filename)s. " "Verificare. Errore OpenSSL: %(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "Si è verificato un problema con la coppia di chiavi. Verificare che il cert " "%(cert_file)s e la chiave %(key_file)s siano collegati. Errore OpenSSL " "%(ce)s" msgid "There was an error configuring the client." msgstr "Si è verificato un errore durante la configurazione del client." msgid "There was an error connecting to a server" msgstr "Si è verificato un errore durante la connessione al server" msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "Questa operazione non è attualmente consentita nelle attività Glance. " "Vengono automaticamente eliminate al raggiungimento dell'ora in base alla " "proprietà expires_at." 
msgid "This operation is currently not permitted on Glance images details." msgstr "" "Questa operazione non è attualmente consentita nei dettagli delle immagini " "Glance." msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "" "Periodo di tempo, in ore, per cui l'attività prosegue dopo l'esito positivo " "o meno" msgid "Too few arguments." msgstr "Troppo pochi argomenti." msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "L'URI non può contenere più di una ricorrenza di uno schema. Se è stato " "specificato un URI come swift://user:pass@http://authurl.com/v1/container/" "obj, è necessario modificarlo per utilizzare lo schema swift+http://, come: " "swift+http://user:pass@authurl.com/v1/container/obj" msgid "URL to access the image file kept in external store" msgstr "URL per accedere al file di immagini tenuto nell'archivio esterno" #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "Impossibile creare il file pid %(pid)s. Eseguire come non-root?\n" "Ritorno a un file temporaneo; è possibile arrestare il servizio %(service)s " "utilizzando:\n" " %(file)s %(server)s stop --pid-file %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "Impossibile filtrare mediante un operatore sconosciuto '%s'." msgid "Unable to filter on a range with a non-numeric value." msgstr "" "Impossibile filtrare in base a un intervallo con un valore non numerico." msgid "Unable to filter on a unknown operator." msgstr "Impossibile filtrare su un operatore sconosciuto." msgid "Unable to filter using the specified operator." 
msgstr "Impossibile filtrare utilizzando l'operatore specificato." msgid "Unable to filter using the specified range." msgstr "Impossibile filtrare utilizzando l'intervallo specificato." #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "Impossibile trovare '%s' nella modifica dello schema JSON" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "" "Impossibile trovare `op` in modifica schema JSON. Deve essere uno dei " "seguenti: %(available)s." msgid "Unable to increase file descriptor limit. Running as non-root?" msgstr "" "Impossibile aumentare il limite del descrittore di file. Eseguire come non-" "root?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "Impossibile caricare %(app_name)s dal file di configurazione %(conf_file)s.\n" "Ricevuto: %(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "Impossibile caricare lo schema: %(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "Impossibile individuare il file di configurazione paste per %s." #, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "" "Impossibile caricare i dati dell'immagine duplicata per l'immagine " "%(image_id)s: %(error)s" msgid "Unauthorized image access" msgstr "Accesso all'immagine non autorizzato" msgid "Unexpected body type. Expected list/dict." msgstr "Tipo di corpo imprevisto. Elenco/dizionario previsto." 
#, python-format msgid "Unexpected response: %s" msgstr "Risposta imprevista: %s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "Strategia di autenticazione sconosciuta '%s'" #, python-format msgid "Unknown command: %s" msgstr "Comando sconosciuto: %s" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Direzione ordinamento sconosciuta, deve essere 'desc' o 'asc'" msgid "Unrecognized JSON Schema draft version" msgstr "Versione della bozza dello schema JSON non riconosciuta" msgid "Unrecognized changes-since value" msgstr "Valore changes-since non riconosciuto" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "sort_dir non supportato. Valori consentiti: %s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "sort_key non supportato. Valori consentiti: %s" msgid "Virtual size of image in bytes" msgstr "Dimensione virtuale dell'immagine in byte" #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "Attesi 15 secondi che il pid %(pid)s (%(file)s) terminasse; si rinuncia" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Quando si esegue il server in modalità SSL, è necessario specificare sia un " "valore dell'opzione cert_file che key_file nel file di configurazione" msgid "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." msgstr "" "Se inoltrare o meno il token utente quando si effettuano richieste " "al registro. Per impedire problemi con la scadenza del token durante il " "caricamento di file grandi, si consiglia di impostare questo parametro su " "False. 
Se \"use_user_token\" non è in vigore, è possibile specificare le " "credenziali admin." #, python-format msgid "Wrong command structure: %s" msgstr "Struttura del comando errata: %s" msgid "You are not authenticated." msgstr "L'utente non è autenticato." msgid "You are not authorized to complete this action." msgstr "Non si è autorizzati a completare questa azione." #, python-format msgid "You are not authorized to lookup image %s." msgstr "Non si è autorizzati a ricercare l'immagine %s." #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "Non si è autorizzati a ricercare i membri dell'immagine %s." #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "" "L'utente non dispone dell'autorizzazione per creare un tag nello spazio dei " "nomi posseduto da '%s'" msgid "You are not permitted to create image members for the image." msgstr "Non si è autorizzati a creare membri dell'immagine per l'immagine." #, python-format msgid "You are not permitted to create images owned by '%s'." msgstr "Non si è autorizzati a creare immagini di proprietà di '%s'." 
#, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "" "L'utente non dispone dell'autorizzazione per creare lo spazio dei nomi " "posseduto da '%s'" #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "" "L'utente non dispone dell'autorizzazione per creare l'oggetto posseduto da " "'%s'" #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "" "L'utente non dispone dell'autorizzazione per creare la proprietà posseduta " "da '%s'" #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "" "L'utente non dispone dell'autorizzazione per creare il tipo_risorsa " "posseduto da '%s'" #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "Non si è autorizzati a creare questa attività con proprietario: %s" msgid "You are not permitted to deactivate this image." msgstr "Non si è autorizzati a disattivare questa immagine." msgid "You are not permitted to delete this image." msgstr "Non si è autorizzati a eliminare questa immagine." msgid "You are not permitted to delete this meta_resource_type." msgstr "" "L'utente non dispone dell'autorizzazione per eliminare questo " "tipo_risorsa_metadati." msgid "You are not permitted to delete this namespace." msgstr "" "L'utente non dispone dell'autorizzazione per eliminare questo spazio dei " "nomi." msgid "You are not permitted to delete this object." msgstr "L'utente non dispone dell'autorizzazione per eliminare questo oggetto." msgid "You are not permitted to delete this property." msgstr "" "L'utente non dispone dell'autorizzazione per eliminare questa proprietà." msgid "You are not permitted to delete this tag." msgstr "L'utente non dispone dell'autorizzazione per eliminare questo tag." #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "Non si è autorizzati a modificare '%(attr)s' in questa %(resource)s." 
#, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "Non si è autorizzati a modificare '%s' in questa immagine." msgid "You are not permitted to modify locations for this image." msgstr "Non si è autorizzati a modificare le ubicazioni per questa immagine." msgid "You are not permitted to modify tags on this image." msgstr "Non si è autorizzati a modificare i tag in questa immagine." msgid "You are not permitted to modify this image." msgstr "Non si è autorizzati a modificare questa immagine." msgid "You are not permitted to reactivate this image." msgstr "Non si è autorizzati a riattivare questa immagine." msgid "You are not permitted to set status on this task." msgstr "Non si è autorizzati ad impostare lo stato in questa attività." msgid "You are not permitted to update this namespace." msgstr "" "L'utente non dispone dell'autorizzazione per aggiornare questo spazio dei " "nomi." msgid "You are not permitted to update this object." msgstr "" "L'utente non dispone dell'autorizzazione per aggiornare questo oggetto." msgid "You are not permitted to update this property." msgstr "" "L'utente non dispone dell'autorizzazione per aggiornare questa proprietà." msgid "You are not permitted to update this tag." msgstr "L'utente non dispone dell'autorizzazione per aggiornare questo tag." msgid "You are not permitted to upload data for this image." msgstr "Non si è autorizzati a caricare i dati per questa immagine." 
#, python-format msgid "You cannot add image member for %s" msgstr "Non è possibile aggiungere il membro dell'immagine per %s" #, python-format msgid "You cannot delete image member for %s" msgstr "Non è possibile eliminare il membro dell'immagine per %s" #, python-format msgid "You cannot get image member for %s" msgstr "Non è possibile ottenere il membro dell'immagine per %s" #, python-format msgid "You cannot update image member %s" msgstr "Non è possibile aggiornare il membro dell'immagine %s" msgid "You do not own this image" msgstr "Non si possiede tale immagine" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "Si è scelto di utilizzare SSL nella connessione ed è stato fornito un " "certificato, tuttavia non è stato fornito un parametro key_file o la " "variabile di ambiente GLANCE_CLIENT_KEY_FILE non è stata impostata" msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "Si è scelto di utilizzare SSL nella connessione e si è fornita una chiave, " "tuttavia non è stato fornito un parametro cert_file o la variabile " "di ambiente GLANCE_CLIENT_CERT_FILE non è stata impostata" msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__() ha ricevuto l'argomento di parole chiave '%s' non previsto" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in update (wanted from_state=" "%(from)s)" msgstr "" "Impossibile passare da %(current)s a %(next)s in fase di aggiornamento " 
"(richiesto from_state=%(from)s)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "" "le proprietà personalizzate (%(props)s) sono in conflitto con le proprietà " "di base" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "" "Su questa piattaforma non sono disponibili hub 'poll' e 'selects' eventlet" msgid "is_public must be None, True, or False" msgstr "is_public deve essere None, True, o False" msgid "limit param must be an integer" msgstr "parametro limite deve essere un numero intero" msgid "limit param must be positive" msgstr "parametro limite deve essere positivo" msgid "md5 hash of image contents." msgstr "hash md5 del contenuto dell'immagine. " #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image() ha ricevuto parole chiave %s non previste" msgid "protected must be True, or False" msgstr "protetto deve essere True o False" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "impossibile avviare %(serv)s. Si è verificato l'errore: %(e)s" #, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "x-openstack-request-id è troppo lungo, dimensione max %s" glance-16.0.0/glance/locale/pt_BR/0000775000175100017510000000000013245511661016555 0ustar zuulzuul00000000000000glance-16.0.0/glance/locale/pt_BR/LC_MESSAGES/0000775000175100017510000000000013245511661020342 0ustar zuulzuul00000000000000glance-16.0.0/glance/locale/pt_BR/LC_MESSAGES/glance.po0000666000175100017510000021331013245511421022127 0ustar zuulzuul00000000000000# Translations template for glance. # Copyright (C) 2015 ORGANIZATION # This file is distributed under the same license as the glance project. # # Translators: # Gabriel Wainer, 2013 # Gabriel Wainer, 2013 # Rodrigo Felix de Almeida , 2014 # Volmar Oliveira Junior , 2013 # Andreas Jaeger , 2016. 
#zanata msgid "" msgstr "" "Project-Id-Version: glance 15.0.0.0b3.dev29\n" "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n" "POT-Creation-Date: 2017-06-23 20:54+0000\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "PO-Revision-Date: 2016-04-12 05:22+0000\n" "Last-Translator: Copied by Zanata \n" "Language: pt-BR\n" "Plural-Forms: nplurals=2; plural=(n > 1);\n" "Generated-By: Babel 2.0\n" "X-Generator: Zanata 3.9.6\n" "Language-Team: Portuguese (Brazil)\n" #, python-format msgid "\t%s" msgstr "\t%s" #, python-format msgid "%(cls)s exception was raised in the last rpc call: %(val)s" msgstr "exceção %(cls)s foi disparada na última chamada RPC: %(val)s" #, python-format msgid "%(m_id)s not found in the member list of the image %(i_id)s." msgstr "%(m_id)s não localizado na lista de membros da imagem %(i_id)s." #, python-format msgid "%(serv)s (pid %(pid)s) is running..." msgstr "%(serv)s (pid %(pid)s) está em execução..." #, python-format msgid "%(serv)s appears to already be running: %(pid)s" msgstr "%(serv)s parece já estar em execução: %(pid)s" #, python-format msgid "" "%(strategy)s is registered as a module twice. %(module)s is not being used." msgstr "" "%(strategy)s é registrado como um módulo duas vezes. %(module)s não está " "sendo usado." #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Could not load the " "filesystem store" msgstr "" "%(task_id)s de %(task_type)s não foi configurado adequadamente. Não foi " "possível carregar o armazenamento de sistema de arquivos" #, python-format msgid "" "%(task_id)s of %(task_type)s not configured properly. Missing work dir: " "%(work_dir)s" msgstr "" "%(task_id)s de %(task_type)s não foi configurado adequadamente. 
Faltando o " "diretório de trabalho: %(work_dir)s" #, python-format msgid "%(verb)sing %(serv)s" msgstr "%(verb)sing %(serv)s" #, python-format msgid "%(verb)sing %(serv)s with %(conf)s" msgstr "%(verb)sing %(serv)s com %(conf)s" #, python-format msgid "" "%s Please specify a host:port pair, where host is an IPv4 address, IPv6 " "address, hostname, or FQDN. If using an IPv6 address, enclose it in brackets " "separately from the port (i.e., \"[fe80::a:b:c]:9876\")." msgstr "" "%s Especifique um par host:porta, em que o host é um endereço IPv4, IPv6, " "nome do host ou FQDN. Se você estiver usando um endereço IPv6, coloque-o nos " "suportes separadamente da porta (ou seja, \"[fe80::a:b:c]:9876\")." #, python-format msgid "%s can't contain 4 byte unicode characters." msgstr "%s não pode conter caracteres de unicode de 4 bytes." #, python-format msgid "%s is already stopped" msgstr "%s já está parado" #, python-format msgid "%s is stopped" msgstr "%s está parado" msgid "" "--os_auth_url option or OS_AUTH_URL environment variable required when " "keystone authentication strategy is enabled\n" msgstr "" "opção --os_auth_url ou variável de ambiente OS_AUTH_URL requerida quando " "estratégia de autenticação keystone está ativada\n" msgid "A body is not expected with this request." msgstr "Um corpo não é esperado com essa solicitação." #, python-format msgid "" "A metadata definition object with name=%(object_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Um objeto de definição de metadados com o nome=%(object_name)s já existe no " "namespace=%(namespace_name)s." #, python-format msgid "" "A metadata definition property with name=%(property_name)s already exists in " "namespace=%(namespace_name)s." msgstr "" "Uma propriedade de definição de metadados com o nome=%(property_name)s já " "existe no namespace=%(namespace_name)s." #, python-format msgid "" "A metadata definition resource-type with name=%(resource_type_name)s already " "exists." 
msgstr "" "Um tipo de recurso de definição de metadados com o nome=" "%(resource_type_name)s já existe." msgid "A set of URLs to access the image file kept in external store" msgstr "" "Um conjunto de URLs para acessar o arquivo de imagem mantido em " "armazenamento externo" msgid "Amount of disk space (in GB) required to boot image." msgstr "" "Quantidade de espaço em disco (em GB) necessária para a imagem de " "inicialização." msgid "Amount of ram (in MB) required to boot image." msgstr "Quantidade de ram (em MB) necessária para a imagem de inicialização." msgid "An identifier for the image" msgstr "Um identificador para a imagem" msgid "An identifier for the image member (tenantId)" msgstr "Um identificador para o membro de imagem (tenantId)" msgid "An identifier for the owner of this task" msgstr "Um identificador para o proprietário desta tarefa" msgid "An identifier for the task" msgstr "Um identificador para a tarefa" msgid "An image file url" msgstr "Uma URL de arquivo de imagem" msgid "An image schema url" msgstr "Uma URL de esquema de imagem" msgid "An image self url" msgstr "Uma URL automática de imagem" #, python-format msgid "An image with identifier %s already exists" msgstr "Uma imagem com o identificador %s já existe" msgid "An import task exception occurred" msgstr "Ocorreu uma exceção na tarefa de importação" msgid "An object with the same identifier already exists." msgstr "Um objeto com o mesmo identificador já existe." msgid "An object with the same identifier is currently being operated on." msgstr "Um objeto com o mesmo identificador está atualmente sendo operado." msgid "An object with the specified identifier was not found." msgstr "Um objeto com o identificador especificado não foi localizado." 
msgid "An unknown exception occurred" msgstr "Ocorreu uma exceção desconhecida" msgid "An unknown task exception occurred" msgstr "Ocorreu uma exceção de tarefa desconhecida" #, python-format msgid "Attempt to upload duplicate image: %s" msgstr "Tentativa de fazer upload de imagem duplicada: %s" msgid "Attempted to update Location field for an image not in queued status." msgstr "" "Tentativa de atualizar o campo Local para uma imagem não está no status em " "fila." #, python-format msgid "Attribute '%(property)s' is read-only." msgstr "O atributo '%(property)s' é somente leitura." #, python-format msgid "Attribute '%(property)s' is reserved." msgstr "O atributo '%(property)s' é reservado." #, python-format msgid "Attribute '%s' is read-only." msgstr "Atributo '%s' é apenas leitura." #, python-format msgid "Attribute '%s' is reserved." msgstr "Atributo '%s' é reservado." msgid "Attribute container_format can be only replaced for a queued image." msgstr "" "Atributo container_format pode ser apenas substituído por uma imagem na fila." msgid "Attribute disk_format can be only replaced for a queued image." msgstr "" "Atributo disk_format pode ser apenas substituído por uma imagem na fila." #, python-format msgid "Auth service at URL %(url)s not found." msgstr "Serviço de autenticação na URL %(url)s não localizado." #, python-format msgid "" "Authentication error - the token may have expired during file upload. " "Deleting image data for %s." msgstr "" "Erro de autenticação - o token pode ter expirado durante o envio do arquivo. " "Removendo dados da imagem %s." msgid "Authorization failed." msgstr "Falha de autorização." msgid "Available categories:" msgstr "Categorias disponíveis:" #, python-format msgid "Bad \"%s\" query filter format. Use ISO 8601 DateTime notation." msgstr "" "Formato de filtro de consulta \"%s\" inválido. Use a notação ISO 8601 " "DateTime." 
#, python-format msgid "Bad Command: %s" msgstr "Comandos inválidos: %s" #, python-format msgid "Bad header: %(header_name)s" msgstr "Cabeçalho inválido: %(header_name)s" #, python-format msgid "Bad value passed to filter %(filter)s got %(val)s" msgstr "Valor inválido passado para o filtro %(filter)s obteve %(val)s" #, python-format msgid "Badly formed S3 URI: %(uri)s" msgstr "URI S3 malformado: %(uri)s" #, python-format msgid "Badly formed credentials '%(creds)s' in Swift URI" msgstr "Credenciais malformadas '%(creds)s' no URI Swift" msgid "Badly formed credentials in Swift URI." msgstr "Credenciais malformadas no URI Swift." msgid "Body expected in request." msgstr "Corpo esperado na solicitação." msgid "Cannot be a negative value" msgstr "Não pode ser um valor negativo" msgid "Cannot be a negative value." msgstr "Não pode ser um valor negativo." #, python-format msgid "Cannot convert image %(key)s '%(value)s' to an integer." msgstr "" "Não é possível converter a imagem %(key)s '%(value)s' para um número inteiro." msgid "Cannot remove last location in the image." msgstr "Não é possível remover o último local na imagem." #, python-format msgid "Cannot save data for image %(image_id)s: %(error)s" msgstr "Não é possível salvar os dados da imagem %(image_id)s: %(error)s" msgid "Cannot set locations to empty list." msgstr "Não é possível configurar locais para esvaziar a lista." msgid "Cannot upload to an unqueued image" msgstr "Não é possível fazer upload para uma imagem fora da fila" #, python-format msgid "Checksum verification failed. Aborted caching of image '%s'." msgstr "" "A soma de verificação falhou. Interrompido o armazenamento em cache da " "imagem '%s'." 
msgid "Client disconnected before sending all data to backend" msgstr "Cliente desconectado antes de enviar todos os dados para o backend" msgid "Command not found" msgstr "Comando não encontrado" msgid "Configuration option was not valid" msgstr "A opção de configuração não era válida" #, python-format msgid "Connect error/bad request to Auth service at URL %(url)s." msgstr "" "Erro de conexão/solicitação inválida para serviço de autenticação na URL " "%(url)s." #, python-format msgid "Constructed URL: %s" msgstr "URL construída: %s" msgid "Container format is not specified." msgstr "O formato de contêiner não foi especificado." msgid "Content-Type must be application/octet-stream" msgstr "Tipo de Conteúdo deve ser application/octet-stream" #, python-format msgid "Corrupt image download for image %(image_id)s" msgstr "Download de imagem corrompido para a imagem %(image_id)s" #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for 30 seconds" msgstr "" "Não foi possível ligar a %(host)s:%(port)s depois de tentar por 30 segundos" msgid "Could not find OVF file in OVA archive file." msgstr "Não foi possível localizar o arquivo OVF no archive OVA." 
#, python-format msgid "Could not find metadata object %s" msgstr "Não foi possível localizar o objeto de metadados %s" #, python-format msgid "Could not find metadata tag %s" msgstr "Não foi possível localizar a identificação de metadados %s" #, python-format msgid "Could not find namespace %s" msgstr "Não foi possível localizar o namespace %s" #, python-format msgid "Could not find property %s" msgstr "Não é possível localizar a propriedade %s" msgid "Could not find required configuration option" msgstr "Não foi possível localizar a opção de configuração necessária" #, python-format msgid "Could not find task %s" msgstr "Não foi possível localizar tarefa %s" #, python-format msgid "Could not update image: %s" msgstr "Não foi possível atualizar a imagem: %s" msgid "Currently, OVA packages containing multiple disk are not supported." msgstr "" "Atualmente, os pacotes OVA que contêm diversos discos não são suportados. " #, python-format msgid "Data for image_id not found: %s" msgstr "Dados de image_id não localizados: %s" msgid "Data supplied was not valid." msgstr "Os dados fornecidos não eram válidos." 
msgid "Date and time of image member creation" msgstr "Data e hora da criação de membro da imagem" msgid "Date and time of image registration" msgstr "Data e hora do registro da imagem " msgid "Date and time of last modification of image member" msgstr "Data e hora da última modificação de membro da imagem" msgid "Date and time of namespace creation" msgstr "Data e hora da criação do namespace" msgid "Date and time of object creation" msgstr "Data e hora da criação do objeto" msgid "Date and time of resource type association" msgstr "Data e hora da associação do tipo de recurso " msgid "Date and time of tag creation" msgstr "Data e hora da criação da identificação " msgid "Date and time of the last image modification" msgstr "Data e hora da última modificação da imagem " msgid "Date and time of the last namespace modification" msgstr "Data e hora da última modificação do namespace " msgid "Date and time of the last object modification" msgstr "Data e hora da última modificação do objeto" msgid "Date and time of the last resource type association modification" msgstr "Data e hora da última modificação de associação de tipo de recurso " msgid "Date and time of the last tag modification" msgstr "Data e hora da última modificação da identificação " msgid "Datetime when this resource was created" msgstr "Data/hora quando este recurso foi criado" msgid "Datetime when this resource was updated" msgstr "Data/Hora quando este recurso foi atualizado" msgid "Datetime when this resource would be subject to removal" msgstr "Data/Hora quando este recurso deve ser objeto de remoção" #, python-format msgid "Denying attempt to upload image because it exceeds the quota: %s" msgstr "Negando a tentativa de upload da imagem porque ela excede a cota: %s" #, python-format msgid "Denying attempt to upload image larger than %d bytes." msgstr "Negando tentativa de fazer upload de imagem maior que %d bytes." 
msgid "Descriptive name for the image" msgstr "Nome descritivo para a imagem" msgid "Disk format is not specified." msgstr "O formato de disco não foi especificado." #, python-format msgid "" "Driver %(driver_name)s could not be configured correctly. Reason: %(reason)s" msgstr "" "O driver %(driver_name)s não pôde ser configurado corretamente. Motivo: " "%(reason)s" msgid "" "Error decoding your request. Either the URL or the request body contained " "characters that could not be decoded by Glance" msgstr "" "Erro ao decodificar sua solicitação. A URL ou o corpo da solicitação " "continha caracteres que não puderam ser decodificados pelo Glance" #, python-format msgid "Error fetching members of image %(image_id)s: %(inner_msg)s" msgstr "Erro ao buscar membros da imagem %(image_id)s: %(inner_msg)s" msgid "Error in store configuration. Adding images to store is disabled." msgstr "" "Erro na configuração do armazenamento. A inclusão de imagens para " "armazenamento está desativada." msgid "Expected a member in the form: {\"member\": \"image_id\"}" msgstr "O membro era esperado no formato: {\"member\": \"image_id\"}" msgid "Expected a status in the form: {\"status\": \"status\"}" msgstr "O estado era esperado no formato: {\"status\": \"status\"}" msgid "External source should not be empty" msgstr "A fonte externa não deve estar vazia" #, python-format msgid "External sources are not supported: '%s'" msgstr "As fontes externas não são suportadas: '%s'" #, python-format msgid "Failed to activate image. Got error: %s" msgstr "Falha ao ativar imagem. Erro obtido: %s" #, python-format msgid "Failed to add image metadata. Got error: %s" msgstr "Falha ao incluir metadados da imagem. 
Erro obtido: %s" #, python-format msgid "Failed to find image %(image_id)s to delete" msgstr "Falha ao localizar a imagem %(image_id)s para excluir" #, python-format msgid "Failed to find image to delete: %s" msgstr "Falha ao encontrar imagem para excluir: %s" #, python-format msgid "Failed to find image to update: %s" msgstr "Falha ao encontrar imagem para atualizar: %s" #, python-format msgid "Failed to find resource type %(resourcetype)s to delete" msgstr "Falha ao localizar o tipo de recurso %(resourcetype)s para excluir" #, python-format msgid "Failed to initialize the image cache database. Got error: %s" msgstr "" "Falha ao inicializar o banco de dados de cache da imagem. Erro obtido: %s" #, python-format msgid "Failed to read %s from config" msgstr "Falha ao ler %s da configuração" #, python-format msgid "Failed to reserve image. Got error: %s" msgstr "Falha ao reservar imagem. Erro obtido: %s" #, python-format msgid "Failed to update image metadata. Got error: %s" msgstr "Falha ao atualizar metadados da imagem. Erro obtido: %s" #, python-format msgid "Failed to upload image %s" msgstr "Falha ao enviar imagem %s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to HTTP error: " "%(error)s" msgstr "" "Falha ao fazer upload dos dados de imagem para a imagem %(image_id)s devido " "a erro de HTTP: %(error)s" #, python-format msgid "" "Failed to upload image data for image %(image_id)s due to internal error: " "%(error)s" msgstr "" "Falha ao fazer upload dos dados de imagem para a imagem %(image_id)s devido " "a erro interno: %(error)s" #, python-format msgid "File %(path)s has invalid backing file %(bfile)s, aborting." msgstr "" "O arquivo %(path)s tem arquivo de backup inválido %(bfile)s, interrompendo." msgid "" "File based imports are not allowed. Please use a non-local source of image " "data." msgstr "" "Importações baseadas em arquivo não são permitidas. Use uma fonte não local " "de dados de imagem." 
msgid "Forbidden image access" msgstr "Proibido o acesso à imagem" #, python-format msgid "Forbidden to delete a %s image." msgstr "Proibido excluir uma imagem %s." #, python-format msgid "Forbidden to delete image: %s" msgstr "Proibido excluir imagem: %s" #, python-format msgid "Forbidden to modify '%(key)s' of %(status)s image." msgstr "Proibido modificar '%(key)s' da imagem %(status)s." #, python-format msgid "Forbidden to modify '%s' of image." msgstr "Proibido modificar '%s' de imagem." msgid "Forbidden to reserve image." msgstr "Proibido reservar imagem." msgid "Forbidden to update deleted image." msgstr "Proibido atualizar imagem excluída." #, python-format msgid "Forbidden to update image: %s" msgstr "Proibido atualizar imagem: %s" #, python-format msgid "Forbidden upload attempt: %s" msgstr "Tentativa de upload proibida: %s" #, python-format msgid "Forbidding request, metadata definition namespace=%s is not visible." msgstr "" "Proibindo solicitação, o namespace de definição de metadados=%s não é " "visível." #, python-format msgid "Forbidding request, task %s is not visible" msgstr "Proibindo solicitação, a tarefa %s não está visível" msgid "Format of the container" msgstr "Formato do contêiner" msgid "Format of the disk" msgstr "Formato do disco" #, python-format msgid "Host \"%s\" is not valid." msgstr "Host \"%s\" não é válido." #, python-format msgid "Host and port \"%s\" is not valid." msgstr "Host e porta \"%s\" não são válidos." msgid "" "Human-readable informative message only included when appropriate (usually " "on failure)" msgstr "" "Mensagem informativa legível apenas incluída quando apropriado (geralmente " "em falha)" msgid "If true, image will not be deletable." msgstr "Se true, a imagem não será excluível." msgid "If true, namespace will not be deletable." msgstr "Se verdadeiro, o namespace não poderá ser excluído." 
#, python-format msgid "Image %(id)s could not be deleted because it is in use: %(exc)s" msgstr "" "A imagem %(id)s não pôde ser excluída, pois ela está sendo usada: %(exc)s" #, python-format msgid "Image %(id)s not found" msgstr "Imagem %(id)s não localizada" #, python-format msgid "" "Image %(image_id)s could not be found after upload. The image may have been " "deleted during the upload: %(error)s" msgstr "" "Imagem %(image_id)s não pôde ser localizada após o upload. A imagem pode ter " "sido excluída durante o upload: %(error)s" #, python-format msgid "Image %(image_id)s is protected and cannot be deleted." msgstr "A imagem %(image_id)s está protegida e não pode ser excluída." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload, cleaning up the chunks uploaded." msgstr "" "A imagem %s não pôde ser localizada após o upload. A imagem pode ter sido " "excluída durante o upload, limpando os chunks transferidos por upload." #, python-format msgid "" "Image %s could not be found after upload. The image may have been deleted " "during the upload." msgstr "" "A imagem %s não foi encontrada após o envio. A imagem pode ter sido removida " "durante o envio." #, python-format msgid "Image %s is deactivated" msgstr "Imagem %s está desativada" #, python-format msgid "Image %s is not active" msgstr "A imagem %s não está ativa" #, python-format msgid "Image %s not found." msgstr "Imagem %s não localizada." #, python-format msgid "Image exceeds the storage quota: %s" msgstr "Imagem excede a cota de armazenamento: %s" msgid "Image id is required." msgstr "ID da imagem é obrigatório." 
msgid "Image is protected" msgstr "A imagem está protegida" #, python-format msgid "Image member limit exceeded for image %(id)s: %(e)s:" msgstr "O limite do membro da imagem excedido para imagem %(id)s: %(e)s:" #, python-format msgid "Image name too long: %d" msgstr "Nome da imagem muito longo: %d" msgid "Image operation conflicts" msgstr "Conflitos da operação de imagem" #, python-format msgid "" "Image status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "Transição de status de imagem de %(cur_status)s para %(new_status)s não é " "permitida" #, python-format msgid "Image storage media is full: %s" msgstr "A mídia de armazenamento da imagem está cheia: %s" #, python-format msgid "Image tag limit exceeded for image %(id)s: %(e)s:" msgstr "" "O limite de identificação da imagem foi excedido para a imagem %(id)s: %(e)s:" #, python-format msgid "Image upload problem: %s" msgstr "Problema ao fazer upload de imagem: %s" #, python-format msgid "Image with identifier %s already exists!" msgstr "A imagem com o identificador %s já existe!" #, python-format msgid "Image with identifier %s has been deleted." msgstr "Imagem com identificador %s foi excluída." 
#, python-format msgid "Image with identifier %s not found" msgstr "Imagem com identificador %s não localizada" #, python-format msgid "Image with the given id %(image_id)s was not found" msgstr "Imagem com o ID fornecido %(image_id)s não foi localizada" #, python-format msgid "" "Incorrect auth strategy, expected \"%(expected)s\" but received " "\"%(received)s\"" msgstr "" "Estratégia de autorização incorreta; esperava-se \"%(expected)s\", mas foi " "recebido \"%(received)s\"" #, python-format msgid "Incorrect request: %s" msgstr "Requisição incorreta: %s" #, python-format msgid "Input does not contain '%(key)s' field" msgstr "A entrada não contém o campo '%(key)s'" #, python-format msgid "Insufficient permissions on image storage media: %s" msgstr "Permissões insuficientes na mídia de armazenamento da imagem: %s" #, python-format msgid "Invalid JSON pointer for this resource: '/%s'" msgstr "Ponteiro de JSON inválido para este recurso: '/%s'" #, python-format msgid "Invalid checksum '%s': can't exceed 32 characters" msgstr "Soma de verificação inválida '%s': não pode exceder 32 caracteres" msgid "Invalid configuration in glance-swift conf file." msgstr "Configuração inválida no arquivo de configuração glance-swift." msgid "Invalid configuration in property protection file." msgstr "Configuração inválida no arquivo de proteção de propriedade." #, python-format msgid "Invalid container format '%s' for image." msgstr "Formato de contêiner inválido '%s' para imagem." #, python-format msgid "Invalid content type %(content_type)s" msgstr "Tipo de conteúdo inválido %(content_type)s" #, python-format msgid "Invalid disk format '%s' for image." msgstr "Formato de disco inválido '%s' para imagem." #, python-format msgid "Invalid filter value %s. The quote is not closed." msgstr "Valor de filtro inválido %s. A aspa não está fechada." #, python-format msgid "" "Invalid filter value %s. There is no comma after closing quotation mark." 
msgstr "" "Valor de filtro inválido %s. Não há nenhuma vírgula depois da aspa de " "fechamento." #, python-format msgid "" "Invalid filter value %s. There is no comma before opening quotation mark." msgstr "" "Valor de filtro inválido %s. Não há nenhuma vírgula antes da aspa de abertura." msgid "Invalid image id format" msgstr "Formato de ID da imagem inválido" msgid "Invalid location" msgstr "Local inválido" #, python-format msgid "Invalid location %s" msgstr "Local inválido %s" #, python-format msgid "Invalid location: %s" msgstr "Localidade inválida: %s" #, python-format msgid "" "Invalid location_strategy option: %(name)s. The valid strategy option(s) " "is(are): %(strategies)s" msgstr "" "Opção location_strategy inválida: %(name)s. A(s) opção(ões) de estratégia(s) " "válida(s) é(são): %(strategies)s" msgid "Invalid locations" msgstr "Locais inválidos" #, python-format msgid "Invalid locations: %s" msgstr "Localidades inválidas: %s" msgid "Invalid marker format" msgstr "Formato de marcador inválido" msgid "Invalid marker. Image could not be found." msgstr "Marcador inválido. A imagem não pôde ser localizada." #, python-format msgid "Invalid membership association: %s" msgstr "Associação inválida: %s" msgid "" "Invalid mix of disk and container formats. When setting a disk or container " "format to one of 'aki', 'ari', or 'ami', the container and disk formats must " "match." msgstr "" "Combinação inválida de formatos de disco e contêiner. Ao configurar um " "formato de disco ou contêiner para um destes, 'aki', 'ari' ou 'ami', os " "formatos de contêiner e disco devem corresponder." #, python-format msgid "" "Invalid operation: `%(op)s`. It must be one of the following: %(available)s." msgstr "" "Operação inválida: `%(op)s`. Ela deve ser uma das seguintes: %(available)s." msgid "Invalid position for adding a location." msgstr "Posição inválida para adicionar uma localidade." msgid "Invalid position for removing a location." 
msgstr "Posição inválida para remover uma localidade." msgid "Invalid service catalog json." msgstr "Catálogo de serviço json inválido." #, python-format msgid "Invalid sort direction: %s" msgstr "Direção de classificação inválida: %s" #, python-format msgid "" "Invalid sort key: %(sort_key)s. It must be one of the following: " "%(available)s." msgstr "" "Chave de classificação inválida: %(sort_key)s. Deve ser um dos seguintes: " "%(available)s." #, python-format msgid "Invalid status value: %s" msgstr "Valor de status inválido: %s" #, python-format msgid "Invalid status: %s" msgstr "Status inválido: %s" #, python-format msgid "Invalid time format for %s." msgstr "Formato de horário inválido para %s." #, python-format msgid "Invalid type value: %s" msgstr "Valor de tipo inválido: %s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition namespace " "with the same name of %s" msgstr "" "Atualização inválida. Ela resultaria em um namespace de definição de " "metadados duplicado com o mesmo nome de %s" #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition object " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Atualização inválida. Ela resultaria em um objeto de definição de metadados " "duplicado com o mesmo nome=%(name)s no namespace=%(namespace_name)s." #, python-format msgid "" "Invalid update. It would result in a duplicate metadata definition property " "with the same name=%(name)s in namespace=%(namespace_name)s." msgstr "" "Atualização inválida. 
Ela resultaria em uma propriedade de definição de " "metadados duplicada com o mesmo nome=%(name)s no namespace=" "%(namespace_name)s." #, python-format msgid "Invalid value '%(value)s' for parameter '%(param)s': %(extra_msg)s" msgstr "Valor inválido '%(value)s' para o parâmetro '%(param)s': %(extra_msg)s" #, python-format msgid "Invalid value for option %(option)s: %(value)s" msgstr "Valor inválido para a opção %(option)s: %(value)s" #, python-format msgid "Invalid visibility value: %s" msgstr "Valor de visibilidade inválido: %s" msgid "It's invalid to provide multiple image sources." msgstr "É inválido fornecer múltiplas fontes de imagens." msgid "It's not allowed to add locations if locations are invisible." msgstr "Não é permitido adicionar locais se os locais forem invisíveis." msgid "It's not allowed to remove locations if locations are invisible." msgstr "Não é permitido remover locais se os locais forem invisíveis." msgid "It's not allowed to update locations if locations are invisible." msgstr "Não é permitido atualizar locais se os locais forem invisíveis." msgid "List of strings related to the image" msgstr "Lista de sequências relacionadas à imagem" msgid "Malformed JSON in request body." msgstr "JSON malformado no corpo da solicitação." msgid "Maximal age is count of days since epoch." msgstr "A idade máxima é a contagem de dias desde a época." #, python-format msgid "Maximum redirects (%(redirects)s) was exceeded." msgstr "O máximo de redirecionamentos (%(redirects)s) foi excedido." #, python-format msgid "Member %(member_id)s is duplicated for image %(image_id)s" msgstr "O membro %(member_id)s é duplicado para a imagem %(image_id)s" msgid "Member can't be empty" msgstr "Membro não pode ser vazio" msgid "Member to be added not specified" msgstr "Membro a ser incluído não especificado" msgid "Membership could not be found." msgstr "Associação não pôde ser localizada." 
#, python-format msgid "" "Metadata definition namespace %(namespace)s is protected and cannot be " "deleted." msgstr "" "O namespace de definição de metadados %(namespace)s é protegido e não pode " "ser excluído." #, python-format msgid "Metadata definition namespace not found for id=%s" msgstr "Namespace de definição de metadados não localizado para o id=%s" #, python-format msgid "" "Metadata definition object %(object_name)s is protected and cannot be " "deleted." msgstr "" "O objeto de definição de metadados %(object_name)s é protegido e não pode " "ser excluído." #, python-format msgid "Metadata definition object not found for id=%s" msgstr "Objeto de definição de metadados não localizado para o id=%s" #, python-format msgid "" "Metadata definition property %(property_name)s is protected and cannot be " "deleted." msgstr "" "A propriedade de definição de metadados %(property_name)s é protegida e não " "pode ser excluída." #, python-format msgid "Metadata definition property not found for id=%s" msgstr "Propriedade de definição de metadados não localizada para id=%s" #, python-format msgid "" "Metadata definition resource-type %(resource_type_name)s is a seeded-system " "type and cannot be deleted." msgstr "" "A definição de metadados resource-type %(resource_type_name)s é um tipo de " "sistema com valor semente e não pode ser excluída." #, python-format msgid "" "Metadata definition resource-type-association %(resource_type)s is protected " "and cannot be deleted." msgstr "" "A definição de metadados resource-type-association %(resource_type)s é " "protegida e não pode ser excluída." #, python-format msgid "" "Metadata definition tag %(tag_name)s is protected and cannot be deleted." msgstr "" "A identificação da definição de metadados %(tag_name)s é protegida e não " "pode ser excluída." 
#, python-format msgid "Metadata definition tag not found for id=%s" msgstr "Identificação de definição de metadados não localizada para o id=%s" msgid "Minimal rows limit is 1." msgstr "O limite mínimo de linhas é 1." #, python-format msgid "Missing required credential: %(required)s" msgstr "Credencial necessária ausente: %(required)s" #, python-format msgid "" "Multiple 'image' service matches for region %(region)s. This generally means " "that a region is required and you have not supplied one." msgstr "" "Diversas correspondências do serviço de 'imagem' para a região %(region)s. " "Isso geralmente significa que uma região é necessária e você não a forneceu." msgid "No authenticated user" msgstr "Usuário não autenticado" #, python-format msgid "No image found with ID %s" msgstr "Nenhuma imagem encontrada com o ID %s" #, python-format msgid "No location found with ID %(loc)s from image %(img)s" msgstr "Nenhum local localizado com o ID %(loc)s da imagem %(img)s" msgid "No permission to share that image" msgstr "Nenhuma permissão para compartilhar essa imagem" #, python-format msgid "Not allowed to create members for image %s." msgstr "Não é permitido criar membros para a imagem %s." #, python-format msgid "Not allowed to deactivate image in status '%s'" msgstr "Não é permitido desativar a imagem no status '%s'" #, python-format msgid "Not allowed to delete members for image %s." msgstr "Não é permitido excluir membros para a imagem %s." #, python-format msgid "Not allowed to delete tags for image %s." msgstr "Não é permitido excluir identificações para a imagem %s." #, python-format msgid "Not allowed to list members for image %s." msgstr "Não é permitido listar os membros para a imagem %s." #, python-format msgid "Not allowed to reactivate image in status '%s'" msgstr "Não é permitido reativar a imagem no status '%s'" #, python-format msgid "Not allowed to update members for image %s." msgstr "Não é permitido atualizar os membros para a imagem %s." 
#, python-format msgid "Not allowed to update tags for image %s." msgstr "Não é permitido atualizar as identificações para a imagem %s." #, python-format msgid "Not allowed to upload image data for image %(image_id)s: %(error)s" msgstr "" "Não é permitido fazer upload de dados de imagem para a imagem %(image_id)s: " "%(error)s" msgid "Number of sort dirs does not match the number of sort keys" msgstr "" "O número de direções de classificação não corresponde ao número de chaves " "de classificação" msgid "OVA extract is limited to admin" msgstr "A extração de OVA é limitada ao administrador" msgid "Old and new sorting syntax cannot be combined" msgstr "As sintaxes de classificação nova e antiga não podem ser combinadas" #, python-format msgid "Operation \"%s\" requires a member named \"value\"." msgstr "A operação \"%s\" requer um membro denominado \"value\"." msgid "" "Operation objects must contain exactly one member named \"add\", \"remove\", " "or \"replace\"." msgstr "" "Objetos de operação devem conter exatamente um membro denominado \"add\", " "\"remove\" ou \"replace\"." msgid "" "Operation objects must contain only one member named \"add\", \"remove\", or " "\"replace\"." msgstr "" "Objetos de operação devem conter apenas um membro denominado \"add\", " "\"remove\" ou \"replace\"." msgid "Operations must be JSON objects." msgstr "As operações devem ser objetos JSON." #, python-format msgid "Original locations is not empty: %s" msgstr "Localidade original não está vazia: %s" msgid "Owner can't be updated by non admin." msgstr "O proprietário não pode ser atualizado por um não administrador." msgid "Owner must be specified to create a tag." msgstr "O proprietário deve ser especificado para criar uma identificação." msgid "Owner of the image" msgstr "Proprietário da imagem" msgid "Owner of the namespace." msgstr "Proprietário do namespace." msgid "Param values can't contain 4 byte unicode." 
msgstr "Valores de parâmetro não podem conter unicode de 4 bytes." #, python-format msgid "Pointer `%s` contains \"~\" not part of a recognized escape sequence." msgstr "" "O ponteiro `%s` contém \"~\" que não faz parte de uma sequência de escape " "reconhecida." #, python-format msgid "Pointer `%s` contains adjacent \"/\"." msgstr "O ponteiro `%s` contém \"/\" adjacentes." #, python-format msgid "Pointer `%s` does not contains valid token." msgstr "O ponteiro `%s` não contém um token válido." #, python-format msgid "Pointer `%s` does not start with \"/\"." msgstr "O ponteiro `%s` não começa com \"/\"." #, python-format msgid "Pointer `%s` end with \"/\"." msgstr "O ponteiro `%s` termina com \"/\"." #, python-format msgid "Port \"%s\" is not valid." msgstr "Porta \"%s\" não é válida." #, python-format msgid "Process %d not running" msgstr "O processo %d não está em execução" #, python-format msgid "Properties %s must be set prior to saving data." msgstr "As propriedades %s devem ser configuradas antes de salvar os dados." #, python-format msgid "" "Property %(property_name)s does not start with the expected resource type " "association prefix of '%(prefix)s'." msgstr "" "A propriedade %(property_name)s não começa com o prefixo de associação do " "tipo de recurso esperado de '%(prefix)s'." #, python-format msgid "Property %s already present." msgstr "Propriedade %s já presente." #, python-format msgid "Property %s does not exist." msgstr "A propriedade %s não existe." #, python-format msgid "Property %s may not be removed." msgstr "A propriedade %s não pode ser removida." #, python-format msgid "Property %s must be set prior to saving data." msgstr "A propriedade %s deve ser configurada antes de salvar os dados." #, python-format msgid "Property '%s' is protected" msgstr "Propriedade '%s' é protegida" msgid "Property names can't contain 4 byte unicode." msgstr "Os nomes de propriedade não podem conter unicode de 4 bytes." 
#, python-format msgid "" "Provided image size must match the stored image size. (provided size: " "%(ps)d, stored size: %(ss)d)" msgstr "" "O tamanho da imagem fornecida deve corresponder ao tamanho da imagem " "armazenada. (tamanho fornecido: %(ps)d, tamanho armazenado: %(ss)d)" #, python-format msgid "Provided object does not match schema '%(schema)s': %(reason)s" msgstr "O objeto fornecido não corresponde ao esquema '%(schema)s': %(reason)s" #, python-format msgid "Provided status of task is unsupported: %(status)s" msgstr "Status de tarefa fornecido não é suportado: %(status)s" #, python-format msgid "Provided type of task is unsupported: %(type)s" msgstr "Tipo de tarefa fornecido não é suportado: %(type)s" msgid "Provides a user friendly description of the namespace." msgstr "Fornece uma descrição amigável do namespace." msgid "Received invalid HTTP redirect." msgstr "Redirecionamento de HTTP inválido recebido." #, python-format msgid "Redirecting to %(uri)s for authorization." msgstr "Redirecionando para %(uri)s para obter autorização." #, python-format msgid "Registry service can't use %s" msgstr "Serviço de registro não pode utilizar %s" #, python-format msgid "Registry was not configured correctly on API server. Reason: %(reason)s" msgstr "" "O registro não foi configurado corretamente no servidor de API. Motivo: " "%(reason)s" #, python-format msgid "Reload of %(serv)s not supported" msgstr "Recarregamento de %(serv)s não suportado" #, python-format msgid "Reloading %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "Recarregando %(serv)s (pid %(pid)s) com sinal (%(sig)s)" #, python-format msgid "Removing stale pid file %s" msgstr "Removendo o arquivo pid %s antigo" msgid "Request body must be a JSON array of operation objects." msgstr "" "O corpo da solicitação deve ser uma matriz JSON de objetos de operação." 
msgid "Request must be a list of commands" msgstr "Requisição deve ser uma lista de comandos" #, python-format msgid "Required store %s is invalid" msgstr "O armazenamento necessário %s é inválido" msgid "" "Resource type names should be aligned with Heat resource types whenever " "possible: http://docs.openstack.org/developer/heat/template_guide/openstack." "html" msgstr "" "Os nomes do tipo de recurso devem estar alinhados aos tipos de recurso do " "Heat sempre que possível: http://docs.openstack.org/developer/heat/" "template_guide/openstack.html" msgid "Response from Keystone does not contain a Glance endpoint." msgstr "A resposta de Keystone não contém um terminal do Glance." msgid "Scope of image accessibility" msgstr "Escopo de acessibilidade de imagem" msgid "Scope of namespace accessibility." msgstr "Escopo da acessibilidade do namespace." #, python-format msgid "Server %(serv)s is stopped" msgstr "O servidor %(serv)s foi interrompido" #, python-format msgid "Server worker creation failed: %(reason)s." msgstr "Falha na criação do trabalhador do servidor: %(reason)s." msgid "Signature verification failed" msgstr "A verificação de assinatura falhou" msgid "Size of image file in bytes" msgstr "Tamanho do arquivo da imagem em bytes" msgid "" "Some resource types allow more than one key / value pair per instance. For " "example, Cinder allows user and image metadata on volumes. Only the image " "properties metadata is evaluated by Nova (scheduling or drivers). This " "property allows a namespace target to remove the ambiguity." msgstr "" "Alguns tipos de recurso permitem mais de um par de chave/valor por " "instância. Por exemplo, o Cinder permite metadados do usuário e da imagem " "em volumes. Somente os metadados de propriedades da imagem são avaliados " "pelo Nova (planejamento ou drivers). Essa propriedade permite que um destino " "de namespace remova a ambiguidade." msgid "Sort direction supplied was not valid." 
msgstr "A direção de classificação fornecida não era válida." msgid "Sort key supplied was not valid." msgstr "A chave de classificação fornecida não era válida." msgid "" "Specifies the prefix to use for the given resource type. Any properties in " "the namespace should be prefixed with this prefix when being applied to the " "specified resource type. Must include prefix separator (e.g. a colon :)." msgstr "" "Especifica o prefixo a ser usado para o tipo de recurso determinado. " "Qualquer propriedade no namespace deve ter esse prefixo ao ser aplicada ao " "tipo de recurso especificado. O separador de prefixo deve ser incluído (p. " "ex., dois pontos :)." msgid "Status must be \"pending\", \"accepted\" or \"rejected\"." msgstr "O status deve ser \"pending\", \"accepted\" ou \"rejected\"." msgid "Status not specified" msgstr "Status não especificado" msgid "Status of the image" msgstr "Status da imagem" #, python-format msgid "Status transition from %(cur_status)s to %(new_status)s is not allowed" msgstr "" "Transição de status de %(cur_status)s para %(new_status)s não é permitida" #, python-format msgid "Stopping %(serv)s (pid %(pid)s) with signal(%(sig)s)" msgstr "Parando %(serv)s (pid %(pid)s) com sinal (%(sig)s)" #, python-format msgid "Store for image_id not found: %s" msgstr "Armazenamento de image_id não localizado: %s" #, python-format msgid "Store for scheme %s not found" msgstr "Armazenamento do esquema %s não localizado" #, python-format msgid "" "Supplied %(attr)s (%(supplied)s) and %(attr)s generated from uploaded image " "(%(actual)s) did not match. Setting image status to 'killed'." msgstr "" "%(attr)s fornecido (%(supplied)s) e %(attr)s gerado da imagem transferida " "por upload (%(actual)s) não corresponderam. Configurando o status da imagem " "para 'killed'." 
msgid "Supported values for the 'container_format' image attribute" msgstr "Valores suportados para o atributo de imagem 'container_format'" msgid "Supported values for the 'disk_format' image attribute" msgstr "Valores suportados para o atributo de imagem 'disk_format'" #, python-format msgid "Suppressed respawn as %(serv)s was %(rsn)s." msgstr "Novo spawn suprimido já que %(serv)s era %(rsn)s." msgid "System SIGHUP signal received." msgstr "Sinal SIGHUP do sistema recebido." #, python-format msgid "Task '%s' is required" msgstr "Tarefa '%s' é obrigatória" msgid "Task does not exist" msgstr "A tarefa não existe" msgid "Task failed due to Internal Error" msgstr "A tarefa falhou devido a Erro interno" msgid "Task was not configured properly" msgstr "A tarefa não foi configurada adequadamente" #, python-format msgid "Task with the given id %(task_id)s was not found" msgstr "Tarefa com o ID fornecido %(task_id)s não foi localizada" msgid "The \"changes-since\" filter is no longer available on v2." msgstr "O filtro \"changes-since\" não está mais disponível na v2." #, python-format msgid "The CA file you specified %s does not exist" msgstr "O arquivo CA especificado %s não existe" #, python-format msgid "" "The Image %(image_id)s object being created by this task %(task_id)s, is no " "longer in valid status for further processing." msgstr "" "O objeto da Imagem %(image_id)s que está sendo criado por esta tarefa " "%(task_id)s não está mais no status válido para processamento adicional." msgid "The Store URI was malformed." msgstr "O URI de Armazenamento foi malformado." msgid "" "The URL to the keystone service. If \"use_user_token\" is not in effect and " "using keystone auth, then URL of keystone can be specified." msgstr "" "A URL para o serviço do keystone. Se \"use_user_token\" não estiver em vigor " "e utilizando uma autorização do keystone, então a URL do keystone pode ser " "especificada." msgid "" "The administrators password. 
If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "A senha do administrador. Se \"use_user_token\" não estiver em vigor, então " "as credenciais do administrador podem ser especificadas." msgid "" "The administrators user name. If \"use_user_token\" is not in effect, then " "admin credentials can be specified." msgstr "" "O nome de usuário do administrador. Se \"use_user_token\" não estiver em " "vigor, então as credenciais do administrador podem ser especificadas." #, python-format msgid "The cert file you specified %s does not exist" msgstr "O arquivo de certificado especificado %s não existe" msgid "The current status of this task" msgstr "O status atual desta tarefa" #, python-format msgid "" "The device housing the image cache directory %(image_cache_dir)s does not " "support xattr. It is likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the device housing the cache " "directory." msgstr "" "O dispositivo no qual reside o diretório de cache de imagem " "%(image_cache_dir)s não suporta xattr. É provável que você precise editar " "fstab e incluir a opção user_xattr na linha apropriada do dispositivo que " "contém o diretório de cache." #, python-format msgid "" "The given uri is not valid. Please specify a valid uri from the following " "list of supported uri %(supported)s" msgstr "" "O URI fornecido não é válido. Especifique um uri válido a partir da seguinte " "lista de URI suportados %(supported)s" #, python-format msgid "The incoming image is too large: %s" msgstr "A imagem recebida é muito grande: %s" #, python-format msgid "The key file you specified %s does not exist" msgstr "O arquivo-chave especificado %s não existe" #, python-format msgid "" "The limit has been exceeded on the number of allowed image locations. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "O limite foi excedido no número de localizações de imagens permitidas. 
" "Tentativa: %(attempted)s, Máximo: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image members for this " "image. Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "O limite foi excedido no número de membros de imagem permitidos para esta " "imagem. Tentativa: %(attempted)s, Máximo: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(attempted)s, Maximum: %(maximum)s" msgstr "" "O limite foi excedido no número de propriedades de imagem permitidas. " "Tentativa: %(attempted)s, Máximo: %(maximum)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image properties. " "Attempted: %(num)s, Maximum: %(quota)s" msgstr "" "O limite foi excedido no número de propriedades de imagem permitidas. " "Tentativa: %(num)s, Máximo: %(quota)s" #, python-format msgid "" "The limit has been exceeded on the number of allowed image tags. Attempted: " "%(attempted)s, Maximum: %(maximum)s" msgstr "" "O limite foi excedido no número de tags de imagem permitidas. Tentativa: " "%(attempted)s, Máximo: %(maximum)s" #, python-format msgid "The location %(location)s already exists" msgstr "O local %(location)s já existe" #, python-format msgid "The location data has an invalid ID: %d" msgstr "Os dados da localização têm um ID inválido: %d" #, python-format msgid "" "The metadata definition %(record_type)s with name=%(record_name)s not " "deleted. Other records still refer to it." msgstr "" "Definição de metadados %(record_type)s com o nome=%(record_name)s não " "excluída. Outros registros ainda se referem a ela." #, python-format msgid "The metadata definition namespace=%(namespace_name)s already exists." msgstr "O namespace de definição de metadados=%(namespace_name)s já existe." #, python-format msgid "" "The metadata definition object with name=%(object_name)s was not found in " "namespace=%(namespace_name)s." 
msgstr "" "O objeto de definição de metadados com o nome=%(object_name)s não foi " "localizado no namespace=%(namespace_name)s." #, python-format msgid "" "The metadata definition property with name=%(property_name)s was not found " "in namespace=%(namespace_name)s." msgstr "" "A propriedade de definição de metadados com o nome=%(property_name)s não foi " "localizada no namespace=%(namespace_name)s." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s already exists." msgstr "" "A associação do tipo de recurso de definição de metadados do tipo de recurso=" "%(resource_type_name)s ao namespace=%(namespace_name)s já existe." #, python-format msgid "" "The metadata definition resource-type association of resource-type=" "%(resource_type_name)s to namespace=%(namespace_name)s, was not found." msgstr "" "A associação do tipo de recurso de definição de metadados do tipo de recurso=" "%(resource_type_name)s ao namespace=%(namespace_name)s, não foi localizada." #, python-format msgid "" "The metadata definition resource-type with name=%(resource_type_name)s, was " "not found." msgstr "" "O tipo de recurso de definição de metadados com o nome=" "%(resource_type_name)s, não foi localizado." #, python-format msgid "" "The metadata definition tag with name=%(name)s was not found in namespace=" "%(namespace_name)s." msgstr "" "A identificação da definição de metadados com o nome=%(name)s não foi " "localizada no namespace=%(namespace_name)s." msgid "The parameters required by task, JSON blob" msgstr "Os parâmetros requeridos pela tarefa, blob JSON" msgid "The provided image is too large." msgstr "A imagem fornecida é muito grande." msgid "" "The region for the authentication service. If \"use_user_token\" is not in " "effect and using keystone auth, then region name can be specified." msgstr "" "A região para o serviço de autenticação. 
Se \"use_user_token\" não estiver " "em vigor e utilizando a autorização do keystone, então o nome da região pode " "ser especificado." msgid "The request returned 500 Internal Server Error." msgstr "A solicitação retornou 500 Erro Interno do Servidor." msgid "" "The request returned 503 Service Unavailable. This generally occurs on " "service overload or other transient outage." msgstr "" "A solicitação retornou 503 Serviço Indisponível. Isso geralmente ocorre em " "sobrecarga de serviço ou outra interrupção temporária." #, python-format msgid "" "The request returned a 302 Multiple Choices. This generally means that you " "have not included a version indicator in a request URI.\n" "\n" "The body of response returned:\n" "%(body)s" msgstr "" "A solicitação retornou 302 Várias Opções. Isso geralmente significa que você " "não incluiu um indicador de versão em um URI de solicitação.\n" "\n" "O corpo da resposta retornou:\n" "%(body)s" #, python-format msgid "" "The request returned a 413 Request Entity Too Large. This generally means " "that rate limiting or a quota threshold was breached.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "A solicitação retornou 413 Entidade de Solicitação Muito Grande. Isso " "geralmente significa que a taxa de limitação ou um limite de cota foi " "violado.\n" "\n" "O corpo de resposta:\n" "%(body)s" #, python-format msgid "" "The request returned an unexpected status: %(status)s.\n" "\n" "The response body:\n" "%(body)s" msgstr "" "A solicitação retornou um status inesperado: %(status)s.\n" "\n" "O corpo de resposta:\n" "%(body)s" msgid "" "The requested image has been deactivated. Image data download is forbidden." msgstr "" "A imagem solicitada foi desativada. O download de dados da imagem é proibido." msgid "The result of current task, JSON blob" msgstr "O resultado da tarefa atual, blob JSON" #, python-format msgid "" "The size of the data %(image_size)s will exceed the limit. %(remaining)s " "bytes remaining." 
msgstr "" "O tamanho dos dados %(image_size)s excederá o limite. %(remaining)s " "bytes restantes." #, python-format msgid "The specified member %s could not be found" msgstr "O membro especificado %s não pôde ser localizado" #, python-format msgid "The specified metadata object %s could not be found" msgstr "O objeto de metadados especificado %s não pôde ser localizado" #, python-format msgid "The specified metadata tag %s could not be found" msgstr "A identificação de metadados especificada %s não pôde ser localizada" #, python-format msgid "The specified namespace %s could not be found" msgstr "O namespace especificado %s não pôde ser localizado" #, python-format msgid "The specified property %s could not be found" msgstr "A propriedade especificada %s não pôde ser localizada" #, python-format msgid "The specified resource type %s could not be found " msgstr "O tipo de recurso especificado %s não pôde ser localizado " msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'" msgstr "" "O status de local da imagem excluída só pode ser definido como " "'pending_delete' ou 'deleted'" msgid "" "The status of deleted image location can only be set to 'pending_delete' or " "'deleted'." msgstr "" "O status de local da imagem excluída só pode ser definido como " "'pending_delete' ou 'deleted'." msgid "The status of this image member" msgstr "O status desse membro da imagem" msgid "" "The strategy to use for authentication. If \"use_user_token\" is not in " "effect, then auth strategy can be specified." msgstr "" "A estratégia a ser utilizada para autenticação. Se \"use_user_token\" não " "estiver em vigor, então a estratégia de autenticação pode ser especificada." #, python-format msgid "" "The target member %(member_id)s is already associated with image " "%(image_id)s." msgstr "" "O membro de destino %(member_id)s já está associado à imagem %(image_id)s." msgid "" "The tenant name of the administrative user. 
If \"use_user_token\" is not in " "effect, then admin tenant name can be specified." msgstr "" "O nome de locatário do usuário administrativo. Se \"use_user_token\" não " "estiver em vigor, então o nome de locatário do administrador pode ser " "especificado." msgid "The type of task represented by this content" msgstr "O tipo de tarefa representada por este conteúdo" msgid "The unique namespace text." msgstr "O texto do namespace exclusivo." msgid "The user friendly name for the namespace. Used by UI if available." msgstr "" "O nome amigável do namespace. Usado pela interface com o usuário, se " "disponível." #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. Error: %(ioe)s" msgstr "" "Há um problema com o %(error_key_name)s %(error_filename)s. Verifique-o. " "Erro: %(ioe)s" #, python-format msgid "" "There is a problem with your %(error_key_name)s %(error_filename)s. Please " "verify it. OpenSSL error: %(ce)s" msgstr "" "Há um problema com o %(error_key_name)s %(error_filename)s. Verifique-o. " "Erro de OpenSSL: %(ce)s" #, python-format msgid "" "There is a problem with your key pair. Please verify that cert " "%(cert_file)s and key %(key_file)s belong together. OpenSSL error %(ce)s" msgstr "" "Há um problema com seu par de chaves. Verifique se o certificado " "%(cert_file)s e a chave %(key_file)s estão juntos. Erro de OpenSSL %(ce)s" msgid "There was an error configuring the client." msgstr "Houve um erro ao configurar o cliente." msgid "There was an error connecting to a server" msgstr "Houve um erro ao conectar a um servidor" msgid "" "This operation is currently not permitted on Glance Tasks. They are auto " "deleted after reaching the time based on their expires_at property." msgstr "" "Esta operação não é atualmente permitida em Tarefas do Glance. Elas são " "automaticamente excluídas após atingir o tempo com base em sua propriedade " "expires_at." 
msgid "This operation is currently not permitted on Glance images details." msgstr "" "Esta operação não é atualmente permitida em detalhes de imagens do Glance." msgid "" "Time in hours for which a task lives after, either succeeding or failing" msgstr "Tempo em horas durante o qual uma tarefa é mantida, com êxito ou falha" msgid "Too few arguments." msgstr "Muito poucos argumentos." msgid "" "URI cannot contain more than one occurrence of a scheme.If you have " "specified a URI like swift://user:pass@http://authurl.com/v1/container/obj, " "you need to change it to use the swift+http:// scheme, like so: swift+http://" "user:pass@authurl.com/v1/container/obj" msgstr "" "URI não pode conter mais de uma ocorrência de um esquema. Se você tiver " "especificado um URI como swift://user:pass@http://authurl.com/v1/container/" "obj, precisará alterá-lo para usar o esquema swift+http://, desta forma: " "swift+http://user:pass@authurl.com/v1/container/obj" msgid "URL to access the image file kept in external store" msgstr "URL para acessar o arquivo de imagem mantido no armazenamento externo " #, python-format msgid "" "Unable to create pid file %(pid)s. Running as non-root?\n" "Falling back to a temp file, you can stop %(service)s service using:\n" " %(file)s %(server)s stop --pid-file %(fb)s" msgstr "" "Impossível criar arquivo pid %(pid)s. Executando como não raiz?\n" "Voltando para um arquivo temporário, é possível parar o serviço %(service)s " "usando:\n" " %(file)s %(server)s stop --pid-file %(fb)s" #, python-format msgid "Unable to filter by unknown operator '%s'." msgstr "Não é possível filtrar por operador desconhecido '%s'." msgid "Unable to filter on a range with a non-numeric value." msgstr "Não é possível filtrar um intervalo com um valor não numérico." msgid "Unable to filter on a unknown operator." msgstr "Não é possível filtrar em um operador desconhecido." msgid "Unable to filter using the specified operator." 
msgstr "Não é possível filtrar usando o operador especificado." msgid "Unable to filter using the specified range." msgstr "Não é possível filtrar usando o intervalo especificado." #, python-format msgid "Unable to find '%s' in JSON Schema change" msgstr "Não é possível localizar '%s' na mudança de Esquema JSON" #, python-format msgid "" "Unable to find `op` in JSON Schema change. It must be one of the following: " "%(available)s." msgstr "" "Não é possível localizar `op` na mudança de Esquema JSON. Deve ser um dos " "seguintes: %(available)s." msgid "Unable to increase file descriptor limit. Running as non-root?" msgstr "" "Não é possível aumentar o limite do descritor de arquivo. Executando como " "não-raiz?" #, python-format msgid "" "Unable to load %(app_name)s from configuration file %(conf_file)s.\n" "Got: %(e)r" msgstr "" "Não é possível carregar %(app_name)s do arquivo de configuração " "%(conf_file)s.\n" "Obtido: %(e)r" #, python-format msgid "Unable to load schema: %(reason)s" msgstr "Não é possível carregar o esquema: %(reason)s" #, python-format msgid "Unable to locate paste config file for %s." msgstr "Impossível localizar o arquivo de configuração de colagem para %s." #, python-format msgid "Unable to upload duplicate image data for image%(image_id)s: %(error)s" msgstr "" "Não é possível fazer upload de dados de imagem duplicados para a imagem " "%(image_id)s: %(error)s" msgid "Unauthorized image access" msgstr "Acesso à imagem desautorizado" msgid "Unexpected body type. Expected list/dict." msgstr "Tipo de corpo inesperado. Lista/dicionário esperados." 
#, python-format msgid "Unexpected response: %s" msgstr "Resposta inesperada: %s" #, python-format msgid "Unknown auth strategy '%s'" msgstr "Estratégia de autenticação desconhecida '%s'" #, python-format msgid "Unknown command: %s" msgstr "Comando desconhecido: %s" msgid "Unknown sort direction, must be 'desc' or 'asc'" msgstr "Direção de classificação desconhecida; deve ser 'desc' ou 'asc'" msgid "Unrecognized JSON Schema draft version" msgstr "Versão rascunho do Esquema JSON não reconhecida" msgid "Unrecognized changes-since value" msgstr "Valor de changes-since não reconhecido" #, python-format msgid "Unsupported sort_dir. Acceptable values: %s" msgstr "sort_dir não suportado. Valores aceitáveis: %s" #, python-format msgid "Unsupported sort_key. Acceptable values: %s" msgstr "sort_key não suportado. Valores aceitáveis: %s" msgid "Virtual size of image in bytes" msgstr "Tamanho virtual de imagem em bytes " #, python-format msgid "Waited 15 seconds for pid %(pid)s (%(file)s) to die; giving up" msgstr "" "Esperou 15 segundos para pid %(pid)s (%(file)s) ser eliminado; desistindo" msgid "" "When running server in SSL mode, you must specify both a cert_file and " "key_file option value in your configuration file" msgstr "" "Ao executar o servidor no modo SSL, você deve especificar um valor de opção " "cert_file e key_file no seu arquivo de configuração" msgid "" "Whether to pass through the user token when making requests to the registry. " "To prevent failures with token expiration during big files upload, it is " "recommended to set this parameter to False.If \"use_user_token\" is not in " "effect, then admin credentials can be specified." msgstr "" "Se o token do usuário deve ser repassado ao fazer solicitações ao registro. Para " "evitar falhas com expiração de token durante o upload de arquivos grandes, é " "recomendável configurar esse parâmetro como False. Se \"use_user_token\" não " "estiver em vigor, as credenciais do administrador poderão ser especificadas." 
#, python-format msgid "Wrong command structure: %s" msgstr "Estrutura de comandos incorreta: %s" msgid "You are not authenticated." msgstr "Você não está autenticado." msgid "You are not authorized to complete this action." msgstr "Você não está autorizado a concluir esta ação." #, python-format msgid "You are not authorized to lookup image %s." msgstr "Você não está autorizado a consultar a imagem %s." #, python-format msgid "You are not authorized to lookup the members of the image %s." msgstr "Você não está autorizado a consultar os membros da imagem %s." #, python-format msgid "You are not permitted to create a tag in the namespace owned by '%s'" msgstr "" "Você não tem permissão para criar uma identificação no namespace de " "propriedade de '%s'" msgid "You are not permitted to create image members for the image." msgstr "Você não tem permissão para criar membros da imagem." #, python-format msgid "You are not permitted to create images owned by '%s'." msgstr "Você não tem permissão para criar imagens de propriedade de '%s'." #, python-format msgid "You are not permitted to create namespace owned by '%s'" msgstr "Você não tem permissão para criar namespace de propriedade de '%s'" #, python-format msgid "You are not permitted to create object owned by '%s'" msgstr "Você não tem permissão para criar objeto de propriedade de '%s'" #, python-format msgid "You are not permitted to create property owned by '%s'" msgstr "" "Você não tem permissão para criar essa propriedade de propriedade de '%s'" #, python-format msgid "You are not permitted to create resource_type owned by '%s'" msgstr "Você não tem permissão para criar resource_type de propriedade de '%s'" #, python-format msgid "You are not permitted to create this task with owner as: %s" msgstr "" "Você não tem permissão para criar essa tarefa com proprietário como: %s" msgid "You are not permitted to deactivate this image." msgstr "Você não tem permissão para desativar esta imagem." 
msgid "You are not permitted to delete this image." msgstr "Você não tem permissão para excluir esta imagem." msgid "You are not permitted to delete this meta_resource_type." msgstr "Você não tem permissão para excluir esse meta_resource_type." msgid "You are not permitted to delete this namespace." msgstr "Você não tem permissão para excluir esse namespace." msgid "You are not permitted to delete this object." msgstr "Você não tem permissão para excluir esse objeto." msgid "You are not permitted to delete this property." msgstr "Você não tem permissão para excluir essa propriedade." msgid "You are not permitted to delete this tag." msgstr "Você não tem permissão para excluir esta identificação." #, python-format msgid "You are not permitted to modify '%(attr)s' on this %(resource)s." msgstr "Você não tem permissão para modificar '%(attr)s' nesse %(resource)s." #, python-format msgid "You are not permitted to modify '%s' on this image." msgstr "Você não tem permissão para modificar '%s' nesta imagem." msgid "You are not permitted to modify locations for this image." msgstr "Você não tem permissão para modificar locais para esta imagem." msgid "You are not permitted to modify tags on this image." msgstr "Você não tem permissão para modificar tags nesta imagem." msgid "You are not permitted to modify this image." msgstr "Você não tem permissão para modificar esta imagem." msgid "You are not permitted to reactivate this image." msgstr "Você não tem permissão para reativar essa imagem." msgid "You are not permitted to set status on this task." msgstr "Você não tem permissão para definir o status dessa tarefa." msgid "You are not permitted to update this namespace." msgstr "Você não tem permissão para atualizar esse namespace." msgid "You are not permitted to update this object." msgstr "Você não tem permissão para atualizar esse objeto." msgid "You are not permitted to update this property." msgstr "Você não tem permissão para atualizar essa propriedade." 
msgid "You are not permitted to update this tag." msgstr "Você não tem permissão para atualizar esta identificação." msgid "You are not permitted to upload data for this image." msgstr "Você não tem permissão para fazer upload de dados para esta imagem." #, python-format msgid "You cannot add image member for %s" msgstr "Não é possível incluir o membro da imagem para %s" #, python-format msgid "You cannot delete image member for %s" msgstr "Não é possível excluir o membro da imagem para %s" #, python-format msgid "You cannot get image member for %s" msgstr "Não é possível obter o membro da imagem para %s" #, python-format msgid "You cannot update image member %s" msgstr "Não é possível atualizar o membro da imagem %s" msgid "You do not own this image" msgstr "Você não possui essa imagem" msgid "" "You have selected to use SSL in connecting, and you have supplied a cert, " "however you have failed to supply either a key_file parameter or set the " "GLANCE_CLIENT_KEY_FILE environ variable" msgstr "" "Você optou por usar SSL na conexão e forneceu um certificado, mas falhou em " "fornecer um parâmetro key_file ou configurar a variável de ambiente " "GLANCE_CLIENT_KEY_FILE" msgid "" "You have selected to use SSL in connecting, and you have supplied a key, " "however you have failed to supply either a cert_file parameter or set the " "GLANCE_CLIENT_CERT_FILE environ variable" msgstr "" "Você optou por usar SSL na conexão e forneceu uma chave, mas falhou em " "fornecer um parâmetro cert_file ou configurar a variável de ambiente " "GLANCE_CLIENT_CERT_FILE" msgid "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" msgstr "" "^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-" "fA-F]){12}$" #, python-format msgid "__init__() got unexpected keyword argument '%s'" msgstr "__init__() obteve argumento de palavra-chave inesperado '%s'" #, python-format msgid "" "cannot transition from %(current)s to %(next)s in 
update (wanted from_state=" "%(from)s)" msgstr "" "Não é possível a transição de %(current)s para %(next)s na atualização " "(desejado from_state=%(from)s)" #, python-format msgid "custom properties (%(props)s) conflict with base properties" msgstr "" "conflito de propriedades customizadas (%(props)s) com propriedades de base" msgid "eventlet 'poll' nor 'selects' hubs are available on this platform" msgstr "" "nem o hub 'poll' nem o 'selects' do eventlet estão disponíveis nesta " "plataforma" msgid "is_public must be None, True, or False" msgstr "is_public deve ser Nenhum, True ou False" msgid "limit param must be an integer" msgstr "o parâmetro limit deve ser um número inteiro" msgid "limit param must be positive" msgstr "o parâmetro limit deve ser positivo" msgid "md5 hash of image contents." msgstr "Hash md5 do conteúdo da imagem." #, python-format msgid "new_image() got unexpected keywords %s" msgstr "new_image() obteve palavras-chave inesperadas %s" msgid "protected must be True, or False" msgstr "protegido deve ser True, ou False" #, python-format msgid "unable to launch %(serv)s. Got error: %(e)s" msgstr "Não é possível ativar %(serv)s. Obteve erro: %(e)s" #, python-format msgid "x-openstack-request-id is too long, max size %s" msgstr "x-openstack-request-id é muito longo; tamanho máximo %s"

glance-16.0.0/glance/image_cache/pruner.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Prunes the Image Cache
"""

from glance.image_cache import base


class Pruner(base.CacheApp):

    def run(self):
        self.cache.prune()

glance-16.0.0/glance/image_cache/client.py

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

from oslo_serialization import jsonutils as json

from glance.common import client as base_client
from glance.common import exception
from glance.i18n import _


class CacheClient(base_client.BaseClient):

    DEFAULT_PORT = 9292
    DEFAULT_DOC_ROOT = '/v1'

    def delete_cached_image(self, image_id):
        """
        Delete a specified image from the cache
        """
        self.do_request("DELETE", "/cached_images/%s" % image_id)
        return True

    def get_cached_images(self, **kwargs):
        """
        Returns a list of images stored in the image cache.
        """
        res = self.do_request("GET", "/cached_images")
        data = json.loads(res.read())['cached_images']
        return data

    def get_queued_images(self, **kwargs):
        """
        Returns a list of images queued for caching
        """
        res = self.do_request("GET", "/queued_images")
        data = json.loads(res.read())['queued_images']
        return data

    def delete_all_cached_images(self):
        """
        Delete all cached images
        """
        res = self.do_request("DELETE", "/cached_images")
        data = json.loads(res.read())
        num_deleted = data['num_deleted']
        return num_deleted

    def queue_image_for_caching(self, image_id):
        """
        Queue an image for prefetching into cache
        """
        self.do_request("PUT", "/queued_images/%s" % image_id)
        return True

    def delete_queued_image(self, image_id):
        """
        Delete a specified image from the cache queue
        """
        self.do_request("DELETE", "/queued_images/%s" % image_id)
        return True

    def delete_all_queued_images(self):
        """
        Delete all queued images
        """
        res = self.do_request("DELETE", "/queued_images")
        data = json.loads(res.read())
        num_deleted = data['num_deleted']
        return num_deleted


def get_client(host, port=None, timeout=None, use_ssl=False, username=None,
               password=None, tenant=None, auth_url=None, auth_strategy=None,
               auth_token=None, region=None, is_silent_upload=False,
               insecure=False):
    """
    Returns a new Glance client object based on common kwargs.
    If an option isn't specified falls back to common environment variable
    defaults.
    """
    if auth_url or os.getenv('OS_AUTH_URL'):
        force_strategy = 'keystone'
    else:
        force_strategy = None

    creds = {
        'username': username or
        os.getenv('OS_AUTH_USER', os.getenv('OS_USERNAME')),
        'password': password or
        os.getenv('OS_AUTH_KEY', os.getenv('OS_PASSWORD')),
        'tenant': tenant or
        os.getenv('OS_AUTH_TENANT', os.getenv('OS_TENANT_NAME')),
        'auth_url': auth_url or
        os.getenv('OS_AUTH_URL'),
        'strategy': force_strategy or
        auth_strategy or
        os.getenv('OS_AUTH_STRATEGY', 'noauth'),
        'region': region or os.getenv('OS_REGION_NAME'),
    }

    if creds['strategy'] == 'keystone' and not creds['auth_url']:
        msg = _("--os_auth_url option or OS_AUTH_URL environment variable "
                "required when keystone authentication strategy is enabled\n")
        raise exception.ClientConfigurationError(msg)

    return CacheClient(
        host=host,
        port=port,
        timeout=timeout,
        use_ssl=use_ssl,
        auth_token=auth_token or
        os.getenv('OS_TOKEN'),
        creds=creds,
        insecure=insecure,
        configure_via_auth=False)

glance-16.0.0/glance/image_cache/cleaner.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Cleans up any invalid cache entries
"""

from glance.image_cache import base


class Cleaner(base.CacheApp):

    def run(self):
        self.cache.clean()

glance-16.0.0/glance/image_cache/base.py

# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance.image_cache import ImageCache


class CacheApp(object):

    def __init__(self):
        self.cache = ImageCache()

glance-16.0.0/glance/image_cache/prefetcher.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Prefetches images into the Image Cache
"""

import eventlet
import glance_store
from oslo_log import log as logging

from glance.common import exception
from glance import context
from glance.i18n import _LI, _LW
from glance.image_cache import base
import glance.registry.client.v1.api as registry

LOG = logging.getLogger(__name__)


class Prefetcher(base.CacheApp):

    def __init__(self):
        super(Prefetcher, self).__init__()
        registry.configure_registry_client()
        registry.configure_registry_admin_creds()

    def fetch_image_into_cache(self, image_id):
        ctx = context.RequestContext(is_admin=True, show_deleted=True)
        try:
            image_meta = registry.get_image_metadata(ctx, image_id)
            if image_meta['status'] != 'active':
                LOG.warn(_LW("Image '%s' is not active. Not caching.") %
                         image_id)
                return False
        except exception.NotFound:
            LOG.warn(_LW("No metadata found for image '%s'") % image_id)
            return False

        location = image_meta['location']
        image_data, image_size = glance_store.get_from_backend(location,
                                                               context=ctx)
        LOG.debug("Caching image '%s'", image_id)
        cache_tee_iter = self.cache.cache_tee_iter(image_id, image_data,
                                                   image_meta['checksum'])
        # Image is tee'd into cache and checksum verified
        # as we iterate
        list(cache_tee_iter)
        return True

    def run(self):
        images = self.cache.get_queued_images()
        if not images:
            LOG.debug("Nothing to prefetch.")
            return True

        num_images = len(images)
        LOG.debug("Found %d images to prefetch", num_images)

        pool = eventlet.GreenPool(num_images)
        results = pool.imap(self.fetch_image_into_cache, images)
        successes = sum([1 for r in results if r is True])
        if successes != num_images:
            LOG.warn(_LW("Failed to successfully cache all "
                         "images in queue."))
            return False

        LOG.info(_LI("Successfully cached all %d images"), num_images)
        return True

glance-16.0.0/glance/image_cache/__init__.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ LRU Cache for Image Data """ import hashlib from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import importutils from oslo_utils import units from glance.common import exception from glance.common import utils from glance.i18n import _, _LE, _LI, _LW LOG = logging.getLogger(__name__) image_cache_opts = [ cfg.StrOpt('image_cache_driver', default='sqlite', choices=('sqlite', 'xattr'), ignore_case=True, help=_(""" The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class ``glance.image_cache.drivers.base.Driver``. All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are ``sqlite`` and ``xattr``. These drivers primarily differ in the way they store the information about cached images: * The ``sqlite`` driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. 
* The ``xattr`` driver uses the extended attributes of files to store this information. It also requires a filesystem that sets ``atime`` on the files when accessed. Possible values: * sqlite * xattr Related options: * None """)), cfg.IntOpt('image_cache_max_size', default=10 * units.Gi, # 10 GB min=0, help=_(""" The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. NOTE: This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to size specified here. Possible values: * Any non-negative integer Related options: * None """)), cfg.IntOpt('image_cache_stall_time', default=86400, # 24 hours min=0, help=_(""" The amount of time, in seconds, an incomplete image remains in the cache. Incomplete images are images for which download is in progress. Please see the description of configuration option ``image_cache_dir`` for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the ``incomplete`` directory. This configuration option sets a time limit on how long the incomplete images should remain in the ``incomplete`` directory before they are cleaned up. 
Once an incomplete image spends more time than is specified here, it'll be removed by cache-cleaner on its next run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: * Any non-negative integer Related options: * None """)), cfg.StrOpt('image_cache_dir', help=_(""" Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. This directory also contains three subdirectories, namely, ``incomplete``, ``invalid`` and ``queue``. The ``incomplete`` subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the ``invalid`` subdirectory. The ``queue``subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the ``queue`` directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in ``queue`` directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the ``queue`` directory. If the download fails, the zero-sized file remains and it'll be retried the next time cache-prefetcher runs. 
Possible values: * A valid path Related options: * ``image_cache_sqlite_db`` """)), ] CONF = cfg.CONF CONF.register_opts(image_cache_opts) class ImageCache(object): """Provides an LRU cache for image data.""" def __init__(self): self.init_driver() def init_driver(self): """ Create the driver for the cache """ driver_name = CONF.image_cache_driver driver_module = (__name__ + '.drivers.' + driver_name + '.Driver') try: self.driver_class = importutils.import_class(driver_module) LOG.info(_LI("Image cache loaded driver '%s'."), driver_name) except ImportError as import_err: LOG.warn(_LW("Image cache driver " "'%(driver_name)s' failed to load. " "Got error: '%(import_err)s."), {'driver_name': driver_name, 'import_err': import_err}) driver_module = __name__ + '.drivers.sqlite.Driver' LOG.info(_LI("Defaulting to SQLite driver.")) self.driver_class = importutils.import_class(driver_module) self.configure_driver() def configure_driver(self): """ Configure the driver for the cache and, if it fails to configure, fall back to using the SQLite driver which has no odd dependencies """ try: self.driver = self.driver_class() self.driver.configure() except exception.BadDriverConfiguration as config_err: driver_module = self.driver_class.__module__ LOG.warn(_LW("Image cache driver " "'%(driver_module)s' failed to configure. " "Got error: '%(config_err)s"), {'driver_module': driver_module, 'config_err': config_err}) LOG.info(_LI("Defaulting to SQLite driver.")) default_module = __name__ + '.drivers.sqlite.Driver' self.driver_class = importutils.import_class(default_module) self.driver = self.driver_class() self.driver.configure() def is_cached(self, image_id): """ Returns True if the image with the supplied ID has its image file cached. :param image_id: Image ID """ return self.driver.is_cached(image_id) def is_queued(self, image_id): """ Returns True if the image identifier is in our cache queue. 
:param image_id: Image ID """ return self.driver.is_queued(image_id) def get_cache_size(self): """ Returns the total size in bytes of the image cache. """ return self.driver.get_cache_size() def get_hit_count(self, image_id): """ Return the number of hits that an image has :param image_id: Opaque image identifier """ return self.driver.get_hit_count(image_id) def get_cached_images(self): """ Returns a list of records about cached images. """ return self.driver.get_cached_images() def delete_all_cached_images(self): """ Removes all cached image files and any attributes about the images and returns the number of cached image files that were deleted. """ return self.driver.delete_all_cached_images() def delete_cached_image(self, image_id): """ Removes a specific cached image file and any attributes about the image :param image_id: Image ID """ self.driver.delete_cached_image(image_id) def delete_all_queued_images(self): """ Removes all queued image files and any attributes about the images and returns the number of queued image files that were deleted. """ return self.driver.delete_all_queued_images() def delete_queued_image(self, image_id): """ Removes a specific queued image file and any attributes about the image :param image_id: Image ID """ self.driver.delete_queued_image(image_id) def prune(self): """ Removes all cached image files above the cache's maximum size. Returns a tuple containing the total number of cached files removed and the total size of all pruned image files. """ max_size = CONF.image_cache_max_size current_size = self.driver.get_cache_size() if max_size > current_size: LOG.debug("Image cache has free space, skipping prune...") return (0, 0) overage = current_size - max_size LOG.debug("Image cache currently %(overage)d bytes over max " "size. 
Starting prune to max size of %(max_size)d ", {'overage': overage, 'max_size': max_size}) total_bytes_pruned = 0 total_files_pruned = 0 entry = self.driver.get_least_recently_accessed() while entry and current_size > max_size: image_id, size = entry LOG.debug("Pruning '%(image_id)s' to free %(size)d bytes", {'image_id': image_id, 'size': size}) self.driver.delete_cached_image(image_id) total_bytes_pruned = total_bytes_pruned + size total_files_pruned = total_files_pruned + 1 current_size = current_size - size entry = self.driver.get_least_recently_accessed() LOG.debug("Pruning finished. " "Pruned %(total_files_pruned)d files and " "%(total_bytes_pruned)d bytes.", {'total_files_pruned': total_files_pruned, 'total_bytes_pruned': total_bytes_pruned}) return total_files_pruned, total_bytes_pruned def clean(self, stall_time=None): """ Cleans up any invalid or incomplete cached images. The cache driver decides what that means... """ self.driver.clean(stall_time) def queue_image(self, image_id): """ This adds an image to be cached to the queue. If the image already exists in the queue or has already been cached, we return False; otherwise we return True. :param image_id: Image ID """ return self.driver.queue_image(image_id) def get_caching_iter(self, image_id, image_checksum, image_iter): """ Returns an iterator that caches the contents of an image while the image contents are read through the supplied iterator. 
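The tee behaviour of `get_caching_iter` can be illustrated with a minimal generator that writes each chunk to a cache file while yielding it onward, then verifies an MD5 checksum once the stream is exhausted. This is a hedged sketch; `caching_iter` is an illustrative name, not the Glance method itself:

```python
import hashlib

def caching_iter(chunks, cache_file, expected_md5=None):
    # Tee each chunk into cache_file while yielding it to the caller.
    md5 = hashlib.md5()
    for chunk in chunks:
        cache_file.write(chunk)
        md5.update(chunk)
        yield chunk
    # Only once the whole stream is consumed can the checksum be checked.
    if expected_md5 and md5.hexdigest() != expected_md5:
        raise ValueError("checksum verification failed")
```

Because this is a generator, the cache is populated as a side effect of the caller reading the image — no second pass over the data is needed.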
:param image_id: Image ID :param image_checksum: checksum expected to be generated while iterating over image data :param image_iter: Iterator that will read image contents """ if not self.driver.is_cacheable(image_id): return image_iter LOG.debug("Tee'ing image '%s' into cache", image_id) return self.cache_tee_iter(image_id, image_iter, image_checksum) def cache_tee_iter(self, image_id, image_iter, image_checksum): try: current_checksum = hashlib.md5() with self.driver.open_for_write(image_id) as cache_file: for chunk in image_iter: try: cache_file.write(chunk) finally: current_checksum.update(chunk) yield chunk cache_file.flush() if (image_checksum and image_checksum != current_checksum.hexdigest()): msg = _("Checksum verification failed. Aborted " "caching of image '%s'.") % image_id raise exception.GlanceException(msg) except exception.GlanceException as e: with excutils.save_and_reraise_exception(): # image_iter has given us bad, (size_checked_iter has found a # bad length), or corrupt data (checksum is wrong). LOG.exception(encodeutils.exception_to_unicode(e)) except Exception as e: LOG.exception(_LE("Exception encountered while tee'ing " "image '%(image_id)s' into cache: %(error)s. " "Continuing with response.") % {'image_id': image_id, 'error': encodeutils.exception_to_unicode(e)}) # If no checksum provided continue responding even if # caching failed. for chunk in image_iter: yield chunk def cache_image_iter(self, image_id, image_iter, image_checksum=None): """ Cache an image with supplied iterator. :param image_id: Image ID :param image_file: Iterator retrieving image chunks :param image_checksum: Checksum of image :returns: True if image file was cached, False otherwise """ if not self.driver.is_cacheable(image_id): return False for chunk in self.get_caching_iter(image_id, image_checksum, image_iter): pass return True def cache_image_file(self, image_id, image_file): """ Cache an image file. 
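`cache_image_iter` simply drains such a caching iterator; its input is an iterator of fixed-size chunks, which Glance builds with `utils.chunkiter`. A minimal stand-in for that chunking helper, assuming only a file-like object with `read`:

```python
def chunkiter(fp, chunk_size=64 * 1024):
    # Yield successive fixed-size chunks from a file-like object
    # until EOF; the final chunk may be shorter.
    chunk = fp.read(chunk_size)
    while chunk:
        yield chunk
        chunk = fp.read(chunk_size)
```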
:param image_id: Image ID :param image_file: Image file to cache :returns: True if image file was cached, False otherwise """ CHUNKSIZE = 64 * units.Mi return self.cache_image_iter(image_id, utils.chunkiter(image_file, CHUNKSIZE)) def open_for_read(self, image_id): """ Open and yield file for reading the image file for an image with supplied identifier. :note Upon successful reading of the image file, the image's hit count will be incremented. :param image_id: Image ID """ return self.driver.open_for_read(image_id) def get_image_size(self, image_id): """ Return the size of the image file for an image with supplied identifier. :param image_id: Image ID """ return self.driver.get_image_size(image_id) def get_queued_images(self): """ Returns a list of image IDs that are in the queue. The list should be sorted by the time the image ID was inserted into the queue. """ return self.driver.get_queued_images() glance-16.0.0/glance/image_cache/drivers/0000775000175100017510000000000013245511661020173 5ustar zuulzuul00000000000000glance-16.0.0/glance/image_cache/drivers/xattr.py0000666000175100017510000004064413245511421021713 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Cache driver that uses xattr file tags and requires a filesystem that has atimes set. Assumptions =========== 1. Cache data directory exists on a filesytem that updates atime on reads ('noatime' should NOT be set) 2. 
Cache data directory exists on a filesystem that supports xattrs. This is optional, but highly recommended since it allows us to present ops with useful information pertaining to the cache, like human readable filenames and statistics. 3. `glance-prune` is scheduled to run as a periodic job via cron. This is needed to run the LRU prune strategy to keep the cache size within the limits set by the config file. Cache Directory Notes ===================== The image cache data directory contains the main cache path, where the active cache entries and subdirectories for handling partial downloads and errored-out cache images. The layout looks like: $image_cache_dir/ entry1 entry2 ... incomplete/ invalid/ queue/ """ from __future__ import absolute_import from contextlib import contextmanager import errno import os import stat import time from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import fileutils import six import xattr from glance.common import exception from glance.i18n import _, _LI from glance.image_cache.drivers import base LOG = logging.getLogger(__name__) CONF = cfg.CONF class Driver(base.Driver): """ Cache driver that uses xattr file tags and requires a filesystem that has atimes set. """ def configure(self): """ Configure the driver to use the stored configuration options Any store that needs special configuration should implement this method. If the store was not able to successfully configure itself, it should raise `exception.BadDriverConfiguration` """ # Here we set up the various file-based image cache paths # that we need in order to find the files in different states # of cache management. 
self.set_paths() # We do a quick attempt to write a user xattr to a temporary file # to check that the filesystem is even enabled to support xattrs image_cache_dir = self.base_dir fake_image_filepath = os.path.join(image_cache_dir, 'checkme') with open(fake_image_filepath, 'wb') as fake_file: fake_file.write(b"XXX") fake_file.flush() try: set_xattr(fake_image_filepath, 'hits', '1') except IOError as e: if e.errno == errno.EOPNOTSUPP: msg = (_("The device housing the image cache directory " "%(image_cache_dir)s does not support xattr. It is" " likely you need to edit your fstab and add the " "user_xattr option to the appropriate line for the" " device housing the cache directory.") % {'image_cache_dir': image_cache_dir}) LOG.error(msg) raise exception.BadDriverConfiguration(driver_name="xattr", reason=msg) else: # Cleanup after ourselves... fileutils.delete_if_exists(fake_image_filepath) def get_cache_size(self): """ Returns the total size in bytes of the image cache. """ sizes = [] for path in get_all_regular_files(self.base_dir): file_info = os.stat(path) sizes.append(file_info[stat.ST_SIZE]) return sum(sizes) def get_hit_count(self, image_id): """ Return the number of hits that an image has. :param image_id: Opaque image identifier """ if not self.is_cached(image_id): return 0 path = self.get_image_filepath(image_id) return int(get_xattr(path, 'hits', default=0)) def get_cached_images(self): """ Returns a list of records about cached images. 
""" LOG.debug("Gathering cached image entries.") entries = [] for path in get_all_regular_files(self.base_dir): image_id = os.path.basename(path) entry = {'image_id': image_id} file_info = os.stat(path) entry['last_modified'] = file_info[stat.ST_MTIME] entry['last_accessed'] = file_info[stat.ST_ATIME] entry['size'] = file_info[stat.ST_SIZE] entry['hits'] = self.get_hit_count(image_id) entries.append(entry) return entries def is_cached(self, image_id): """ Returns True if the image with the supplied ID has its image file cached. :param image_id: Image ID """ return os.path.exists(self.get_image_filepath(image_id)) def is_cacheable(self, image_id): """ Returns True if the image with the supplied ID can have its image file cached, False otherwise. :param image_id: Image ID """ # Make sure we're not already cached or caching the image return not (self.is_cached(image_id) or self.is_being_cached(image_id)) def is_being_cached(self, image_id): """ Returns True if the image with supplied id is currently in the process of having its image file cached. :param image_id: Image ID """ path = self.get_image_filepath(image_id, 'incomplete') return os.path.exists(path) def is_queued(self, image_id): """ Returns True if the image identifier is in our cache queue. 
""" path = self.get_image_filepath(image_id, 'queue') return os.path.exists(path) def delete_all_cached_images(self): """ Removes all cached image files and any attributes about the images """ deleted = 0 for path in get_all_regular_files(self.base_dir): delete_cached_file(path) deleted += 1 return deleted def delete_cached_image(self, image_id): """ Removes a specific cached image file and any attributes about the image :param image_id: Image ID """ path = self.get_image_filepath(image_id) delete_cached_file(path) def delete_all_queued_images(self): """ Removes all queued image files and any attributes about the images """ files = [f for f in get_all_regular_files(self.queue_dir)] for file in files: fileutils.delete_if_exists(file) return len(files) def delete_queued_image(self, image_id): """ Removes a specific queued image file and any attributes about the image :param image_id: Image ID """ path = self.get_image_filepath(image_id, 'queue') fileutils.delete_if_exists(path) def get_least_recently_accessed(self): """ Return a tuple containing the image_id and size of the least recently accessed cached file, or None if no cached files. """ stats = [] for path in get_all_regular_files(self.base_dir): file_info = os.stat(path) stats.append((file_info[stat.ST_ATIME], # access time file_info[stat.ST_SIZE], # size in bytes path)) # absolute path if not stats: return None stats.sort() return os.path.basename(stats[0][2]), stats[0][1] @contextmanager def open_for_write(self, image_id): """ Open a file for writing the image file for an image with supplied identifier. 
:param image_id: Image ID """ incomplete_path = self.get_image_filepath(image_id, 'incomplete') def set_attr(key, value): set_xattr(incomplete_path, key, value) def commit(): set_attr('hits', 0) final_path = self.get_image_filepath(image_id) LOG.debug("Fetch finished, moving " "'%(incomplete_path)s' to '%(final_path)s'", dict(incomplete_path=incomplete_path, final_path=final_path)) os.rename(incomplete_path, final_path) # Make sure that we "pop" the image from the queue... if self.is_queued(image_id): LOG.debug("Removing image '%s' from queue after " "caching it.", image_id) fileutils.delete_if_exists( self.get_image_filepath(image_id, 'queue')) def rollback(e): set_attr('error', encodeutils.exception_to_unicode(e)) invalid_path = self.get_image_filepath(image_id, 'invalid') LOG.debug("Fetch of cache file failed (%(e)s), rolling back by " "moving '%(incomplete_path)s' to " "'%(invalid_path)s'", {'e': encodeutils.exception_to_unicode(e), 'incomplete_path': incomplete_path, 'invalid_path': invalid_path}) os.rename(incomplete_path, invalid_path) try: with open(incomplete_path, 'wb') as cache_file: yield cache_file except Exception as e: with excutils.save_and_reraise_exception(): rollback(e) else: commit() finally: # if the generator filling the cache file neither raises an # exception, nor completes fetching all data, neither rollback # nor commit will have been called, so the incomplete file # will persist - in that case remove it as it is unusable # example: ^c from client fetch if os.path.exists(incomplete_path): rollback('incomplete fetch') @contextmanager def open_for_read(self, image_id): """ Open and yield file for reading the image file for an image with supplied identifier. :param image_id: Image ID """ path = self.get_image_filepath(image_id) with open(path, 'rb') as cache_file: yield cache_file path = self.get_image_filepath(image_id) inc_xattr(path, 'hits', 1) def queue_image(self, image_id): """ This adds a image to be cache to the queue. 
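The commit/rollback dance in `open_for_write` boils down to staging writes in the `incomplete` area and publishing them with an atomic `os.rename`. A condensed sketch of that pattern as a context manager, with illustrative path parameters in place of the driver's `get_image_filepath` lookups:

```python
import os
from contextlib import contextmanager

@contextmanager
def staged_write(incomplete_path, final_path, invalid_path):
    try:
        # Stage the download in the 'incomplete' area.
        with open(incomplete_path, "wb") as f:
            yield f
    except Exception:
        # Rollback: park the partial file in 'invalid' for debugging.
        os.rename(incomplete_path, invalid_path)
        raise
    else:
        # Commit: atomically publish the finished file.
        os.rename(incomplete_path, final_path)
```

The rename-based commit means readers only ever see an entry in the main cache directory once it is complete.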
If the image already exists in the queue or has already been cached, we return False, True otherwise :param image_id: Image ID """ if self.is_cached(image_id): LOG.info(_LI("Not queueing image '%s'. Already cached."), image_id) return False if self.is_being_cached(image_id): LOG.info(_LI("Not queueing image '%s'. Already being " "written to cache"), image_id) return False if self.is_queued(image_id): LOG.info(_LI("Not queueing image '%s'. Already queued."), image_id) return False path = self.get_image_filepath(image_id, 'queue') LOG.debug("Queueing image '%s'.", image_id) # Touch the file to add it to the queue with open(path, "w"): pass return True def get_queued_images(self): """ Returns a list of image IDs that are in the queue. The list should be sorted by the time the image ID was inserted into the queue. """ files = [f for f in get_all_regular_files(self.queue_dir)] items = [] for path in files: mtime = os.path.getmtime(path) items.append((mtime, os.path.basename(path))) items.sort() return [image_id for (modtime, image_id) in items] def _reap_old_files(self, dirpath, entry_type, grace=None): now = time.time() reaped = 0 for path in get_all_regular_files(dirpath): mtime = os.path.getmtime(path) age = now - mtime if not grace: LOG.debug("No grace period, reaping '%(path)s'" " immediately", {'path': path}) delete_cached_file(path) reaped += 1 elif age > grace: LOG.debug("Cache entry '%(path)s' exceeds grace period, " "(%(age)i s > %(grace)i s)", {'path': path, 'age': age, 'grace': grace}) delete_cached_file(path) reaped += 1 LOG.info(_LI("Reaped %(reaped)s %(entry_type)s cache entries"), {'reaped': reaped, 'entry_type': entry_type}) return reaped def reap_invalid(self, grace=None): """Remove any invalid cache entries :param grace: Number of seconds to keep an invalid entry around for debugging purposes. If None, then delete immediately. 
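`_reap_old_files` compares each entry's mtime-derived age against a grace period. A self-contained sketch of that logic (the `now` parameter is added here for testability; the driver reads `time.time()` directly):

```python
import os
import time

def reap_old_files(dirpath, grace=None, now=None):
    # Delete regular files older than `grace` seconds; with no grace
    # period, delete everything. Returns the number of files reaped.
    now = time.time() if now is None else now
    reaped = 0
    for name in os.listdir(dirpath):
        path = os.path.join(dirpath, name)
        if not os.path.isfile(path):
            continue
        age = now - os.path.getmtime(path)
        if grace is None or age > grace:
            os.remove(path)
            reaped += 1
    return reaped
```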
""" return self._reap_old_files(self.invalid_dir, 'invalid', grace=grace) def reap_stalled(self, grace=None): """Remove any stalled cache entries :param grace: Number of seconds to keep an invalid entry around for debugging purposes. If None, then delete immediately. """ return self._reap_old_files(self.incomplete_dir, 'stalled', grace=grace) def clean(self, stall_time=None): """ Delete any image files in the invalid directory and any files in the incomplete directory that are older than a configurable amount of time. """ self.reap_invalid() if stall_time is None: stall_time = CONF.image_cache_stall_time self.reap_stalled(stall_time) def get_all_regular_files(basepath): for fname in os.listdir(basepath): path = os.path.join(basepath, fname) if os.path.isfile(path): yield path def delete_cached_file(path): LOG.debug("Deleting image cache file '%s'", path) fileutils.delete_if_exists(path) def _make_namespaced_xattr_key(key, namespace='user'): """ Create a fully-qualified xattr-key by including the intended namespace. Namespacing differs among OSes[1]: FreeBSD: user, system Linux: user, system, trusted, security MacOS X: not needed Mac OS X won't break if we include a namespace qualifier, so, for simplicity, we always include it. -- [1] http://en.wikipedia.org/wiki/Extended_file_attributes """ namespaced_key = ".".join([namespace, key]) return namespaced_key def get_xattr(path, key, **kwargs): """Return the value for a particular xattr If the key doesn't not exist, or xattrs aren't supported by the file system then a KeyError will be raised, that is, unless you specify a default using kwargs. """ namespaced_key = _make_namespaced_xattr_key(key) try: return xattr.getxattr(path, namespaced_key) except IOError: if 'default' in kwargs: return kwargs['default'] else: raise def set_xattr(path, key, value): """Set the value of a specified xattr. If xattrs aren't supported by the file-system, we skip setting the value. 
""" namespaced_key = _make_namespaced_xattr_key(key) if not isinstance(value, six.binary_type): value = str(value) if six.PY3: value = value.encode('utf-8') xattr.setxattr(path, namespaced_key, value) def inc_xattr(path, key, n=1): """ Increment the value of an xattr (assuming it is an integer). BEWARE, this code *does* have a RACE CONDITION, since the read/update/write sequence is not atomic. Since the use-case for this function is collecting stats--not critical-- the benefits of simple, lock-free code out-weighs the possibility of an occasional hit not being counted. """ count = int(get_xattr(path, key)) count += n set_xattr(path, key, str(count)) glance-16.0.0/glance/image_cache/drivers/base.py0000666000175100017510000001464613245511421021466 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Base attribute driver class """ import os.path from oslo_config import cfg from oslo_log import log as logging from glance.common import exception from glance.common import utils from glance.i18n import _ LOG = logging.getLogger(__name__) CONF = cfg.CONF class Driver(object): def configure(self): """ Configure the driver to use the stored configuration options Any store that needs special configuration should implement this method. 
If the store was not able to successfully configure itself, it should raise `exception.BadDriverConfiguration` """ # Here we set up the various file-based image cache paths # that we need in order to find the files in different states # of cache management. self.set_paths() def set_paths(self): """ Creates all necessary directories under the base cache directory """ self.base_dir = CONF.image_cache_dir if self.base_dir is None: msg = _('Failed to read %s from config') % 'image_cache_dir' LOG.error(msg) driver = self.__class__.__module__ raise exception.BadDriverConfiguration(driver_name=driver, reason=msg) self.incomplete_dir = os.path.join(self.base_dir, 'incomplete') self.invalid_dir = os.path.join(self.base_dir, 'invalid') self.queue_dir = os.path.join(self.base_dir, 'queue') dirs = [self.incomplete_dir, self.invalid_dir, self.queue_dir] for path in dirs: utils.safe_mkdirs(path) def get_cache_size(self): """ Returns the total size in bytes of the image cache. """ raise NotImplementedError def get_cached_images(self): """ Returns a list of records about cached images. The list of records shall be ordered by image ID and shall look like:: [ { 'image_id': , 'hits': INTEGER, 'last_modified': ISO_TIMESTAMP, 'last_accessed': ISO_TIMESTAMP, 'size': INTEGER }, ... ] """ return NotImplementedError def is_cached(self, image_id): """ Returns True if the image with the supplied ID has its image file cached. :param image_id: Image ID """ raise NotImplementedError def is_cacheable(self, image_id): """ Returns True if the image with the supplied ID can have its image file cached, False otherwise. :param image_id: Image ID """ raise NotImplementedError def is_queued(self, image_id): """ Returns True if the image identifier is in our cache queue. :param image_id: Image ID """ raise NotImplementedError def delete_all_cached_images(self): """ Removes all cached image files and any attributes about the images and returns the number of cached image files that were deleted. 
""" raise NotImplementedError def delete_cached_image(self, image_id): """ Removes a specific cached image file and any attributes about the image :param image_id: Image ID """ raise NotImplementedError def delete_all_queued_images(self): """ Removes all queued image files and any attributes about the images and returns the number of queued image files that were deleted. """ raise NotImplementedError def delete_queued_image(self, image_id): """ Removes a specific queued image file and any attributes about the image :param image_id: Image ID """ raise NotImplementedError def queue_image(self, image_id): """ Puts an image identifier in a queue for caching. Return True on successful add to the queue, False otherwise... :param image_id: Image ID """ def clean(self, stall_time=None): """ Dependent on the driver, clean up and destroy any invalid or incomplete cached images """ raise NotImplementedError def get_least_recently_accessed(self): """ Return a tuple containing the image_id and size of the least recently accessed cached file, or None if no cached files. """ raise NotImplementedError def open_for_write(self, image_id): """ Open a file for writing the image file for an image with supplied identifier. :param image_id: Image ID """ raise NotImplementedError def open_for_read(self, image_id): """ Open and yield file for reading the image file for an image with supplied identifier. :param image_id: Image ID """ raise NotImplementedError def get_image_filepath(self, image_id, cache_status='active'): """ This crafts an absolute path to a specific entry :param image_id: Image ID :param cache_status: Status of the image in the cache """ if cache_status == 'active': return os.path.join(self.base_dir, str(image_id)) return os.path.join(self.base_dir, cache_status, str(image_id)) def get_image_size(self, image_id): """ Return the size of the image file for an image with supplied identifier. 
:param image_id: Image ID """ path = self.get_image_filepath(image_id) return os.path.getsize(path) def get_queued_images(self): """ Returns a list of image IDs that are in the queue. The list should be sorted by the time the image ID was inserted into the queue. """ raise NotImplementedError glance-16.0.0/glance/image_cache/drivers/__init__.py0000666000175100017510000000000013245511421022266 0ustar zuulzuul00000000000000glance-16.0.0/glance/image_cache/drivers/sqlite.py0000666000175100017510000004157213245511421022053 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Cache driver that uses SQLite to store information about cached images """ from __future__ import absolute_import from contextlib import contextmanager import os import sqlite3 import stat import time from eventlet import sleep from eventlet import timeout from oslo_concurrency import lockutils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import excutils from oslo_utils import fileutils from glance.common import exception from glance.i18n import _, _LE, _LI, _LW from glance.image_cache.drivers import base LOG = logging.getLogger(__name__) sqlite_opts = [ cfg.StrOpt('image_cache_sqlite_db', default='cache.db', help=_(""" The relative path to sqlite file database that will be used for image cache management. 
This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option ``image_cache_dir``. This is a lightweight database with just one table. Possible values: * A valid relative path to sqlite file database Related options: * ``image_cache_dir`` """)), ] CONF = cfg.CONF CONF.register_opts(sqlite_opts) DEFAULT_SQL_CALL_TIMEOUT = 2 class SqliteConnection(sqlite3.Connection): """ SQLite DB Connection handler that plays well with eventlet, slightly modified from Swift's similar code. """ def __init__(self, *args, **kwargs): self.timeout_seconds = kwargs.get('timeout', DEFAULT_SQL_CALL_TIMEOUT) kwargs['timeout'] = 0 sqlite3.Connection.__init__(self, *args, **kwargs) def _timeout(self, call): with timeout.Timeout(self.timeout_seconds): while True: try: return call() except sqlite3.OperationalError as e: if 'locked' not in str(e): raise sleep(0.05) def execute(self, *args, **kwargs): return self._timeout(lambda: sqlite3.Connection.execute( self, *args, **kwargs)) def commit(self): return self._timeout(lambda: sqlite3.Connection.commit(self)) def dict_factory(cur, row): return {col[0]: row[idx] for idx, col in enumerate(cur.description)} class Driver(base.Driver): """ Cache driver that uses xattr file tags and requires a filesystem that has atimes set. """ def configure(self): """ Configure the driver to use the stored configuration options Any store that needs special configuration should implement this method. 
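`SqliteConnection._timeout` retries calls that fail with a "database is locked" error until an overall deadline expires. The same retry shape without eventlet, using plain `time` calls — a sketch, not the driver's implementation:

```python
import sqlite3
import time

def call_with_retry(call, deadline=2.0, interval=0.05):
    # Retry while SQLite reports the database as locked; any other
    # OperationalError, or exceeding the deadline, propagates.
    stop = time.monotonic() + deadline
    while True:
        try:
            return call()
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) or time.monotonic() >= stop:
                raise
            time.sleep(interval)
```

The driver wraps `execute` and `commit` this way so a briefly locked cache database (e.g. another worker committing) does not surface as an error.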
If the store was not able to successfully configure itself, it should raise `exception.BadDriverConfiguration` """ super(Driver, self).configure() # Create the SQLite database that will hold our cache attributes self.initialize_db() def initialize_db(self): db = CONF.image_cache_sqlite_db self.db_path = os.path.join(self.base_dir, db) lockutils.set_defaults(self.base_dir) @lockutils.synchronized('image_cache_db_init', external=True) def create_db(): try: conn = sqlite3.connect(self.db_path, check_same_thread=False, factory=SqliteConnection) conn.executescript(""" CREATE TABLE IF NOT EXISTS cached_images ( image_id TEXT PRIMARY KEY, last_accessed REAL DEFAULT 0.0, last_modified REAL DEFAULT 0.0, size INTEGER DEFAULT 0, hits INTEGER DEFAULT 0, checksum TEXT ); """) conn.close() except sqlite3.DatabaseError as e: msg = _("Failed to initialize the image cache database. " "Got error: %s") % e LOG.error(msg) raise exception.BadDriverConfiguration(driver_name='sqlite', reason=msg) create_db() def get_cache_size(self): """ Returns the total size in bytes of the image cache. """ sizes = [] for path in self.get_cache_files(self.base_dir): if path == self.db_path: continue file_info = os.stat(path) sizes.append(file_info[stat.ST_SIZE]) return sum(sizes) def get_hit_count(self, image_id): """ Return the number of hits that an image has. :param image_id: Opaque image identifier """ if not self.is_cached(image_id): return 0 hits = 0 with self.get_db() as db: cur = db.execute("""SELECT hits FROM cached_images WHERE image_id = ?""", (image_id,)) hits = cur.fetchone()[0] return hits def get_cached_images(self): """ Returns a list of records about cached images. 
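The `cached_images` schema created in `initialize_db` can be exercised in isolation against an in-memory database. `open_cache_db` is an illustrative wrapper; `IF NOT EXISTS` is what makes the script safe to run on every startup:

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS cached_images (
    image_id TEXT PRIMARY KEY,
    last_accessed REAL DEFAULT 0.0,
    last_modified REAL DEFAULT 0.0,
    size INTEGER DEFAULT 0,
    hits INTEGER DEFAULT 0,
    checksum TEXT
);
"""

def open_cache_db(path=":memory:"):
    # Create (or reopen) the cache-attributes database.
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```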
""" LOG.debug("Gathering cached image entries.") with self.get_db() as db: cur = db.execute("""SELECT image_id, hits, last_accessed, last_modified, size FROM cached_images ORDER BY image_id""") cur.row_factory = dict_factory return [r for r in cur] def is_cached(self, image_id): """ Returns True if the image with the supplied ID has its image file cached. :param image_id: Image ID """ return os.path.exists(self.get_image_filepath(image_id)) def is_cacheable(self, image_id): """ Returns True if the image with the supplied ID can have its image file cached, False otherwise. :param image_id: Image ID """ # Make sure we're not already cached or caching the image return not (self.is_cached(image_id) or self.is_being_cached(image_id)) def is_being_cached(self, image_id): """ Returns True if the image with supplied id is currently in the process of having its image file cached. :param image_id: Image ID """ path = self.get_image_filepath(image_id, 'incomplete') return os.path.exists(path) def is_queued(self, image_id): """ Returns True if the image identifier is in our cache queue. 
:param image_id: Image ID """ path = self.get_image_filepath(image_id, 'queue') return os.path.exists(path) def delete_all_cached_images(self): """ Removes all cached image files and any attributes about the images """ deleted = 0 with self.get_db() as db: for path in self.get_cache_files(self.base_dir): delete_cached_file(path) deleted += 1 db.execute("""DELETE FROM cached_images""") db.commit() return deleted def delete_cached_image(self, image_id): """ Removes a specific cached image file and any attributes about the image :param image_id: Image ID """ path = self.get_image_filepath(image_id) with self.get_db() as db: delete_cached_file(path) db.execute("""DELETE FROM cached_images WHERE image_id = ?""", (image_id, )) db.commit() def delete_all_queued_images(self): """ Removes all queued image files and any attributes about the images """ files = [f for f in self.get_cache_files(self.queue_dir)] for file in files: fileutils.delete_if_exists(file) return len(files) def delete_queued_image(self, image_id): """ Removes a specific queued image file and any attributes about the image :param image_id: Image ID """ path = self.get_image_filepath(image_id, 'queue') fileutils.delete_if_exists(path) def clean(self, stall_time=None): """ Delete any image files in the invalid directory and any files in the incomplete directory that are older than a configurable amount of time. """ self.delete_invalid_files() if stall_time is None: stall_time = CONF.image_cache_stall_time now = time.time() older_than = now - stall_time self.delete_stalled_files(older_than) def get_least_recently_accessed(self): """ Return a tuple containing the image_id and size of the least recently accessed cached file, or None if no cached files. 
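Because the SQLite driver tracks access times in the database, LRU selection reduces to a single ordered query rather than the xattr driver's `os.stat` scan. A sketch of that query against the `cached_images` table (only the two columns it touches are needed):

```python
def least_recently_accessed_id(db):
    # Oldest last_accessed value wins; None when the cache is empty.
    cur = db.execute("""SELECT image_id FROM cached_images
                        ORDER BY last_accessed LIMIT 1""")
    row = cur.fetchone()
    return None if row is None else row[0]
```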
""" with self.get_db() as db: cur = db.execute("""SELECT image_id FROM cached_images ORDER BY last_accessed LIMIT 1""") try: image_id = cur.fetchone()[0] except TypeError: # There are no more cached images return None path = self.get_image_filepath(image_id) try: file_info = os.stat(path) size = file_info[stat.ST_SIZE] except OSError: size = 0 return image_id, size @contextmanager def open_for_write(self, image_id): """ Open a file for writing the image file for an image with supplied identifier. :param image_id: Image ID """ incomplete_path = self.get_image_filepath(image_id, 'incomplete') def commit(): with self.get_db() as db: final_path = self.get_image_filepath(image_id) LOG.debug("Fetch finished, moving " "'%(incomplete_path)s' to '%(final_path)s'", dict(incomplete_path=incomplete_path, final_path=final_path)) os.rename(incomplete_path, final_path) # Make sure that we "pop" the image from the queue... if self.is_queued(image_id): fileutils.delete_if_exists( self.get_image_filepath(image_id, 'queue')) filesize = os.path.getsize(final_path) now = time.time() db.execute("""INSERT INTO cached_images (image_id, last_accessed, last_modified, hits, size) VALUES (?, ?, ?, 0, ?)""", (image_id, now, now, filesize)) db.commit() def rollback(e): with self.get_db() as db: if os.path.exists(incomplete_path): invalid_path = self.get_image_filepath(image_id, 'invalid') LOG.warn(_LW("Fetch of cache file failed (%(e)s), rolling " "back by moving '%(incomplete_path)s' to " "'%(invalid_path)s'") % {'e': e, 'incomplete_path': incomplete_path, 'invalid_path': invalid_path}) os.rename(incomplete_path, invalid_path) db.execute("""DELETE FROM cached_images WHERE image_id = ?""", (image_id, )) db.commit() try: with open(incomplete_path, 'wb') as cache_file: yield cache_file except Exception as e: with excutils.save_and_reraise_exception(): rollback(e) else: commit() finally: # if the generator filling the cache file neither raises an # exception, nor completes fetching all data, 
neither rollback # nor commit will have been called, so the incomplete file # will persist - in that case remove it as it is unusable # example: ^c from client fetch if os.path.exists(incomplete_path): rollback('incomplete fetch') @contextmanager def open_for_read(self, image_id): """ Open and yield file for reading the image file for an image with supplied identifier. :param image_id: Image ID """ path = self.get_image_filepath(image_id) with open(path, 'rb') as cache_file: yield cache_file now = time.time() with self.get_db() as db: db.execute("""UPDATE cached_images SET hits = hits + 1, last_accessed = ? WHERE image_id = ?""", (now, image_id)) db.commit() @contextmanager def get_db(self): """ Returns a context manager that produces a database connection that self-closes and calls rollback if an error occurs while using the database connection """ conn = sqlite3.connect(self.db_path, check_same_thread=False, factory=SqliteConnection) conn.row_factory = sqlite3.Row conn.text_factory = str conn.execute('PRAGMA synchronous = NORMAL') conn.execute('PRAGMA count_changes = OFF') conn.execute('PRAGMA temp_store = MEMORY') try: yield conn except sqlite3.DatabaseError as e: msg = _LE("Error executing SQLite call. Got error: %s") % e LOG.error(msg) conn.rollback() finally: conn.close() def queue_image(self, image_id): """ This adds a image to be cache to the queue. If the image already exists in the queue or has already been cached, we return False, True otherwise :param image_id: Image ID """ if self.is_cached(image_id): LOG.info(_LI("Not queueing image '%s'. Already cached."), image_id) return False if self.is_being_cached(image_id): LOG.info(_LI("Not queueing image '%s'. Already being " "written to cache"), image_id) return False if self.is_queued(image_id): LOG.info(_LI("Not queueing image '%s'. 
Already queued."), image_id) return False path = self.get_image_filepath(image_id, 'queue') # Touch the file to add it to the queue with open(path, "w"): pass return True def delete_invalid_files(self): """ Removes any invalid cache entries """ for path in self.get_cache_files(self.invalid_dir): fileutils.delete_if_exists(path) LOG.info(_LI("Removed invalid cache file %s"), path) def delete_stalled_files(self, older_than): """ Removes any incomplete cache entries older than a supplied modified time. :param older_than: Files written to on or before this timestamp will be deleted. """ for path in self.get_cache_files(self.incomplete_dir): if os.path.getmtime(path) < older_than: try: fileutils.delete_if_exists(path) LOG.info(_LI("Removed stalled cache file %s"), path) except Exception as e: msg = (_LW("Failed to delete file %(path)s. " "Got error: %(e)s"), dict(path=path, e=e)) LOG.warn(msg) def get_queued_images(self): """ Returns a list of image IDs that are in the queue. The list should be sorted by the time the image ID was inserted into the queue. """ files = [f for f in self.get_cache_files(self.queue_dir)] items = [] for path in files: mtime = os.path.getmtime(path) items.append((mtime, os.path.basename(path))) items.sort() return [image_id for (modtime, image_id) in items] def get_cache_files(self, basepath): """ Returns cache files in the supplied directory :param basepath: Directory to look in for cache files """ for fname in os.listdir(basepath): path = os.path.join(basepath, fname) if path != self.db_path and os.path.isfile(path): yield path def delete_cached_file(path): LOG.debug("Deleting image cache file '%s'", path) fileutils.delete_if_exists(path) glance-16.0.0/glance/async/0000775000175100017510000000000013245511661015425 5ustar zuulzuul00000000000000glance-16.0.0/glance/async/taskflow_executor.py0000666000175100017510000001566613245511421021561 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import futurist from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils from six.moves import urllib from stevedore import driver from taskflow import engines from taskflow.listeners import logging as llistener import glance.async from glance.common import exception from glance.common.scripts import utils as script_utils from glance.i18n import _, _LE LOG = logging.getLogger(__name__) _deprecated_opt = cfg.DeprecatedOpt('eventlet_executor_pool_size', group='task') taskflow_executor_opts = [ cfg.StrOpt('engine_mode', default='parallel', choices=('serial', 'parallel'), help=_(""" Set the taskflow engine mode. Provide a string type value to set the mode in which the taskflow engine would schedule tasks to the workers on the hosts. Based on this mode, the engine executes tasks either in single or multiple threads. The possible values for this configuration option are: ``serial`` and ``parallel``. When set to ``serial``, the engine runs all the tasks in a single thread which results in serial execution of tasks. Setting this to ``parallel`` makes the engine run tasks in multiple threads. This results in parallel execution of tasks. Possible values: * serial * parallel Related options: * max_workers """)), cfg.IntOpt('max_workers', default=10, min=1, help=_(""" Set the number of engine executable tasks. 
Provide an integer value to limit the number of workers that can be instantiated on the hosts. In other words, this number defines the number of parallel tasks that can be executed at the same time by the taskflow engine. This value can be greater than one when the engine mode is set to parallel. Possible values: * Integer value greater than or equal to 1 Related options: * engine_mode """), deprecated_opts=[_deprecated_opt]) ] CONF = cfg.CONF CONF.register_opts(taskflow_executor_opts, group='taskflow_executor') class TaskExecutor(glance.async.TaskExecutor): def __init__(self, context, task_repo, image_repo, image_factory): self.context = context self.task_repo = task_repo self.image_repo = image_repo self.image_factory = image_factory super(TaskExecutor, self).__init__(context, task_repo, image_repo, image_factory) @staticmethod def _fetch_an_executor(): if CONF.taskflow_executor.engine_mode != 'parallel': return None else: max_workers = CONF.taskflow_executor.max_workers try: return futurist.GreenThreadPoolExecutor( max_workers=max_workers) except RuntimeError: # NOTE(harlowja): I guess eventlet isn't being made # useable, well just use native threads then (or try to). 
return futurist.ThreadPoolExecutor(max_workers=max_workers) def _get_flow(self, task): try: task_input = script_utils.unpack_task_input(task) kwds = { 'task_id': task.task_id, 'task_type': task.type, 'context': self.context, 'task_repo': self.task_repo, 'image_repo': self.image_repo, 'image_factory': self.image_factory } if task.type == "import": uri = script_utils.validate_location_uri( task_input.get('import_from')) kwds['uri'] = uri if task.type == 'api_image_import': kwds['image_id'] = task_input['image_id'] kwds['import_req'] = task_input['import_req'] return driver.DriverManager('glance.flows', task.type, invoke_on_load=True, invoke_kwds=kwds).driver except urllib.error.URLError as exc: raise exception.ImportTaskError(message=exc.reason) except (exception.BadStoreUri, exception.Invalid) as exc: raise exception.ImportTaskError(message=exc.msg) except RuntimeError: raise NotImplementedError() def begin_processing(self, task_id): try: super(TaskExecutor, self).begin_processing(task_id) except exception.ImportTaskError as exc: LOG.error(_LE('Failed to execute task %(task_id)s: %(exc)s') % {'task_id': task_id, 'exc': exc.msg}) task = self.task_repo.get(task_id) task.fail(exc.msg) self.task_repo.save(task) def _run(self, task_id, task_type): LOG.debug('Taskflow executor picked up the execution of task ID ' '%(task_id)s of task type ' '%(task_type)s', {'task_id': task_id, 'task_type': task_type}) task = script_utils.get_task(self.task_repo, task_id) if task is None: # NOTE: This happens if task is not found in the database. In # such cases, there is no way to update the task status so, # it's ignored here. 
return flow = self._get_flow(task) executor = self._fetch_an_executor() try: engine = engines.load( flow, engine=CONF.taskflow_executor.engine_mode, executor=executor, max_workers=CONF.taskflow_executor.max_workers) with llistener.DynamicLoggingListener(engine, log=LOG): engine.run() except exception.UploadException as exc: task.fail(encodeutils.exception_to_unicode(exc)) self.task_repo.save(task) except Exception as exc: with excutils.save_and_reraise_exception(): LOG.error(_LE('Failed to execute task %(task_id)s: %(exc)s') % {'task_id': task_id, 'exc': encodeutils.exception_to_unicode(exc)}) # TODO(sabari): Check for specific exceptions and update the # task failure message. task.fail(_('Task failed due to Internal Error')) self.task_repo.save(task) finally: if executor is not None: executor.shutdown() glance-16.0.0/glance/async/__init__.py0000666000175100017510000000533513245511421017540 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from glance.i18n import _LE LOG = logging.getLogger(__name__) class TaskExecutor(object): """Base class for Asynchronous task executors. It does not support the execution mechanism. Provisions the extensible classes with necessary variables to utilize important Glance modules like, context, task_repo, image_repo, image_factory. 
Note: It also gives abstraction for the standard pre-processing and post-processing operations to be executed by a task. These may include validation checks, security checks, introspection, error handling etc. The aim is to give developers an abstract sense of the execution pipeline logic. Args: context: glance.context.RequestContext object for AuthZ and AuthN checks task_repo: glance.db.TaskRepo object which acts as a translator for glance.domain.Task and glance.domain.TaskStub objects into ORM semantics image_repo: glance.db.ImageRepo object which acts as a translator for glance.domain.Image object into ORM semantics image_factory: glance.domain.ImageFactory object to be used for creating new images for certain types of tasks viz. import, cloning """ def __init__(self, context, task_repo, image_repo, image_factory): self.context = context self.task_repo = task_repo self.image_repo = image_repo self.image_factory = image_factory def begin_processing(self, task_id): task = self.task_repo.get(task_id) task.begin_processing() self.task_repo.save(task) # start running self._run(task_id, task.type) def _run(self, task_id, task_type): task = self.task_repo.get(task_id) msg = _LE("This execution of Tasks is not setup. Please consult the " "project documentation for more information on the " "executors available.") LOG.error(msg) task.fail(_LE("Internal error occurred while trying to process task.")) self.task_repo.save(task) glance-16.0.0/glance/async/flows/0000775000175100017510000000000013245511661016557 5ustar zuulzuul00000000000000glance-16.0.0/glance/async/flows/plugins/0000775000175100017510000000000013245511661020240 5ustar zuulzuul00000000000000glance-16.0.0/glance/async/flows/plugins/plugin_opts.py0000666000175100017510000000247213245511421023156 0ustar zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import glance.async.flows.plugins.inject_image_metadata # Note(jokke): This list contains tuples of config options for import plugins. # When new plugin is introduced its config options need to be added to this # list so that they can be processed, when config generator is used to generate # the glance-image-import.conf.sample it will also pick up the details. The # module needs to be imported as the Glance release packaged example(s) above # and the first part of the tuple refers to the group the options gets # registered under at the config file. PLUGIN_OPTS = [ ('inject_metadata_properties', glance.async.flows.plugins.inject_image_metadata.inject_metadata_opts), ] def get_plugin_opts(): return PLUGIN_OPTS glance-16.0.0/glance/async/flows/plugins/no_op.py0000666000175100017510000000353213245511421021723 0ustar zuulzuul00000000000000# Copyright 2017 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
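The `PLUGIN_OPTS` list above pairs each plugin's config group name with its option list so that the config generator can walk them when producing `glance-image-import.conf.sample`. A minimal stdlib-only sketch of that walk (the `Opt` stand-in and `render_sample_config` helper are hypothetical illustrations, not oslo.config or Glance APIs):

```python
from collections import namedtuple

# Hypothetical stand-in for an oslo.config option object: it records
# only the attributes a sample-config generator needs.
Opt = namedtuple('Opt', ['name', 'default', 'help'])

# Mirrors the shape of PLUGIN_OPTS: (config group, option list) per plugin.
PLUGIN_OPTS = [
    ('inject_metadata_properties', [
        Opt('ignore_user_roles', 'admin', 'Roles to skip for injection.'),
        Opt('inject', {}, 'Properties to inject into images.'),
    ]),
]

def render_sample_config(plugin_opts):
    """Render an INI-style sample: one [group] section per plugin tuple."""
    lines = []
    for group, opts in plugin_opts:
        lines.append('[%s]' % group)
        for opt in opts:
            lines.append('# %s' % opt.help)
            lines.append('#%s = %s' % (opt.name, opt.default))
    return '\n'.join(lines)
```

This is why new plugins must append their `(group, opts)` tuple to the list: options that never enter it are silently absent from the generated sample file.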
from oslo_config import cfg from oslo_log import log as logging from taskflow.patterns import linear_flow as lf from taskflow import task LOG = logging.getLogger(__name__) CONF = cfg.CONF class _Noop(task.Task): def __init__(self, task_id, task_type, image_repo): self.task_id = task_id self.task_type = task_type self.image_repo = image_repo super(_Noop, self).__init__( name='%s-Noop-%s' % (task_type, task_id)) def execute(self, **kwargs): LOG.debug("No_op import plugin") return def revert(self, result=None, **kwargs): # NOTE(flaper87): If result is None, it probably # means this task failed. Otherwise, we would have # a result from its execution. if result is not None: LOG.debug("No_op import plugin failed") return def get_flow(**kwargs): """Return task flow for no-op. :param task_id: Task ID. :param task_type: Type of the task. :param image_repo: Image repository used. """ task_id = kwargs.get('task_id') task_type = kwargs.get('task_type') image_repo = kwargs.get('image_repo') return lf.Flow(task_type).add( _Noop(task_id, task_type, image_repo), ) glance-16.0.0/glance/async/flows/plugins/__init__.py0000666000175100017510000000226213245511421022347 0ustar zuulzuul00000000000000# Copyright 2017 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import cfg from stevedore import named CONF = cfg.CONF def get_import_plugins(**kwargs): task_list = CONF.image_import_opts.image_import_plugins extensions = named.NamedExtensionManager('glance.image_import.plugins', names=task_list, name_order=True, invoke_on_load=True, invoke_kwds=kwargs) for extension in extensions.extensions: yield extension.obj glance-16.0.0/glance/async/flows/plugins/inject_image_metadata.py0000666000175100017510000000614413245511421025071 0ustar zuulzuul00000000000000# Copyright 2018 NTT DATA, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from taskflow.patterns import linear_flow as lf from taskflow import task from glance.i18n import _ CONF = cfg.CONF inject_metadata_opts = [ cfg.ListOpt('ignore_user_roles', default='admin', help=_(""" Specify name of user roles to be ignored for injecting metadata properties in the image. Possible values: * List containing user roles. For example: [admin,member] """)), cfg.DictOpt('inject', default={}, help=_(""" Dictionary contains metadata properties to be injected in image. Possible values: * Dictionary containing key/value pairs. Key characters length should be <= 255. 
For example: k1:v1,k2:v2 """)), ] CONF.register_opts(inject_metadata_opts, group='inject_metadata_properties') class _InjectMetadataProperties(task.Task): def __init__(self, context, task_id, task_type, image_repo, image_id): self.context = context self.task_id = task_id self.task_type = task_type self.image_repo = image_repo self.image_id = image_id super(_InjectMetadataProperties, self).__init__( name='%s-InjectMetadataProperties-%s' % (task_type, task_id)) def execute(self): """Inject custom metadata properties to image :param image_id: Glance Image ID """ user_roles = self.context.roles ignore_user_roles = CONF.inject_metadata_properties.ignore_user_roles if not [role for role in user_roles if role in ignore_user_roles]: properties = CONF.inject_metadata_properties.inject if properties: image = self.image_repo.get(self.image_id) image.extra_properties.update(properties) self.image_repo.save(image) def get_flow(**kwargs): """Return task flow for inject_image_metadata. :param task_id: Task ID. :param task_type: Type of the task. :param image_repo: Image repository used. :param image_id: Image_ID used. :param context: Context used. """ task_id = kwargs.get('task_id') task_type = kwargs.get('task_type') image_repo = kwargs.get('image_repo') image_id = kwargs.get('image_id') context = kwargs.get('context') return lf.Flow(task_type).add( _InjectMetadataProperties(context, task_id, task_type, image_repo, image_id), ) glance-16.0.0/glance/async/flows/convert.py0000666000175100017510000001301613245511421020606 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from oslo_concurrency import processutils as putils from oslo_config import cfg from oslo_log import log as logging from taskflow.patterns import linear_flow as lf from taskflow import task from glance.i18n import _, _LW LOG = logging.getLogger(__name__) convert_task_opts = [ # NOTE: This configuration option requires the operator to explicitly set # an image conversion format. There being no sane default due to the # dependency on the environment in which OpenStack is running, we do not # mark this configuration option as "required". Rather a warning message # is given to the operator, prompting for an image conversion format to # be set. cfg.StrOpt('conversion_format', sample_default='raw', choices=('qcow2', 'raw', 'vmdk'), help=_(""" Set the desired image conversion format. Provide a valid image format to which you want images to be converted before they are stored for consumption by Glance. Appropriate image format conversions are desirable for specific storage backends in order to facilitate efficient handling of bandwidth and usage of the storage infrastructure. By default, ``conversion_format`` is not set and must be set explicitly in the configuration file. The allowed values for this option are ``raw``, ``qcow2`` and ``vmdk``. The ``raw`` format is the unstructured disk format and should be chosen when RBD or Ceph storage backends are used for image storage. ``qcow2`` is supported by the QEMU emulator that expands dynamically and supports Copy on Write. 
The ``vmdk`` is another common disk format supported by many common virtual machine monitors like VMWare Workstation. Possible values: * qcow2 * raw * vmdk Related options: * disk_formats """)), ] CONF = cfg.CONF # NOTE(flaper87): Registering under the taskflow_executor section # for now. It seems a waste to have a whole section dedicated to a # single task with a single option. CONF.register_opts(convert_task_opts, group='taskflow_executor') class _Convert(task.Task): conversion_missing_warned = False def __init__(self, task_id, task_type, image_repo): self.task_id = task_id self.task_type = task_type self.image_repo = image_repo super(_Convert, self).__init__( name='%s-Convert-%s' % (task_type, task_id)) def execute(self, image_id, file_path): # NOTE(flaper87): A format must be explicitly # specified. There's no "sane" default for this # because the dest format may work differently depending # on the environment OpenStack is running in. conversion_format = CONF.taskflow_executor.conversion_format if conversion_format is None: if not _Convert.conversion_missing_warned: msg = _LW('The conversion format is None, please add a value ' 'for it in the config file for this task to ' 'work: %s') LOG.warn(msg, self.task_id) _Convert.conversion_missing_warned = True return image_obj = self.image_repo.get(image_id) src_format = image_obj.disk_format # TODO(flaper87): Check whether the image is in the desired # format already. Probably using `qemu-img` just like the # `Introspection` task. # NOTE(hemanthm): We add '-f' parameter to the convert command here so # that the image format need not be inferred by qemu utils. 
This # shields us from being vulnerable to an attack vector described here # https://bugs.launchpad.net/glance/+bug/1449062 dest_path = os.path.join(CONF.task.work_dir, "%s.converted" % image_id) stdout, stderr = putils.trycmd('qemu-img', 'convert', '-f', src_format, '-O', conversion_format, file_path, dest_path, log_errors=putils.LOG_ALL_ERRORS) if stderr: raise RuntimeError(stderr) os.rename(dest_path, file_path.split("file://")[-1]) return file_path def revert(self, image_id, result=None, **kwargs): # NOTE(flaper87): If result is None, it probably # means this task failed. Otherwise, we would have # a result from its execution. if result is None: return fs_path = result.split("file://")[-1] if os.path.exists(fs_path): os.remove(fs_path) def get_flow(**kwargs): """Return task flow for converting images to different formats. :param task_id: Task ID. :param task_type: Type of the task. :param image_repo: Image repository used. """ task_id = kwargs.get('task_id') task_type = kwargs.get('task_type') image_repo = kwargs.get('image_repo') return lf.Flow(task_type).add( _Convert(task_id, task_type, image_repo), ) glance-16.0.0/glance/async/flows/base_import.py0000666000175100017510000004717013245511421021442 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import json import os import glance_store as store_api from glance_store import backend from oslo_concurrency import processutils as putils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils import six from stevedore import named from taskflow.patterns import linear_flow as lf from taskflow import retry from taskflow import task from taskflow.types import failure from glance.async import utils from glance.common import exception from glance.common.scripts.image_import import main as image_import from glance.common.scripts import utils as script_utils from glance.i18n import _, _LE, _LI LOG = logging.getLogger(__name__) CONF = cfg.CONF class _CreateImage(task.Task): default_provides = 'image_id' def __init__(self, task_id, task_type, task_repo, image_repo, image_factory): self.task_id = task_id self.task_type = task_type self.task_repo = task_repo self.image_repo = image_repo self.image_factory = image_factory super(_CreateImage, self).__init__( name='%s-CreateImage-%s' % (task_type, task_id)) def execute(self): task = script_utils.get_task(self.task_repo, self.task_id) if task is None: return task_input = script_utils.unpack_task_input(task) image = image_import.create_image( self.image_repo, self.image_factory, task_input.get('image_properties'), self.task_id) LOG.debug("Task %(task_id)s created image %(image_id)s", {'task_id': task.task_id, 'image_id': image.image_id}) return image.image_id def revert(self, *args, **kwargs): # TODO(NiallBunting): Deleting the image like this could be considered # a brute force way of reverting images. It may be worth checking if # data has been written. 
        result = kwargs.get('result', None)
        if result is not None:
            if kwargs.get('flow_failures', None) is not None:
                image = self.image_repo.get(result)
                LOG.debug("Deleting image whilst reverting.")
                image.delete()
                self.image_repo.remove(image)


class _ImportToFS(task.Task):

    default_provides = 'file_path'

    def __init__(self, task_id, task_type, task_repo, uri):
        self.task_id = task_id
        self.task_type = task_type
        self.task_repo = task_repo
        self.uri = uri
        super(_ImportToFS, self).__init__(
            name='%s-ImportToFS-%s' % (task_type, task_id))

        if CONF.task.work_dir is None:
            msg = (_("%(task_id)s of %(task_type)s not configured "
                     "properly. Missing work dir: %(work_dir)s") %
                   {'task_id': self.task_id,
                    'task_type': self.task_type,
                    'work_dir': CONF.task.work_dir})
            raise exception.BadTaskConfiguration(msg)

        self.store = self._build_store()

    def _build_store(self):
        # NOTE(flaper87): Due to the nice glance_store api (#sarcasm), we're
        # forced to build our own config object, register the required options
        # (and by required I mean *ALL* of them, even the ones we don't want),
        # and create our own store instance by calling a private function.
        # This is certainly unfortunate but it's the best we can do until the
        # glance_store refactor is done. A good thing is that glance_store is
        # under our team's management and it gates on Glance so changes to
        # this API will (should?) break task's tests.
        conf = cfg.ConfigOpts()
        backend.register_opts(conf)
        conf.set_override('filesystem_store_datadir',
                          CONF.task.work_dir,
                          group='glance_store')

        # NOTE(flaper87): Do not even try to judge me for this... :(
        # With the glance_store refactor, this code will change, until
        # that happens, we don't have a better option and this is the
        # least worst one, IMHO.
        store = backend._load_store(conf, 'file')

        if store is None:
            msg = (_("%(task_id)s of %(task_type)s not configured "
                     "properly. Could not load the filesystem store") %
                   {'task_id': self.task_id,
                    'task_type': self.task_type})
            raise exception.BadTaskConfiguration(msg)

        store.configure()
        return store

    def execute(self, image_id):
        """Create temp file into store and return path to it

        :param image_id: Glance Image ID
        """
        # NOTE(flaper87): We've decided to use a separate `work_dir` for
        # this task - and tasks coming after this one - as a way to expect
        # users to configure a local store for pre-import works on the image
        # to happen.
        #
        # While using any path should be "technically" fine, it's not what
        # we recommend as the best solution. For more details on this, please
        # refer to the comment in the `_ImportToStore.execute` method.
        data = script_utils.get_image_data_iter(self.uri)

        path = self.store.add(image_id, data, 0, context=None)[0]

        try:
            # NOTE(flaper87): Consider moving this code to a common
            # place that other tasks can consume as well.
            stdout, stderr = putils.trycmd('qemu-img', 'info',
                                           '--output=json', path,
                                           prlimit=utils.QEMU_IMG_PROC_LIMITS,
                                           log_errors=putils.LOG_ALL_ERRORS)
        except OSError as exc:
            with excutils.save_and_reraise_exception():
                exc_message = encodeutils.exception_to_unicode(exc)
                msg = _LE('Failed to execute security checks on the image '
                          '%(task_id)s: %(exc)s')
                LOG.error(msg, {'task_id': self.task_id, 'exc': exc_message})

        metadata = json.loads(stdout)

        backing_file = metadata.get('backing-filename')
        if backing_file is not None:
            msg = _("File %(path)s has invalid backing file "
                    "%(bfile)s, aborting.") % {'path': path,
                                               'bfile': backing_file}
            raise RuntimeError(msg)

        return path

    def revert(self, image_id, result, **kwargs):
        if isinstance(result, failure.Failure):
            LOG.exception(_LE('Task: %(task_id)s failed to import image '
                              '%(image_id)s to the filesystem.'),
                          {'task_id': self.task_id, 'image_id': image_id})
            return

        if os.path.exists(result.split("file://")[-1]):
            store_api.delete_from_backend(result)


class _DeleteFromFS(task.Task):

    def __init__(self, task_id, task_type):
        self.task_id = task_id
        self.task_type = task_type
        super(_DeleteFromFS, self).__init__(
            name='%s-DeleteFromFS-%s' % (task_type, task_id))

    def execute(self, file_path):
        """Remove file from the backend

        :param file_path: path to the file being deleted
        """
        store_api.delete_from_backend(file_path)


class _ImportToStore(task.Task):

    def __init__(self, task_id, task_type, image_repo, uri):
        self.task_id = task_id
        self.task_type = task_type
        self.image_repo = image_repo
        self.uri = uri
        super(_ImportToStore, self).__init__(
            name='%s-ImportToStore-%s' % (task_type, task_id))

    def execute(self, image_id, file_path=None):
        """Bring the introspected image to the back end store

        :param image_id: Glance Image ID
        :param file_path: path to the image file
        """
        # NOTE(flaper87): There are a couple of interesting bits in the
        # interaction between this task and the `_ImportToFS` one. I'll try
        # to cover them in this comment.
        #
        # NOTE(flaper87):
        # `_ImportToFS` downloads the image to a dedicated `work_dir` which
        # needs to be configured in advance (please refer to the config option
        # docs for more info). The motivation behind this is also explained in
        # the `_ImportToFS.execute` method.
        #
        # Due to the fact that we have an `_ImportToFS` task which downloads
        # the image data already, we need to be as smart as we can in this
        # task to avoid downloading the data several times and reducing the
        # copy or write times. There are several scenarios where the
        # interaction between this task and `_ImportToFS` could be improved.
        # All these scenarios assume the `_ImportToFS` task has been executed
        # before and/or, in a more abstract scenario, that `file_path` is
        # being provided.
        #
        # Scenario 1: FS Store is remote, introspection enabled,
        #             conversion disabled
        #
        # In this scenario, the user would benefit from having the scratch
        # path be the same path as the fs store. Only one write would happen
        # and an extra read will happen in order to introspect the image.
        # Note that this read is just for the image headers and not the
        # entire file.
        #
        # Scenario 2: FS Store is remote, introspection enabled,
        #             conversion enabled
        #
        # In this scenario, the user would benefit from having a *local*
        # store into which the image can be converted. This will require
        # downloading the image locally, converting it and then copying the
        # converted image to the remote store.
        #
        # Scenario 3: FS Store is local, introspection enabled,
        #             conversion disabled
        # Scenario 4: FS Store is local, introspection enabled,
        #             conversion enabled
        #
        # In both these scenarios the user shouldn't care whether the FS
        # store path and the work dir are the same; in fact, having them be
        # the same probably benefits performance, since the scratch path
        # doubles as the store path. Space wise, regardless of the scenario,
        # the user will have to account for it in advance.
        #
        # Let's get to it and identify the different scenarios in the
        # implementation.
        image = self.image_repo.get(image_id)
        image.status = 'saving'
        self.image_repo.save(image)

        # NOTE(flaper87): Let's dance... and fall
        #
        # Unfortunately, because of the way our domain layers work and
        # the checks done in the FS store, we can't simply rename the file
        # and set the location. To do that, we'd have to duplicate the logic
        # of each and every one of the domain factories (quota, location,
        # etc.) and we'd also need to hack the FS store to prevent it from
        # raising a "duplication path" error. I'd rather have this task
        # copying the image bits one more time than duplicating all that
        # logic.
        #
        # Since I don't think this should be the definitive solution, I'm
        # leaving the code below as a reference for what should happen here
        # once the FS store and domain code are able to handle this case.
        #
        # if file_path is None:
        #     image_import.set_image_data(image, self.uri, None)
        #     return

        # NOTE(flaper87): Don't assume the image was stored in the
        # work_dir. Think of the case where this path was provided by
        # another task.
        # Also, let's try to neither assume things nor create "logic"
        # dependencies between this task and `_ImportToFS`
        #
        # base_path = os.path.dirname(file_path.split("file://")[-1])

        # NOTE(flaper87): Hopefully just scenarios #3 and #4. I say
        # hopefully because nothing prevents the user from using the same
        # FS store path as a work dir
        #
        # image_path = os.path.join(base_path, image_id)
        #
        # if (base_path == CONF.glance_store.filesystem_store_datadir or
        #         base_path in CONF.glance_store.filesystem_store_datadirs):
        #     os.rename(file_path, image_path)
        #
        # image_import.set_image_data(image, image_path, None)
        try:
            image_import.set_image_data(image, file_path or self.uri,
                                        self.task_id)
        except IOError as e:
            msg = (_('Uploading the image failed due to: %(exc)s') %
                   {'exc': encodeutils.exception_to_unicode(e)})
            LOG.error(msg)
            raise exception.UploadException(message=msg)

        # NOTE(flaper87): We need to save the image again after the locations
        # have been set in the image.
        self.image_repo.save(image)


class _SaveImage(task.Task):

    def __init__(self, task_id, task_type, image_repo):
        self.task_id = task_id
        self.task_type = task_type
        self.image_repo = image_repo
        super(_SaveImage, self).__init__(
            name='%s-SaveImage-%s' % (task_type, task_id))

    def execute(self, image_id):
        """Transition image status to active

        :param image_id: Glance Image ID
        """
        new_image = self.image_repo.get(image_id)
        if new_image.status == 'saving':
            # NOTE(flaper87): THIS IS WRONG!
            # we should be doing atomic updates to avoid
            # race conditions. This happens in other places
            # too.
            new_image.status = 'active'
            self.image_repo.save(new_image)


class _CompleteTask(task.Task):

    def __init__(self, task_id, task_type, task_repo):
        self.task_id = task_id
        self.task_type = task_type
        self.task_repo = task_repo
        super(_CompleteTask, self).__init__(
            name='%s-CompleteTask-%s' % (task_type, task_id))

    def execute(self, image_id):
        """Finish the task flow

        :param image_id: Glance Image ID
        """
        task = script_utils.get_task(self.task_repo, self.task_id)
        if task is None:
            return
        try:
            task.succeed({'image_id': image_id})
        except Exception as e:
            # Note: The message string contains Error in it to indicate
            # in the task.message that it's an error message for the user.

            # TODO(nikhil): need to bring back save_and_reraise_exception
            # when necessary
            log_msg = _LE("Task ID %(task_id)s failed. Error: %(exc_type)s: "
                          "%(e)s")
            LOG.exception(log_msg, {'exc_type': six.text_type(type(e)),
                                    'e': encodeutils.exception_to_unicode(e),
                                    'task_id': task.task_id})

            err_msg = _("Error: %(exc_type)s: %(e)s")
            task.fail(err_msg % {'exc_type': six.text_type(type(e)),
                                 'e': encodeutils.exception_to_unicode(e)})
        finally:
            self.task_repo.save(task)

        LOG.info(_LI("%(task_id)s of %(task_type)s completed"),
                 {'task_id': self.task_id, 'task_type': self.task_type})


def _get_import_flows(**kwargs):
    # NOTE(flaper87): Until we have a better infrastructure to enable
    # and disable task plugins, hard-code the tasks we know exist,
    # instead of loading everything from the namespace. This guarantees
    # both the load order of these plugins and the fact that no random
    # plugins will be added/loaded until we feel comfortable with this.
    # Future patches will keep using NamedExtensionManager but they'll
    # rely on a config option to control this process.
    extensions = named.NamedExtensionManager('glance.flows.import',
                                             names=['ovf_process',
                                                    'convert',
                                                    'introspect'],
                                             name_order=True,
                                             invoke_on_load=True,
                                             invoke_kwds=kwargs)

    for ext in extensions.extensions:
        yield ext.obj


def get_flow(**kwargs):
    """Return task flow

    :param task_id: Task ID
    :param task_type: Type of the task
    :param task_repo: Task repo
    :param image_repo: Image repository used
    :param image_factory: Glance Image Factory
    :param uri: uri for the image file
    """
    task_id = kwargs.get('task_id')
    task_type = kwargs.get('task_type')
    task_repo = kwargs.get('task_repo')
    image_repo = kwargs.get('image_repo')
    image_factory = kwargs.get('image_factory')
    uri = kwargs.get('uri')

    flow = lf.Flow(task_type, retry=retry.AlwaysRevert()).add(
        _CreateImage(task_id, task_type, task_repo, image_repo,
                     image_factory))

    import_to_store = _ImportToStore(task_id, task_type, image_repo, uri)

    try:
        # NOTE(flaper87): ImportToLocal and DeleteFromLocal shouldn't be here.
        # Ideally, we should have the different import flows doing this for
        # us and this function should clean up duplicated tasks. For example,
        # say 2 flows need to have a local copy of the image - ImportToLocal -
        # in order to be able to complete the task - i.e. Introspect. In that
        # case, the introspect.get_flow call should add both, ImportToLocal
        # and DeleteFromLocal, to the flow and this function will reduce the
        # duplicated calls to those tasks by creating a linear flow that
        # ensures those are called before the other tasks. For now, I'm
        # keeping them here, though.
        limbo = lf.Flow(task_type).add(_ImportToFS(task_id,
                                                   task_type,
                                                   task_repo,
                                                   uri))

        for subflow in _get_import_flows(**kwargs):
            limbo.add(subflow)

        # NOTE(flaper87): We have hard-coded 2 tasks,
        # if there aren't more than 2, it means that
        # no subtask has been registered.
        if len(limbo) > 1:
            flow.add(limbo)

            # NOTE(flaper87): Until this implementation gets smarter,
            # make sure ImportToStore is called *after* the imported
            # flow stages.
            # If not, the image will be set to saving state, invalidating
            # tasks like Introspection or Convert.
            flow.add(import_to_store)

            # NOTE(flaper87): Since this is an "optional" task but required
            # when `limbo` is executed, we're adding it in its own subflow
            # to isolate it from the rest of the flow.
            delete_flow = lf.Flow(task_type).add(_DeleteFromFS(task_id,
                                                               task_type))
            flow.add(delete_flow)
        else:
            flow.add(import_to_store)
    except exception.BadTaskConfiguration as exc:
        # NOTE(flaper87): If something goes wrong with the load of
        # import tasks, make sure we go on.
        LOG.error(_LE('Bad task configuration: %s'), exc.message)
        flow.add(import_to_store)

    flow.add(
        _SaveImage(task_id, task_type, image_repo),
        _CompleteTask(task_id, task_type, task_repo)
    )
    return flow


# glance-16.0.0/glance/async/flows/introspect.py

# Copyright 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
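Both `_ImportToFS` above and `_Introspect` below shell out to `qemu-img info --output=json` and then inspect the resulting metadata. The post-processing can be sketched standalone with stdlib `json`; the sample metadata string below is hypothetical (hand-written, not captured from a real qemu-img run), and `check_introspection` is an illustrative helper name, not a Glance function.

```python
import json

# Hypothetical `qemu-img info --output=json` output; the field names
# ('virtual-size', 'format', 'backing-filename') follow the keys the
# Glance code above reads from the parsed metadata.
SAMPLE_STDOUT = ('{"virtual-size": 41126400, "format": "qcow2", '
                 '"backing-filename": "/etc/hosts"}')


def check_introspection(stdout):
    """Mirror _ImportToFS.execute: reject any image with a backing file,
    otherwise return the (format, virtual-size) pair _Introspect records."""
    metadata = json.loads(stdout)
    backing_file = metadata.get('backing-filename')
    if backing_file is not None:
        raise RuntimeError('invalid backing file %s, aborting' % backing_file)
    return metadata.get('format'), metadata.get('virtual-size', 0)
```

A qcow2 whose backing file points at an arbitrary host path (as in `SAMPLE_STDOUT`) is exactly the case the security check aborts on, while a plain raw/qcow2 image passes through and yields its format and virtual size.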
import json

from oslo_concurrency import processutils as putils
from oslo_log import log as logging
from oslo_utils import encodeutils
from oslo_utils import excutils
from taskflow.patterns import linear_flow as lf

from glance.async import utils
from glance.i18n import _LE

LOG = logging.getLogger(__name__)


class _Introspect(utils.OptionalTask):
    """Taskflow to pull the embedded metadata out of image file"""

    def __init__(self, task_id, task_type, image_repo):
        self.task_id = task_id
        self.task_type = task_type
        self.image_repo = image_repo
        super(_Introspect, self).__init__(
            name='%s-Introspect-%s' % (task_type, task_id))

    def execute(self, image_id, file_path):
        """Does the actual introspection

        :param image_id: Glance image ID
        :param file_path: Path to the file being introspected
        """
        try:
            stdout, stderr = putils.trycmd('qemu-img', 'info',
                                           '--output=json', file_path,
                                           prlimit=utils.QEMU_IMG_PROC_LIMITS,
                                           log_errors=putils.LOG_ALL_ERRORS)
        except OSError as exc:
            # NOTE(flaper87): errno == 2 means the executable file
            # was not found. For now, log an error and move forward
            # until we have a better way to enable/disable optional
            # tasks.
            if exc.errno != 2:
                with excutils.save_and_reraise_exception():
                    exc_message = encodeutils.exception_to_unicode(exc)
                    msg = _LE('Failed to execute introspection '
                              '%(task_id)s: %(exc)s')
                    LOG.error(msg, {'task_id': self.task_id,
                                    'exc': exc_message})
            return

        if stderr:
            raise RuntimeError(stderr)

        metadata = json.loads(stdout)
        new_image = self.image_repo.get(image_id)
        new_image.virtual_size = metadata.get('virtual-size', 0)
        new_image.disk_format = metadata.get('format')
        self.image_repo.save(new_image)
        LOG.debug("%(task_id)s: Introspection successful: %(file)s",
                  {'task_id': self.task_id, 'file': file_path})
        return new_image


def get_flow(**kwargs):
    """Return task flow for introspecting images to obtain metadata about the
    image.

    :param task_id: Task ID
    :param task_type: Type of the task.
    :param image_repo: Image repository used.
""" task_id = kwargs.get('task_id') task_type = kwargs.get('task_type') image_repo = kwargs.get('image_repo') LOG.debug("Flow: %(task_type)s with ID %(id)s on %(repo)s", {'task_type': task_type, 'id': task_id, 'repo': image_repo}) return lf.Flow(task_type).add( _Introspect(task_id, task_type, image_repo), ) glance-16.0.0/glance/async/flows/ovf_process.py0000666000175100017510000002447113245511421021465 0ustar zuulzuul00000000000000# Copyright 2015 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import re import shutil import tarfile try: from defusedxml import cElementTree as ET except ImportError: from defusedxml import ElementTree as ET from oslo_config import cfg from oslo_log import log as logging from oslo_serialization import jsonutils as json from six.moves import urllib from taskflow.patterns import linear_flow as lf from taskflow import task from glance.i18n import _, _LW LOG = logging.getLogger(__name__) CONF = cfg.CONF # Define the CIM namespaces here. Currently we will be supporting extracting # properties only from CIM_ProcessorAllocationSettingData CIM_NS = {'http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/' 'CIM_ProcessorAllocationSettingData': 'cim_pasd'} class _OVF_Process(task.Task): """ Extracts the single disk image from an OVA tarball and saves it to the Glance image store. 
    It also parses the included OVF file for selected metadata which it then
    saves in the image store as the previously saved image's properties.
    """

    default_provides = 'file_path'

    def __init__(self, task_id, task_type, image_repo):
        self.task_id = task_id
        self.task_type = task_type
        self.image_repo = image_repo
        super(_OVF_Process, self).__init__(
            name='%s-OVF_Process-%s' % (task_type, task_id))

    def _get_extracted_file_path(self, image_id):
        return os.path.join(CONF.task.work_dir,
                            "%s.extracted" % image_id)

    def _get_ova_iter_objects(self, uri):
        """Returns iterable object either for local file or uri

        :param uri: uri (remote or local) to the ova package we want to
            iterate
        """
        if uri.startswith("file://"):
            uri = uri.split("file://")[-1]
            return open(uri, "rb")
        return urllib.request.urlopen(uri)

    def execute(self, image_id, file_path):
        """
        :param image_id: Id to use when storing extracted image to Glance
            image store. It is assumed that some other task has already
            created a row in the store with this id.
        :param file_path: Path to the OVA package
        """
        image = self.image_repo.get(image_id)
        # Expect 'ova' as image container format for OVF_Process task
        if image.container_format == 'ova':
            # FIXME(dramakri): This is an admin-only feature for security
            # reasons. Ideally this should be achieved by making the import
            # task API admin only. This is one of the items that the upcoming
            # import refactoring work plans to do. Until then, we will check
            # the context as a short-cut.
            if image.context and image.context.is_admin:
                extractor = OVAImageExtractor()
                data_iter = self._get_ova_iter_objects(file_path)
                disk, properties = extractor.extract(data_iter)
                image.extra_properties.update(properties)
                image.container_format = 'bare'
                self.image_repo.save(image)
                dest_path = self._get_extracted_file_path(image_id)
                with open(dest_path, 'wb') as f:
                    shutil.copyfileobj(disk, f, 4096)

                # Overwrite the input ova file since it is no longer needed
                os.rename(dest_path, file_path.split("file://")[-1])
            else:
                raise RuntimeError(_('OVA extract is limited to admin'))

        return file_path

    def revert(self, image_id, result, **kwargs):
        fs_path = self._get_extracted_file_path(image_id)
        if os.path.exists(fs_path):
            # Note: os.path has no remove(); os.remove() is the correct call
            os.remove(fs_path)


class OVAImageExtractor(object):
    """Extracts and parses the uploaded OVA package

    A class that extracts the disk image and OVF file from an OVA
    tar archive. Parses the OVF file for metadata of interest.
    """

    def __init__(self):
        self.interested_properties = []
        self._load_interested_properties()

    def extract(self, ova):
        """Extracts disk image and OVF file from OVA package

        Extracts a single disk image and OVF from OVA tar archive and calls
        OVF parser method.

        :param ova: a file object containing the OVA file
        :returns: a tuple of extracted disk file object and dictionary of
            properties parsed from the OVF file
        :raises RuntimeError: an error for malformed OVA and OVF files
        """
        with tarfile.open(fileobj=ova) as tar_file:
            filenames = tar_file.getnames()
            ovf_filename = next((filename for filename in filenames
                                 if filename.endswith('.ovf')), None)
            if ovf_filename:
                ovf = tar_file.extractfile(ovf_filename)
                disk_name, properties = self._parse_OVF(ovf)
                ovf.close()
            else:
                raise RuntimeError(_('Could not find OVF file in OVA archive '
                                     'file.'))

            disk = tar_file.extractfile(disk_name)

            return (disk, properties)

    def _parse_OVF(self, ovf):
        """Parses the OVF file

        Parses the OVF file for specified metadata properties.
        Interested properties must be specified in ovf-metadata.json conf
        file. The OVF file's qualified namespaces are removed from the
        included properties.

        :param ovf: a file object containing the OVF file
        :returns: a tuple of disk filename and a properties dictionary
        :raises RuntimeError: an error for malformed OVF file
        """
        def _get_namespace_and_tag(tag):
            """Separate and return the namespace and tag elements.

            There is no native support for this operation in elementtree
            package. See http://bugs.python.org/issue18304 for details.
            """
            m = re.match(r'\{(.+)\}(.+)', tag)
            if m:
                return m.group(1), m.group(2)
            else:
                return '', tag

        disk_filename, file_elements, file_ref = None, None, None
        properties = {}
        for event, elem in ET.iterparse(ovf):
            if event == 'end':
                ns, tag = _get_namespace_and_tag(elem.tag)
                if ns in CIM_NS and tag in self.interested_properties:
                    properties[CIM_NS[ns] + '_' + tag] = (elem.text.strip()
                                                          if elem.text
                                                          else '')

                if tag == 'DiskSection':
                    disks = [child for child in list(elem)
                             if _get_namespace_and_tag(child.tag)[1] ==
                             'Disk']
                    if len(disks) > 1:
                        """
                        Currently only single disk image extraction is
                        supported.
                        FIXME(dramakri): Support multiple images in OVA
                        package
                        """
                        raise RuntimeError(_('Currently, OVA packages '
                                             'containing multiple disks are '
                                             'not supported.'))
                    disk = next(iter(disks))
                    file_ref = next(value for key, value in disk.items()
                                    if _get_namespace_and_tag(key)[1] ==
                                    'fileRef')

                if tag == 'References':
                    file_elements = list(elem)

                # Clears elements to save memory except for 'File' and 'Disk'
                # references, which we will need to later access
                if tag != 'File' and tag != 'Disk':
                    elem.clear()

        for file_element in file_elements:
            file_id = next(value for key, value in file_element.items()
                           if _get_namespace_and_tag(key)[1] == 'id')
            if file_id != file_ref:
                continue

            disk_filename = next(value for key, value in file_element.items()
                                 if _get_namespace_and_tag(key)[1] == 'href')

        return (disk_filename, properties)

    def _load_interested_properties(self):
        """Find the OVF properties config file and load it.

        OVF properties config file specifies which metadata of interest to
        extract. Reads in a JSON file named 'ovf-metadata.json' if available.
        See example file at etc/ovf-metadata.json.sample.
        """
        filename = 'ovf-metadata.json'
        match = CONF.find_file(filename)
        if match:
            with open(match, 'r') as properties_file:
                properties = json.loads(properties_file.read())
                self.interested_properties = properties.get(
                    'cim_pasd', [])
                if not self.interested_properties:
                    LOG.warn(_LW('OVF metadata of interest was not specified '
                                 'in ovf-metadata.json config file. Please '
                                 'set "cim_pasd" to a list of interested '
                                 'CIM_ProcessorAllocationSettingData '
                                 'properties.'))
        else:
            LOG.warn(_LW('OVF properties config file "ovf-metadata.json" was '
                         'not found.'))


def get_flow(**kwargs):
    """Returns task flow for OVF Process.

    :param task_id: Task ID
    :param task_type: Type of the task.
    :param image_repo: Image repository used.
""" task_id = kwargs.get('task_id') task_type = kwargs.get('task_type') image_repo = kwargs.get('image_repo') LOG.debug("Flow: %(task_type)s with ID %(id)s on %(repo)s" % {'task_type': task_type, 'id': task_id, 'repo': image_repo}) return lf.Flow(task_type).add( _OVF_Process(task_id, task_type, image_repo), ) glance-16.0.0/glance/async/flows/__init__.py0000666000175100017510000000000013245511421020652 0ustar zuulzuul00000000000000glance-16.0.0/glance/async/flows/_internal_plugins/0000775000175100017510000000000013245511661022273 5ustar zuulzuul00000000000000glance-16.0.0/glance/async/flows/_internal_plugins/web_download.py0000666000175100017510000001151013245511421025303 0ustar zuulzuul00000000000000# Copyright 2018 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from glance_store import backend
from oslo_config import cfg
from oslo_log import log as logging
from taskflow.patterns import linear_flow as lf
from taskflow import task
from taskflow.types import failure

from glance.common import exception
from glance.common.scripts import utils as script_utils
from glance.i18n import _, _LE

LOG = logging.getLogger(__name__)
CONF = cfg.CONF


class _WebDownload(task.Task):

    default_provides = 'file_uri'

    def __init__(self, task_id, task_type, image_repo, image_id, uri):
        self.task_id = task_id
        self.task_type = task_type
        self.image_repo = image_repo
        self.image_id = image_id
        self.uri = uri
        super(_WebDownload, self).__init__(
            name='%s-WebDownload-%s' % (task_type, task_id))

        if CONF.node_staging_uri is None:
            msg = (_("%(task_id)s of %(task_type)s not configured "
                     "properly. Missing node_staging_uri: %(work_dir)s") %
                   {'task_id': self.task_id,
                    'task_type': self.task_type,
                    'work_dir': CONF.node_staging_uri})
            raise exception.BadTaskConfiguration(msg)

        self.store = self._build_store()

    def _build_store(self):
        # NOTE(flaper87): Due to the nice glance_store api (#sarcasm), we're
        # forced to build our own config object, register the required options
        # (and by required I mean *ALL* of them, even the ones we don't want),
        # and create our own store instance by calling a private function.
        # This is certainly unfortunate but it's the best we can do until the
        # glance_store refactor is done. A good thing is that glance_store is
        # under our team's management and it gates on Glance so changes to
        # this API will (should?) break task's tests.
        conf = cfg.ConfigOpts()
        backend.register_opts(conf)
        conf.set_override('filesystem_store_datadir',
                          CONF.node_staging_uri[7:],
                          group='glance_store')

        # NOTE(flaper87): Do not even try to judge me for this... :(
        # With the glance_store refactor, this code will change, until
        # that happens, we don't have a better option and this is the
        # least worst one, IMHO.
        store = backend._load_store(conf, 'file')

        if store is None:
            msg = (_("%(task_id)s of %(task_type)s not configured "
                     "properly. Could not load the filesystem store") %
                   {'task_id': self.task_id,
                    'task_type': self.task_type})
            raise exception.BadTaskConfiguration(msg)

        store.configure()
        return store

    def execute(self):
        """Create temp file into store and return path to it

        :param image_id: Glance Image ID
        """
        # NOTE(jokke): We've decided to use the staging area for this task as
        # a way to expect users to configure a local store for pre-import
        # work on the image to happen.
        #
        # While using any path should be "technically" fine, it's not what
        # we recommend as the best solution. For more details on this, please
        # refer to the comment in the `_ImportToStore.execute` method.
        data = script_utils.get_image_data_iter(self.uri)

        path = self.store.add(self.image_id, data, 0)[0]

        return path

    def revert(self, result, **kwargs):
        if isinstance(result, failure.Failure):
            LOG.exception(_LE('Task: %(task_id)s failed to import image '
                              '%(image_id)s to the filesystem.'),
                          {'task_id': self.task_id,
                           'image_id': self.image_id})


def get_flow(**kwargs):
    """Return task flow for web-download.

    :param task_id: Task ID.
    :param task_type: Type of the task.
    :param image_repo: Image repository used.
    :param uri: URI the image data is downloaded from.
    """
    task_id = kwargs.get('task_id')
    task_type = kwargs.get('task_type')
    image_repo = kwargs.get('image_repo')
    image_id = kwargs.get('image_id')
    uri = kwargs.get('import_req')['method'].get('uri')

    return lf.Flow(task_type).add(
        _WebDownload(task_id, task_type, image_repo, image_id, uri),
    )


# glance-16.0.0/glance/async/flows/_internal_plugins/__init__.py

# Copyright 2018 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from stevedore import named

from glance.i18n import _

CONF = cfg.CONF

import_filtering_opts = [

    cfg.ListOpt('allowed_schemes',
                item_type=cfg.types.String(quotes=True),
                bounds=True,
                default=['http', 'https'],
                help=_("""
Specify the "whitelist" of allowed url schemes for web-download.

This option provides whitelisting of uri schemes that will be allowed when
an end user imports an image using the web-download import method. The
whitelist has priority such that if there is also a blacklist defined for
schemes, the blacklist will be ignored. Host and port filtering, however,
will be applied.

See the Glance Administration Guide for more information.

Possible values:
    * List containing normalized url schemes as they are returned from
      urllib.parse. For example ['ftp','https']
    * Hint: leave the whitelist empty if you want the disallowed_schemes
      blacklist to be processed

Related options:
    * disallowed_schemes
    * allowed_hosts
    * disallowed_hosts
    * allowed_ports
    * disallowed_ports

""")),
    cfg.ListOpt('disallowed_schemes',
                item_type=cfg.types.String(quotes=True),
                bounds=True,
                default=[],
                help=_("""
Specify the "blacklist" of uri schemes disallowed for web-download.

This option provides blacklisting of uri schemes that will be rejected when
an end user imports an image using the web-download import method. Note
that if a scheme whitelist is defined using the 'allowed_schemes' option,
*this option will be ignored*. Host and port filtering, however, will be
applied.

See the Glance Administration Guide for more information.
Possible values:
    * List containing normalized url schemes as they are returned from
      urllib.parse. For example ['ftp','https']
    * By default the list is empty

Related options:
    * allowed_schemes
    * allowed_hosts
    * disallowed_hosts
    * allowed_ports
    * disallowed_ports

""")),
    cfg.ListOpt('allowed_hosts',
                item_type=cfg.types.HostAddress(),
                bounds=True,
                default=[],
                help=_("""
Specify the "whitelist" of allowed target hosts for web-download.

This option provides whitelisting of hosts that will be allowed when an end
user imports an image using the web-download import method. The whitelist
has priority such that if there is also a blacklist defined for hosts, the
blacklist will be ignored. The uri must have already passed scheme
filtering before this host filter will be applied. If the uri passes, port
filtering will then be applied.

See the Glance Administration Guide for more information.

Possible values:
    * List containing normalized hostname or ip like it would be returned
      in the urllib.parse netloc without the port
    * By default the list is empty
    * Hint: leave the whitelist empty if you want the disallowed_hosts
      blacklist to be processed

Related options:
    * allowed_schemes
    * disallowed_schemes
    * disallowed_hosts
    * allowed_ports
    * disallowed_ports

""")),
    cfg.ListOpt('disallowed_hosts',
                item_type=cfg.types.HostAddress(),
                bounds=True,
                default=[],
                help=_("""
Specify the "blacklist" of hosts disallowed for web-download.

This option provides blacklisting of hosts that will be rejected when an
end user imports an image using the web-download import method. Note that
if a host whitelist is defined using the 'allowed_hosts' option, *this
option will be ignored*. The uri must have already passed scheme filtering
before this host filter will be applied. If the uri passes, port filtering
will then be applied.

See the Glance Administration Guide for more information.
Possible values:
    * List containing normalized hostname or ip like it would be returned
      in the urllib.parse netloc without the port
    * By default the list is empty

Related options:
    * allowed_schemes
    * disallowed_schemes
    * allowed_hosts
    * allowed_ports
    * disallowed_ports

""")),
    cfg.ListOpt('allowed_ports',
                item_type=cfg.types.Integer(min=1, max=65535),
                bounds=True,
                default=[80, 443],
                help=_("""
Specify the "whitelist" of allowed ports for web-download.

This option provides whitelisting of ports that will be allowed when an end
user imports an image using the web-download import method. The whitelist
has priority such that if there is also a blacklist defined for ports, the
blacklist will be ignored. Note that scheme and host filtering have
already been applied by the time a uri hits the port filter.

See the Glance Administration Guide for more information.

Possible values:
    * List containing ports as they are returned from urllib.parse netloc
      field. Thus the value is a list of integer values, for example
      [80, 443]
    * Hint: leave the whitelist empty if you want the disallowed_ports
      blacklist to be processed

Related options:
    * allowed_schemes
    * disallowed_schemes
    * allowed_hosts
    * disallowed_hosts
    * disallowed_ports

""")),
    cfg.ListOpt('disallowed_ports',
                item_type=cfg.types.Integer(min=1, max=65535),
                bounds=True,
                default=[],
                help=_("""
Specify the "blacklist" of disallowed ports for web-download.

This option provides blacklisting of target ports that will be rejected
when an end user imports an image using the web-download import method.
Note that if a port whitelist is defined using the 'allowed_ports' option,
*this option will be ignored*. Note that scheme and host filtering have
already been applied by the time a uri hits the port filter.

See the Glance Administration Guide for more information.

Possible values:
    * List containing ports as they are returned from urllib.parse netloc
      field.
      Thus the value is a list of integer values, for example [22, 88]
    * By default this list is empty

Related options:
    * allowed_schemes
    * disallowed_schemes
    * allowed_hosts
    * disallowed_hosts
    * allowed_ports

""")),
]

CONF.register_opts(import_filtering_opts, group='import_filtering_opts')


def get_import_plugin(**kwargs):
    method_list = CONF.enabled_import_methods
    import_method = kwargs.get('import_req')['method']['name']
    if import_method in method_list:
        import_method = import_method.replace("-", "_")
        task_list = [import_method]
        # TODO(jokke): Implement error handling of non-listed methods.
    extensions = named.NamedExtensionManager(
        'glance.image_import.internal_plugins',
        names=task_list,
        name_order=True,
        invoke_on_load=True,
        invoke_kwds=kwargs)
    for extension in extensions.extensions:
        return extension.obj


# glance-16.0.0/glance/async/flows/api_image_import.py

# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
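The `allowed_*`/`disallowed_*` options above describe a scheme → host → port filtering pipeline for web-download URIs, where a non-empty whitelist takes priority and the corresponding blacklist is then ignored. A rough stdlib-only sketch of those semantics (the `uri_allowed` helper and the plain-dict option container are illustrative; Glance's actual validator lives elsewhere and reads oslo.config options):

```python
from urllib.parse import urlparse


def uri_allowed(uri, opts):
    """Apply scheme, then host, then port filtering. For each stage a
    non-empty whitelist wins; otherwise the blacklist is consulted."""
    parts = urlparse(uri)
    # An absent port falls back to the scheme default, mirroring how the
    # default allowed_ports [80, 443] is meant to pass plain http/https URIs.
    port = parts.port or {'http': 80, 'https': 443}.get(parts.scheme)
    for value, name in [(parts.scheme, 'schemes'),
                        (parts.hostname, 'hosts'),
                        (port, 'ports')]:
        allowed = opts.get('allowed_%s' % name, [])
        disallowed = opts.get('disallowed_%s' % name, [])
        if allowed:                      # whitelist set: blacklist ignored
            if value not in allowed:
                return False
        elif value in disallowed:        # no whitelist: apply blacklist
            return False
    return True


# Sample configuration: default scheme/port whitelists plus a host
# blacklist for the metadata service address.
opts = {'allowed_schemes': ['http', 'https'],
        'disallowed_hosts': ['169.254.169.254'],
        'allowed_ports': [80, 443]}
```

With these options, `https://example.com/image.qcow2` passes, while an `ftp://` URI fails scheme filtering, the metadata-service IP fails host filtering, and a non-standard port such as 8080 fails port filtering.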
import glance_store as store_api from glance_store import backend from oslo_config import cfg from oslo_log import log as logging from oslo_utils import encodeutils import six from taskflow.patterns import linear_flow as lf from taskflow import retry from taskflow import task import glance.async.flows._internal_plugins as internal_plugins import glance.async.flows.plugins as import_plugins from glance.common import exception from glance.common.scripts.image_import import main as image_import from glance.common.scripts import utils as script_utils from glance.i18n import _, _LE, _LI LOG = logging.getLogger(__name__) CONF = cfg.CONF api_import_opts = [ cfg.ListOpt('image_import_plugins', item_type=cfg.types.String(quotes=True), bounds=True, sample_default='[no_op]', default=[], help=_(""" Image import plugins to be enabled for task processing. Provide list of strings reflecting to the task Objects that should be included to the Image Import flow. The task objects needs to be defined in the 'glance/async/ flows/plugins/*' and may be implemented by OpenStack Glance project team, deployer or 3rd party. By default no plugins are enabled and to take advantage of the plugin model the list of plugins must be set explicitly in the glance-image-import.conf file. The allowed values for this option is comma separated list of object names in between ``[`` and ``]``. Possible values: * no_op (only logs debug level message that the plugin has been executed) * Any provided Task object name to be included in to the flow. """)), ] CONF.register_opts(api_import_opts, group='image_import_opts') # TODO(jokke): We should refactor the task implementations so that we do not # need to duplicate what we have already for example in base_import.py. 
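The ``image_import_plugins`` option above accepts a comma-separated list of plugin names between ``[`` and ``]``. In Glance itself that parsing is done by oslo.config's ``ListOpt``; the standalone helper below (name invented for the example) only illustrates the accepted value format.

```python
# Sketch of how a bracketed, comma-separated option value such as
# "[no_op, my_plugin]" maps to a list of plugin names. oslo.config's
# ListOpt performs the real parsing; this is for illustration only.

def parse_plugin_list(raw):
    raw = raw.strip()
    if raw.startswith('[') and raw.endswith(']'):
        raw = raw[1:-1]
    return [name.strip().strip("'\"")
            for name in raw.split(',') if name.strip()]
```

For example, the sample default ``[no_op]`` yields the single plugin name ``no_op``, while an empty list means no plugins are loaded.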
class _DeleteFromFS(task.Task): def __init__(self, task_id, task_type): self.task_id = task_id self.task_type = task_type super(_DeleteFromFS, self).__init__( name='%s-DeleteFromFS-%s' % (task_type, task_id)) def execute(self, file_path): """Remove file from the backend :param file_path: path to the file being deleted """ store_api.delete_from_backend(file_path) class _VerifyStaging(task.Task): # NOTE(jokke): This could be also for example "staging_path" but to # keep this compatible with other flows we want to stay consistent # with base_import default_provides = 'file_path' def __init__(self, task_id, task_type, task_repo, uri): self.task_id = task_id self.task_type = task_type self.task_repo = task_repo self.uri = uri super(_VerifyStaging, self).__init__( name='%s-ConfigureStaging-%s' % (task_type, task_id)) # NOTE(jokke): If we want to use other than 'file' store in the # future, this is one thing that needs to change. try: uri.index('file:///', 0) except ValueError: msg = (_("%(task_id)s of %(task_type)s not configured " "properly. Value of node_staging_uri must be " " in format 'file://'") % {'task_id': self.task_id, 'task_type': self.task_type}) raise exception.BadTaskConfiguration(msg) # NOTE(jokke): We really don't need the store for anything but # verifying that we actually can build the store will allow us to # fail the flow early with clear message why that happens. self._build_store() def _build_store(self): # NOTE(jokke): If we want to use some other store for staging, we can # implement the logic more general here. For now this should do. # NOTE(flaper87): Due to the nice glance_store api (#sarcasm), we're # forced to build our own config object, register the required options # (and by required I mean *ALL* of them, even the ones we don't want), # and create our own store instance by calling a private function. # This is certainly unfortunate but it's the best we can do until the # glance_store refactor is done. 
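The URI check performed in ``_VerifyStaging.__init__`` above can be restated as a standalone function. ``ValueError`` stands in for Glance's ``exception.BadTaskConfiguration`` here, and the function name is invented for the example.

```python
# Standalone restatement of the staging-URI check in _VerifyStaging:
# only 'file:///' URIs are accepted for node_staging_uri.

def verify_staging_uri(uri):
    if not uri.startswith('file:///'):
        raise ValueError(
            "node_staging_uri must be in format 'file:///<path>', "
            "got %r" % uri)
    return uri
```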
A good thing is that glance_store is # under our team's management and it gates on Glance so changes to # this API will (should?) break task's tests. conf = cfg.ConfigOpts() backend.register_opts(conf) conf.set_override('filesystem_store_datadir', CONF.node_staging_uri[7:], group='glance_store') # NOTE(flaper87): Do not even try to judge me for this... :( # With the glance_store refactor, this code will change, until # that happens, we don't have a better option and this is the # least worst one, IMHO. store = backend._load_store(conf, 'file') try: store.configure() except AttributeError: msg = (_("%(task_id)s of %(task_type)s not configured " "properly. Could not load the filesystem store") % {'task_id': self.task_id, 'task_type': self.task_type}) raise exception.BadTaskConfiguration(msg) def execute(self): """Test the backend store and return the 'file_path'""" return self.uri class _ImportToStore(task.Task): def __init__(self, task_id, task_type, image_repo, uri, image_id): self.task_id = task_id self.task_type = task_type self.image_repo = image_repo self.uri = uri self.image_id = image_id super(_ImportToStore, self).__init__( name='%s-ImportToStore-%s' % (task_type, task_id)) def execute(self, file_path=None): """Bringing the imported image to back end store :param image_id: Glance Image ID :param file_path: path to the image file """ # NOTE(flaper87): Let's dance... and fall # # Unfortunatelly, because of the way our domain layers work and # the checks done in the FS store, we can't simply rename the file # and set the location. To do that, we'd have to duplicate the logic # of every and each of the domain factories (quota, location, etc) # and we'd also need to hack the FS store to prevent it from raising # a "duplication path" error. I'd rather have this task copying the # image bits one more time than duplicating all that logic. 
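The datadir passed to ``filesystem_store_datadir`` above is derived by slicing the ``file://`` scheme prefix off ``node_staging_uri`` (the ``CONF.node_staging_uri[7:]`` expression). Spelled out as a small helper, with the name invented for the example:

```python
# _build_store derives the filesystem store's datadir by stripping
# the 'file://' scheme prefix from node_staging_uri.

def staging_datadir(node_staging_uri):
    prefix = 'file://'
    if not node_staging_uri.startswith(prefix):
        raise ValueError("node_staging_uri must use the 'file://' scheme")
    return node_staging_uri[len(prefix):]
```

So a ``node_staging_uri`` of ``file:///var/lib/glance/staging`` yields the datadir ``/var/lib/glance/staging``.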
# # Since I don't think this should be the definitive solution, I'm # leaving the code below as a reference for what should happen here # once the FS store and domain code will be able to handle this case. # # if file_path is None: # image_import.set_image_data(image, self.uri, None) # return # NOTE(flaper87): Don't assume the image was stored in the # work_dir. Think in the case this path was provided by another task. # Also, lets try to neither assume things nor create "logic" # dependencies between this task and `_ImportToFS` # # base_path = os.path.dirname(file_path.split("file://")[-1]) # NOTE(flaper87): Hopefully just scenarios #3 and #4. I say # hopefully because nothing prevents the user to use the same # FS store path as a work dir # # image_path = os.path.join(base_path, image_id) # # if (base_path == CONF.glance_store.filesystem_store_datadir or # base_path in CONF.glance_store.filesystem_store_datadirs): # os.rename(file_path, image_path) # # image_import.set_image_data(image, image_path, None) # NOTE(jokke): The different options here are kind of pointless as we # will need the file path anyways for our delete workflow for now. # For future proofing keeping this as is. image = self.image_repo.get(self.image_id) image_import.set_image_data(image, file_path or self.uri, self.task_id) # NOTE(flaper87): We need to save the image again after the locations # have been set in the image. self.image_repo.save(image) class _SaveImage(task.Task): def __init__(self, task_id, task_type, image_repo, image_id): self.task_id = task_id self.task_type = task_type self.image_repo = image_repo self.image_id = image_id super(_SaveImage, self).__init__( name='%s-SaveImage-%s' % (task_type, task_id)) def execute(self): """Transition image status to active :param image_id: Glance Image ID """ new_image = self.image_repo.get(self.image_id) if new_image.status == 'saving': # NOTE(flaper87): THIS IS WRONG! # we should be doing atomic updates to avoid # race conditions. 
This happens in other places # too. new_image.status = 'active' self.image_repo.save(new_image) class _CompleteTask(task.Task): def __init__(self, task_id, task_type, task_repo, image_id): self.task_id = task_id self.task_type = task_type self.task_repo = task_repo self.image_id = image_id super(_CompleteTask, self).__init__( name='%s-CompleteTask-%s' % (task_type, task_id)) def execute(self): """Finishing the task flow :param image_id: Glance Image ID """ task = script_utils.get_task(self.task_repo, self.task_id) if task is None: return try: task.succeed({'image_id': self.image_id}) except Exception as e: # Note: The message string contains Error in it to indicate # in the task.message that it's a error message for the user. # TODO(nikhil): need to bring back save_and_reraise_exception when # necessary log_msg = _LE("Task ID %(task_id)s failed. Error: %(exc_type)s: " "%(e)s") LOG.exception(log_msg, {'exc_type': six.text_type(type(e)), 'e': encodeutils.exception_to_unicode(e), 'task_id': task.task_id}) err_msg = _("Error: %(exc_type)s: %(e)s") task.fail(err_msg % {'exc_type': six.text_type(type(e)), 'e': encodeutils.exception_to_unicode(e)}) finally: self.task_repo.save(task) LOG.info(_LI("%(task_id)s of %(task_type)s completed"), {'task_id': self.task_id, 'task_type': self.task_type}) def get_flow(**kwargs): """Return task flow :param task_id: Task ID :param task_type: Type of the task :param task_repo: Task repo :param image_repo: Image repository used :param image_id: ID of the Image to be processed :param uri: uri for the image file """ task_id = kwargs.get('task_id') task_type = kwargs.get('task_type') task_repo = kwargs.get('task_repo') image_repo = kwargs.get('image_repo') image_id = kwargs.get('image_id') import_method = kwargs.get('import_req')['method']['name'] uri = kwargs.get('import_req')['method'].get('uri') if not uri and import_method == 'glance-direct': separator = '' if not CONF.node_staging_uri.endswith('/'): separator = '/' uri = 
separator.join((CONF.node_staging_uri, str(image_id))) flow = lf.Flow(task_type, retry=retry.AlwaysRevert()) if import_method == 'web-download': downloadToStaging = internal_plugins.get_import_plugin(**kwargs) flow.add(downloadToStaging) if not CONF.node_staging_uri.endswith('/'): separator = '/' file_uri = separator.join((CONF.node_staging_uri, str(image_id))) else: file_uri = uri flow.add(_VerifyStaging(task_id, task_type, task_repo, file_uri)) for plugin in import_plugins.get_import_plugins(**kwargs): flow.add(plugin) import_to_store = _ImportToStore(task_id, task_type, image_repo, file_uri, image_id) flow.add(import_to_store) delete_task = lf.Flow(task_type).add(_DeleteFromFS(task_id, task_type)) flow.add(delete_task) save_task = _SaveImage(task_id, task_type, image_repo, image_id) flow.add(save_task) complete_task = _CompleteTask(task_id, task_type, task_repo, image_id) flow.add(complete_task) image = image_repo.get(image_id) from_state = image.status image.status = 'importing' image_repo.save(image, from_state=from_state) return flow glance-16.0.0/glance/async/utils.py0000666000175100017510000000675313245511421017146 0ustar zuulzuul00000000000000# Copyright 2015 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
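The staging file URI composed in ``get_flow()`` above joins ``node_staging_uri`` and the image id, inserting a ``/`` only when the configured URI does not already end in one. As a standalone sketch (function name invented):

```python
# How get_flow() composes the staging file URI for an image:
# node_staging_uri plus the image id, with a '/' added only when
# the configured URI does not already end in one.

def staging_file_uri(node_staging_uri, image_id):
    separator = '' if node_staging_uri.endswith('/') else '/'
    return separator.join((node_staging_uri, str(image_id)))
```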
from oslo_concurrency import processutils as putils from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import units from taskflow import task from glance.i18n import _LW LOG = logging.getLogger(__name__) # NOTE(hemanthm): As reported in the bug #1449062, "qemu-img info" calls can # be exploited to craft DoS attacks by providing malicious input. The process # limits defined here are protections against such attacks. This essentially # limits the CPU time and address space used by the process that executes # "qemu-img info" command to 2 seconds and 1 GB respectively. QEMU_IMG_PROC_LIMITS = putils.ProcessLimits(cpu_time=2, address_space=1 * units.Gi) class OptionalTask(task.Task): def __init__(self, *args, **kwargs): super(OptionalTask, self).__init__(*args, **kwargs) self.execute = self._catch_all(self.execute) def _catch_all(self, func): # NOTE(flaper87): Read this comment before calling the MI6 # Here's the thing, there's no nice way to define "optional" # tasks. That is, tasks whose failure shouldn't affect the execution # of the flow. The only current "sane" way to do this, is by catching # everything and logging. This seems harmless from a taskflow # perspective but it is not. There are some issues related to this # "workaround": # # - Task's states will shamelessly lie to us saying the task succeeded. # # - No revert procedure will be triggered, which means optional tasks, # for now, mustn't cause any side-effects because they won't be able to # clean them up. If these tasks depend on other task that do cause side # effects, a task that cleans those side effects most be registered as # well. For example, _ImportToFS, _MyDumbTask, _DeleteFromFS. # # - Ideally, optional tasks shouldn't `provide` new values unless they # are part of an optional flow. Due to the decoration of the execute # method, these tasks will need to define the provided methods at # class level using `default_provides`. 
# # # The taskflow team is working on improving this and on something that # will provide the ability of defining optional tasks. For now, to lie # ourselves we must. # # NOTE(harlowja): The upstream change that is hopefully going to make # this easier/built-in is at: https://review.openstack.org/#/c/271116/ def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except Exception as exc: msg = (_LW("An optional task has failed, " "the failure was: %s") % encodeutils.exception_to_unicode(exc)) LOG.warn(msg) return wrapper glance-16.0.0/glance/tests/0000775000175100017510000000000013245511661015452 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/0000775000175100017510000000000013245511661016431 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/test_store_location.py0000666000175100017510000000620013245511421023060 0ustar zuulzuul00000000000000# Copyright 2011-2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
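The catch-all wrapping that ``OptionalTask`` applies to ``execute()`` can be written as an ordinary decorator. This is a standalone sketch of the same "log and swallow" behavior, with the caveats listed in the comment above (states lie, no revert is triggered); the decorator name is invented.

```python
import functools
import logging

LOG = logging.getLogger(__name__)


def optional(func):
    """Log and swallow any failure so the surrounding flow keeps running."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            LOG.warning("An optional task has failed, the failure was: %s",
                        exc)
            return None
    return wrapper
```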
import glance_store import mock from glance.common import exception from glance.common import store_utils import glance.location from glance.tests.unit import base CONF = {'default_store': 'file', 'swift_store_auth_address': 'localhost:8080', 'swift_store_container': 'glance', 'swift_store_user': 'user', 'swift_store_key': 'key', 'default_swift_reference': 'store_1' } class TestStoreLocation(base.StoreClearingUnitTest): class FakeImageProxy(object): size = None context = None store_api = mock.Mock() store_utils = store_utils def test_add_location_for_image_without_size(self): def fake_get_size_from_backend(uri, context=None): return 1 self.stubs.Set(glance_store, 'get_size_from_backend', fake_get_size_from_backend) with mock.patch('glance.location._check_image_location'): loc1 = {'url': 'file:///fake1.img.tar.gz', 'metadata': {}} loc2 = {'url': 'file:///fake2.img.tar.gz', 'metadata': {}} # Test for insert location image1 = TestStoreLocation.FakeImageProxy() locations = glance.location.StoreLocations(image1, []) locations.insert(0, loc2) self.assertEqual(1, image1.size) # Test for set_attr of _locations_proxy image2 = TestStoreLocation.FakeImageProxy() locations = glance.location.StoreLocations(image2, [loc1]) locations[0] = loc2 self.assertIn(loc2, locations) self.assertEqual(1, image2.size) def test_add_location_with_restricted_sources(self): loc1 = {'url': 'file:///fake1.img.tar.gz', 'metadata': {}} loc2 = {'url': 'swift+config:///xxx', 'metadata': {}} loc3 = {'url': 'filesystem:///foo.img.tar.gz', 'metadata': {}} # Test for insert location image1 = TestStoreLocation.FakeImageProxy() locations = glance.location.StoreLocations(image1, []) self.assertRaises(exception.BadStoreUri, locations.insert, 0, loc1) self.assertRaises(exception.BadStoreUri, locations.insert, 0, loc3) self.assertNotIn(loc1, locations) self.assertNotIn(loc3, locations) # Test for set_attr of _locations_proxy image2 = TestStoreLocation.FakeImageProxy() locations = 
glance.location.StoreLocations(image2, [loc1]) self.assertRaises(exception.BadStoreUri, locations.insert, 0, loc2) self.assertNotIn(loc2, locations) glance-16.0.0/glance/tests/unit/api/0000775000175100017510000000000013245511661017202 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/api/test_cmd.py0000666000175100017510000001270613245511421021360 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys import glance_store as store import mock from oslo_config import cfg from oslo_log import log as logging import six import glance.cmd.api import glance.cmd.cache_cleaner import glance.cmd.cache_pruner import glance.common.config from glance.common import exception as exc import glance.common.wsgi import glance.image_cache.cleaner import glance.image_cache.pruner from glance.tests import utils as test_utils CONF = cfg.CONF class TestGlanceApiCmd(test_utils.BaseTestCase): __argv_backup = None def _do_nothing(self, *args, **kwargs): pass def _raise(self, exc): def fake(*args, **kwargs): raise exc return fake def setUp(self): super(TestGlanceApiCmd, self).setUp() self.__argv_backup = sys.argv sys.argv = ['glance-api'] self.stderr = six.StringIO() sys.stderr = self.stderr store.register_opts(CONF) self.stubs.Set(glance.common.config, 'load_paste_app', self._do_nothing) self.stubs.Set(glance.common.wsgi.Server, 'start', self._do_nothing) self.stubs.Set(glance.common.wsgi.Server, 'wait', self._do_nothing) def tearDown(self): sys.stderr = 
sys.__stderr__ sys.argv = self.__argv_backup super(TestGlanceApiCmd, self).tearDown() def test_supported_default_store(self): self.config(group='glance_store', default_store='file') glance.cmd.api.main() def test_worker_creation_failure(self): failure = exc.WorkerCreationFailure(reason='test') self.stubs.Set(glance.common.wsgi.Server, 'start', self._raise(failure)) exit = self.assertRaises(SystemExit, glance.cmd.api.main) self.assertEqual(2, exit.code) @mock.patch.object(glance.common.config, 'parse_cache_args') @mock.patch.object(logging, 'setup') @mock.patch.object(glance.image_cache.ImageCache, 'init_driver') @mock.patch.object(glance.image_cache.ImageCache, 'clean') def test_cache_cleaner_main(self, mock_cache_clean, mock_cache_init_driver, mock_log_setup, mock_parse_config): mock_cache_init_driver.return_value = None manager = mock.MagicMock() manager.attach_mock(mock_log_setup, 'mock_log_setup') manager.attach_mock(mock_parse_config, 'mock_parse_config') manager.attach_mock(mock_cache_init_driver, 'mock_cache_init_driver') manager.attach_mock(mock_cache_clean, 'mock_cache_clean') glance.cmd.cache_cleaner.main() expected_call_sequence = [mock.call.mock_parse_config(), mock.call.mock_log_setup(CONF, 'glance'), mock.call.mock_cache_init_driver(), mock.call.mock_cache_clean()] self.assertEqual(expected_call_sequence, manager.mock_calls) @mock.patch.object(glance.image_cache.base.CacheApp, '__init__') def test_cache_cleaner_main_runtime_exception_handling(self, mock_cache): mock_cache.return_value = None self.stubs.Set(glance.image_cache.cleaner.Cleaner, 'run', self._raise(RuntimeError)) exit = self.assertRaises(SystemExit, glance.cmd.cache_cleaner.main) self.assertEqual('ERROR: ', exit.code) @mock.patch.object(glance.common.config, 'parse_cache_args') @mock.patch.object(logging, 'setup') @mock.patch.object(glance.image_cache.ImageCache, 'init_driver') @mock.patch.object(glance.image_cache.ImageCache, 'prune') def test_cache_pruner_main(self, mock_cache_prune, 
mock_cache_init_driver, mock_log_setup, mock_parse_config): mock_cache_init_driver.return_value = None manager = mock.MagicMock() manager.attach_mock(mock_log_setup, 'mock_log_setup') manager.attach_mock(mock_parse_config, 'mock_parse_config') manager.attach_mock(mock_cache_init_driver, 'mock_cache_init_driver') manager.attach_mock(mock_cache_prune, 'mock_cache_prune') glance.cmd.cache_pruner.main() expected_call_sequence = [mock.call.mock_parse_config(), mock.call.mock_log_setup(CONF, 'glance'), mock.call.mock_cache_init_driver(), mock.call.mock_cache_prune()] self.assertEqual(expected_call_sequence, manager.mock_calls) @mock.patch.object(glance.image_cache.base.CacheApp, '__init__') def test_cache_pruner_main_runtime_exception_handling(self, mock_cache): mock_cache.return_value = None self.stubs.Set(glance.image_cache.pruner.Pruner, 'run', self._raise(RuntimeError)) exit = self.assertRaises(SystemExit, glance.cmd.cache_pruner.main) self.assertEqual('ERROR: ', exit.code) glance-16.0.0/glance/tests/unit/api/test_property_protections.py0000666000175100017510000003325213245511421025131 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from glance.api import policy from glance.api import property_protections from glance.common import exception from glance.common import property_utils import glance.domain from glance.tests import utils TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df' TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81' class TestProtectedImageRepoProxy(utils.BaseTestCase): class ImageRepoStub(object): def __init__(self, fixtures): self.fixtures = fixtures def get(self, image_id): for f in self.fixtures: if f.image_id == image_id: return f else: raise ValueError(image_id) def list(self, *args, **kwargs): return self.fixtures def add(self, image): self.fixtures.append(image) def setUp(self): super(TestProtectedImageRepoProxy, self).setUp() self.set_property_protections() self.policy = policy.Enforcer() self.property_rules = property_utils.PropertyRules(self.policy) self.image_factory = glance.domain.ImageFactory() extra_props = {'spl_create_prop': 'c', 'spl_read_prop': 'r', 'spl_update_prop': 'u', 'spl_delete_prop': 'd', 'forbidden': 'prop'} extra_props_2 = {'spl_read_prop': 'r', 'forbidden': 'prop'} self.fixtures = [ self.image_factory.new_image(image_id='1', owner=TENANT1, extra_properties=extra_props), self.image_factory.new_image(owner=TENANT2, visibility='public'), self.image_factory.new_image(image_id='3', owner=TENANT1, extra_properties=extra_props_2), ] self.context = glance.context.RequestContext(roles=['spl_role']) image_repo = self.ImageRepoStub(self.fixtures) self.image_repo = property_protections.ProtectedImageRepoProxy( image_repo, self.context, self.property_rules) def test_get_image(self): image_id = '1' result_image = self.image_repo.get(image_id) result_extra_props = result_image.extra_properties self.assertEqual('c', result_extra_props['spl_create_prop']) self.assertEqual('r', result_extra_props['spl_read_prop']) self.assertEqual('u', result_extra_props['spl_update_prop']) self.assertEqual('d', result_extra_props['spl_delete_prop']) self.assertNotIn('forbidden', 
result_extra_props.keys()) def test_list_image(self): result_images = self.image_repo.list() self.assertEqual(3, len(result_images)) result_extra_props = result_images[0].extra_properties self.assertEqual('c', result_extra_props['spl_create_prop']) self.assertEqual('r', result_extra_props['spl_read_prop']) self.assertEqual('u', result_extra_props['spl_update_prop']) self.assertEqual('d', result_extra_props['spl_delete_prop']) self.assertNotIn('forbidden', result_extra_props.keys()) result_extra_props = result_images[1].extra_properties self.assertEqual({}, result_extra_props) result_extra_props = result_images[2].extra_properties self.assertEqual('r', result_extra_props['spl_read_prop']) self.assertNotIn('forbidden', result_extra_props.keys()) class TestProtectedImageProxy(utils.BaseTestCase): def setUp(self): super(TestProtectedImageProxy, self).setUp() self.set_property_protections() self.policy = policy.Enforcer() self.property_rules = property_utils.PropertyRules(self.policy) class ImageStub(object): def __init__(self, extra_prop): self.extra_properties = extra_prop def test_read_image_with_extra_prop(self): context = glance.context.RequestContext(roles=['spl_role']) extra_prop = {'spl_read_prop': 'read', 'spl_fake_prop': 'prop'} image = self.ImageStub(extra_prop) result_image = property_protections.ProtectedImageProxy( image, context, self.property_rules) result_extra_props = result_image.extra_properties self.assertEqual('read', result_extra_props['spl_read_prop']) self.assertNotIn('spl_fake_prop', result_extra_props.keys()) class TestExtraPropertiesProxy(utils.BaseTestCase): def setUp(self): super(TestExtraPropertiesProxy, self).setUp() self.set_property_protections() self.policy = policy.Enforcer() self.property_rules = property_utils.PropertyRules(self.policy) def test_read_extra_property_as_admin_role(self): extra_properties = {'foo': 'bar', 'ping': 'pong'} context = glance.context.RequestContext(roles=['admin']) extra_prop_proxy = 
property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) test_result = extra_prop_proxy['foo'] self.assertEqual('bar', test_result) def test_read_extra_property_as_unpermitted_role(self): extra_properties = {'foo': 'bar', 'ping': 'pong'} context = glance.context.RequestContext(roles=['unpermitted_role']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) self.assertRaises(KeyError, extra_prop_proxy.__getitem__, 'foo') def test_update_extra_property_as_permitted_role_after_read(self): extra_properties = {'foo': 'bar', 'ping': 'pong'} context = glance.context.RequestContext(roles=['admin']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) extra_prop_proxy['foo'] = 'par' self.assertEqual('par', extra_prop_proxy['foo']) def test_update_extra_property_as_unpermitted_role_after_read(self): extra_properties = {'spl_read_prop': 'bar'} context = glance.context.RequestContext(roles=['spl_role']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) self.assertRaises(exception.ReservedProperty, extra_prop_proxy.__setitem__, 'spl_read_prop', 'par') def test_update_reserved_extra_property(self): extra_properties = {'spl_create_prop': 'bar'} context = glance.context.RequestContext(roles=['spl_role']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) self.assertRaises(exception.ReservedProperty, extra_prop_proxy.__setitem__, 'spl_create_prop', 'par') def test_update_empty_extra_property(self): extra_properties = {'foo': ''} context = glance.context.RequestContext(roles=['admin']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) extra_prop_proxy['foo'] = 'bar' self.assertEqual('bar', extra_prop_proxy['foo']) def test_create_extra_property_admin(self): 
extra_properties = {} context = glance.context.RequestContext(roles=['admin']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) extra_prop_proxy['boo'] = 'doo' self.assertEqual('doo', extra_prop_proxy['boo']) def test_create_reserved_extra_property(self): extra_properties = {} context = glance.context.RequestContext(roles=['spl_role']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) self.assertRaises(exception.ReservedProperty, extra_prop_proxy.__setitem__, 'boo', 'doo') def test_delete_extra_property_as_admin_role(self): extra_properties = {'foo': 'bar'} context = glance.context.RequestContext(roles=['admin']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) del extra_prop_proxy['foo'] self.assertRaises(KeyError, extra_prop_proxy.__getitem__, 'foo') def test_delete_nonexistant_extra_property_as_admin_role(self): extra_properties = {} context = glance.context.RequestContext(roles=['admin']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) self.assertRaises(KeyError, extra_prop_proxy.__delitem__, 'foo') def test_delete_reserved_extra_property(self): extra_properties = {'spl_read_prop': 'r'} context = glance.context.RequestContext(roles=['spl_role']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) # Ensure property has been created and can be read self.assertEqual('r', extra_prop_proxy['spl_read_prop']) self.assertRaises(exception.ReservedProperty, extra_prop_proxy.__delitem__, 'spl_read_prop') def test_delete_nonexistant_extra_property(self): extra_properties = {} roles = ['spl_role'] extra_prop_proxy = property_protections.ExtraPropertiesProxy( roles, extra_properties, self.property_rules) self.assertRaises(KeyError, extra_prop_proxy.__delitem__, 'spl_read_prop') 
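The access model these ``ExtraPropertiesProxy`` tests exercise can be illustrated with a minimal standalone proxy (this is not Glance's implementation; class and rule names are invented): reads of a protected key by an unpermitted role look like a missing key, and disallowed writes raise a reserved-property error.

```python
class ReservedProperty(Exception):
    pass


class ProtectedProps(object):
    """Toy role-based property proxy: read_rules/write_rules map a
    property name to the set of roles allowed to read/write it."""

    def __init__(self, roles, props, read_rules, write_rules):
        self.roles = set(roles)
        self.props = props
        self.read_rules = read_rules
        self.write_rules = write_rules

    def __getitem__(self, key):
        if self.roles & self.read_rules.get(key, set()):
            return self.props[key]
        # Unpermitted reads are indistinguishable from a missing key.
        raise KeyError(key)

    def __setitem__(self, key, value):
        if not (self.roles & self.write_rules.get(key, set())):
            raise ReservedProperty(key)
        self.props[key] = value
```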
def test_delete_empty_extra_property(self): extra_properties = {'foo': ''} context = glance.context.RequestContext(roles=['admin']) extra_prop_proxy = property_protections.ExtraPropertiesProxy( context, extra_properties, self.property_rules) del extra_prop_proxy['foo'] self.assertNotIn('foo', extra_prop_proxy) class TestProtectedImageFactoryProxy(utils.BaseTestCase): def setUp(self): super(TestProtectedImageFactoryProxy, self).setUp() self.set_property_protections() self.policy = policy.Enforcer() self.property_rules = property_utils.PropertyRules(self.policy) self.factory = glance.domain.ImageFactory() def test_create_image_no_extra_prop(self): self.context = glance.context.RequestContext(tenant=TENANT1, roles=['spl_role']) self.image_factory = property_protections.ProtectedImageFactoryProxy( self.factory, self.context, self.property_rules) extra_props = {} image = self.image_factory.new_image(extra_properties=extra_props) expected_extra_props = {} self.assertEqual(expected_extra_props, image.extra_properties) def test_create_image_extra_prop(self): self.context = glance.context.RequestContext(tenant=TENANT1, roles=['spl_role']) self.image_factory = property_protections.ProtectedImageFactoryProxy( self.factory, self.context, self.property_rules) extra_props = {'spl_create_prop': 'c'} image = self.image_factory.new_image(extra_properties=extra_props) expected_extra_props = {'spl_create_prop': 'c'} self.assertEqual(expected_extra_props, image.extra_properties) def test_create_image_extra_prop_reserved_property(self): self.context = glance.context.RequestContext(tenant=TENANT1, roles=['spl_role']) self.image_factory = property_protections.ProtectedImageFactoryProxy( self.factory, self.context, self.property_rules) extra_props = {'foo': 'bar', 'spl_create_prop': 'c'} # no reg ex for property 'foo' is mentioned for spl_role in config self.assertRaises(exception.ReservedProperty, self.image_factory.new_image, extra_properties=extra_props) def 
test_create_image_extra_prop_admin(self): self.context = glance.context.RequestContext(tenant=TENANT1, roles=['admin']) self.image_factory = property_protections.ProtectedImageFactoryProxy( self.factory, self.context, self.property_rules) extra_props = {'foo': 'bar', 'spl_create_prop': 'c'} image = self.image_factory.new_image(extra_properties=extra_props) expected_extra_props = {'foo': 'bar', 'spl_create_prop': 'c'} self.assertEqual(expected_extra_props, image.extra_properties) def test_create_image_extra_prop_invalid_role(self): self.context = glance.context.RequestContext(tenant=TENANT1, roles=['imaginary-role']) self.image_factory = property_protections.ProtectedImageFactoryProxy( self.factory, self.context, self.property_rules) extra_props = {'foo': 'bar', 'spl_create_prop': 'c'} self.assertRaises(exception.ReservedProperty, self.image_factory.new_image, extra_properties=extra_props) glance-16.0.0/glance/tests/unit/api/test_common.py0000666000175100017510000001305513245511421022103 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
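The property-protection tests above all follow one pattern: a mapping of image properties is wrapped in a proxy that consults per-role rules before allowing create, read, update, or delete. A minimal, self-contained sketch of that idea is below; the names (`RoleGatedProperties`, the rule-dict shape) are hypothetical simplifications, not Glance's actual `ExtraPropertiesProxy` or its config-file rule format.

```python
# Hypothetical sketch of a role-gated property mapping, in the spirit of
# the ExtraPropertiesProxy tests above. Rules map a property name to the
# set of roles allowed to perform each operation on it.

class ReservedProperty(Exception):
    """Raised when the caller's roles do not permit the operation."""


class RoleGatedProperties(object):
    def __init__(self, roles, properties, rules):
        # rules: {property_name: {operation: set_of_allowed_roles}}
        self.roles = set(roles)
        self._props = dict(properties)
        self._rules = rules

    def _allowed(self, name, op):
        allowed = self._rules.get(name, {}).get(op, set())
        return bool(self.roles & allowed)

    def __setitem__(self, name, value):
        op = 'update' if name in self._props else 'create'
        if not self._allowed(name, op):
            raise ReservedProperty(name)
        self._props[name] = value

    def __getitem__(self, name):
        if name in self._props and not self._allowed(name, 'read'):
            raise KeyError(name)  # property is hidden from this role
        return self._props[name]

    def __delitem__(self, name):
        if name not in self._props:
            raise KeyError(name)
        if not self._allowed(name, 'delete'):
            raise ReservedProperty(name)
        del self._props[name]


rules = {'spl_read_prop': {'read': {'spl_role'}, 'delete': {'admin'}}}
props = RoleGatedProperties(['spl_role'], {'spl_read_prop': 'r'}, rules)
assert props['spl_read_prop'] == 'r'
try:
    del props['spl_read_prop']
except ReservedProperty:
    pass  # spl_role may read but not delete this property
```

This mirrors the test expectations above: a forbidden delete raises the reserved-property error, while deleting a property that does not exist at all raises `KeyError` first.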
import testtools
import webob

import glance.api.common
from glance.common import config
from glance.common import exception
from glance.tests import utils as test_utils


class SimpleIterator(object):
    def __init__(self, file_object, chunk_size):
        self.file_object = file_object
        self.chunk_size = chunk_size

    def __iter__(self):
        def read_chunk():
            return self.file_object.read(self.chunk_size)

        # The generator terminates when the loop ends; explicitly raising
        # StopIteration inside a generator is not allowed (PEP 479).
        chunk = read_chunk()
        while chunk:
            yield chunk
            chunk = read_chunk()


class TestSizeCheckedIter(testtools.TestCase):
    def _get_image_metadata(self):
        return {'id': 'e31cb99c-fe89-49fb-9cc5-f5104fffa636'}

    def _get_webob_response(self):
        request = webob.Request.blank('/')
        response = webob.Response()
        response.request = request
        return response

    def test_uniform_chunk_size(self):
        resp = self._get_webob_response()
        meta = self._get_image_metadata()
        checked_image = glance.api.common.size_checked_iter(
            resp, meta, 4, ['AB', 'CD'], None)

        self.assertEqual('AB', next(checked_image))
        self.assertEqual('CD', next(checked_image))
        self.assertRaises(StopIteration, next, checked_image)

    def test_small_last_chunk(self):
        resp = self._get_webob_response()
        meta = self._get_image_metadata()
        checked_image = glance.api.common.size_checked_iter(
            resp, meta, 3, ['AB', 'C'], None)

        self.assertEqual('AB', next(checked_image))
        self.assertEqual('C', next(checked_image))
        self.assertRaises(StopIteration, next, checked_image)

    def test_variable_chunk_size(self):
        resp = self._get_webob_response()
        meta = self._get_image_metadata()
        checked_image = glance.api.common.size_checked_iter(
            resp, meta, 6, ['AB', '', 'CDE', 'F'], None)

        self.assertEqual('AB', next(checked_image))
        self.assertEqual('', next(checked_image))
        self.assertEqual('CDE', next(checked_image))
        self.assertEqual('F', next(checked_image))
        self.assertRaises(StopIteration, next, checked_image)

    def test_too_many_chunks(self):
        """An image should be streamed regardless of expected_size"""
        resp = self._get_webob_response()
        meta = self._get_image_metadata()
checked_image = glance.api.common.size_checked_iter( resp, meta, 4, ['AB', 'CD', 'EF'], None) self.assertEqual('AB', next(checked_image)) self.assertEqual('CD', next(checked_image)) self.assertEqual('EF', next(checked_image)) self.assertRaises(exception.GlanceException, next, checked_image) def test_too_few_chunks(self): resp = self._get_webob_response() meta = self._get_image_metadata() checked_image = glance.api.common.size_checked_iter(resp, meta, 6, ['AB', 'CD'], None) self.assertEqual('AB', next(checked_image)) self.assertEqual('CD', next(checked_image)) self.assertRaises(exception.GlanceException, next, checked_image) def test_too_much_data(self): resp = self._get_webob_response() meta = self._get_image_metadata() checked_image = glance.api.common.size_checked_iter(resp, meta, 3, ['AB', 'CD'], None) self.assertEqual('AB', next(checked_image)) self.assertEqual('CD', next(checked_image)) self.assertRaises(exception.GlanceException, next, checked_image) def test_too_little_data(self): resp = self._get_webob_response() meta = self._get_image_metadata() checked_image = glance.api.common.size_checked_iter(resp, meta, 6, ['AB', 'CD', 'E'], None) self.assertEqual('AB', next(checked_image)) self.assertEqual('CD', next(checked_image)) self.assertEqual('E', next(checked_image)) self.assertRaises(exception.GlanceException, next, checked_image) class TestMalformedRequest(test_utils.BaseTestCase): def setUp(self): """Establish a clean test environment""" super(TestMalformedRequest, self).setUp() self.config(flavor='', group='paste_deploy', config_file='etc/glance-api-paste.ini') self.api = config.load_paste_app('glance-api') def test_redirect_incomplete_url(self): """Test Glance redirects /v# to /v#/ with correct Location header""" req = webob.Request.blank('/v1.1') res = req.get_response(self.api) self.assertEqual(webob.exc.HTTPFound.code, res.status_int) self.assertEqual('http://localhost/v1/', res.location) 
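The `TestSizeCheckedIter` cases above pin down a specific contract: every chunk the backend produces is streamed through unchanged, and a size mismatch only surfaces as an error after the source is exhausted. A standalone sketch of that contract follows; it assumes this behavior from the tests and is not the actual `glance.api.common.size_checked_iter` (which also takes a response, image metadata, and a notifier).

```python
# Sketch of the size-checking contract exercised above: pass every chunk
# through, count bytes, and fail only once the source is exhausted.
# (Assumed behavior inferred from the tests, not Glance's real helper.)

class SizeMismatch(Exception):
    pass


def size_checked_iter(expected_size, chunks):
    bytes_written = 0
    for chunk in chunks:
        yield chunk
        bytes_written += len(chunk)
    if bytes_written != expected_size:
        raise SizeMismatch('expected %d bytes, streamed %d'
                           % (expected_size, bytes_written))


# Too many chunks: everything is still yielded; the error appears on the
# next() call after the last chunk, as in test_too_many_chunks above.
it = size_checked_iter(4, ['AB', 'CD', 'EF'])
assert next(it) == 'AB'
assert next(it) == 'CD'
assert next(it) == 'EF'
try:
    next(it)
except SizeMismatch:
    pass
```

Deferring the check this way means a truncated or oversized image is detected without buffering it, at the cost of the client only learning about the mismatch at end of stream.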
glance-16.0.0/glance/tests/unit/api/test_cmd_cache_manage.py0000666000175100017510000003177113245511421024016 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import argparse import sys import mock import prettytable from glance.cmd import cache_manage from glance.common import exception import glance.common.utils import glance.image_cache.client from glance.tests import utils as test_utils @mock.patch('sys.stdout', mock.Mock()) class TestGlanceCmdManage(test_utils.BaseTestCase): def _run_command(self, cmd_args, return_code=None): """Runs the cache-manage command. :param cmd_args: The command line arguments. :param return_code: The expected return code of the command. 
""" testargs = ['cache_manage'] testargs.extend(cmd_args) with mock.patch.object(sys, 'exit') as mock_exit: with mock.patch.object(sys, 'argv', testargs): try: cache_manage.main() except Exception: # See if we expected this failure if return_code is None: raise if return_code is not None: mock_exit.called_with(return_code) @mock.patch.object(argparse.ArgumentParser, 'print_help') def test_help(self, mock_print_help): self._run_command(['help']) self.assertEqual(1, mock_print_help.call_count) @mock.patch.object(cache_manage, 'lookup_command') def test_help_with_command(self, mock_lookup_command): mock_lookup_command.return_value = cache_manage.print_help self._run_command(['help', 'list-cached']) mock_lookup_command.assert_any_call('help') mock_lookup_command.assert_any_call('list-cached') def test_help_with_redundant_command(self): self._run_command(['help', 'list-cached', '1'], cache_manage.FAILURE) @mock.patch.object(glance.image_cache.client.CacheClient, 'get_cached_images') @mock.patch.object(prettytable.PrettyTable, 'add_row') def test_list_cached_images(self, mock_row_create, mock_images): """ Verify that list_cached() method correctly processes images with all filled data and images with not filled 'last_accessed' field. """ mock_images.return_value = [ {'last_accessed': float(0), 'last_modified': float(1378985797.124511), 'image_id': '1', 'size': '128', 'hits': '1'}, {'last_accessed': float(1378985797.124511), 'last_modified': float(1378985797.124511), 'image_id': '2', 'size': '255', 'hits': '2'}] self._run_command(['list-cached'], cache_manage.SUCCESS) self.assertEqual(len(mock_images.return_value), mock_row_create.call_count) @mock.patch.object(glance.image_cache.client.CacheClient, 'get_cached_images') def test_list_cached_images_empty(self, mock_images): """ Verify that list_cached() method handles a case when no images are cached without errors. 
""" self._run_command(['list-cached'], cache_manage.SUCCESS) @mock.patch.object(glance.image_cache.client.CacheClient, 'get_queued_images') @mock.patch.object(prettytable.PrettyTable, 'add_row') def test_list_queued_images(self, mock_row_create, mock_images): """Verify that list_queued() method correctly processes images.""" mock_images.return_value = [ {'image_id': '1'}, {'image_id': '2'}] # cache_manage.list_queued(mock.Mock()) self._run_command(['list-queued'], cache_manage.SUCCESS) self.assertEqual(len(mock_images.return_value), mock_row_create.call_count) @mock.patch.object(glance.image_cache.client.CacheClient, 'get_queued_images') def test_list_queued_images_empty(self, mock_images): """ Verify that list_queued() method handles a case when no images were queued without errors. """ mock_images.return_value = [] self._run_command(['list-queued'], cache_manage.SUCCESS) def test_queue_image_without_index(self): self._run_command(['queue-image'], cache_manage.FAILURE) @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_queue_image_not_forced_not_confirmed(self, mock_client, mock_confirm): # --force not set and queue confirmation return False. mock_confirm.return_value = False self._run_command(['queue-image', 'fakeimageid'], cache_manage.SUCCESS) self.assertFalse(mock_client.called) @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_queue_image_not_forced_confirmed(self, mock_get_client, mock_confirm): # --force not set and confirmation return True. 
mock_confirm.return_value = True mock_client = mock.MagicMock() mock_get_client.return_value = mock_client # verbose to cover additional condition and line self._run_command(['queue-image', 'fakeimageid', '-v'], cache_manage.SUCCESS) self.assertTrue(mock_get_client.called) mock_client.queue_image_for_caching.assert_called_with('fakeimageid') def test_delete_cached_image_without_index(self): self._run_command(['delete-cached-image'], cache_manage.FAILURE) @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_delete_cached_image_not_forced_not_confirmed(self, mock_client, mock_confirm): # --force not set and confirmation return False. mock_confirm.return_value = False self._run_command(['delete-cached-image', 'fakeimageid'], cache_manage.SUCCESS) self.assertFalse(mock_client.called) @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_delete_cached_image_not_forced_confirmed(self, mock_get_client, mock_confirm): # --force not set and confirmation return True. mock_confirm.return_value = True mock_client = mock.MagicMock() mock_get_client.return_value = mock_client # verbose to cover additional condition and line self._run_command(['delete-cached-image', 'fakeimageid', '-v'], cache_manage.SUCCESS) self.assertTrue(mock_get_client.called) mock_client.delete_cached_image.assert_called_with('fakeimageid') @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_delete_cached_images_not_forced_not_confirmed(self, mock_client, mock_confirm): # --force not set and confirmation return False. 
mock_confirm.return_value = False self._run_command(['delete-all-cached-images'], cache_manage.SUCCESS) self.assertFalse(mock_client.called) @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_delete_cached_images_not_forced_confirmed(self, mock_get_client, mock_confirm): # --force not set and confirmation return True. mock_confirm.return_value = True mock_client = mock.MagicMock() mock_get_client.return_value = mock_client # verbose to cover additional condition and line self._run_command(['delete-all-cached-images', '-v'], cache_manage.SUCCESS) self.assertTrue(mock_get_client.called) mock_client.delete_all_cached_images.assert_called() def test_delete_queued_image_without_index(self): self._run_command(['delete-queued-image'], cache_manage.FAILURE) @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_delete_queued_image_not_forced_not_confirmed(self, mock_client, mock_confirm): # --force not set and confirmation set to False. mock_confirm.return_value = False self._run_command(['delete-queued-image', 'img_id'], cache_manage.SUCCESS) self.assertFalse(mock_client.called) @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_delete_queued_image_not_forced_confirmed(self, mock_get_client, mock_confirm): # --force not set and confirmation set to True. 
mock_confirm.return_value = True mock_client = mock.MagicMock() mock_get_client.return_value = mock_client self._run_command(['delete-queued-image', 'img_id', '-v'], cache_manage.SUCCESS) self.assertTrue(mock_get_client.called) mock_client.delete_queued_image.assert_called_with('img_id') @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_delete_queued_images_not_forced_not_confirmed(self, mock_client, mock_confirm): # --force not set and confirmation set to False. mock_confirm.return_value = False self._run_command(['delete-all-queued-images'], cache_manage.SUCCESS) self.assertFalse(mock_client.called) @mock.patch.object(glance.cmd.cache_manage, 'user_confirm') @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_delete_queued_images_not_forced_confirmed(self, mock_get_client, mock_confirm): # --force not set and confirmation set to True. mock_confirm.return_value = True mock_client = mock.MagicMock() mock_get_client.return_value = mock_client self._run_command(['delete-all-queued-images', '-v'], cache_manage.SUCCESS) self.assertTrue(mock_get_client.called) mock_client.delete_all_queued_images.assert_called() @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_catch_error_not_found(self, mock_function): mock_function.side_effect = exception.NotFound() self.assertEqual(cache_manage.FAILURE, cache_manage.list_cached(mock.Mock())) @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_catch_error_forbidden(self, mock_function): mock_function.side_effect = exception.Forbidden() self.assertEqual(cache_manage.FAILURE, cache_manage.list_cached(mock.Mock())) @mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_catch_error_unhandled(self, mock_function): mock_function.side_effect = exception.Duplicate() my_mock = mock.Mock() my_mock.debug = False self.assertEqual(cache_manage.FAILURE, cache_manage.list_cached(my_mock)) 
@mock.patch.object(glance.cmd.cache_manage, 'get_client') def test_catch_error_unhandled_debug_mode(self, mock_function): mock_function.side_effect = exception.Duplicate() my_mock = mock.Mock() my_mock.debug = True self.assertRaises(exception.Duplicate, cache_manage.list_cached, my_mock) def test_cache_manage_env(self): def_value = 'sometext12345678900987654321' self.assertNotEqual(def_value, cache_manage.env('PATH', default=def_value)) def test_cache_manage_env_default(self): def_value = 'sometext12345678900987654321' self.assertEqual(def_value, cache_manage.env('TMPVALUE1234567890', default=def_value)) def test_lookup_command_unsupported_command(self): self._run_command(['unsupported_command'], cache_manage.FAILURE) glance-16.0.0/glance/tests/unit/api/__init__.py0000666000175100017510000000000013245511421021275 0ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/api/middleware/0000775000175100017510000000000013245511661021317 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/api/middleware/test_cache_manage.py0000666000175100017510000001430213245511421025277 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
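The `_run_command()` helper in the cache-manage tests above drives a console entry point by patching `sys.argv` and `sys.exit`. The same pattern can be shown in isolation; here `toy_main()` is a hypothetical stand-in for `cache_manage.main()`, not Glance code.

```python
# Testing a console script by patching sys.argv and sys.exit, as the
# _run_command() helper above does. toy_main() is a hypothetical stand-in.

import sys
from unittest import mock

SUCCESS = 0
FAILURE = 1


def toy_main():
    # Reads its arguments from sys.argv, like a real console script.
    args = sys.argv[1:]
    code = SUCCESS if (args and args[0] == 'ok') else FAILURE
    sys.exit(code)


def run_command(cmd_args):
    """Run toy_main() with patched argv/exit; return the exit code."""
    testargs = ['toy'] + list(cmd_args)
    with mock.patch.object(sys, 'exit') as mock_exit:
        with mock.patch.object(sys, 'argv', testargs):
            toy_main()
    # Because sys.exit is a Mock, it no longer raises SystemExit; the
    # exit code is recovered from the recorded call instead.
    return mock_exit.call_args[0][0]


assert run_command(['ok']) == SUCCESS
assert run_command(['bogus']) == FAILURE
```

One caveat of this pattern: a patched `sys.exit` does not stop execution, so any code after the `sys.exit()` call in the program under test will still run during the test.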
from glance.api import cached_images from glance.api.middleware import cache_manage import glance.common.config import glance.common.wsgi import glance.image_cache from glance.tests import utils as test_utils import mock import webob class TestCacheManageFilter(test_utils.BaseTestCase): @mock.patch.object(glance.image_cache.ImageCache, "init_driver") def setUp(self, mock_init_driver): super(TestCacheManageFilter, self).setUp() self.stub_application_name = "stubApplication" self.stub_value = "Stub value" self.image_id = "image_id_stub" mock_init_driver.return_value = None self.cache_manage_filter = cache_manage.CacheManageFilter( self.stub_application_name) def test_bogus_request(self): # prepare bogus_request = webob.Request.blank("/bogus/") # call resource = self.cache_manage_filter.process_request(bogus_request) # check self.assertIsNone(resource) @mock.patch.object(cached_images.Controller, "get_cached_images") def test_get_cached_images(self, mock_get_cached_images): # setup mock_get_cached_images.return_value = self.stub_value # prepare request = webob.Request.blank("/v1/cached_images") # call resource = self.cache_manage_filter.process_request(request) # check mock_get_cached_images.assert_called_with(request) self.assertEqual('"' + self.stub_value + '"', resource.body.decode('utf-8')) @mock.patch.object(cached_images.Controller, "delete_cached_image") def test_delete_cached_image(self, mock_delete_cached_image): # setup mock_delete_cached_image.return_value = self.stub_value # prepare request = webob.Request.blank("/v1/cached_images/" + self.image_id, environ={'REQUEST_METHOD': "DELETE"}) # call resource = self.cache_manage_filter.process_request(request) # check mock_delete_cached_image.assert_called_with(request, image_id=self.image_id) self.assertEqual('"' + self.stub_value + '"', resource.body.decode('utf-8')) @mock.patch.object(cached_images.Controller, "delete_cached_images") def test_delete_cached_images(self, mock_delete_cached_images): # setup 
mock_delete_cached_images.return_value = self.stub_value # prepare request = webob.Request.blank("/v1/cached_images", environ={'REQUEST_METHOD': "DELETE"}) # call resource = self.cache_manage_filter.process_request(request) # check mock_delete_cached_images.assert_called_with(request) self.assertEqual('"' + self.stub_value + '"', resource.body.decode('utf-8')) @mock.patch.object(cached_images.Controller, "queue_image") def test_put_queued_image(self, mock_queue_image): # setup mock_queue_image.return_value = self.stub_value # prepare request = webob.Request.blank("/v1/queued_images/" + self.image_id, environ={'REQUEST_METHOD': "PUT"}) # call resource = self.cache_manage_filter.process_request(request) # check mock_queue_image.assert_called_with(request, image_id=self.image_id) self.assertEqual('"' + self.stub_value + '"', resource.body.decode('utf-8')) @mock.patch.object(cached_images.Controller, "get_queued_images") def test_get_queued_images(self, mock_get_queued_images): # setup mock_get_queued_images.return_value = self.stub_value # prepare request = webob.Request.blank("/v1/queued_images") # call resource = self.cache_manage_filter.process_request(request) # check mock_get_queued_images.assert_called_with(request) self.assertEqual('"' + self.stub_value + '"', resource.body.decode('utf-8')) @mock.patch.object(cached_images.Controller, "delete_queued_image") def test_delete_queued_image(self, mock_delete_queued_image): # setup mock_delete_queued_image.return_value = self.stub_value # prepare request = webob.Request.blank("/v1/queued_images/" + self.image_id, environ={'REQUEST_METHOD': 'DELETE'}) # call resource = self.cache_manage_filter.process_request(request) # check mock_delete_queued_image.assert_called_with(request, image_id=self.image_id) self.assertEqual('"' + self.stub_value + '"', resource.body.decode('utf-8')) @mock.patch.object(cached_images.Controller, "delete_queued_images") def test_delete_queued_images(self, mock_delete_queued_images): # setup 
mock_delete_queued_images.return_value = self.stub_value # prepare request = webob.Request.blank("/v1/queued_images", environ={'REQUEST_METHOD': 'DELETE'}) # call resource = self.cache_manage_filter.process_request(request) # check mock_delete_queued_images.assert_called_with(request) self.assertEqual('"' + self.stub_value + '"', resource.body.decode('utf-8')) glance-16.0.0/glance/tests/unit/api/middleware/__init__.py0000666000175100017510000000000013245511421023412 0ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/test_versions.py0000666000175100017510000002403313245511421021710 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
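The `TestCacheManageFilter` cases above exercise a middleware that inspects each request's method and path, dispatches matching cache-management requests to a controller, and returns `None` for everything else so the request continues down the WSGI stack. A simplified, hypothetical sketch of that routing idea (not the actual `CacheManageFilter`):

```python
# Hypothetical sketch of method+path routing as tested above: matching
# requests go to a controller, non-matching ones fall through (None).

import re


class CacheRoutingFilter(object):
    def __init__(self, controller):
        self._routes = [
            ('GET', re.compile(r'^/v1/cached_images$'),
             controller.get_cached_images),
            ('DELETE',
             re.compile(r'^/v1/cached_images/(?P<image_id>[^/]+)$'),
             controller.delete_cached_image),
        ]

    def process_request(self, method, path):
        for route_method, pattern, handler in self._routes:
            match = pattern.match(path)
            if method == route_method and match:
                # Named regex groups become keyword arguments.
                return handler(**match.groupdict())
        return None  # not ours; let the request continue downstream


class FakeController(object):
    def get_cached_images(self):
        return ['img-1', 'img-2']

    def delete_cached_image(self, image_id):
        return 'deleted %s' % image_id


f = CacheRoutingFilter(FakeController())
assert f.process_request('GET', '/v1/cached_images') == ['img-1', 'img-2']
assert f.process_request('DELETE',
                         '/v1/cached_images/abc') == 'deleted abc'
assert f.process_request('GET', '/bogus/') is None
```

Returning `None` for unmatched paths is what `test_bogus_request` above asserts: the filter only short-circuits requests it owns.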
from oslo_serialization import jsonutils from six.moves import http_client as http import webob from glance.api.middleware import version_negotiation from glance.api import versions from glance.common.wsgi import Request as WsgiRequest from glance.tests.unit import base class VersionsTest(base.IsolatedUnitTest): """Test the version information returned from the API service.""" def _get_versions_list(self, url): versions = [ { 'id': 'v2.6', 'status': 'CURRENT', 'links': [{'rel': 'self', 'href': '%s/v2/' % url}], }, { 'id': 'v2.5', 'status': 'SUPPORTED', 'links': [{'rel': 'self', 'href': '%s/v2/' % url}], }, { 'id': 'v2.4', 'status': 'SUPPORTED', 'links': [{'rel': 'self', 'href': '%s/v2/' % url}], }, { 'id': 'v2.3', 'status': 'SUPPORTED', 'links': [{'rel': 'self', 'href': '%s/v2/' % url}], }, { 'id': 'v2.2', 'status': 'SUPPORTED', 'links': [{'rel': 'self', 'href': '%s/v2/' % url}], }, { 'id': 'v2.1', 'status': 'SUPPORTED', 'links': [{'rel': 'self', 'href': '%s/v2/' % url}], }, { 'id': 'v2.0', 'status': 'SUPPORTED', 'links': [{'rel': 'self', 'href': '%s/v2/' % url}], }, { 'id': 'v1.1', 'status': 'DEPRECATED', 'links': [{'rel': 'self', 'href': '%s/v1/' % url}], }, { 'id': 'v1.0', 'status': 'DEPRECATED', 'links': [{'rel': 'self', 'href': '%s/v1/' % url}], }, ] return versions def test_get_version_list(self): req = webob.Request.blank('/', base_url='http://127.0.0.1:9292/') req.accept = 'application/json' self.config(bind_host='127.0.0.1', bind_port=9292) res = versions.Controller().index(req) self.assertEqual(http.MULTIPLE_CHOICES, res.status_int) self.assertEqual('application/json', res.content_type) results = jsonutils.loads(res.body)['versions'] expected = self._get_versions_list('http://127.0.0.1:9292') self.assertEqual(expected, results) def test_get_version_list_public_endpoint(self): req = webob.Request.blank('/', base_url='http://127.0.0.1:9292/') req.accept = 'application/json' self.config(bind_host='127.0.0.1', bind_port=9292, 
public_endpoint='https://example.com:9292') res = versions.Controller().index(req) self.assertEqual(http.MULTIPLE_CHOICES, res.status_int) self.assertEqual('application/json', res.content_type) results = jsonutils.loads(res.body)['versions'] expected = self._get_versions_list('https://example.com:9292') self.assertEqual(expected, results) def test_get_version_list_secure_proxy_ssl_header(self): self.config(secure_proxy_ssl_header='HTTP_X_FORWARDED_PROTO') url = 'http://localhost:9292' environ = webob.request.environ_from_url(url) req = WsgiRequest(environ) res = versions.Controller().index(req) self.assertEqual(http.MULTIPLE_CHOICES, res.status_int) self.assertEqual('application/json', res.content_type) results = jsonutils.loads(res.body)['versions'] expected = self._get_versions_list(url) self.assertEqual(expected, results) def test_get_version_list_secure_proxy_ssl_header_https(self): self.config(secure_proxy_ssl_header='HTTP_X_FORWARDED_PROTO') url = 'http://localhost:9292' ssl_url = 'https://localhost:9292' environ = webob.request.environ_from_url(url) environ['HTTP_X_FORWARDED_PROTO'] = "https" req = WsgiRequest(environ) res = versions.Controller().index(req) self.assertEqual(http.MULTIPLE_CHOICES, res.status_int) self.assertEqual('application/json', res.content_type) results = jsonutils.loads(res.body)['versions'] expected = self._get_versions_list(ssl_url) self.assertEqual(expected, results) class VersionNegotiationTest(base.IsolatedUnitTest): def setUp(self): super(VersionNegotiationTest, self).setUp() self.middleware = version_negotiation.VersionNegotiationFilter(None) def test_request_url_v1(self): request = webob.Request.blank('/v1/images') self.middleware.process_request(request) self.assertEqual('/v1/images', request.path_info) def test_request_url_v1_0(self): request = webob.Request.blank('/v1.0/images') self.middleware.process_request(request) self.assertEqual('/v1/images', request.path_info) def test_request_url_v1_1(self): request = 
webob.Request.blank('/v1.1/images') self.middleware.process_request(request) self.assertEqual('/v1/images', request.path_info) def test_request_accept_v1(self): request = webob.Request.blank('/images') request.headers = {'accept': 'application/vnd.openstack.images-v1'} self.middleware.process_request(request) self.assertEqual('/v1/images', request.path_info) def test_request_url_v2(self): request = webob.Request.blank('/v2/images') self.middleware.process_request(request) self.assertEqual('/v2/images', request.path_info) def test_request_url_v2_0(self): request = webob.Request.blank('/v2.0/images') self.middleware.process_request(request) self.assertEqual('/v2/images', request.path_info) def test_request_url_v2_1(self): request = webob.Request.blank('/v2.1/images') self.middleware.process_request(request) self.assertEqual('/v2/images', request.path_info) def test_request_url_v2_2(self): request = webob.Request.blank('/v2.2/images') self.middleware.process_request(request) self.assertEqual('/v2/images', request.path_info) def test_request_url_v2_3(self): request = webob.Request.blank('/v2.3/images') self.middleware.process_request(request) self.assertEqual('/v2/images', request.path_info) def test_request_url_v2_4(self): request = webob.Request.blank('/v2.4/images') self.middleware.process_request(request) self.assertEqual('/v2/images', request.path_info) def test_request_url_v2_5(self): request = webob.Request.blank('/v2.5/images') self.middleware.process_request(request) self.assertEqual('/v2/images', request.path_info) def test_request_url_v2_6(self): request = webob.Request.blank('/v2.6/images') self.middleware.process_request(request) self.assertEqual('/v2/images', request.path_info) def test_request_url_v2_7_unsupported(self): request = webob.Request.blank('/v2.7/images') resp = self.middleware.process_request(request) self.assertIsInstance(resp, versions.Controller) def test_request_url_v2_7_unsupported_EXPERIMENTAL(self): request = 
webob.Request.blank('/v2.7/images') self.config(enable_image_import=True) resp = self.middleware.process_request(request) self.assertIsInstance(resp, versions.Controller) class VersionsAndNegotiationTest(VersionNegotiationTest, VersionsTest): """ Test that versions mentioned in the versions response are correctly negotiated. """ def _get_list_of_version_ids(self, status): request = webob.Request.blank('/') request.accept = 'application/json' response = versions.Controller().index(request) v_list = jsonutils.loads(response.body)['versions'] return [v['id'] for v in v_list if v['status'] == status] def _assert_version_is_negotiated(self, version_id): request = webob.Request.blank("/%s/images" % version_id) self.middleware.process_request(request) major = version_id.split('.', 1)[0] expected = "/%s/images" % major self.assertEqual(expected, request.path_info) def test_current_is_negotiated(self): # NOTE(rosmaita): Bug 1609571: the versions response was correct, but # the negotiation had not been updated for the CURRENT version. to_check = self._get_list_of_version_ids('CURRENT') self.assertTrue(to_check) for version_id in to_check: self._assert_version_is_negotiated(version_id) def test_supported_is_negotiated(self): to_check = self._get_list_of_version_ids('SUPPORTED') for version_id in to_check: self._assert_version_is_negotiated(version_id) def test_deprecated_is_negotiated(self): to_check = self._get_list_of_version_ids('DEPRECATED') for version_id in to_check: self._assert_version_is_negotiated(version_id) def test_experimental_is_negotiated(self): to_check = self._get_list_of_version_ids('EXPERIMENTAL') for version_id in to_check: self._assert_version_is_negotiated(version_id) glance-16.0.0/glance/tests/unit/test_cache_middleware.py0000666000175100017510000010101313245511421023272 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from mock import patch from oslo_policy import policy # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import http_client as http from six.moves import range import testtools import webob import glance.api.middleware.cache import glance.api.policy from glance.common import exception from glance import context import glance.registry.client.v1.api as registry from glance.tests.unit import base from glance.tests.unit import utils as unit_test_utils class ImageStub(object): def __init__(self, image_id, extra_properties=None, visibility='private'): if extra_properties is None: extra_properties = {} self.image_id = image_id self.visibility = visibility self.status = 'active' self.extra_properties = extra_properties self.checksum = 'c1234' self.size = 123456789 class TestCacheMiddlewareURLMatching(testtools.TestCase): def test_v1_no_match_detail(self): req = webob.Request.blank('/v1/images/detail') out = glance.api.middleware.cache.CacheFilter._match_request(req) self.assertIsNone(out) def test_v1_no_match_detail_with_query_params(self): req = webob.Request.blank('/v1/images/detail?limit=10') out = glance.api.middleware.cache.CacheFilter._match_request(req) self.assertIsNone(out) def test_v1_match_id_with_query_param(self): req = webob.Request.blank('/v1/images/asdf?ping=pong') out = glance.api.middleware.cache.CacheFilter._match_request(req) self.assertEqual(('v1', 'GET', 'asdf'), out) def 
test_v2_match_id(self): req = webob.Request.blank('/v2/images/asdf/file') out = glance.api.middleware.cache.CacheFilter._match_request(req) self.assertEqual(('v2', 'GET', 'asdf'), out) def test_v2_no_match_bad_path(self): req = webob.Request.blank('/v2/images/asdf') out = glance.api.middleware.cache.CacheFilter._match_request(req) self.assertIsNone(out) def test_no_match_unknown_version(self): req = webob.Request.blank('/v3/images/asdf') out = glance.api.middleware.cache.CacheFilter._match_request(req) self.assertIsNone(out) class TestCacheMiddlewareRequestStashCacheInfo(testtools.TestCase): def setUp(self): super(TestCacheMiddlewareRequestStashCacheInfo, self).setUp() self.request = webob.Request.blank('') self.middleware = glance.api.middleware.cache.CacheFilter def test_stash_cache_request_info(self): self.middleware._stash_request_info(self.request, 'asdf', 'GET', 'v2') self.assertEqual('asdf', self.request.environ['api.cache.image_id']) self.assertEqual('GET', self.request.environ['api.cache.method']) self.assertEqual('v2', self.request.environ['api.cache.version']) def test_fetch_cache_request_info(self): self.request.environ['api.cache.image_id'] = 'asdf' self.request.environ['api.cache.method'] = 'GET' self.request.environ['api.cache.version'] = 'v2' (image_id, method, version) = self.middleware._fetch_request_info( self.request) self.assertEqual('asdf', image_id) self.assertEqual('GET', method) self.assertEqual('v2', version) def test_fetch_cache_request_info_unset(self): out = self.middleware._fetch_request_info(self.request) self.assertIsNone(out) class ChecksumTestCacheFilter(glance.api.middleware.cache.CacheFilter): def __init__(self): class DummyCache(object): def get_caching_iter(self, image_id, image_checksum, app_iter): self.image_checksum = image_checksum self.cache = DummyCache() self.policy = unit_test_utils.FakePolicyEnforcer() class TestCacheMiddlewareChecksumVerification(base.IsolatedUnitTest): def setUp(self): 
super(TestCacheMiddlewareChecksumVerification, self).setUp() self.context = context.RequestContext(is_admin=True) self.request = webob.Request.blank('') self.request.context = self.context def test_checksum_v1_header(self): cache_filter = ChecksumTestCacheFilter() headers = {"x-image-meta-checksum": "1234567890"} resp = webob.Response(request=self.request, headers=headers) cache_filter._process_GET_response(resp, None) self.assertEqual("1234567890", cache_filter.cache.image_checksum) def test_checksum_v2_header(self): cache_filter = ChecksumTestCacheFilter() headers = { "x-image-meta-checksum": "1234567890", "Content-MD5": "abcdefghi" } resp = webob.Response(request=self.request, headers=headers) cache_filter._process_GET_response(resp, None) self.assertEqual("abcdefghi", cache_filter.cache.image_checksum) def test_checksum_missing_header(self): cache_filter = ChecksumTestCacheFilter() resp = webob.Response(request=self.request) cache_filter._process_GET_response(resp, None) self.assertIsNone(cache_filter.cache.image_checksum) class FakeImageSerializer(object): def show(self, response, raw_response): return True class ProcessRequestTestCacheFilter(glance.api.middleware.cache.CacheFilter): def __init__(self): self.serializer = FakeImageSerializer() class DummyCache(object): def __init__(self): self.deleted_images = [] def is_cached(self, image_id): return True def get_caching_iter(self, image_id, image_checksum, app_iter): pass def delete_cached_image(self, image_id): self.deleted_images.append(image_id) def get_image_size(self, image_id): pass self.cache = DummyCache() self.policy = unit_test_utils.FakePolicyEnforcer() class TestCacheMiddlewareProcessRequest(base.IsolatedUnitTest): def _enforcer_from_rules(self, unparsed_rules): rules = policy.Rules.from_dict(unparsed_rules) enforcer = glance.api.policy.Enforcer() enforcer.set_rules(rules, overwrite=True) return enforcer def test_v1_deleted_image_fetch(self): """ Test for determining that when an admin tries to 
download a deleted image it returns 404 Not Found error. """ def dummy_img_iterator(): for i in range(3): yield i image_id = 'test1' image_meta = { 'id': image_id, 'name': 'fake_image', 'status': 'deleted', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 'is_public': 'public', 'deleted': True, 'updated_at': '', 'properties': {}, } request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext() cache_filter = ProcessRequestTestCacheFilter() self.assertRaises(exception.NotFound, cache_filter._process_v1_request, request, image_id, dummy_img_iterator, image_meta) def test_process_v1_request_for_deleted_but_cached_image(self): """ Test for determining image is deleted from cache when it is not found in Glance Registry. """ def fake_process_v1_request(request, image_id, image_iterator, image_meta): raise exception.ImageNotFound() def fake_get_v1_image_metadata(request, image_id): return {'status': 'active', 'properties': {}} image_id = 'test1' request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext() cache_filter = ProcessRequestTestCacheFilter() self.stubs.Set(cache_filter, '_get_v1_image_metadata', fake_get_v1_image_metadata) self.stubs.Set(cache_filter, '_process_v1_request', fake_process_v1_request) cache_filter.process_request(request) self.assertIn(image_id, cache_filter.cache.deleted_images) def test_v1_process_request_image_fetch(self): def dummy_img_iterator(): for i in range(3): yield i image_id = 'test1' image_meta = { 'id': image_id, 'name': 'fake_image', 'status': 'active', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 
'is_public': 'public', 'deleted': False, 'updated_at': '', 'properties': {}, } request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext() cache_filter = ProcessRequestTestCacheFilter() actual = cache_filter._process_v1_request( request, image_id, dummy_img_iterator, image_meta) self.assertTrue(actual) def test_v1_remove_location_image_fetch(self): class CheckNoLocationDataSerializer(object): def show(self, response, raw_response): return 'location_data' in raw_response['image_meta'] def dummy_img_iterator(): for i in range(3): yield i image_id = 'test1' image_meta = { 'id': image_id, 'name': 'fake_image', 'status': 'active', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 'is_public': 'public', 'deleted': False, 'updated_at': '', 'properties': {}, } request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext() cache_filter = ProcessRequestTestCacheFilter() cache_filter.serializer = CheckNoLocationDataSerializer() actual = cache_filter._process_v1_request( request, image_id, dummy_img_iterator, image_meta) self.assertFalse(actual) def test_verify_metadata_deleted_image(self): """ Test verify_metadata raises exception.NotFound for a deleted image """ image_meta = {'status': 'deleted', 'is_public': True, 'deleted': True} cache_filter = ProcessRequestTestCacheFilter() self.assertRaises(exception.NotFound, cache_filter._verify_metadata, image_meta) def _test_verify_metadata_zero_size(self, image_meta): """ Test verify_metadata updates metadata with cached image size for images with 0 size. :param image_meta: Image metadata, which may be either an ImageTarget instance or a legacy v1 dict. 
""" image_size = 1 cache_filter = ProcessRequestTestCacheFilter() with patch.object(cache_filter.cache, 'get_image_size', return_value=image_size): cache_filter._verify_metadata(image_meta) self.assertEqual(image_size, image_meta['size']) def test_verify_metadata_zero_size(self): """ Test verify_metadata updates metadata with cached image size for images with 0 size """ image_meta = {'size': 0, 'deleted': False, 'id': 'test1', 'status': 'active'} self._test_verify_metadata_zero_size(image_meta) def test_verify_metadata_is_image_target_instance_with_zero_size(self): """ Test verify_metadata updates metadata which is ImageTarget instance """ image = ImageStub('test1') image.size = 0 image_meta = glance.api.policy.ImageTarget(image) self._test_verify_metadata_zero_size(image_meta) def test_v2_process_request_response_headers(self): def dummy_img_iterator(): for i in range(3): yield i image_id = 'test1' request = webob.Request.blank('/v2/images/test1/file') request.context = context.RequestContext() request.environ['api.cache.image'] = ImageStub(image_id) image_meta = { 'id': image_id, 'name': 'fake_image', 'status': 'active', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 'is_public': 'public', 'deleted': False, 'updated_at': '', 'properties': {}, } cache_filter = ProcessRequestTestCacheFilter() response = cache_filter._process_v2_request( request, image_id, dummy_img_iterator, image_meta) self.assertEqual('application/octet-stream', response.headers['Content-Type']) self.assertEqual('c1234', response.headers['Content-MD5']) self.assertEqual('123456789', response.headers['Content-Length']) def test_v2_process_request_without_checksum(self): def dummy_img_iterator(): for i in range(3): yield i image_id = 'test1' request = webob.Request.blank('/v2/images/test1/file') request.context = 
context.RequestContext() image = ImageStub(image_id) image.checksum = None request.environ['api.cache.image'] = image image_meta = { 'id': image_id, 'name': 'fake_image', 'status': 'active', 'size': '123456789', } cache_filter = ProcessRequestTestCacheFilter() response = cache_filter._process_v2_request( request, image_id, dummy_img_iterator, image_meta) self.assertNotIn('Content-MD5', response.headers.keys()) def test_process_request_without_download_image_policy(self): """ Test for cache middleware skip processing when request context has not 'download_image' role. """ def fake_get_v1_image_metadata(*args, **kwargs): return {'status': 'active', 'properties': {}} image_id = 'test1' request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext() cache_filter = ProcessRequestTestCacheFilter() cache_filter._get_v1_image_metadata = fake_get_v1_image_metadata enforcer = self._enforcer_from_rules({'download_image': '!'}) cache_filter.policy = enforcer self.assertRaises(webob.exc.HTTPForbidden, cache_filter.process_request, request) def test_v1_process_request_download_restricted(self): """ Test process_request for v1 api where _member_ role not able to download the image with custom property. 
""" image_id = 'test1' def fake_get_v1_image_metadata(*args, **kwargs): return { 'id': image_id, 'name': 'fake_image', 'status': 'active', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 'is_public': 'public', 'deleted': False, 'updated_at': '', 'x_test_key': 'test_1234' } enforcer = self._enforcer_from_rules({ "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" }) request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext(roles=['_member_']) cache_filter = ProcessRequestTestCacheFilter() cache_filter._get_v1_image_metadata = fake_get_v1_image_metadata cache_filter.policy = enforcer self.assertRaises(webob.exc.HTTPForbidden, cache_filter.process_request, request) def test_v1_process_request_download_permitted(self): """ Test process_request for v1 api where member role able to download the image with custom property. 
""" image_id = 'test1' def fake_get_v1_image_metadata(*args, **kwargs): return { 'id': image_id, 'name': 'fake_image', 'status': 'active', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 'is_public': 'public', 'deleted': False, 'updated_at': '', 'x_test_key': 'test_1234' } request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext(roles=['member']) cache_filter = ProcessRequestTestCacheFilter() cache_filter._get_v1_image_metadata = fake_get_v1_image_metadata rules = { "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" } self.set_policy_rules(rules) cache_filter.policy = glance.api.policy.Enforcer() actual = cache_filter.process_request(request) self.assertTrue(actual) def test_v1_process_request_image_meta_not_found(self): """ Test process_request for v1 api where registry raises NotFound exception as image metadata not found. """ image_id = 'test1' def fake_get_v1_image_metadata(*args, **kwargs): raise exception.NotFound() request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext(roles=['_member_']) cache_filter = ProcessRequestTestCacheFilter() self.stubs.Set(registry, 'get_image_metadata', fake_get_v1_image_metadata) rules = { "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" } self.set_policy_rules(rules) cache_filter.policy = glance.api.policy.Enforcer() self.assertRaises(webob.exc.HTTPNotFound, cache_filter.process_request, request) def test_v2_process_request_download_restricted(self): """ Test process_request for v2 api where _member_ role not able to download the image with custom property. 
""" image_id = 'test1' extra_properties = { 'x_test_key': 'test_1234' } def fake_get_v2_image_metadata(*args, **kwargs): image = ImageStub(image_id, extra_properties=extra_properties) request.environ['api.cache.image'] = image return glance.api.policy.ImageTarget(image) enforcer = self._enforcer_from_rules({ "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" }) request = webob.Request.blank('/v2/images/test1/file') request.context = context.RequestContext(roles=['_member_']) cache_filter = ProcessRequestTestCacheFilter() cache_filter._get_v2_image_metadata = fake_get_v2_image_metadata cache_filter.policy = enforcer self.assertRaises(webob.exc.HTTPForbidden, cache_filter.process_request, request) def test_v2_process_request_download_permitted(self): """ Test process_request for v2 api where member role able to download the image with custom property. """ image_id = 'test1' extra_properties = { 'x_test_key': 'test_1234' } def fake_get_v2_image_metadata(*args, **kwargs): image = ImageStub(image_id, extra_properties=extra_properties) request.environ['api.cache.image'] = image return glance.api.policy.ImageTarget(image) request = webob.Request.blank('/v2/images/test1/file') request.context = context.RequestContext(roles=['member']) cache_filter = ProcessRequestTestCacheFilter() cache_filter._get_v2_image_metadata = fake_get_v2_image_metadata rules = { "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" } self.set_policy_rules(rules) cache_filter.policy = glance.api.policy.Enforcer() actual = cache_filter.process_request(request) self.assertTrue(actual) class TestCacheMiddlewareProcessResponse(base.IsolatedUnitTest): def test_process_v1_DELETE_response(self): image_id = 'test1' request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext() cache_filter = ProcessRequestTestCacheFilter() headers = 
{"x-image-meta-deleted": True} resp = webob.Response(request=request, headers=headers) actual = cache_filter._process_DELETE_response(resp, image_id) self.assertEqual(resp, actual) def test_get_status_code(self): headers = {"x-image-meta-deleted": True} resp = webob.Response(headers=headers) cache_filter = ProcessRequestTestCacheFilter() actual = cache_filter.get_status_code(resp) self.assertEqual(http.OK, actual) def test_process_response(self): def fake_fetch_request_info(*args, **kwargs): return ('test1', 'GET', 'v1') def fake_get_v1_image_metadata(*args, **kwargs): return {'properties': {}} cache_filter = ProcessRequestTestCacheFilter() cache_filter._fetch_request_info = fake_fetch_request_info cache_filter._get_v1_image_metadata = fake_get_v1_image_metadata image_id = 'test1' request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext() headers = {"x-image-meta-deleted": True} resp = webob.Response(request=request, headers=headers) actual = cache_filter.process_response(resp) self.assertEqual(resp, actual) def test_process_response_without_download_image_policy(self): """ Test for cache middleware raise webob.exc.HTTPForbidden directly when request context has not 'download_image' role. 
""" def fake_fetch_request_info(*args, **kwargs): return ('test1', 'GET', 'v1') def fake_get_v1_image_metadata(*args, **kwargs): return {'properties': {}} cache_filter = ProcessRequestTestCacheFilter() cache_filter._fetch_request_info = fake_fetch_request_info cache_filter._get_v1_image_metadata = fake_get_v1_image_metadata rules = {'download_image': '!'} self.set_policy_rules(rules) cache_filter.policy = glance.api.policy.Enforcer() image_id = 'test1' request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext() resp = webob.Response(request=request) self.assertRaises(webob.exc.HTTPForbidden, cache_filter.process_response, resp) self.assertEqual([b''], resp.app_iter) def test_v1_process_response_download_restricted(self): """ Test process_response for v1 api where _member_ role not able to download the image with custom property. """ image_id = 'test1' def fake_fetch_request_info(*args, **kwargs): return ('test1', 'GET', 'v1') def fake_get_v1_image_metadata(*args, **kwargs): return { 'id': image_id, 'name': 'fake_image', 'status': 'active', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 'is_public': 'public', 'deleted': False, 'updated_at': '', 'x_test_key': 'test_1234' } cache_filter = ProcessRequestTestCacheFilter() cache_filter._fetch_request_info = fake_fetch_request_info cache_filter._get_v1_image_metadata = fake_get_v1_image_metadata rules = { "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" } self.set_policy_rules(rules) cache_filter.policy = glance.api.policy.Enforcer() request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext(roles=['_member_']) resp = webob.Response(request=request) self.assertRaises(webob.exc.HTTPForbidden, 
cache_filter.process_response, resp) def test_v1_process_response_download_permitted(self): """ Test process_response for v1 api where member role able to download the image with custom property. """ image_id = 'test1' def fake_fetch_request_info(*args, **kwargs): return ('test1', 'GET', 'v1') def fake_get_v1_image_metadata(*args, **kwargs): return { 'id': image_id, 'name': 'fake_image', 'status': 'active', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 'is_public': 'public', 'deleted': False, 'updated_at': '', 'x_test_key': 'test_1234' } cache_filter = ProcessRequestTestCacheFilter() cache_filter._fetch_request_info = fake_fetch_request_info cache_filter._get_v1_image_metadata = fake_get_v1_image_metadata rules = { "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" } self.set_policy_rules(rules) cache_filter.policy = glance.api.policy.Enforcer() request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext(roles=['member']) resp = webob.Response(request=request) actual = cache_filter.process_response(resp) self.assertEqual(resp, actual) def test_v1_process_response_image_meta_not_found(self): """ Test process_response for v1 api where registry raises NotFound exception as image metadata not found. 
""" image_id = 'test1' def fake_fetch_request_info(*args, **kwargs): return ('test1', 'GET', 'v1') def fake_get_v1_image_metadata(*args, **kwargs): raise exception.NotFound() cache_filter = ProcessRequestTestCacheFilter() cache_filter._fetch_request_info = fake_fetch_request_info self.stubs.Set(registry, 'get_image_metadata', fake_get_v1_image_metadata) rules = { "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" } self.set_policy_rules(rules) cache_filter.policy = glance.api.policy.Enforcer() request = webob.Request.blank('/v1/images/%s' % image_id) request.context = context.RequestContext(roles=['_member_']) resp = webob.Response(request=request) self.assertRaises(webob.exc.HTTPNotFound, cache_filter.process_response, resp) def test_v2_process_response_download_restricted(self): """ Test process_response for v2 api where _member_ role not able to download the image with custom property. """ image_id = 'test1' extra_properties = { 'x_test_key': 'test_1234' } def fake_fetch_request_info(*args, **kwargs): return ('test1', 'GET', 'v2') def fake_get_v2_image_metadata(*args, **kwargs): image = ImageStub(image_id, extra_properties=extra_properties) request.environ['api.cache.image'] = image return glance.api.policy.ImageTarget(image) cache_filter = ProcessRequestTestCacheFilter() cache_filter._fetch_request_info = fake_fetch_request_info cache_filter._get_v2_image_metadata = fake_get_v2_image_metadata rules = { "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" } self.set_policy_rules(rules) cache_filter.policy = glance.api.policy.Enforcer() request = webob.Request.blank('/v2/images/test1/file') request.context = context.RequestContext(roles=['_member_']) resp = webob.Response(request=request) self.assertRaises(webob.exc.HTTPForbidden, cache_filter.process_response, resp) def test_v2_process_response_download_permitted(self): """ Test 
process_response for v2 api where member role able to download the image with custom property. """ image_id = 'test1' extra_properties = { 'x_test_key': 'test_1234' } def fake_fetch_request_info(*args, **kwargs): return ('test1', 'GET', 'v2') def fake_get_v2_image_metadata(*args, **kwargs): image = ImageStub(image_id, extra_properties=extra_properties) request.environ['api.cache.image'] = image return glance.api.policy.ImageTarget(image) cache_filter = ProcessRequestTestCacheFilter() cache_filter._fetch_request_info = fake_fetch_request_info cache_filter._get_v2_image_metadata = fake_get_v2_image_metadata rules = { "restricted": "not ('test_1234':%(x_test_key)s and role:_member_)", "download_image": "role:admin or rule:restricted" } self.set_policy_rules(rules) cache_filter.policy = glance.api.policy.Enforcer() request = webob.Request.blank('/v2/images/test1/file') request.context = context.RequestContext(roles=['member']) resp = webob.Response(request=request) actual = cache_filter.process_response(resp) self.assertEqual(resp, actual) glance-16.0.0/glance/tests/unit/common/0000775000175100017510000000000013245511661017721 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/common/test_scripts.py0000666000175100017510000000264213245511421023021 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock

import glance.common.scripts as scripts
from glance.common.scripts.image_import import main as image_import
import glance.tests.utils as test_utils


class TestScripts(test_utils.BaseTestCase):

    def setUp(self):
        super(TestScripts, self).setUp()

    def test_run_task(self):
        task_id = mock.ANY
        task_type = 'import'
        context = mock.ANY
        task_repo = mock.ANY
        image_repo = mock.ANY
        image_factory = mock.ANY

        with mock.patch.object(image_import, 'run') as mock_run:
            scripts.run_task(task_id, task_type, context,
                             task_repo, image_repo, image_factory)
            mock_run.assert_called_once_with(task_id, context, task_repo,
                                             image_repo, image_factory)

glance-16.0.0/glance/tests/unit/common/test_swift_store_utils.py
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
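# NOTE: standalone illustration, not part of the original Glance test suite.
# The test_run_task test above intercepts image_import.run with
# mock.patch.object and then verifies the forwarded arguments. The same
# pattern can be shown in isolation using only the standard library
# (unittest.mock rather than the external py2-era mock package); the `tasks`
# module and `execute` function below are hypothetical stand-ins.

```python
import types
from unittest import mock

# Build a throwaway module carrying the callable we want to intercept,
# playing the role that image_import plays in the test above.
tasks = types.ModuleType('tasks')
tasks.execute = lambda task_id, ctx: 'real result'


def run_task(task_id, ctx):
    # Production-style wrapper that delegates to tasks.execute(), like
    # scripts.run_task() delegating to image_import.run().
    return tasks.execute(task_id, ctx)


with mock.patch.object(tasks, 'execute') as mock_execute:
    mock_execute.return_value = 'mocked'
    result = run_task('t-1', 'ctx')

# Inside the context manager the wrapper hit the mock, which recorded
# the call; outside it, the original attribute is restored.
assert result == 'mocked'
mock_execute.assert_called_once_with('t-1', 'ctx')
assert tasks.execute('t-1', 'ctx') == 'real result'
```

# patch.object restores the attribute automatically when the `with` block
# exits, which is why the Glance test needs no explicit cleanup.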
import fixtures

from glance.common import exception
from glance.common import swift_store_utils
from glance.tests.unit import base


class TestSwiftParams(base.IsolatedUnitTest):

    def setUp(self):
        super(TestSwiftParams, self).setUp()
        conf_file = "glance-swift.conf"
        test_dir = self.useFixture(fixtures.TempDir()).path
        self.swift_config_file = self._copy_data_file(conf_file, test_dir)
        self.config(swift_store_config_file=self.swift_config_file)

    def test_multiple_swift_account_enabled(self):
        self.config(swift_store_config_file="glance-swift.conf")
        self.assertTrue(
            swift_store_utils.is_multiple_swift_store_accounts_enabled())

    def test_multiple_swift_account_disabled(self):
        self.config(swift_store_config_file=None)
        self.assertFalse(
            swift_store_utils.is_multiple_swift_store_accounts_enabled())

    def test_swift_config_file_doesnt_exist(self):
        self.config(swift_store_config_file='fake-file.conf')
        self.assertRaises(exception.InvalidSwiftStoreConfiguration,
                          swift_store_utils.SwiftParams)

    def test_swift_config_uses_default_values_multiple_account_disabled(self):
        default_user = 'user_default'
        default_key = 'key_default'
        default_auth_address = 'auth@default.com'
        default_account_reference = 'ref_default'
        confs = {'swift_store_config_file': None,
                 'swift_store_user': default_user,
                 'swift_store_key': default_key,
                 'swift_store_auth_address': default_auth_address,
                 'default_swift_reference': default_account_reference}
        self.config(**confs)
        swift_params = swift_store_utils.SwiftParams().params
        self.assertEqual(1, len(swift_params.keys()))
        self.assertEqual(default_user,
                         swift_params[default_account_reference]['user'])
        self.assertEqual(default_key,
                         swift_params[default_account_reference]['key'])
        self.assertEqual(default_auth_address,
                         swift_params[default_account_reference]
                         ['auth_address'])

    def test_swift_store_config_validates_for_creds_auth_address(self):
        swift_params = swift_store_utils.SwiftParams().params
        self.assertEqual('tenant:user1',
                         swift_params['ref1']['user'])
        self.assertEqual('key1',
                         swift_params['ref1']['key'])
        self.assertEqual('example.com',
                         swift_params['ref1']['auth_address'])
        self.assertEqual('user2',
                         swift_params['ref2']['user'])
        self.assertEqual('key2',
                         swift_params['ref2']['key'])
        self.assertEqual('http://example.com',
                         swift_params['ref2']['auth_address'])

glance-16.0.0/glance/tests/unit/common/test_wsgi.py
# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# Copyright 2014 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime
import gettext
import os
import socket

from babel import localedata
import eventlet.patcher
import fixtures
import mock
from oslo_concurrency import processutils
from oslo_serialization import jsonutils
import routes
import six
from six.moves import http_client as http
import webob

from glance.api.v1 import router as router_v1
from glance.api.v2 import router as router_v2
from glance.common import exception
from glance.common import utils
from glance.common import wsgi
from glance import i18n
from glance.tests import utils as test_utils


class RequestTest(test_utils.BaseTestCase):

    def _set_expected_languages(self, all_locales=None, avail_locales=None):
        if all_locales is None:
            all_locales = []

        # Override localedata.locale_identifiers to return some locales.
def returns_some_locales(*args, **kwargs): return all_locales self.stubs.Set(localedata, 'locale_identifiers', returns_some_locales) # Override gettext.find to return other than None for some languages. def fake_gettext_find(lang_id, *args, **kwargs): found_ret = '/glance/%s/LC_MESSAGES/glance.mo' % lang_id if avail_locales is None: # All locales are available. return found_ret languages = kwargs['languages'] if languages[0] in avail_locales: return found_ret return None self.stubs.Set(gettext, 'find', fake_gettext_find) def test_content_range(self): request = wsgi.Request.blank('/tests/123') request.headers["Content-Range"] = 'bytes 10-99/*' range_ = request.get_range_from_request(120) self.assertEqual(10, range_.start) self.assertEqual(100, range_.stop) # non-inclusive self.assertIsNone(range_.length) def test_content_range_invalid(self): request = wsgi.Request.blank('/tests/123') request.headers["Content-Range"] = 'bytes=0-99' self.assertRaises(webob.exc.HTTPRequestRangeNotSatisfiable, request.get_range_from_request, 120) def test_range(self): request = wsgi.Request.blank('/tests/123') request.headers["Range"] = 'bytes=10-99' range_ = request.get_range_from_request(120) self.assertEqual(10, range_.start) self.assertEqual(100, range_.end) # non-inclusive def test_range_invalid(self): request = wsgi.Request.blank('/tests/123') request.headers["Range"] = 'bytes=150-' self.assertRaises(webob.exc.HTTPRequestRangeNotSatisfiable, request.get_range_from_request, 120) def test_content_type_missing(self): request = wsgi.Request.blank('/tests/123') self.assertRaises(exception.InvalidContentType, request.get_content_type, ('application/xml',)) def test_content_type_unsupported(self): request = wsgi.Request.blank('/tests/123') request.headers["Content-Type"] = "text/html" self.assertRaises(exception.InvalidContentType, request.get_content_type, ('application/xml',)) def test_content_type_with_charset(self): request = wsgi.Request.blank('/tests/123') 
request.headers["Content-Type"] = "application/json; charset=UTF-8" result = request.get_content_type(('application/json',)) self.assertEqual("application/json", result) def test_params(self): if six.PY2: expected = webob.multidict.NestedMultiDict({ 'limit': '20', 'name': '\xd0\x9f\xd1\x80\xd0\xb8\xd0\xb2\xd0\xb5\xd1\x82', 'sort_key': 'name', 'sort_dir': 'asc'}) else: expected = webob.multidict.NestedMultiDict({ 'limit': '20', 'name': 'Привет', 'sort_key': 'name', 'sort_dir': 'asc'}) request = wsgi.Request.blank("/?limit=20&name=%D0%9F%D1%80%D0%B8" "%D0%B2%D0%B5%D1%82&sort_key=name" "&sort_dir=asc") actual = request.params self.assertEqual(expected, actual) def test_content_type_from_accept_xml(self): request = wsgi.Request.blank('/tests/123') request.headers["Accept"] = "application/xml" result = request.best_match_content_type() self.assertEqual("application/json", result) def test_content_type_from_accept_json(self): request = wsgi.Request.blank('/tests/123') request.headers["Accept"] = "application/json" result = request.best_match_content_type() self.assertEqual("application/json", result) def test_content_type_from_accept_xml_json(self): request = wsgi.Request.blank('/tests/123') request.headers["Accept"] = "application/xml, application/json" result = request.best_match_content_type() self.assertEqual("application/json", result) def test_content_type_from_accept_json_xml_quality(self): request = wsgi.Request.blank('/tests/123') request.headers["Accept"] = ("application/json; q=0.3, " "application/xml; q=0.9") result = request.best_match_content_type() self.assertEqual("application/json", result) def test_content_type_accept_default(self): request = wsgi.Request.blank('/tests/123.unsupported') request.headers["Accept"] = "application/unsupported1" result = request.best_match_content_type() self.assertEqual("application/json", result) def test_language_accept_default(self): request = wsgi.Request.blank('/tests/123') request.headers["Accept-Language"] = 
"zz-ZZ,zz;q=0.8" result = request.best_match_language() self.assertIsNone(result) def test_language_accept_none(self): request = wsgi.Request.blank('/tests/123') result = request.best_match_language() self.assertIsNone(result) def test_best_match_language_expected(self): # If Accept-Language is a supported language, best_match_language() # returns it. self._set_expected_languages(all_locales=['it']) req = wsgi.Request.blank('/', headers={'Accept-Language': 'it'}) self.assertEqual('it', req.best_match_language()) def test_request_match_language_unexpected(self): # If Accept-Language is a language we do not support, # best_match_language() returns None. self._set_expected_languages(all_locales=['it']) req = wsgi.Request.blank('/', headers={'Accept-Language': 'unknown'}) self.assertIsNone(req.best_match_language()) @mock.patch.object(webob.acceptparse.AcceptLanguage, 'best_match') def test_best_match_language_unknown(self, mock_best_match): # Test that we are actually invoking language negotiation by webop request = wsgi.Request.blank('/') accepted = 'unknown-lang' request.headers = {'Accept-Language': accepted} mock_best_match.return_value = None self.assertIsNone(request.best_match_language()) # If Accept-Language is missing or empty, match should be None request.headers = {'Accept-Language': ''} self.assertIsNone(request.best_match_language()) request.headers.pop('Accept-Language') self.assertIsNone(request.best_match_language()) def test_http_error_response_codes(self): sample_id, member_id, tag_val, task_id = 'abc', '123', '1', '2' """Makes sure v1 unallowed methods return 405""" unallowed_methods = [ ('/images', ['PUT', 'DELETE', 'HEAD', 'PATCH']), ('/images/detail', ['POST', 'PUT', 'DELETE', 'PATCH']), ('/images/%s' % sample_id, ['POST', 'PATCH']), ('/images/%s/members' % sample_id, ['POST', 'DELETE', 'HEAD', 'PATCH']), ('/images/%s/members/%s' % (sample_id, member_id), ['POST', 'HEAD', 'PATCH']), ] api = 
test_utils.FakeAuthMiddleware(router_v1.API(routes.Mapper())) for uri, methods in unallowed_methods: for method in methods: req = webob.Request.blank(uri) req.method = method res = req.get_response(api) self.assertEqual(http.METHOD_NOT_ALLOWED, res.status_int) """Makes sure v2 unallowed methods return 405""" unallowed_methods = [ ('/schemas/image', ['POST', 'PUT', 'DELETE', 'PATCH', 'HEAD']), ('/schemas/images', ['POST', 'PUT', 'DELETE', 'PATCH', 'HEAD']), ('/schemas/member', ['POST', 'PUT', 'DELETE', 'PATCH', 'HEAD']), ('/schemas/members', ['POST', 'PUT', 'DELETE', 'PATCH', 'HEAD']), ('/schemas/task', ['POST', 'PUT', 'DELETE', 'PATCH', 'HEAD']), ('/schemas/tasks', ['POST', 'PUT', 'DELETE', 'PATCH', 'HEAD']), ('/images', ['PUT', 'DELETE', 'PATCH', 'HEAD']), ('/images/%s' % sample_id, ['POST', 'PUT', 'HEAD']), ('/images/%s/file' % sample_id, ['POST', 'DELETE', 'PATCH', 'HEAD']), ('/images/%s/tags/%s' % (sample_id, tag_val), ['GET', 'POST', 'PATCH', 'HEAD']), ('/images/%s/members' % sample_id, ['PUT', 'DELETE', 'PATCH', 'HEAD']), ('/images/%s/members/%s' % (sample_id, member_id), ['POST', 'PATCH', 'HEAD']), ('/tasks', ['PUT', 'DELETE', 'PATCH', 'HEAD']), ('/tasks/%s' % task_id, ['POST', 'PUT', 'PATCH', 'HEAD']), ] api = test_utils.FakeAuthMiddleware(router_v2.API(routes.Mapper())) for uri, methods in unallowed_methods: for method in methods: req = webob.Request.blank(uri) req.method = method res = req.get_response(api) self.assertEqual(http.METHOD_NOT_ALLOWED, res.status_int) # Makes sure not implemented methods return 405 req = webob.Request.blank('/schemas/image') req.method = 'NonexistentMethod' res = req.get_response(api) self.assertEqual(http.METHOD_NOT_ALLOWED, res.status_int) class ResourceTest(test_utils.BaseTestCase): def test_get_action_args(self): env = { 'wsgiorg.routing_args': [ None, { 'controller': None, 'format': None, 'action': 'update', 'id': 12, }, ], } expected = {'action': 'update', 'id': 12} actual = wsgi.Resource(None, None, 
None).get_action_args(env) self.assertEqual(expected, actual) def test_get_action_args_invalid_index(self): env = {'wsgiorg.routing_args': []} expected = {} actual = wsgi.Resource(None, None, None).get_action_args(env) self.assertEqual(expected, actual) def test_get_action_args_del_controller_error(self): actions = {'format': None, 'action': 'update', 'id': 12} env = {'wsgiorg.routing_args': [None, actions]} expected = {'action': 'update', 'id': 12} actual = wsgi.Resource(None, None, None).get_action_args(env) self.assertEqual(expected, actual) def test_get_action_args_del_format_error(self): actions = {'action': 'update', 'id': 12} env = {'wsgiorg.routing_args': [None, actions]} expected = {'action': 'update', 'id': 12} actual = wsgi.Resource(None, None, None).get_action_args(env) self.assertEqual(expected, actual) def test_dispatch(self): class Controller(object): def index(self, shirt, pants=None): return (shirt, pants) resource = wsgi.Resource(None, None, None) actual = resource.dispatch(Controller(), 'index', 'on', pants='off') expected = ('on', 'off') self.assertEqual(expected, actual) def test_dispatch_default(self): class Controller(object): def default(self, shirt, pants=None): return (shirt, pants) resource = wsgi.Resource(None, None, None) actual = resource.dispatch(Controller(), 'index', 'on', pants='off') expected = ('on', 'off') self.assertEqual(expected, actual) def test_dispatch_no_default(self): class Controller(object): def show(self, shirt, pants=None): return (shirt, pants) resource = wsgi.Resource(None, None, None) self.assertRaises(AttributeError, resource.dispatch, Controller(), 'index', 'on', pants='off') def test_call(self): class FakeController(object): def index(self, shirt, pants=None): return (shirt, pants) resource = wsgi.Resource(FakeController(), None, None) def dispatch(self, obj, action, *args, **kwargs): if isinstance(obj, wsgi.JSONRequestDeserializer): return [] if isinstance(obj, wsgi.JSONResponseSerializer): raise 
webob.exc.HTTPForbidden() self.stubs.Set(wsgi.Resource, 'dispatch', dispatch) request = wsgi.Request.blank('/') response = resource.__call__(request) self.assertIsInstance(response, webob.exc.HTTPForbidden) self.assertEqual(http.FORBIDDEN, response.status_code) def test_call_raises_exception(self): class FakeController(object): def index(self, shirt, pants=None): return (shirt, pants) resource = wsgi.Resource(FakeController(), None, None) def dispatch(self, obj, action, *args, **kwargs): raise Exception("test exception") self.stubs.Set(wsgi.Resource, 'dispatch', dispatch) request = wsgi.Request.blank('/') response = resource.__call__(request) self.assertIsInstance(response, webob.exc.HTTPInternalServerError) self.assertEqual(http.INTERNAL_SERVER_ERROR, response.status_code) @mock.patch.object(wsgi, 'translate_exception') def test_resource_call_error_handle_localized(self, mock_translate_exception): class Controller(object): def delete(self, req, identity): raise webob.exc.HTTPBadRequest(explanation='Not Found') actions = {'action': 'delete', 'identity': 12} env = {'wsgiorg.routing_args': [None, actions]} request = wsgi.Request.blank('/tests/123', environ=env) message_es = 'No Encontrado' resource = wsgi.Resource(Controller(), wsgi.JSONRequestDeserializer(), None) translated_exc = webob.exc.HTTPBadRequest(message_es) mock_translate_exception.return_value = translated_exc e = self.assertRaises(webob.exc.HTTPBadRequest, resource, request) self.assertEqual(message_es, str(e)) @mock.patch.object(webob.acceptparse.AcceptLanguage, 'best_match') @mock.patch.object(i18n, 'translate') def test_translate_exception(self, mock_translate, mock_best_match): mock_translate.return_value = 'No Encontrado' mock_best_match.return_value = 'de' req = wsgi.Request.blank('/tests/123') req.headers["Accept-Language"] = "de" e = webob.exc.HTTPNotFound(explanation='Not Found') e = wsgi.translate_exception(req, e) self.assertEqual('No Encontrado', e.explanation) def 
test_response_headers_encoded(self): # prepare environment for_openstack_comrades = \ u'\u0417\u0430 \u043e\u043f\u0435\u043d\u0441\u0442\u0435\u043a, ' \ u'\u0442\u043e\u0432\u0430\u0440\u0438\u0449\u0438' class FakeController(object): def index(self, shirt, pants=None): return (shirt, pants) class FakeSerializer(object): def index(self, response, result): response.headers['unicode_test'] = for_openstack_comrades # make request resource = wsgi.Resource(FakeController(), None, FakeSerializer()) actions = {'action': 'index'} env = {'wsgiorg.routing_args': [None, actions]} request = wsgi.Request.blank('/tests/123', environ=env) response = resource.__call__(request) # ensure it has been encoded correctly value = (response.headers['unicode_test'].decode('utf-8') if six.PY2 else response.headers['unicode_test']) self.assertEqual(for_openstack_comrades, value) class JSONResponseSerializerTest(test_utils.BaseTestCase): def test_to_json(self): fixture = {"key": "value"} expected = b'{"key": "value"}' actual = wsgi.JSONResponseSerializer().to_json(fixture) self.assertEqual(expected, actual) def test_to_json_with_date_format_value(self): fixture = {"date": datetime.datetime(1901, 3, 8, 2)} expected = b'{"date": "1901-03-08T02:00:00.000000"}' actual = wsgi.JSONResponseSerializer().to_json(fixture) self.assertEqual(expected, actual) def test_to_json_with_more_deep_format(self): fixture = {"is_public": True, "name": [{"name1": "test"}]} expected = {"is_public": True, "name": [{"name1": "test"}]} actual = wsgi.JSONResponseSerializer().to_json(fixture) actual = jsonutils.loads(actual) for k in expected: self.assertEqual(expected[k], actual[k]) def test_to_json_with_set(self): fixture = set(["foo"]) expected = b'["foo"]' actual = wsgi.JSONResponseSerializer().to_json(fixture) self.assertEqual(expected, actual) def test_default(self): fixture = {"key": "value"} response = webob.Response() wsgi.JSONResponseSerializer().default(response, fixture) self.assertEqual(http.OK, 
response.status_int) content_types = [h for h in response.headerlist if h[0] == 'Content-Type'] self.assertEqual(1, len(content_types)) self.assertEqual('application/json', response.content_type) self.assertEqual(b'{"key": "value"}', response.body) class JSONRequestDeserializerTest(test_utils.BaseTestCase): def test_has_body_no_content_length(self): request = wsgi.Request.blank('/') request.method = 'POST' request.body = b'asdf' request.headers.pop('Content-Length') self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request)) def test_has_body_zero_content_length(self): request = wsgi.Request.blank('/') request.method = 'POST' request.body = b'asdf' request.headers['Content-Length'] = 0 self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request)) def test_has_body_has_content_length(self): request = wsgi.Request.blank('/') request.method = 'POST' request.body = b'asdf' self.assertIn('Content-Length', request.headers) self.assertTrue(wsgi.JSONRequestDeserializer().has_body(request)) def test_no_body_no_content_length(self): request = wsgi.Request.blank('/') self.assertFalse(wsgi.JSONRequestDeserializer().has_body(request)) def test_from_json(self): fixture = '{"key": "value"}' expected = {"key": "value"} actual = wsgi.JSONRequestDeserializer().from_json(fixture) self.assertEqual(expected, actual) def test_from_json_malformed(self): fixture = 'kjasdklfjsklajf' self.assertRaises(webob.exc.HTTPBadRequest, wsgi.JSONRequestDeserializer().from_json, fixture) def test_default_no_body(self): request = wsgi.Request.blank('/') actual = wsgi.JSONRequestDeserializer().default(request) expected = {} self.assertEqual(expected, actual) def test_default_with_body(self): request = wsgi.Request.blank('/') request.method = 'POST' request.body = b'{"key": "value"}' actual = wsgi.JSONRequestDeserializer().default(request) expected = {"body": {"key": "value"}} self.assertEqual(expected, actual) def test_has_body_has_transfer_encoding(self): 
self.assertTrue(self._check_transfer_encoding( transfer_encoding='chunked')) def test_has_body_multiple_transfer_encoding(self): self.assertTrue(self._check_transfer_encoding( transfer_encoding='chunked, gzip')) def test_has_body_invalid_transfer_encoding(self): self.assertFalse(self._check_transfer_encoding( transfer_encoding='invalid', content_length=0)) def test_has_body_invalid_transfer_encoding_no_content_len_and_body(self): self.assertFalse(self._check_transfer_encoding( transfer_encoding='invalid', include_body=False)) def test_has_body_invalid_transfer_encoding_no_content_len_but_body(self): self.assertTrue(self._check_transfer_encoding( transfer_encoding='invalid', include_body=True)) def test_has_body_invalid_transfer_encoding_with_content_length(self): self.assertTrue(self._check_transfer_encoding( transfer_encoding='invalid', content_length=5)) def test_has_body_valid_transfer_encoding_with_content_length(self): self.assertTrue(self._check_transfer_encoding( transfer_encoding='chunked', content_length=1)) def test_has_body_valid_transfer_encoding_without_content_length(self): self.assertTrue(self._check_transfer_encoding( transfer_encoding='chunked')) def _check_transfer_encoding(self, transfer_encoding=None, content_length=None, include_body=True): request = wsgi.Request.blank('/') request.method = 'POST' if include_body: request.body = b'fake_body' request.headers['transfer-encoding'] = transfer_encoding if content_length is not None: request.headers['content-length'] = content_length return wsgi.JSONRequestDeserializer().has_body(request) def test_get_bind_addr_default_value(self): expected = ('0.0.0.0', '123456') actual = wsgi.get_bind_addr(default_port="123456") self.assertEqual(expected, actual) class ServerTest(test_utils.BaseTestCase): def test_create_pool(self): """Ensure the wsgi thread pool is an eventlet.greenpool.GreenPool.""" actual = wsgi.Server(threads=1).create_pool() self.assertIsInstance(actual, eventlet.greenpool.GreenPool) 
@mock.patch.object(wsgi.Server, 'configure_socket') def test_http_keepalive(self, mock_configure_socket): self.config(http_keepalive=False) self.config(workers=0) server = wsgi.Server(threads=1) server.sock = 'fake_socket' # mocking eventlet.wsgi server method to check it is called with # configured 'http_keepalive' value. with mock.patch.object(eventlet.wsgi, 'server') as mock_server: fake_application = "fake-application" server.start(fake_application, 0) server.wait() mock_server.assert_called_once_with('fake_socket', fake_application, log=server._logger, debug=False, custom_pool=server.pool, keepalive=False, socket_timeout=900) def test_number_of_workers(self): """Ensure the number of workers matches num cpus limited to 8.""" def pid(): i = 1 while True: i = i + 1 yield i with mock.patch.object(os, 'fork') as mock_fork: with mock.patch('oslo_concurrency.processutils.get_worker_count', return_value=4): mock_fork.side_effect = pid server = wsgi.Server() server.configure = mock.Mock() fake_application = "fake-application" server.start(fake_application, None) self.assertEqual(4, len(server.children)) with mock.patch('oslo_concurrency.processutils.get_worker_count', return_value=24): mock_fork.side_effect = pid server = wsgi.Server() server.configure = mock.Mock() fake_application = "fake-application" server.start(fake_application, None) self.assertEqual(8, len(server.children)) mock_fork.side_effect = pid server = wsgi.Server() server.configure = mock.Mock() fake_application = "fake-application" server.start(fake_application, None) cpus = processutils.get_worker_count() expected_workers = cpus if cpus < 8 else 8 self.assertEqual(expected_workers, len(server.children)) class TestHelpers(test_utils.BaseTestCase): def test_headers_are_unicode(self): """ Verifies that the headers returned by conversion code are unicode. Headers are passed via http in non-testing mode, which automatically converts them to unicode. 
Verifying that the method does the conversion proves that we aren't passing data that works in tests but will fail in production. """ fixture = {'name': 'fake public image', 'is_public': True, 'size': 19, 'location': "file:///tmp/glance-tests/2", 'properties': {'distro': 'Ubuntu 10.04 LTS'}} headers = utils.image_meta_to_http_headers(fixture) for k, v in six.iteritems(headers): self.assertIsInstance(v, six.text_type) def test_data_passed_properly_through_headers(self): """ Verifies that data is the same after being passed through headers """ fixture = {'is_public': True, 'deleted': False, 'name': None, 'size': 19, 'location': "file:///tmp/glance-tests/2", 'properties': {'distro': 'Ubuntu 10.04 LTS'}} headers = utils.image_meta_to_http_headers(fixture) class FakeResponse(object): pass response = FakeResponse() response.headers = headers result = utils.get_image_meta_from_headers(response) for k, v in six.iteritems(fixture): if v is not None: self.assertEqual(v, result[k]) else: self.assertNotIn(k, result) class GetSocketTestCase(test_utils.BaseTestCase): def setUp(self): super(GetSocketTestCase, self).setUp() self.useFixture(fixtures.MonkeyPatch( "glance.common.wsgi.get_bind_addr", lambda x: ('192.168.0.13', 1234))) addr_info_list = [(2, 1, 6, '', ('192.168.0.13', 80)), (2, 2, 17, '', ('192.168.0.13', 80)), (2, 3, 0, '', ('192.168.0.13', 80))] self.useFixture(fixtures.MonkeyPatch( "glance.common.wsgi.socket.getaddrinfo", lambda *x: addr_info_list)) self.useFixture(fixtures.MonkeyPatch( "glance.common.wsgi.time.time", mock.Mock(side_effect=[0, 1, 5, 10, 20, 35]))) self.useFixture(fixtures.MonkeyPatch( "glance.common.wsgi.utils.validate_key_cert", lambda *x: None)) wsgi.CONF.cert_file = '/etc/ssl/cert' wsgi.CONF.key_file = '/etc/ssl/key' wsgi.CONF.ca_file = '/etc/ssl/ca_cert' wsgi.CONF.tcp_keepidle = 600 def test_correct_configure_socket(self): mock_socket = mock.Mock() self.useFixture(fixtures.MonkeyPatch( 'glance.common.wsgi.ssl.wrap_socket', mock_socket)) 
self.useFixture(fixtures.MonkeyPatch( 'glance.common.wsgi.eventlet.listen', lambda *x, **y: mock_socket)) server = wsgi.Server() server.default_port = 1234 server.configure_socket() self.assertIn(mock.call.setsockopt( socket.SOL_SOCKET, socket.SO_REUSEADDR, 1), mock_socket.mock_calls) self.assertIn(mock.call.setsockopt( socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), mock_socket.mock_calls) if hasattr(socket, 'TCP_KEEPIDLE'): self.assertIn(mock.call().setsockopt( socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, wsgi.CONF.tcp_keepidle), mock_socket.mock_calls) def test_get_socket_without_all_ssl_reqs(self): wsgi.CONF.key_file = None self.assertRaises(RuntimeError, wsgi.get_socket, 1234) def test_get_socket_with_bind_problems(self): self.useFixture(fixtures.MonkeyPatch( 'glance.common.wsgi.eventlet.listen', mock.Mock(side_effect=( [wsgi.socket.error(socket.errno.EADDRINUSE)] * 3 + [None])))) self.useFixture(fixtures.MonkeyPatch( 'glance.common.wsgi.ssl.wrap_socket', lambda *x, **y: None)) self.assertRaises(RuntimeError, wsgi.get_socket, 1234) def test_get_socket_with_unexpected_socket_errno(self): self.useFixture(fixtures.MonkeyPatch( 'glance.common.wsgi.eventlet.listen', mock.Mock(side_effect=wsgi.socket.error(socket.errno.ENOMEM)))) self.useFixture(fixtures.MonkeyPatch( 'glance.common.wsgi.ssl.wrap_socket', lambda *x, **y: None)) self.assertRaises(wsgi.socket.error, wsgi.get_socket, 1234) def _cleanup_uwsgi(): wsgi.uwsgi = None class Test_UwsgiChunkedFile(test_utils.BaseTestCase): def test_read_no_data(self): reader = wsgi._UWSGIChunkFile() wsgi.uwsgi = mock.MagicMock() self.addCleanup(_cleanup_uwsgi) def fake_read(): return None wsgi.uwsgi.chunked_read = fake_read out = reader.read() self.assertEqual(out, b'') def test_read_data_no_length(self): reader = wsgi._UWSGIChunkFile() wsgi.uwsgi = mock.MagicMock() self.addCleanup(_cleanup_uwsgi) values = iter([b'a', b'b', b'c', None]) def fake_read(): return next(values) wsgi.uwsgi.chunked_read = fake_read out = reader.read() 
self.assertEqual(out, b'abc') def test_read_zero_length(self): reader = wsgi._UWSGIChunkFile() self.assertEqual(b'', reader.read(length=0)) def test_read_data_length(self): reader = wsgi._UWSGIChunkFile() wsgi.uwsgi = mock.MagicMock() self.addCleanup(_cleanup_uwsgi) values = iter([b'a', b'b', b'c', None]) def fake_read(): return next(values) wsgi.uwsgi.chunked_read = fake_read out = reader.read(length=2) self.assertEqual(out, b'ab') def test_read_data_negative_length(self): reader = wsgi._UWSGIChunkFile() wsgi.uwsgi = mock.MagicMock() self.addCleanup(_cleanup_uwsgi) values = iter([b'a', b'b', b'c', None]) def fake_read(): return next(values) wsgi.uwsgi.chunked_read = fake_read out = reader.read(length=-2) self.assertEqual(out, b'abc') glance-16.0.0/glance/tests/unit/common/test_config.py0000666000175100017510000001066413245511421022602 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os.path import shutil import fixtures import oslo_middleware from oslotest import moxstubout from glance.api.middleware import context from glance.common import config from glance.tests import utils as test_utils class TestPasteApp(test_utils.BaseTestCase): def setUp(self): super(TestPasteApp, self).setUp() mox_fixture = self.useFixture(moxstubout.MoxStubout()) self.stubs = mox_fixture.stubs def _do_test_load_paste_app(self, expected_app_type, make_paste_file=True, paste_flavor=None, paste_config_file=None, paste_append=None): def _writeto(path, str): with open(path, 'w') as f: f.write(str or '') f.flush() def _appendto(orig, copy, str): shutil.copy(orig, copy) with open(copy, 'a') as f: f.write(str or '') f.flush() self.config(flavor=paste_flavor, config_file=paste_config_file, group='paste_deploy') temp_dir = self.useFixture(fixtures.TempDir()).path temp_file = os.path.join(temp_dir, 'testcfg.conf') _writeto(temp_file, '[DEFAULT]\n') config.parse_args(['--config-file', temp_file]) paste_to = temp_file.replace('.conf', '-paste.ini') if not paste_config_file and make_paste_file: paste_from = os.path.join(os.getcwd(), 'etc/glance-registry-paste.ini') _appendto(paste_from, paste_to, paste_append) app = config.load_paste_app('glance-registry') self.assertIsInstance(app, expected_app_type) def test_load_paste_app(self): expected_middleware = oslo_middleware.Healthcheck self._do_test_load_paste_app(expected_middleware) def test_load_paste_app_paste_config_not_found(self): expected_middleware = context.UnauthenticatedContextMiddleware self.assertRaises(RuntimeError, self._do_test_load_paste_app, expected_middleware, make_paste_file=False) def test_load_paste_app_with_paste_flavor(self): pipeline = ('[pipeline:glance-registry-incomplete]\n' 'pipeline = context registryapp') expected_middleware = context.ContextMiddleware self._do_test_load_paste_app(expected_middleware, paste_flavor='incomplete', paste_append=pipeline) def 
test_load_paste_app_with_paste_config_file(self): paste_config_file = os.path.join(os.getcwd(), 'etc/glance-registry-paste.ini') expected_middleware = oslo_middleware.Healthcheck self._do_test_load_paste_app(expected_middleware, paste_config_file=paste_config_file) def test_load_paste_app_with_paste_config_file_but_not_exist(self): paste_config_file = os.path.abspath("glance-registry-paste.ini") expected_middleware = oslo_middleware.Healthcheck self.assertRaises(RuntimeError, self._do_test_load_paste_app, expected_middleware, paste_config_file=paste_config_file) def test_get_path_non_exist(self): self.assertRaises(RuntimeError, config._get_deployment_config_file) class TestDefaultConfig(test_utils.BaseTestCase): def setUp(self): super(TestDefaultConfig, self).setUp() self.CONF = config.cfg.CONF self.CONF.import_group('profiler', 'glance.common.wsgi') def test_osprofiler_disabled(self): self.assertFalse(self.CONF.profiler.enabled) self.assertFalse(self.CONF.profiler.trace_sqlalchemy) glance-16.0.0/glance/tests/unit/common/test_timeutils.py0000666000175100017510000002106113245511421023345 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import calendar
import datetime

import iso8601
import mock

from glance.common import timeutils
from glance.tests import utils as test_utils


class TimeUtilsTest(test_utils.BaseTestCase):

    def setUp(self):
        super(TimeUtilsTest, self).setUp()
        self.skynet_self_aware_time_str = '1997-08-29T06:14:00Z'
        self.skynet_self_aware_time_ms_str = '1997-08-29T06:14:00.000123Z'
        self.skynet_self_aware_time = datetime.datetime(1997, 8, 29, 6, 14, 0)
        self.skynet_self_aware_ms_time = datetime.datetime(
            1997, 8, 29, 6, 14, 0, 123)
        self.one_minute_before = datetime.datetime(1997, 8, 29, 6, 13, 0)
        self.one_minute_after = datetime.datetime(1997, 8, 29, 6, 15, 0)
        self.skynet_self_aware_time_perfect_str = '1997-08-29T06:14:00.000000'
        self.skynet_self_aware_time_perfect = datetime.datetime(1997, 8, 29,
                                                                6, 14, 0)

    def test_isotime(self):
        with mock.patch('datetime.datetime') as datetime_mock:
            datetime_mock.utcnow.return_value = self.skynet_self_aware_time
            dt = timeutils.isotime()
            self.assertEqual(dt, self.skynet_self_aware_time_str)

    def test_isotime_micro_second_precision(self):
        with mock.patch('datetime.datetime') as datetime_mock:
            datetime_mock.utcnow.return_value = self.skynet_self_aware_ms_time
            dt = timeutils.isotime(subsecond=True)
            self.assertEqual(dt, self.skynet_self_aware_time_ms_str)

    def test_parse_isotime(self):
        expect = timeutils.parse_isotime(self.skynet_self_aware_time_str)
        skynet_self_aware_time_utc = self.skynet_self_aware_time.replace(
            tzinfo=iso8601.iso8601.UTC)
        self.assertEqual(skynet_self_aware_time_utc, expect)

    def test_parse_isotime_micro_second_precision(self):
        expect = timeutils.parse_isotime(self.skynet_self_aware_time_ms_str)
        skynet_self_aware_time_ms_utc = self.skynet_self_aware_ms_time.replace(
            tzinfo=iso8601.iso8601.UTC)
        self.assertEqual(skynet_self_aware_time_ms_utc, expect)

    def test_utcnow(self):
        with mock.patch('datetime.datetime') as datetime_mock:
            datetime_mock.utcnow.return_value = self.skynet_self_aware_time
            self.assertEqual(timeutils.utcnow(), 
self.skynet_self_aware_time) self.assertFalse(timeutils.utcnow() == self.skynet_self_aware_time) self.assertTrue(timeutils.utcnow()) def test_delta_seconds(self): before = timeutils.utcnow() after = before + datetime.timedelta(days=7, seconds=59, microseconds=123456) self.assertAlmostEquals(604859.123456, timeutils.delta_seconds(before, after)) def test_iso8601_from_timestamp(self): utcnow = timeutils.utcnow() iso = timeutils.isotime(utcnow) ts = calendar.timegm(utcnow.timetuple()) self.assertEqual(iso, timeutils.iso8601_from_timestamp(ts)) class TestIso8601Time(test_utils.BaseTestCase): def _instaneous(self, timestamp, yr, mon, day, hr, minute, sec, micro): self.assertEqual(timestamp.year, yr) self.assertEqual(timestamp.month, mon) self.assertEqual(timestamp.day, day) self.assertEqual(timestamp.hour, hr) self.assertEqual(timestamp.minute, minute) self.assertEqual(timestamp.second, sec) self.assertEqual(timestamp.microsecond, micro) def _do_test(self, time_str, yr, mon, day, hr, minute, sec, micro, shift): DAY_SECONDS = 24 * 60 * 60 timestamp = timeutils.parse_isotime(time_str) self._instaneous(timestamp, yr, mon, day, hr, minute, sec, micro) offset = timestamp.tzinfo.utcoffset(None) self.assertEqual(offset.seconds + offset.days * DAY_SECONDS, shift) def test_zulu(self): time_str = '2012-02-14T20:53:07Z' self._do_test(time_str, 2012, 2, 14, 20, 53, 7, 0, 0) def test_zulu_micros(self): time_str = '2012-02-14T20:53:07.123Z' self._do_test(time_str, 2012, 2, 14, 20, 53, 7, 123000, 0) def test_offset_east(self): time_str = '2012-02-14T20:53:07+04:30' offset = 4.5 * 60 * 60 self._do_test(time_str, 2012, 2, 14, 20, 53, 7, 0, offset) def test_offset_east_micros(self): time_str = '2012-02-14T20:53:07.42+04:30' offset = 4.5 * 60 * 60 self._do_test(time_str, 2012, 2, 14, 20, 53, 7, 420000, offset) def test_offset_west(self): time_str = '2012-02-14T20:53:07-05:30' offset = -5.5 * 60 * 60 self._do_test(time_str, 2012, 2, 14, 20, 53, 7, 0, offset) def 
test_offset_west_micros(self): time_str = '2012-02-14T20:53:07.654321-05:30' offset = -5.5 * 60 * 60 self._do_test(time_str, 2012, 2, 14, 20, 53, 7, 654321, offset) def test_compare(self): zulu = timeutils.parse_isotime('2012-02-14T20:53:07') east = timeutils.parse_isotime('2012-02-14T20:53:07-01:00') west = timeutils.parse_isotime('2012-02-14T20:53:07+01:00') self.assertGreater(east, west) self.assertGreater(east, zulu) self.assertGreater(zulu, west) def test_compare_micros(self): zulu = timeutils.parse_isotime('2012-02-14T20:53:07.6544') east = timeutils.parse_isotime('2012-02-14T19:53:07.654321-01:00') west = timeutils.parse_isotime('2012-02-14T21:53:07.655+01:00') self.assertLess(east, west) self.assertLess(east, zulu) self.assertLess(zulu, west) def test_zulu_roundtrip(self): time_str = '2012-02-14T20:53:07Z' zulu = timeutils.parse_isotime(time_str) self.assertEqual(zulu.tzinfo, iso8601.iso8601.UTC) self.assertEqual(timeutils.isotime(zulu), time_str) def test_east_roundtrip(self): time_str = '2012-02-14T20:53:07-07:00' east = timeutils.parse_isotime(time_str) self.assertEqual(east.tzinfo.tzname(None), '-07:00') self.assertEqual(timeutils.isotime(east), time_str) def test_west_roundtrip(self): time_str = '2012-02-14T20:53:07+11:30' west = timeutils.parse_isotime(time_str) self.assertEqual(west.tzinfo.tzname(None), '+11:30') self.assertEqual(timeutils.isotime(west), time_str) def test_now_roundtrip(self): time_str = timeutils.isotime() now = timeutils.parse_isotime(time_str) self.assertEqual(now.tzinfo, iso8601.iso8601.UTC) self.assertEqual(timeutils.isotime(now), time_str) def test_zulu_normalize(self): time_str = '2012-02-14T20:53:07Z' zulu = timeutils.parse_isotime(time_str) normed = timeutils.normalize_time(zulu) self._instaneous(normed, 2012, 2, 14, 20, 53, 7, 0) def test_east_normalize(self): time_str = '2012-02-14T20:53:07-07:00' east = timeutils.parse_isotime(time_str) normed = timeutils.normalize_time(east) self._instaneous(normed, 2012, 2, 15, 3, 53, 
7, 0) def test_west_normalize(self): time_str = '2012-02-14T20:53:07+21:00' west = timeutils.parse_isotime(time_str) normed = timeutils.normalize_time(west) self._instaneous(normed, 2012, 2, 13, 23, 53, 7, 0) def test_normalize_aware_to_naive(self): dt = datetime.datetime(2011, 2, 14, 20, 53, 7) time_str = '2011-02-14T20:53:07+21:00' aware = timeutils.parse_isotime(time_str) naive = timeutils.normalize_time(aware) self.assertLess(naive, dt) def test_normalize_zulu_aware_to_naive(self): dt = datetime.datetime(2011, 2, 14, 20, 53, 7) time_str = '2011-02-14T19:53:07Z' aware = timeutils.parse_isotime(time_str) naive = timeutils.normalize_time(aware) self.assertLess(naive, dt) def test_normalize_naive(self): dt = datetime.datetime(2011, 2, 14, 20, 53, 7) dtn = datetime.datetime(2011, 2, 14, 19, 53, 7) naive = timeutils.normalize_time(dtn) self.assertLess(naive, dt) glance-16.0.0/glance/tests/unit/common/test_rpc.py0000666000175100017510000003063413245511421022120 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime from oslo_serialization import jsonutils from oslo_utils import encodeutils import routes import six from six.moves import http_client as http import webob from glance.common import exception from glance.common import rpc from glance.common import wsgi from glance.tests.unit import base from glance.tests import utils as test_utils class FakeResource(object): """ Fake resource defining some methods that will be called later by the api. """ def get_images(self, context, keyword=None): return keyword def count_images(self, context, images): return len(images) def get_all_images(self, context): return False def raise_value_error(self, context): raise ValueError("Yep, Just like that!") def raise_weird_error(self, context): class WeirdError(Exception): pass raise WeirdError("Weirdness") def create_api(): deserializer = rpc.RPCJSONDeserializer() serializer = rpc.RPCJSONSerializer() controller = rpc.Controller() controller.register(FakeResource()) res = wsgi.Resource(controller, deserializer, serializer) mapper = routes.Mapper() mapper.connect("/rpc", controller=res, conditions=dict(method=["POST"]), action="__call__") return test_utils.FakeAuthMiddleware(wsgi.Router(mapper), is_admin=True) class TestRPCController(base.IsolatedUnitTest): def setUp(self): super(TestRPCController, self).setUp() self.res = FakeResource() self.controller = rpc.Controller() self.controller.register(self.res) def test_register(self): res = FakeResource() controller = rpc.Controller() controller.register(res) self.assertIn("get_images", controller._registered) self.assertIn("get_all_images", controller._registered) def test_reigster_filtered(self): res = FakeResource() controller = rpc.Controller() controller.register(res, filtered=["get_all_images"]) self.assertIn("get_all_images", controller._registered) def test_reigster_excluded(self): res = FakeResource() controller = rpc.Controller() controller.register(res, excluded=["get_all_images"]) self.assertIn("get_images", 
controller._registered) def test_reigster_refiner(self): res = FakeResource() controller = rpc.Controller() # Not callable self.assertRaises(TypeError, controller.register, res, refiner="get_all_images") # Filter returns False controller.register(res, refiner=lambda x: False) self.assertNotIn("get_images", controller._registered) self.assertNotIn("get_images", controller._registered) # Filter returns True controller.register(res, refiner=lambda x: True) self.assertIn("get_images", controller._registered) self.assertIn("get_images", controller._registered) def test_request(self): api = create_api() req = webob.Request.blank('/rpc') req.method = 'POST' req.body = jsonutils.dump_as_bytes([ { "command": "get_images", "kwargs": {"keyword": 1} } ]) res = req.get_response(api) returned = jsonutils.loads(res.body) self.assertIsInstance(returned, list) self.assertEqual(1, returned[0]) def test_request_exc(self): api = create_api() req = webob.Request.blank('/rpc') req.method = 'POST' req.body = jsonutils.dump_as_bytes([ { "command": "get_all_images", "kwargs": {"keyword": 1} } ]) # Sending non-accepted keyword # to get_all_images method res = req.get_response(api) returned = jsonutils.loads(res.body) self.assertIn("_error", returned[0]) def test_rpc_errors(self): api = create_api() req = webob.Request.blank('/rpc') req.method = 'POST' req.content_type = 'application/json' # Body is not a list, it should fail req.body = jsonutils.dump_as_bytes({}) res = req.get_response(api) self.assertEqual(http.BAD_REQUEST, res.status_int) # cmd is not dict, it should fail. req.body = jsonutils.dump_as_bytes([None]) res = req.get_response(api) self.assertEqual(http.BAD_REQUEST, res.status_int) # No command key, it should fail. req.body = jsonutils.dump_as_bytes([{}]) res = req.get_response(api) self.assertEqual(http.BAD_REQUEST, res.status_int) # kwargs not dict, it should fail. 
req.body = jsonutils.dump_as_bytes([{"command": "test", "kwargs": 2}]) res = req.get_response(api) self.assertEqual(http.BAD_REQUEST, res.status_int) # Command does not exist, it should fail. req.body = jsonutils.dump_as_bytes([{"command": "test"}]) res = req.get_response(api) self.assertEqual(http.NOT_FOUND, res.status_int) def test_rpc_exception_propagation(self): api = create_api() req = webob.Request.blank('/rpc') req.method = 'POST' req.content_type = 'application/json' req.body = jsonutils.dump_as_bytes([{"command": "raise_value_error"}]) res = req.get_response(api) self.assertEqual(http.OK, res.status_int) returned = jsonutils.loads(res.body)[0] err_cls = 'builtins.ValueError' if six.PY3 else 'exceptions.ValueError' self.assertEqual(err_cls, returned['_error']['cls']) req.body = jsonutils.dump_as_bytes([{"command": "raise_weird_error"}]) res = req.get_response(api) self.assertEqual(http.OK, res.status_int) returned = jsonutils.loads(res.body)[0] self.assertEqual('glance.common.exception.RPCError', returned['_error']['cls']) class TestRPCClient(base.IsolatedUnitTest): def setUp(self): super(TestRPCClient, self).setUp() self.api = create_api() self.client = rpc.RPCClient(host="http://127.0.0.1:9191") self.client._do_request = self.fake_request def fake_request(self, method, url, body, headers): req = webob.Request.blank(url.path) body = encodeutils.to_utf8(body) req.body = body req.method = method webob_res = req.get_response(self.api) return test_utils.FakeHTTPResponse(status=webob_res.status_int, headers=webob_res.headers, data=webob_res.body) def test_method_proxy(self): proxy = self.client.some_method self.assertIn("method_proxy", str(proxy)) def test_bulk_request(self): commands = [{"command": "get_images", 'kwargs': {'keyword': True}}, {"command": "get_all_images"}] res = self.client.bulk_request(commands) self.assertEqual(2, len(res)) self.assertTrue(res[0]) self.assertFalse(res[1]) def test_exception_raise(self): try: self.client.raise_value_error() 
self.fail("Exception not raised") except ValueError as exc: self.assertEqual("Yep, Just like that!", str(exc)) def test_rpc_exception(self): try: self.client.raise_weird_error() self.fail("Exception not raised") except exception.RPCError: pass def test_non_str_or_dict_response(self): rst = self.client.count_images(images=[1, 2, 3, 4]) self.assertEqual(4, rst) self.assertIsInstance(rst, int) class TestRPCJSONSerializer(test_utils.BaseTestCase): def test_to_json(self): fixture = {"key": "value"} expected = b'{"key": "value"}' actual = rpc.RPCJSONSerializer().to_json(fixture) self.assertEqual(expected, actual) def test_to_json_with_date_format_value(self): fixture = {"date": datetime.datetime(1900, 3, 8, 2)} expected = {"date": {"_value": "1900-03-08T02:00:00", "_type": "datetime"}} actual = rpc.RPCJSONSerializer().to_json(fixture) actual = jsonutils.loads(actual) for k in expected['date']: self.assertEqual(expected['date'][k], actual['date'][k]) def test_to_json_with_more_deep_format(self): fixture = {"is_public": True, "name": [{"name1": "test"}]} expected = {"is_public": True, "name": [{"name1": "test"}]} actual = rpc.RPCJSONSerializer().to_json(fixture) actual = wsgi.JSONResponseSerializer().to_json(fixture) actual = jsonutils.loads(actual) for k in expected: self.assertEqual(expected[k], actual[k]) def test_default(self): fixture = {"key": "value"} response = webob.Response() rpc.RPCJSONSerializer().default(response, fixture) self.assertEqual(http.OK, response.status_int) content_types = [h for h in response.headerlist if h[0] == 'Content-Type'] self.assertEqual(1, len(content_types)) self.assertEqual('application/json', response.content_type) self.assertEqual(b'{"key": "value"}', response.body) class TestRPCJSONDeserializer(test_utils.BaseTestCase): def test_has_body_no_content_length(self): request = wsgi.Request.blank('/') request.method = 'POST' request.body = b'asdf' request.headers.pop('Content-Length') 
self.assertFalse(rpc.RPCJSONDeserializer().has_body(request)) def test_has_body_zero_content_length(self): request = wsgi.Request.blank('/') request.method = 'POST' request.body = b'asdf' request.headers['Content-Length'] = 0 self.assertFalse(rpc.RPCJSONDeserializer().has_body(request)) def test_has_body_has_content_length(self): request = wsgi.Request.blank('/') request.method = 'POST' request.body = b'asdf' self.assertIn('Content-Length', request.headers) self.assertTrue(rpc.RPCJSONDeserializer().has_body(request)) def test_no_body_no_content_length(self): request = wsgi.Request.blank('/') self.assertFalse(rpc.RPCJSONDeserializer().has_body(request)) def test_from_json(self): fixture = '{"key": "value"}' expected = {"key": "value"} actual = rpc.RPCJSONDeserializer().from_json(fixture) self.assertEqual(expected, actual) def test_from_json_malformed(self): fixture = 'kjasdklfjsklajf' self.assertRaises(webob.exc.HTTPBadRequest, rpc.RPCJSONDeserializer().from_json, fixture) def test_default_no_body(self): request = wsgi.Request.blank('/') actual = rpc.RPCJSONDeserializer().default(request) expected = {} self.assertEqual(expected, actual) def test_default_with_body(self): request = wsgi.Request.blank('/') request.method = 'POST' request.body = b'{"key": "value"}' actual = rpc.RPCJSONDeserializer().default(request) expected = {"body": {"key": "value"}} self.assertEqual(expected, actual) def test_has_body_has_transfer_encoding(self): request = wsgi.Request.blank('/') request.method = 'POST' request.body = b'fake_body' request.headers['transfer-encoding'] = '' self.assertIn('transfer-encoding', request.headers) self.assertTrue(rpc.RPCJSONDeserializer().has_body(request)) def test_to_json_with_date_format_value(self): fixture = ('{"date": {"_value": "1900-03-08T02:00:00.000000",' '"_type": "datetime"}}') expected = {"date": datetime.datetime(1900, 3, 8, 2)} actual = rpc.RPCJSONDeserializer().from_json(fixture) self.assertEqual(expected, actual) 
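The RPCJSONSerializer/RPCJSONDeserializer tests above revolve around one convention: datetimes cross the wire as `{"_value": ..., "_type": "datetime"}` tagged objects so they survive a JSON round trip. A stdlib-only sketch of that round trip (the helper names are illustrative, not Glance's API):

```python
import datetime
import json

def to_json(obj):
    # Tag datetime values on the way out, since JSON has no native type.
    def default(o):
        if isinstance(o, datetime.datetime):
            return {"_value": o.isoformat(), "_type": "datetime"}
        raise TypeError(o)
    return json.dumps(obj, default=default)

def from_json(text):
    # Rebuild tagged datetimes on the way back in.
    def object_hook(d):
        if d.get("_type") == "datetime":
            return datetime.datetime.fromisoformat(d["_value"])
        return d
    return json.loads(text, object_hook=object_hook)

fixture = {"date": datetime.datetime(1900, 3, 8, 2)}
assert from_json(to_json(fixture)) == fixture
```

Ordinary values pass through untouched, which is why `test_to_json_with_more_deep_format` above sees nested dicts and lists unchanged.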
glance-16.0.0/glance/tests/unit/common/test_utils.py0000666000175100017510000004504413245511421022475 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2015 Mirantis, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import tempfile import six import webob from glance.common import exception from glance.common import utils from glance.tests import utils as test_utils class TestUtils(test_utils.BaseTestCase): """Test routines in glance.utils""" def test_cooperative_reader(self): """Ensure cooperative reader class accesses all bytes of file""" BYTES = 1024 bytes_read = 0 with tempfile.TemporaryFile('w+') as tmp_fd: tmp_fd.write('*' * BYTES) tmp_fd.seek(0) for chunk in utils.CooperativeReader(tmp_fd): bytes_read += len(chunk) self.assertEqual(BYTES, bytes_read) bytes_read = 0 with tempfile.TemporaryFile('w+') as tmp_fd: tmp_fd.write('*' * BYTES) tmp_fd.seek(0) reader = utils.CooperativeReader(tmp_fd) byte = reader.read(1) while len(byte) != 0: bytes_read += 1 byte = reader.read(1) self.assertEqual(BYTES, bytes_read) def test_cooperative_reader_of_iterator(self): """Ensure cooperative reader supports iterator backends too""" data = b'abcdefgh' data_list = [data[i:i + 1] * 3 for i in range(len(data))] reader = utils.CooperativeReader(data_list) chunks = [] while True: chunks.append(reader.read(3)) if chunks[-1] == b'': break meat = b''.join(chunks) self.assertEqual(b'aaabbbcccdddeeefffggghhh', meat) def 
test_cooperative_reader_of_iterator_stop_iteration_err(self): """Ensure cooperative reader supports iterator backends too""" reader = utils.CooperativeReader([l * 3 for l in '']) chunks = [] while True: chunks.append(reader.read(3)) if chunks[-1] == b'': break meat = b''.join(chunks) self.assertEqual(b'', meat) def _create_generator(self, chunk_size, max_iterations): chars = b'abc' iteration = 0 while True: index = iteration % len(chars) chunk = chars[index:index + 1] * chunk_size yield chunk iteration += 1 if iteration >= max_iterations: raise StopIteration() def _test_reader_chunked(self, chunk_size, read_size, max_iterations=5): generator = self._create_generator(chunk_size, max_iterations) reader = utils.CooperativeReader(generator) result = bytearray() while True: data = reader.read(read_size) if len(data) == 0: break self.assertLessEqual(len(data), read_size) result += data expected = (b'a' * chunk_size + b'b' * chunk_size + b'c' * chunk_size + b'a' * chunk_size + b'b' * chunk_size) self.assertEqual(expected, bytes(result)) def test_cooperative_reader_preserves_size_chunk_less_then_read(self): self._test_reader_chunked(43, 101) def test_cooperative_reader_preserves_size_chunk_equals_read(self): self._test_reader_chunked(1024, 1024) def test_cooperative_reader_preserves_size_chunk_more_then_read(self): chunk_size = 16 * 1024 * 1024 # 16 Mb, as in remote http source read_size = 8 * 1024 # 8k, as in httplib self._test_reader_chunked(chunk_size, read_size) def test_limiting_reader(self): """Ensure limiting reader class accesses all bytes of file""" BYTES = 1024 bytes_read = 0 data = six.StringIO("*" * BYTES) for chunk in utils.LimitingReader(data, BYTES): bytes_read += len(chunk) self.assertEqual(BYTES, bytes_read) bytes_read = 0 data = six.StringIO("*" * BYTES) reader = utils.LimitingReader(data, BYTES) byte = reader.read(1) while len(byte) != 0: bytes_read += 1 byte = reader.read(1) self.assertEqual(BYTES, bytes_read) def test_limiting_reader_fails(self): 
"""Ensure limiting reader class throws exceptions if limit exceeded""" BYTES = 1024 def _consume_all_iter(): bytes_read = 0 data = six.StringIO("*" * BYTES) for chunk in utils.LimitingReader(data, BYTES - 1): bytes_read += len(chunk) self.assertRaises(exception.ImageSizeLimitExceeded, _consume_all_iter) def _consume_all_read(): bytes_read = 0 data = six.StringIO("*" * BYTES) reader = utils.LimitingReader(data, BYTES - 1) byte = reader.read(1) while len(byte) != 0: bytes_read += 1 byte = reader.read(1) self.assertRaises(exception.ImageSizeLimitExceeded, _consume_all_read) def test_get_meta_from_headers(self): resp = webob.Response() resp.headers = {"x-image-meta-name": 'test', 'x-image-meta-virtual-size': 80} result = utils.get_image_meta_from_headers(resp) self.assertEqual({'name': 'test', 'properties': {}, 'virtual_size': 80}, result) def test_get_meta_from_headers_none_virtual_size(self): resp = webob.Response() resp.headers = {"x-image-meta-name": 'test', 'x-image-meta-virtual-size': 'None'} result = utils.get_image_meta_from_headers(resp) self.assertEqual({'name': 'test', 'properties': {}, 'virtual_size': None}, result) def test_get_meta_from_headers_bad_headers(self): resp = webob.Response() resp.headers = {"x-image-meta-bad": 'test'} self.assertRaises(webob.exc.HTTPBadRequest, utils.get_image_meta_from_headers, resp) resp.headers = {"x-image-meta-": 'test'} self.assertRaises(webob.exc.HTTPBadRequest, utils.get_image_meta_from_headers, resp) resp.headers = {"x-image-meta-*": 'test'} self.assertRaises(webob.exc.HTTPBadRequest, utils.get_image_meta_from_headers, resp) def test_image_meta(self): image_meta = {'x-image-meta-size': 'test'} image_meta_properties = {'properties': {'test': "test"}} actual = utils.image_meta_to_http_headers(image_meta) actual_test2 = utils.image_meta_to_http_headers( image_meta_properties) self.assertEqual({'x-image-meta-x-image-meta-size': u'test'}, actual) self.assertEqual({'x-image-meta-property-test': u'test'}, actual_test2) def 
test_create_mashup_dict_with_different_core_custom_properties(self): image_meta = { 'id': 'test-123', 'name': 'fake_image', 'status': 'active', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 'is_public': 'public', 'deleted': True, 'updated_at': '', 'properties': {'test_key': 'test_1234'}, } mashup_dict = utils.create_mashup_dict(image_meta) self.assertNotIn('properties', mashup_dict) self.assertEqual(image_meta['properties']['test_key'], mashup_dict['test_key']) def test_create_mashup_dict_with_same_core_custom_properties(self): image_meta = { 'id': 'test-123', 'name': 'fake_image', 'status': 'active', 'created_at': '', 'min_disk': '10G', 'min_ram': '1024M', 'protected': False, 'locations': '', 'checksum': 'c1234', 'owner': '', 'disk_format': 'raw', 'container_format': 'bare', 'size': '123456789', 'virtual_size': '123456789', 'is_public': 'public', 'deleted': True, 'updated_at': '', 'properties': {'min_ram': '2048M'}, } mashup_dict = utils.create_mashup_dict(image_meta) self.assertNotIn('properties', mashup_dict) self.assertNotEqual(image_meta['properties']['min_ram'], mashup_dict['min_ram']) self.assertEqual(image_meta['min_ram'], mashup_dict['min_ram']) def test_mutating(self): class FakeContext(object): def __init__(self): self.read_only = False class Fake(object): def __init__(self): self.context = FakeContext() def fake_function(req, context): return 'test passed' req = webob.Request.blank('/some_request') result = utils.mutating(fake_function) self.assertEqual("test passed", result(req, Fake())) def test_validate_key_cert_key(self): self.config(digest_algorithm='sha256') var_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../', 'var')) keyfile = os.path.join(var_dir, 'privatekey.key') certfile = os.path.join(var_dir, 'certificate.crt') 
utils.validate_key_cert(keyfile, certfile) def test_validate_key_cert_no_private_key(self): with tempfile.NamedTemporaryFile('w+') as tmpf: self.assertRaises(RuntimeError, utils.validate_key_cert, "/not/a/file", tmpf.name) def test_validate_key_cert_cert_cant_read(self): with tempfile.NamedTemporaryFile('w+') as keyf: with tempfile.NamedTemporaryFile('w+') as certf: os.chmod(certf.name, 0) self.assertRaises(RuntimeError, utils.validate_key_cert, keyf.name, certf.name) def test_validate_key_cert_key_cant_read(self): with tempfile.NamedTemporaryFile('w+') as keyf: with tempfile.NamedTemporaryFile('w+') as certf: os.chmod(keyf.name, 0) self.assertRaises(RuntimeError, utils.validate_key_cert, keyf.name, certf.name) def test_invalid_digest_algorithm(self): self.config(digest_algorithm='fake_algorithm') var_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../', 'var')) keyfile = os.path.join(var_dir, 'privatekey.key') certfile = os.path.join(var_dir, 'certificate.crt') self.assertRaises(ValueError, utils.validate_key_cert, keyfile, certfile) def test_valid_hostname(self): valid_inputs = ['localhost', 'glance04-a' 'G', '528491'] for input_str in valid_inputs: self.assertTrue(utils.is_valid_hostname(input_str)) def test_valid_hostname_fail(self): invalid_inputs = ['localhost.localdomain', '192.168.0.1', u'\u2603', 'glance02.stack42.local'] for input_str in invalid_inputs: self.assertFalse(utils.is_valid_hostname(input_str)) def test_valid_fqdn(self): valid_inputs = ['localhost.localdomain', 'glance02.stack42.local' 'glance04-a.stack47.local', 'img83.glance.xn--penstack-r74e.org'] for input_str in valid_inputs: self.assertTrue(utils.is_valid_fqdn(input_str)) def test_valid_fqdn_fail(self): invalid_inputs = ['localhost', '192.168.0.1', '999.88.77.6', u'\u2603.local', 'glance02.stack42'] for input_str in invalid_inputs: self.assertFalse(utils.is_valid_fqdn(input_str)) def test_valid_host_port_string(self): valid_pairs = ['10.11.12.13:80', '172.17.17.1:65535', 
'[fe80::a:b:c:d]:9990', 'localhost:9990', 'localhost.localdomain:9990', 'glance02.stack42.local:1234', 'glance04-a.stack47.local:1234', 'img83.glance.xn--penstack-r74e.org:13080'] for pair_str in valid_pairs: host, port = utils.parse_valid_host_port(pair_str) escaped = pair_str.startswith('[') expected_host = '%s%s%s' % ('[' if escaped else '', host, ']' if escaped else '') self.assertTrue(pair_str.startswith(expected_host)) self.assertGreater(port, 0) expected_pair = '%s:%d' % (expected_host, port) self.assertEqual(expected_pair, pair_str) def test_valid_host_port_string_fail(self): invalid_pairs = ['', '10.11.12.13', '172.17.17.1:99999', '290.12.52.80:5673', 'absurd inputs happen', u'\u2601', u'\u2603:8080', 'fe80::1', '[fe80::2]', ':5673', '[fe80::a:b:c:d]9990', 'fe80:a:b:c:d:e:f:1:2:3:4', 'fe80:a:b:c:d:e:f:g', 'fe80::1:8080', '[fe80:a:b:c:d:e:f:g]:9090', '[a:b:s:u:r:d]:fe80'] for pair in invalid_pairs: self.assertRaises(ValueError, utils.parse_valid_host_port, pair) class SplitFilterOpTestCase(test_utils.BaseTestCase): def test_less_than_operator(self): expr = 'lt:bar' returned = utils.split_filter_op(expr) self.assertEqual(('lt', 'bar'), returned) def test_less_than_equal_operator(self): expr = 'lte:bar' returned = utils.split_filter_op(expr) self.assertEqual(('lte', 'bar'), returned) def test_greater_than_operator(self): expr = 'gt:bar' returned = utils.split_filter_op(expr) self.assertEqual(('gt', 'bar'), returned) def test_greater_than_equal_operator(self): expr = 'gte:bar' returned = utils.split_filter_op(expr) self.assertEqual(('gte', 'bar'), returned) def test_not_equal_operator(self): expr = 'neq:bar' returned = utils.split_filter_op(expr) self.assertEqual(('neq', 'bar'), returned) def test_equal_operator(self): expr = 'eq:bar' returned = utils.split_filter_op(expr) self.assertEqual(('eq', 'bar'), returned) def test_in_operator(self): expr = 'in:bar' returned = utils.split_filter_op(expr) self.assertEqual(('in', 'bar'), returned) def 
test_split_filter_value_for_quotes(self): expr = '\"fake\\\"name\",fakename,\"fake,name\"' returned = utils.split_filter_value_for_quotes(expr) list_values = ['fake\\"name', 'fakename', 'fake,name'] self.assertEqual(list_values, returned) def test_validate_quotes(self): expr = '\"aaa\\\"aa\",bb,\"cc\"' returned = utils.validate_quotes(expr) self.assertIsNone(returned) invalid_expr = ['\"aa', 'ss\"', 'aa\"bb\"cc', '\"aa\"\"bb\"'] for expr in invalid_expr: self.assertRaises(exception.InvalidParameterValue, utils.validate_quotes, expr) def test_default_operator(self): expr = 'bar' returned = utils.split_filter_op(expr) self.assertEqual(('eq', expr), returned) def test_default_operator_with_datetime(self): expr = '2015-08-27T09:49:58Z' returned = utils.split_filter_op(expr) self.assertEqual(('eq', expr), returned) def test_operator_with_datetime(self): expr = 'lt:2015-08-27T09:49:58Z' returned = utils.split_filter_op(expr) self.assertEqual(('lt', '2015-08-27T09:49:58Z'), returned) class EvaluateFilterOpTestCase(test_utils.BaseTestCase): def test_less_than_operator(self): self.assertTrue(utils.evaluate_filter_op(9, 'lt', 10)) self.assertFalse(utils.evaluate_filter_op(10, 'lt', 10)) self.assertFalse(utils.evaluate_filter_op(11, 'lt', 10)) def test_less_than_equal_operator(self): self.assertTrue(utils.evaluate_filter_op(9, 'lte', 10)) self.assertTrue(utils.evaluate_filter_op(10, 'lte', 10)) self.assertFalse(utils.evaluate_filter_op(11, 'lte', 10)) def test_greater_than_operator(self): self.assertFalse(utils.evaluate_filter_op(9, 'gt', 10)) self.assertFalse(utils.evaluate_filter_op(10, 'gt', 10)) self.assertTrue(utils.evaluate_filter_op(11, 'gt', 10)) def test_greater_than_equal_operator(self): self.assertFalse(utils.evaluate_filter_op(9, 'gte', 10)) self.assertTrue(utils.evaluate_filter_op(10, 'gte', 10)) self.assertTrue(utils.evaluate_filter_op(11, 'gte', 10)) def test_not_equal_operator(self): self.assertTrue(utils.evaluate_filter_op(9, 'neq', 10)) 
self.assertFalse(utils.evaluate_filter_op(10, 'neq', 10)) self.assertTrue(utils.evaluate_filter_op(11, 'neq', 10)) def test_equal_operator(self): self.assertFalse(utils.evaluate_filter_op(9, 'eq', 10)) self.assertTrue(utils.evaluate_filter_op(10, 'eq', 10)) self.assertFalse(utils.evaluate_filter_op(11, 'eq', 10)) def test_invalid_operator(self): self.assertRaises(exception.InvalidFilterOperatorValue, utils.evaluate_filter_op, '10', 'bar', '8') glance-16.0.0/glance/tests/unit/common/test_exception.py0000666000175100017510000000412213245511421023323 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
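The SplitFilterOp and EvaluateFilterOp tests above exercise two helpers: one that splits an optional `op:` prefix off a filter expression (defaulting to equality, so bare datetimes like `2015-08-27T09:49:58Z` are not mistaken for operators), and one that applies the named comparison, rejecting unknown operators. A minimal sketch under those assumptions (the names and the `ValueError` are illustrative; the Glance helper raises `InvalidFilterOperatorValue`):

```python
import operator

OP_NAMES = ('lt', 'lte', 'gt', 'gte', 'eq', 'neq', 'in')
COMPARATORS = {'lt': operator.lt, 'lte': operator.le,
               'gt': operator.gt, 'gte': operator.ge,
               'eq': operator.eq, 'neq': operator.ne}

def split_filter_op(expression):
    # 'lt:bar' -> ('lt', 'bar'); anything without a known prefix,
    # including bare datetimes whose colons are not operators, is 'eq'.
    prefix, sep, rest = expression.partition(':')
    if sep and prefix in OP_NAMES:
        return prefix, rest
    return 'eq', expression

def evaluate(value, op, threshold):
    # Apply the named comparison; unknown operators are an error.
    try:
        return COMPARATORS[op](value, threshold)
    except KeyError:
        raise ValueError('unsupported filter operator %r' % op)

assert split_filter_op('lt:2015-08-27T09:49:58Z') == ('lt', '2015-08-27T09:49:58Z')
assert split_filter_op('2015-08-27T09:49:58Z') == ('eq', '2015-08-27T09:49:58Z')
assert evaluate(9, 'lt', 10) and not evaluate(11, 'lte', 10)
```

Splitting on the *first* colon only is the key design point: it is what lets datetime values containing colons fall through to the default-equality branch.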
from oslo_utils import encodeutils
import six
from six.moves import http_client as http

from glance.common import exception
from glance.tests import utils as test_utils


class GlanceExceptionTestCase(test_utils.BaseTestCase):

    def test_default_error_msg(self):
        class FakeGlanceException(exception.GlanceException):
            message = "default message"

        exc = FakeGlanceException()
        self.assertEqual('default message',
                         encodeutils.exception_to_unicode(exc))

    def test_specified_error_msg(self):
        msg = exception.GlanceException('test')
        self.assertIn('test', encodeutils.exception_to_unicode(msg))

    def test_default_error_msg_with_kwargs(self):
        class FakeGlanceException(exception.GlanceException):
            message = "default message: %(code)s"

        exc = FakeGlanceException(code=int(http.INTERNAL_SERVER_ERROR))
        self.assertEqual("default message: 500",
                         encodeutils.exception_to_unicode(exc))

    def test_specified_error_msg_with_kwargs(self):
        msg = exception.GlanceException('test: %(code)s',
                                        code=int(http.INTERNAL_SERVER_ERROR))
        self.assertIn('test: 500', encodeutils.exception_to_unicode(msg))

    def test_non_unicode_error_msg(self):
        exc = exception.GlanceException(str('test'))
        self.assertIsInstance(encodeutils.exception_to_unicode(exc),
                              six.text_type)

glance-16.0.0/glance/tests/unit/common/__init__.py
glance-16.0.0/glance/tests/unit/common/test_property_utils.py
# Copyright 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range from glance.api import policy from glance.common import exception from glance.common import property_utils import glance.context from glance.tests.unit import base CONFIG_SECTIONS = [ '^x_owner_.*', 'spl_create_prop', 'spl_read_prop', 'spl_read_only_prop', 'spl_update_prop', 'spl_update_only_prop', 'spl_delete_prop', 'spl_delete_empty_prop', '^x_all_permitted.*', '^x_none_permitted.*', 'x_none_read', 'x_none_update', 'x_none_delete', 'x_case_insensitive', 'x_foo_matcher', 'x_foo_*', '.*' ] def create_context(policy, roles=None): if roles is None: roles = [] return glance.context.RequestContext(roles=roles, policy_enforcer=policy) class TestPropertyRulesWithRoles(base.IsolatedUnitTest): def setUp(self): super(TestPropertyRulesWithRoles, self).setUp() self.set_property_protections() self.policy = policy.Enforcer() def test_is_property_protections_enabled_true(self): self.config(property_protection_file="property-protections.conf") self.assertTrue(property_utils.is_property_protection_enabled()) def test_is_property_protections_enabled_false(self): self.config(property_protection_file=None) self.assertFalse(property_utils.is_property_protection_enabled()) def test_property_protection_file_doesnt_exist(self): self.config(property_protection_file='fake-file.conf') self.assertRaises(exception.InvalidPropertyProtectionConfiguration, property_utils.PropertyRules) def test_property_protection_with_mutually_exclusive_rule(self): exclusive_rules = {'.*': {'create': ['@', '!'], 'read': ['fake-role'], 'update': ['fake-role'], 'delete': ['fake-role']}} self.set_property_protection_rules(exclusive_rules) self.assertRaises(exception.InvalidPropertyProtectionConfiguration, property_utils.PropertyRules) def test_property_protection_with_malformed_rule(self): malformed_rules = {'^[0-9)': 
{'create': ['fake-role'],
                            'read': ['fake-role'],
                            'update': ['fake-role'],
                            'delete': ['fake-role']}}
        self.set_property_protection_rules(malformed_rules)
        self.assertRaises(exception.InvalidPropertyProtectionConfiguration,
                          property_utils.PropertyRules)

    def test_property_protection_with_missing_operation(self):
        rules_with_missing_operation = {'^[0-9]': {'create': ['fake-role'],
                                                   'update': ['fake-role'],
                                                   'delete': ['fake-role']}}
        self.set_property_protection_rules(rules_with_missing_operation)
        self.assertRaises(exception.InvalidPropertyProtectionConfiguration,
                          property_utils.PropertyRules)

    def test_property_protection_with_misspelt_operation(self):
        rules_with_misspelt_operation = {'^[0-9]': {'create': ['fake-role'],
                                                    'rade': ['fake-role'],
                                                    'update': ['fake-role'],
                                                    'delete': ['fake-role']}}
        self.set_property_protection_rules(rules_with_misspelt_operation)
        self.assertRaises(exception.InvalidPropertyProtectionConfiguration,
                          property_utils.PropertyRules)

    def test_property_protection_with_whitespace(self):
        rules_whitespace = {
            '^test_prop.*': {
                'create': ['member ,fake-role'],
                'read': ['fake-role, member'],
                'update': ['fake-role, member'],
                'delete': ['fake-role, member']
            }
        }
        self.set_property_protection_rules(rules_whitespace)
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'test_prop_1', 'read', create_context(self.policy, ['member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'test_prop_1', 'read', create_context(self.policy, ['fake-role'])))

    def test_check_property_rules_invalid_action(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertFalse(self.rules_checker.check_property_rules(
            'test_prop', 'hall', create_context(self.policy, ['admin'])))

    def test_check_property_rules_read_permitted_admin_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertTrue(self.rules_checker.check_property_rules(
            'test_prop', 'read', create_context(self.policy, ['admin'])))

    def test_check_property_rules_read_permitted_specific_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_owner_prop', 'read', create_context(self.policy, ['member'])))

    def test_check_property_rules_read_unpermitted_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertFalse(self.rules_checker.check_property_rules(
            'test_prop', 'read', create_context(self.policy, ['member'])))

    def test_check_property_rules_create_permitted_admin_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertTrue(self.rules_checker.check_property_rules(
            'test_prop', 'create', create_context(self.policy, ['admin'])))

    def test_check_property_rules_create_permitted_specific_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_owner_prop', 'create', create_context(self.policy, ['member'])))

    def test_check_property_rules_create_unpermitted_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertFalse(self.rules_checker.check_property_rules(
            'test_prop', 'create', create_context(self.policy, ['member'])))

    def test_check_property_rules_update_permitted_admin_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertTrue(self.rules_checker.check_property_rules(
            'test_prop', 'update', create_context(self.policy, ['admin'])))

    def test_check_property_rules_update_permitted_specific_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_owner_prop', 'update', create_context(self.policy, ['member'])))

    def test_check_property_rules_update_unpermitted_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertFalse(self.rules_checker.check_property_rules(
            'test_prop', 'update', create_context(self.policy, ['member'])))

    def test_check_property_rules_delete_permitted_admin_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertTrue(self.rules_checker.check_property_rules(
            'test_prop', 'delete', create_context(self.policy, ['admin'])))

    def test_check_property_rules_delete_permitted_specific_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_owner_prop', 'delete', create_context(self.policy, ['member'])))

    def test_check_property_rules_delete_unpermitted_role(self):
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertFalse(self.rules_checker.check_property_rules(
            'test_prop', 'delete', create_context(self.policy, ['member'])))

    def test_property_config_loaded_in_order(self):
        """
        Verify the order of loaded config sections matches that from the
        configuration file
        """
        self.rules_checker = property_utils.PropertyRules(self.policy)
        self.assertEqual(CONFIG_SECTIONS, property_utils.CONFIG.sections())

    def test_property_rules_loaded_in_order(self):
        """
        Verify rules are iterable in the same order as read from the
        config file
        """
        self.rules_checker = property_utils.PropertyRules(self.policy)
        for i in range(len(property_utils.CONFIG.sections())):
            self.assertEqual(property_utils.CONFIG.sections()[i],
                             self.rules_checker.rules[i][0].pattern)

    def test_check_property_rules_create_all_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_all_permitted', 'create', create_context(self.policy, [''])))

    def test_check_property_rules_read_all_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_all_permitted', 'read', create_context(self.policy, [''])))

    def test_check_property_rules_update_all_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_all_permitted', 'update', create_context(self.policy, [''])))

    def test_check_property_rules_delete_all_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_all_permitted', 'delete', create_context(self.policy, [''])))

    def test_check_property_rules_create_none_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_permitted', 'create', create_context(self.policy, [''])))

    def test_check_property_rules_read_none_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_permitted', 'read', create_context(self.policy, [''])))

    def test_check_property_rules_update_none_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_permitted', 'update', create_context(self.policy, [''])))

    def test_check_property_rules_delete_none_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_permitted', 'delete', create_context(self.policy, [''])))

    def test_check_property_rules_read_none(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_read', 'create',
            create_context(self.policy, ['admin', 'member'])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_read', 'read', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_read', 'update', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_read', 'delete', create_context(self.policy, [''])))

    def test_check_property_rules_update_none(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_update', 'create',
            create_context(self.policy, ['admin', 'member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_update', 'read',
            create_context(self.policy, ['admin', 'member'])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_update', 'update', create_context(self.policy, [''])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_update', 'delete',
            create_context(self.policy, ['admin', 'member'])))

    def test_check_property_rules_delete_none(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_delete', 'create',
            create_context(self.policy, ['admin', 'member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_delete', 'read',
            create_context(self.policy, ['admin', 'member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_delete', 'update',
            create_context(self.policy, ['admin', 'member'])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_delete', 'delete', create_context(self.policy, [''])))

    def test_check_return_first_match(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_foo_matcher', 'create', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_foo_matcher', 'read', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_foo_matcher', 'update', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_foo_matcher', 'delete', create_context(self.policy, [''])))

    def test_check_case_insensitive_property_rules(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_case_insensitive', 'create',
            create_context(self.policy, ['member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_case_insensitive', 'read',
            create_context(self.policy, ['member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_case_insensitive', 'update',
            create_context(self.policy, ['member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_case_insensitive', 'delete',
            create_context(self.policy, ['member'])))


class TestPropertyRulesWithPolicies(base.IsolatedUnitTest):

    def setUp(self):
        super(TestPropertyRulesWithPolicies, self).setUp()
        self.set_property_protections(use_policies=True)
        self.policy = policy.Enforcer()
        self.rules_checker = property_utils.PropertyRules(self.policy)

    def test_check_property_rules_create_permitted_specific_policy(self):
        self.assertTrue(self.rules_checker.check_property_rules(
            'spl_creator_policy', 'create',
            create_context(self.policy, ['spl_role'])))

    def test_check_property_rules_create_unpermitted_policy(self):
        self.assertFalse(self.rules_checker.check_property_rules(
            'spl_creator_policy', 'create',
            create_context(self.policy, ['fake-role'])))

    def test_check_property_rules_read_permitted_specific_policy(self):
        self.assertTrue(self.rules_checker.check_property_rules(
            'spl_creator_policy', 'read',
            create_context(self.policy, ['spl_role'])))

    def test_check_property_rules_read_unpermitted_policy(self):
        self.assertFalse(self.rules_checker.check_property_rules(
            'spl_creator_policy', 'read',
            create_context(self.policy, ['fake-role'])))

    def test_check_property_rules_update_permitted_specific_policy(self):
        self.assertTrue(self.rules_checker.check_property_rules(
            'spl_creator_policy', 'update',
            create_context(self.policy, ['admin'])))

    def test_check_property_rules_update_unpermitted_policy(self):
        self.assertFalse(self.rules_checker.check_property_rules(
            'spl_creator_policy', 'update',
            create_context(self.policy, ['fake-role'])))

    def test_check_property_rules_delete_permitted_specific_policy(self):
        self.assertTrue(self.rules_checker.check_property_rules(
            'spl_creator_policy', 'delete',
            create_context(self.policy, ['admin'])))

    def test_check_property_rules_delete_unpermitted_policy(self):
        self.assertFalse(self.rules_checker.check_property_rules(
            'spl_creator_policy', 'delete',
            create_context(self.policy, ['fake-role'])))

    def test_property_protection_with_malformed_rule(self):
        malformed_rules = {'^[0-9)': {'create': ['fake-policy'],
                                      'read': ['fake-policy'],
                                      'update': ['fake-policy'],
                                      'delete': ['fake-policy']}}
        self.set_property_protection_rules(malformed_rules)
        self.assertRaises(exception.InvalidPropertyProtectionConfiguration,
                          property_utils.PropertyRules)

    def test_property_protection_with_multiple_policies(self):
        malformed_rules = {'^x_.*': {'create': ['fake-policy, another_pol'],
                                     'read': ['fake-policy'],
                                     'update': ['fake-policy'],
                                     'delete': ['fake-policy']}}
        self.set_property_protection_rules(malformed_rules)
        self.assertRaises(exception.InvalidPropertyProtectionConfiguration,
                          property_utils.PropertyRules)

    def test_check_property_rules_create_all_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_all_permitted', 'create', create_context(self.policy, [''])))

    def test_check_property_rules_read_all_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_all_permitted', 'read', create_context(self.policy, [''])))

    def test_check_property_rules_update_all_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_all_permitted', 'update', create_context(self.policy, [''])))

    def test_check_property_rules_delete_all_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_all_permitted', 'delete', create_context(self.policy, [''])))

    def test_check_property_rules_create_none_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_permitted', 'create', create_context(self.policy, [''])))

    def test_check_property_rules_read_none_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_permitted', 'read', create_context(self.policy, [''])))

    def test_check_property_rules_update_none_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_permitted', 'update', create_context(self.policy, [''])))

    def test_check_property_rules_delete_none_permitted(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_permitted', 'delete', create_context(self.policy, [''])))

    def test_check_property_rules_read_none(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_read', 'create',
            create_context(self.policy, ['admin', 'member'])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_read', 'read', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_read', 'update', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_read', 'delete', create_context(self.policy, [''])))

    def test_check_property_rules_update_none(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_update', 'create',
            create_context(self.policy, ['admin', 'member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_update', 'read',
            create_context(self.policy, ['admin', 'member'])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_update', 'update', create_context(self.policy, [''])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_update', 'delete',
            create_context(self.policy, ['admin', 'member'])))

    def test_check_property_rules_delete_none(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_delete', 'create',
            create_context(self.policy, ['admin', 'member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_delete', 'read',
            create_context(self.policy, ['admin', 'member'])))
        self.assertTrue(self.rules_checker.check_property_rules(
            'x_none_delete', 'update',
            create_context(self.policy, ['admin', 'member'])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_none_delete', 'delete', create_context(self.policy, [''])))

    def test_check_return_first_match(self):
        self.rules_checker = property_utils.PropertyRules()
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_foo_matcher', 'create', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_foo_matcher', 'read', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_foo_matcher', 'update', create_context(self.policy, [''])))
        self.assertFalse(self.rules_checker.check_property_rules(
            'x_foo_matcher', 'delete', create_context(self.policy, [''])))

glance-16.0.0/glance/tests/unit/common/scripts/image_import/__init__.py

glance-16.0.0/glance/tests/unit/common/scripts/image_import/test_main.py

# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from six.moves import urllib

import glance.common.exception as exception
from glance.common.scripts.image_import import main as image_import_script
from glance.common.scripts import utils
from glance.common import store_utils
import glance.tests.utils as test_utils


class TestImageImport(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImageImport, self).setUp()

    def test_run(self):
        with mock.patch.object(image_import_script,
                               '_execute') as mock_execute:
            task_id = mock.ANY
            context = mock.ANY
            task_repo = mock.ANY
            image_repo = mock.ANY
            image_factory = mock.ANY
            image_import_script.run(task_id, context, task_repo, image_repo,
                                    image_factory)
            mock_execute.assert_called_once_with(task_id, task_repo,
                                                 image_repo, image_factory)

    def test_import_image(self):
        image_id = mock.ANY
        image = mock.Mock(image_id=image_id)
        image_repo = mock.Mock()
        image_repo.get.return_value = image
        image_factory = mock.ANY
        task_input = mock.Mock(image_properties=mock.ANY)
        uri = mock.ANY
        with mock.patch.object(image_import_script,
                               'create_image') as mock_create_image:
            with mock.patch.object(image_import_script,
                                   'set_image_data') as mock_set_img_data:
                mock_create_image.return_value = image
                self.assertEqual(
                    image_id,
                    image_import_script.import_image(image_repo,
                                                     image_factory,
                                                     task_input, None, uri))
                # Check image is in saving state before image_repo.save called
                self.assertEqual('saving', image.status)
                self.assertTrue(image_repo.save.called)
                mock_set_img_data.assert_called_once_with(image, uri, None)
                self.assertTrue(image_repo.get.called)
                self.assertTrue(image_repo.save.called)

    def test_create_image(self):
        image = mock.ANY
        image_repo = mock.Mock()
        image_factory = mock.Mock()
        image_factory.new_image.return_value = image

        # Note: include some base properties to ensure no error while
        # attempting to verify them
        image_properties = {'disk_format': 'foo',
                            'id': 'bar'}

        self.assertEqual(image,
                         image_import_script.create_image(image_repo,
                                                          image_factory,
                                                          image_properties,
                                                          None))

    @mock.patch.object(utils, 'get_image_data_iter')
    def test_set_image_data_http(self, mock_image_iter):
        uri = 'http://www.example.com'
        image = mock.Mock()
        mock_image_iter.return_value = test_utils.FakeHTTPResponse()
        self.assertIsNone(image_import_script.set_image_data(image,
                                                             uri,
                                                             None))

    def test_set_image_data_http_error(self):
        uri = 'blahhttp://www.example.com'
        image = mock.Mock()
        self.assertRaises(urllib.error.URLError,
                          image_import_script.set_image_data,
                          image, uri, None)

    @mock.patch.object(image_import_script, 'create_image')
    @mock.patch.object(image_import_script, 'set_image_data')
    @mock.patch.object(store_utils, 'delete_image_location_from_backend')
    def test_import_image_failed_with_expired_token(
            self, mock_delete_data, mock_set_img_data, mock_create_image):
        image_id = mock.ANY
        locations = ['location']
        image = mock.Mock(image_id=image_id, locations=locations)
        image_repo = mock.Mock()
        image_repo.get.side_effect = [image, exception.NotAuthenticated]
        image_factory = mock.ANY
        task_input = mock.Mock(image_properties=mock.ANY)
        uri = mock.ANY

        mock_create_image.return_value = image
        self.assertRaises(exception.NotAuthenticated,
                          image_import_script.import_image,
                          image_repo, image_factory,
                          task_input, None, uri)
        self.assertEqual(1, mock_set_img_data.call_count)
        mock_delete_data.assert_called_once_with(
            mock_create_image().context, image_id, 'location')
glance-16.0.0/glance/tests/unit/common/scripts/__init__.py

glance-16.0.0/glance/tests/unit/common/scripts/test_scripts_utils.py

# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from six.moves import urllib

from glance.common import exception
from glance.common.scripts import utils as script_utils
import glance.tests.utils as test_utils


class TestScriptsUtils(test_utils.BaseTestCase):
    def setUp(self):
        super(TestScriptsUtils, self).setUp()

    def test_get_task(self):
        task = mock.ANY
        task_repo = mock.Mock(return_value=task)
        task_id = mock.ANY
        self.assertEqual(task, script_utils.get_task(task_repo, task_id))

    def test_unpack_task_input(self):
        task_input = {"import_from": "foo",
                      "import_from_format": "bar",
                      "image_properties": "baz"}
        task = mock.Mock(task_input=task_input)
        self.assertEqual(task_input, script_utils.unpack_task_input(task))

    def test_unpack_task_input_error(self):
        task_input1 = {"import_from_format": "bar",
                       "image_properties": "baz"}
        task_input2 = {"import_from": "foo", "image_properties": "baz"}
        task_input3 = {"import_from": "foo", "import_from_format": "bar"}
        task1 = mock.Mock(task_input=task_input1)
        task2 = mock.Mock(task_input=task_input2)
        task3 = mock.Mock(task_input=task_input3)
        self.assertRaises(exception.Invalid,
                          script_utils.unpack_task_input, task1)
        self.assertRaises(exception.Invalid,
                          script_utils.unpack_task_input, task2)
        self.assertRaises(exception.Invalid,
                          script_utils.unpack_task_input, task3)

    def test_set_base_image_properties(self):
        properties = {}
        script_utils.set_base_image_properties(properties)
        self.assertIn('disk_format', properties)
        self.assertIn('container_format', properties)
        self.assertEqual('qcow2', properties['disk_format'])
        self.assertEqual('bare', properties['container_format'])

    def test_set_base_image_properties_none(self):
        properties = None
        script_utils.set_base_image_properties(properties)
        self.assertIsNone(properties)

    def test_set_base_image_properties_not_empty(self):
        properties = {'disk_format': 'vmdk', 'container_format': 'bare'}
        script_utils.set_base_image_properties(properties)
        self.assertIn('disk_format', properties)
        self.assertIn('container_format', properties)
        self.assertEqual('vmdk', properties.get('disk_format'))
        self.assertEqual('bare', properties.get('container_format'))

    def test_validate_location_http(self):
        location = 'http://example.com'
        self.assertEqual(location,
                         script_utils.validate_location_uri(location))

    def test_validate_location_https(self):
        location = 'https://example.com'
        self.assertEqual(location,
                         script_utils.validate_location_uri(location))

    def test_validate_location_none_error(self):
        self.assertRaises(exception.BadStoreUri,
                          script_utils.validate_location_uri, '')

    def test_validate_location_file_location_error(self):
        self.assertRaises(exception.BadStoreUri,
                          script_utils.validate_location_uri, "file:///tmp")
        self.assertRaises(exception.BadStoreUri,
                          script_utils.validate_location_uri,
                          "filesystem:///tmp")

    def test_validate_location_unsupported_error(self):
        location = 'swift'
        self.assertRaises(urllib.error.URLError,
                          script_utils.validate_location_uri, location)
        location = 'swift+http'
        self.assertRaises(urllib.error.URLError,
                          script_utils.validate_location_uri, location)
        location = 'swift+https'
        self.assertRaises(urllib.error.URLError,
                          script_utils.validate_location_uri, location)
        location = 'swift+config'
        self.assertRaises(urllib.error.URLError,
                          script_utils.validate_location_uri, location)
        location = 'vsphere'
        self.assertRaises(urllib.error.URLError,
                          script_utils.validate_location_uri, location)
        location = 'sheepdog://'
        self.assertRaises(urllib.error.URLError,
                          script_utils.validate_location_uri, location)
        location = 'rbd://'
        self.assertRaises(urllib.error.URLError,
                          script_utils.validate_location_uri, location)
        location = 'cinder://'
        self.assertRaises(urllib.error.URLError,
                          script_utils.validate_location_uri, location)

glance-16.0.0/glance/tests/unit/common/test_client.py

# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from mox3 import mox
from six.moves import http_client
import testtools

from glance.common import auth
from glance.common import client
from glance.tests import utils


class TestClient(testtools.TestCase):

    def setUp(self):
        super(TestClient, self).setUp()
        self.mock = mox.Mox()
        self.mock.StubOutWithMock(http_client.HTTPConnection, 'request')
        self.mock.StubOutWithMock(http_client.HTTPConnection, 'getresponse')

        self.endpoint = 'example.com'
        self.client = client.BaseClient(self.endpoint, port=9191,
                                        auth_token=u'abc123')

    def tearDown(self):
        super(TestClient, self).tearDown()
        self.mock.UnsetStubs()

    def test_make_auth_plugin(self):
        creds = {'strategy': 'keystone'}
        insecure = False
        configure_via_auth = True

        self.mock.StubOutWithMock(auth, 'get_plugin_from_strategy')
        auth.get_plugin_from_strategy('keystone', creds, insecure,
                                      configure_via_auth)
        self.mock.ReplayAll()

        self.client.make_auth_plugin(creds, insecure)
        self.mock.VerifyAll()

    def test_http_encoding_headers(self):
        http_client.HTTPConnection.request(
            mox.IgnoreArg(),
            mox.IgnoreArg(),
            mox.IgnoreArg(),
            mox.IgnoreArg())

        # Lets fake the response
        # returned by http_client
        fake = utils.FakeHTTPResponse(data=b"Ok")
        http_client.HTTPConnection.getresponse().AndReturn(fake)
        self.mock.ReplayAll()

        headers = {"test": u'ni\xf1o'}
        resp = self.client.do_request('GET', '/v1/images/detail',
                                      headers=headers)
        self.assertEqual(fake, resp)

    def test_http_encoding_params(self):
        http_client.HTTPConnection.request(
            mox.IgnoreArg(),
            mox.IgnoreArg(),
            mox.IgnoreArg(),
            mox.IgnoreArg())

        # Lets fake the response
        # returned by http_client
        fake = utils.FakeHTTPResponse(data=b"Ok")
        http_client.HTTPConnection.getresponse().AndReturn(fake)
        self.mock.ReplayAll()

        params = {"test": u'ni\xf1o'}
        resp = self.client.do_request('GET', '/v1/images/detail',
                                      params=params)
        self.assertEqual(fake, resp)

glance-16.0.0/glance/tests/unit/common/test_location_strategy.py

# Copyright 2014 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

import stevedore

from glance.common import location_strategy
from glance.common.location_strategy import location_order
from glance.common.location_strategy import store_type
from glance.tests.unit import base


class TestLocationStrategy(base.IsolatedUnitTest):
    """Test routines in glance.common.location_strategy"""

    def _set_original_strategies(self, original_strategies):
        for name in location_strategy._available_strategies.keys():
            if name not in original_strategies:
                del location_strategy._available_strategies[name]

    def setUp(self):
        super(TestLocationStrategy, self).setUp()
        original_strategies = ['location_order', 'store_type']
        self.addCleanup(self._set_original_strategies, original_strategies)

    def test_load_strategy_modules(self):
        modules = location_strategy._load_strategies()
        # By default we have two built-in strategy modules.
        self.assertEqual(2, len(modules))
        self.assertEqual(set(['location_order', 'store_type']),
                         set(modules.keys()))
        self.assertEqual(location_strategy._available_strategies, modules)

    def test_load_strategy_module_with_deduplicating(self):
        modules = ['module1', 'module2']

        def _fake_stevedore_extension_manager(*args, **kwargs):
            ret = lambda: None
            ret.names = lambda: modules
            return ret

        def _fake_stevedore_driver_manager(*args, **kwargs):
            ret = lambda: None
            ret.driver = lambda: None
            ret.driver.__name__ = kwargs['name']
            # Module 1 and 2 has a same strategy name
            ret.driver.get_strategy_name = lambda: 'module_name'
            ret.driver.init = lambda: None
            return ret

        self.stub = self.stubs.Set(stevedore.extension, "ExtensionManager",
                                   _fake_stevedore_extension_manager)
        self.stub = self.stubs.Set(stevedore.driver, "DriverManager",
                                   _fake_stevedore_driver_manager)
        loaded_modules = location_strategy._load_strategies()
        self.assertEqual(1, len(loaded_modules))
        self.assertIn('module_name', loaded_modules)
        # Skipped module #2, duplicated one.
        self.assertEqual('module1', loaded_modules['module_name'].__name__)

    def test_load_strategy_module_with_init_exception(self):
        modules = ['module_init_exception', 'module_good']

        def _fake_stevedore_extension_manager(*args, **kwargs):
            ret = lambda: None
            ret.names = lambda: modules
            return ret

        def _fake_stevedore_driver_manager(*args, **kwargs):
            if kwargs['name'] == 'module_init_exception':
                raise Exception('strategy module failed to initialize.')
            else:
                ret = lambda: None
                ret.driver = lambda: None
                ret.driver.__name__ = kwargs['name']
                ret.driver.get_strategy_name = lambda: kwargs['name']
                ret.driver.init = lambda: None
                return ret

        self.stub = self.stubs.Set(stevedore.extension, "ExtensionManager",
                                   _fake_stevedore_extension_manager)
        self.stub = self.stubs.Set(stevedore.driver, "DriverManager",
                                   _fake_stevedore_driver_manager)
        loaded_modules = location_strategy._load_strategies()
        self.assertEqual(1, len(loaded_modules))
        self.assertIn('module_good', loaded_modules)
        # Skipped module #1, initialize failed one.
        self.assertEqual('module_good',
                         loaded_modules['module_good'].__name__)

    def test_verify_valid_location_strategy(self):
        for strategy_name in ['location_order', 'store_type']:
            self.config(location_strategy=strategy_name)
            location_strategy.verify_location_strategy()

    def test_get_ordered_locations_with_none_or_empty_locations(self):
        self.assertEqual([], location_strategy.get_ordered_locations(None))
        self.assertEqual([], location_strategy.get_ordered_locations([]))

    def test_get_ordered_locations(self):
        self.config(location_strategy='location_order')
        original_locs = [{'url': 'loc1'}, {'url': 'loc2'}]
        ordered_locs = location_strategy.get_ordered_locations(original_locs)
        # Original location list should remain unchanged
        self.assertNotEqual(id(original_locs), id(ordered_locs))
        self.assertEqual(original_locs, ordered_locs)

    def test_choose_best_location_with_none_or_empty_locations(self):
        self.assertIsNone(location_strategy.choose_best_location(None))
        self.assertIsNone(location_strategy.choose_best_location([]))

    def test_choose_best_location(self):
        self.config(location_strategy='location_order')
        original_locs = [{'url': 'loc1'}, {'url': 'loc2'}]
        best_loc = location_strategy.choose_best_location(original_locs)
        # Deep copy protect original location.
        self.assertNotEqual(id(original_locs), id(best_loc))
        self.assertEqual(original_locs[0], best_loc)


class TestLocationOrderStrategyModule(base.IsolatedUnitTest):
    """Test routines in glance.common.location_strategy.location_order"""

    def test_get_ordered_locations(self):
        original_locs = [{'url': 'loc1'}, {'url': 'loc2'}]
        ordered_locs = location_order.get_ordered_locations(original_locs)
        # The result will ordered by original natural order.
        self.assertEqual(original_locs, ordered_locs)


class TestStoreTypeStrategyModule(base.IsolatedUnitTest):
    """Test routines in glance.common.location_strategy.store_type"""

    def test_get_ordered_locations(self):
        self.config(store_type_preference=[' rbd', 'sheepdog ', ' file',
                                           'swift ', ' http ', 'vmware'],
                    group='store_type_location_strategy')
        locs = [{'url': 'file://image0', 'metadata': {'idx': 3}},
                {'url': 'rbd://image1', 'metadata': {'idx': 0}},
                {'url': 'file://image3', 'metadata': {'idx': 4}},
                {'url': 'swift://image4', 'metadata': {'idx': 6}},
                {'url': 'cinder://image5', 'metadata': {'idx': 9}},
                {'url': 'file://image6', 'metadata': {'idx': 5}},
                {'url': 'rbd://image7', 'metadata': {'idx': 1}},
                {'url': 'vsphere://image9', 'metadata': {'idx': 8}},
                {'url': 'sheepdog://image8', 'metadata': {'idx': 2}}]
        ordered_locs = store_type.get_ordered_locations(copy.deepcopy(locs))
        locs.sort(key=lambda loc: loc['metadata']['idx'])
        # The result will ordered by preferred store type order.
        self.assertEqual(locs, ordered_locs)

    def test_get_ordered_locations_with_invalid_store_name(self):
        self.config(store_type_preference=[' rbd', 'sheepdog ', 'invalid',
                                           'swift ', ' http '],
                    group='store_type_location_strategy')
        locs = [{'url': 'file://image0', 'metadata': {'idx': 4}},
                {'url': 'rbd://image1', 'metadata': {'idx': 0}},
                {'url': 'file://image3', 'metadata': {'idx': 5}},
                {'url': 'swift://image4', 'metadata': {'idx': 3}},
                {'url': 'cinder://image5', 'metadata': {'idx': 6}},
                {'url': 'file://image6', 'metadata': {'idx': 7}},
                {'url': 'rbd://image7', 'metadata': {'idx': 1}},
                {'url': 'sheepdog://image8', 'metadata': {'idx': 2}}]
        ordered_locs = store_type.get_ordered_locations(copy.deepcopy(locs))
        locs.sort(key=lambda loc: loc['metadata']['idx'])
        # The result will ordered by preferred store type order.
        self.assertEqual(locs, ordered_locs)

glance-16.0.0/glance/tests/unit/test_manage.py

# Copyright 2014 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from __future__ import absolute_import

import fixtures
import mock
from six.moves import StringIO

from glance.cmd import manage
from glance.common import exception
from glance.db.sqlalchemy import api as db_api
from glance.db.sqlalchemy import metadata as db_metadata
from glance.tests import utils as test_utils


class TestManageBase(test_utils.BaseTestCase):

    def setUp(self):
        super(TestManageBase, self).setUp()

        def clear_conf():
            manage.CONF.reset()
            manage.CONF.unregister_opt(manage.command_opt)
        clear_conf()
        self.addCleanup(clear_conf)

        self.useFixture(fixtures.MonkeyPatch(
            'oslo_log.log.setup', lambda product_name, version='test': None))

        patcher = mock.patch('glance.db.sqlalchemy.api.get_engine')
        patcher.start()
        self.addCleanup(patcher.stop)

    def _main_test_helper(self, argv, func_name=None, *exp_args, **exp_kwargs):
        self.useFixture(fixtures.MonkeyPatch('sys.argv', argv))
        manage.main()
        func_name.assert_called_once_with(*exp_args, **exp_kwargs)


class TestLegacyManage(TestManageBase):

    @mock.patch.object(manage.DbCommands, 'version')
    def test_legacy_db_version(self, db_upgrade):
        self._main_test_helper(['glance.cmd.manage', 'db_version'],
                               manage.DbCommands.version)

    @mock.patch.object(manage.DbCommands, 'sync')
    def test_legacy_db_sync(self, db_sync):
        self._main_test_helper(['glance.cmd.manage', 'db_sync'],
                               manage.DbCommands.sync, None)

    @mock.patch.object(manage.DbCommands, 'upgrade')
    def test_legacy_db_upgrade(self, db_upgrade):
        self._main_test_helper(['glance.cmd.manage', 'db_upgrade'],
                               manage.DbCommands.upgrade, None)

    @mock.patch.object(manage.DbCommands, 'version_control')
    def test_legacy_db_version_control(self, db_version_control):
        self._main_test_helper(['glance.cmd.manage', 'db_version_control'],
                               manage.DbCommands.version_control, None)

    @mock.patch.object(manage.DbCommands, 'sync')
    def test_legacy_db_sync_version(self, db_sync):
        self._main_test_helper(['glance.cmd.manage', 'db_sync', 'liberty'],
                               manage.DbCommands.sync, 'liberty')

    @mock.patch.object(manage.DbCommands, 'upgrade')
    def test_legacy_db_upgrade_version(self, db_upgrade):
        self._main_test_helper(['glance.cmd.manage', 'db_upgrade', 'liberty'],
                               manage.DbCommands.upgrade, 'liberty')

    @mock.patch.object(manage.DbCommands, 'expand')
    def test_legacy_db_expand(self, db_expand):
        self._main_test_helper(['glance.cmd.manage', 'db_expand'],
                               manage.DbCommands.expand)

    @mock.patch.object(manage.DbCommands, 'migrate')
    def test_legacy_db_migrate(self, db_migrate):
        self._main_test_helper(['glance.cmd.manage', 'db_migrate'],
                               manage.DbCommands.migrate)

    @mock.patch.object(manage.DbCommands, 'contract')
    def test_legacy_db_contract(self, db_contract):
        self._main_test_helper(['glance.cmd.manage', 'db_contract'],
                               manage.DbCommands.contract)

    def test_db_metadefs_unload(self):
        db_metadata.db_unload_metadefs = mock.Mock()
        self._main_test_helper(['glance.cmd.manage', 'db_unload_metadefs'],
                               db_metadata.db_unload_metadefs,
                               db_api.get_engine())

    def test_db_metadefs_load(self):
        db_metadata.db_load_metadefs = mock.Mock()
        self._main_test_helper(['glance.cmd.manage', 'db_load_metadefs'],
                               db_metadata.db_load_metadefs,
                               db_api.get_engine(),
                               None, None, None, None)

    def test_db_metadefs_load_with_specified_path(self):
        db_metadata.db_load_metadefs = mock.Mock()
        self._main_test_helper(['glance.cmd.manage', 'db_load_metadefs',
                                '/mock/'],
                               db_metadata.db_load_metadefs,
                               db_api.get_engine(),
                               '/mock/', None, None, None)

    def test_db_metadefs_load_from_path_merge(self):
        db_metadata.db_load_metadefs = mock.Mock()
        self._main_test_helper(['glance.cmd.manage', 'db_load_metadefs',
                                '/mock/', 'True'],
                               db_metadata.db_load_metadefs,
                               db_api.get_engine(),
                               '/mock/', 'True', None, None)

    def test_db_metadefs_load_from_merge_and_prefer_new(self):
        db_metadata.db_load_metadefs = mock.Mock()
        self._main_test_helper(['glance.cmd.manage', 'db_load_metadefs',
                                '/mock/', 'True', 'True'],
                               db_metadata.db_load_metadefs,
                               db_api.get_engine(),
                               '/mock/', 'True', 'True', None)

    def test_db_metadefs_load_from_merge_and_prefer_new_and_overwrite(self):
        db_metadata.db_load_metadefs = mock.Mock()
        self._main_test_helper(['glance.cmd.manage', 'db_load_metadefs',
                                '/mock/', 'True', 'True', 'True'],
                               db_metadata.db_load_metadefs,
                               db_api.get_engine(),
                               '/mock/', 'True', 'True', 'True')

    def test_db_metadefs_export(self):
        db_metadata.db_export_metadefs = mock.Mock()
        self._main_test_helper(['glance.cmd.manage', 'db_export_metadefs'],
                               db_metadata.db_export_metadefs,
                               db_api.get_engine(),
                               None)

    def test_db_metadefs_export_with_specified_path(self):
        db_metadata.db_export_metadefs = mock.Mock()
        self._main_test_helper(['glance.cmd.manage', 'db_export_metadefs',
                                '/mock/'],
                               db_metadata.db_export_metadefs,
                               db_api.get_engine(),
                               '/mock/')


class TestManage(TestManageBase):

    def setUp(self):
        super(TestManage, self).setUp()
        self.db = manage.DbCommands()
        self.output = StringIO()
        self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output))

    @mock.patch('glance.db.sqlalchemy.api.get_engine')
    @mock.patch(
        'glance.db.sqlalchemy.alembic_migrations.data_migrations.'
'has_pending_migrations') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') def test_db_check_result(self, mock_get_alembic_branch_head, mock_get_current_alembic_heads, mock_has_pending_migrations, get_mock_engine): get_mock_engine.return_value = mock.Mock() engine = get_mock_engine.return_value engine.engine.name = 'postgresql' exit = self.assertRaises(SystemExit, self.db.check) self.assertIn('Rolling upgrades are currently supported only for ' 'MySQL and Sqlite', exit.code) engine = get_mock_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['ocata_contract01'] mock_get_alembic_branch_head.return_value = 'pike_expand01' exit = self.assertRaises(SystemExit, self.db.check) self.assertEqual(3, exit.code) self.assertIn('Your database is not up to date. ' 'Your first step is to run `glance-manage db expand`.', self.output.getvalue()) mock_get_current_alembic_heads.return_value = ['pike_expand01'] mock_get_alembic_branch_head.side_effect = ['pike_expand01', None] mock_has_pending_migrations.return_value = [mock.Mock()] exit = self.assertRaises(SystemExit, self.db.check) self.assertEqual(4, exit.code) self.assertIn('Your database is not up to date. ' 'Your next step is to run `glance-manage db migrate`.', self.output.getvalue()) mock_get_current_alembic_heads.return_value = ['pike_expand01'] mock_get_alembic_branch_head.side_effect = ['pike_expand01', 'pike_contract01'] mock_has_pending_migrations.return_value = None exit = self.assertRaises(SystemExit, self.db.check) self.assertEqual(5, exit.code) self.assertIn('Your database is not up to date. 
' 'Your next step is to run `glance-manage db contract`.', self.output.getvalue()) mock_get_current_alembic_heads.return_value = ['pike_contract01'] mock_get_alembic_branch_head.side_effect = ['pike_expand01', 'pike_contract01'] mock_has_pending_migrations.return_value = None self.assertRaises(SystemExit, self.db.check) self.assertIn('Database is up to date. No upgrades needed.', self.output.getvalue()) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, 'expand') @mock.patch.object(manage.DbCommands, 'migrate') @mock.patch.object(manage.DbCommands, 'contract') def test_sync(self, mock_contract, mock_migrate, mock_expand, mock_get_alembic_branch_head, mock_get_current_alembic_heads): mock_get_current_alembic_heads.return_value = ['ocata_contract01'] mock_get_alembic_branch_head.return_value = ['pike_contract01'] self.db.sync() mock_expand.assert_called_once_with(online_migration=False) mock_migrate.assert_called_once_with(online_migration=False) mock_contract.assert_called_once_with(online_migration=False) self.assertIn('Database is synced successfully.', self.output.getvalue()) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch('glance.db.sqlalchemy.alembic_migrations.' 
'place_database_under_alembic_control') @mock.patch('alembic.command.upgrade') def test_sync_db_is_already_sync(self, mock_upgrade, mock_db_under_alembic_control, mock_get_alembic_branch_head, mock_get_current_alembic_heads): mock_get_current_alembic_heads.return_value = ['pike_contract01'] mock_get_alembic_branch_head.return_value = ['pike_contract01'] self.assertRaises(SystemExit, self.db.sync) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') @mock.patch.object(manage.DbCommands, 'expand') def test_sync_failed_to_sync(self, mock_expand, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['ocata_contract01'] mock_get_alembic_branch_head.side_effect = ['pike_contract01', ''] mock_expand.side_effect = exception.GlanceException exit = self.assertRaises(SystemExit, self.db.sync) self.assertIn('Failed to sync database: ERROR:', exit.code) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') @mock.patch.object(manage.DbCommands, '_sync') def test_expand(self, mock_sync, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.side_effect = ['ocata_contract01', 'pike_expand01'] mock_get_alembic_branch_head.side_effect = ['pike_expand01', 'pike_contract01'] self.db.expand() mock_sync.assert_called_once_with(version='pike_expand01') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 
'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_expand_if_not_expand_head(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['ocata_contract01'] mock_get_alembic_branch_head.return_value = [] exit = self.assertRaises(SystemExit, self.db.expand) self.assertIn('Database expansion failed. Couldn\'t find head ' 'revision of expand branch.', exit.code) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_expand_db_is_already_sync(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['pike_contract01'] mock_get_alembic_branch_head.side_effect = ['pike_expand01', 'pike_contract01'] self.assertRaises(SystemExit, self.db.expand) self.assertIn('Database is up to date. No migrations needed.', self.output.getvalue()) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_expand_already_sync(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['pike_expand01'] mock_get_alembic_branch_head.side_effect = ['pike_expand01', 'pike_contract01'] self.db.expand() self.assertIn('Database expansion is up to date. 
' 'No expansion needed.', self.output.getvalue()) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') @mock.patch.object(manage.DbCommands, '_sync') def test_expand_failed(self, mock_sync, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.side_effect = ['ocata_contract01', 'test'] mock_get_alembic_branch_head.side_effect = ['pike_expand01', 'pike_contract01'] exit = self.assertRaises(SystemExit, self.db.expand) mock_sync.assert_called_once_with(version='pike_expand01') self.assertIn('Database expansion failed. Database expansion should ' 'have brought the database version up to "pike_expand01"' ' revision. But, current revisions are: test ', exit.code) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') @mock.patch.object(manage.DbCommands, '_sync') def test_contract(self, mock_sync, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.side_effect = ['pike_expand01', 'pike_contract01'] mock_get_alembic_branch_head.side_effect = ['pike_contract01', 'pike_expand01'] self.db.contract() mock_sync.assert_called_once_with(version='pike_contract01') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_contract_if_not_contract_head(self, mock_validate_engine, mock_get_alembic_branch_head, 
mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['ocata_contract01'] mock_get_alembic_branch_head.return_value = [] exit = self.assertRaises(SystemExit, self.db.contract) self.assertIn('Database contraction failed. Couldn\'t find head ' 'revision of contract branch.', exit.code) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_contract_db_is_already_sync(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['pike_contract01'] mock_get_alembic_branch_head.side_effect = ['pike_contract01', 'pike_expand01'] self.assertRaises(SystemExit, self.db.contract) self.assertIn('Database is up to date. No migrations needed.', self.output.getvalue()) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_contract_before_expand(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['ocata_contract01'] mock_get_alembic_branch_head.side_effect = ['pike_expand01', 'pike_contract01'] exit = self.assertRaises(SystemExit, self.db.contract) self.assertIn('Database contraction did not run. Database ' 'contraction cannot be run before database expansion. ' 'Run database expansion first using "glance-manage db ' 'expand"', exit.code) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.data_migrations.' 
'has_pending_migrations') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_contract_before_migrate(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_curr_alembic_heads, mock_has_pending_migrations): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_curr_alembic_heads.side_effect = ['pike_expand01'] mock_get_alembic_branch_head.side_effect = ['pike_contract01', 'pike_expand01'] mock_has_pending_migrations.return_value = [mock.Mock()] exit = self.assertRaises(SystemExit, self.db.contract) self.assertIn('Database contraction did not run. Database ' 'contraction cannot be run before data migration is ' 'complete. Run data migration using "glance-manage db ' 'migrate".', exit.code) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.data_migrations.' 'has_pending_migrations') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_migrate(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads, mock_has_pending_migrations): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.side_effect = ['pike_expand01', 'pike_contract01'] mock_get_alembic_branch_head.side_effect = ['pike_contract01', 'pike_expand01'] mock_has_pending_migrations.return_value = None self.db.migrate() self.assertIn('Database migration is up to date. 
' 'No migration needed.', self.output.getvalue()) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_migrate_db_is_already_sync(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['pike_contract01'] mock_get_alembic_branch_head.side_effect = ['pike_contract01', 'pike_expand01'] self.assertRaises(SystemExit, self.db.migrate) self.assertIn('Database is up to date. No migrations needed.', self.output.getvalue()) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_migrate_already_sync(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['ocata_contract01'] mock_get_alembic_branch_head.side_effect = ['pike_contract01', 'pike_expand01'] exit = self.assertRaises(SystemExit, self.db.migrate) self.assertIn('Data migration did not run. Data migration cannot be ' 'run before database expansion. Run database expansion ' 'first using "glance-manage db expand"', exit.code) @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.data_migrations.' 
'has_pending_migrations') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_current_alembic_heads') @mock.patch( 'glance.db.sqlalchemy.alembic_migrations.get_alembic_branch_head') @mock.patch.object(manage.DbCommands, '_validate_engine') def test_migrate_before_expand(self, mock_validate_engine, mock_get_alembic_branch_head, mock_get_current_alembic_heads, mock_has_pending_migrations): engine = mock_validate_engine.return_value engine.engine.name = 'mysql' mock_get_current_alembic_heads.return_value = ['pike_expand01'] mock_get_alembic_branch_head.side_effect = ['pike_contract01', 'pike_expand01'] mock_has_pending_migrations.return_value = None self.db.migrate() self.assertIn('Database migration is up to date. ' 'No migration needed.', self.output.getvalue()) @mock.patch.object(manage.DbCommands, 'version') def test_db_version(self, version): self._main_test_helper(['glance.cmd.manage', 'db', 'version'], manage.DbCommands.version) @mock.patch.object(manage.DbCommands, 'check') def test_db_check(self, check): self._main_test_helper(['glance.cmd.manage', 'db', 'check'], manage.DbCommands.check) @mock.patch.object(manage.DbCommands, 'sync') def test_db_sync(self, sync): self._main_test_helper(['glance.cmd.manage', 'db', 'sync'], manage.DbCommands.sync) @mock.patch.object(manage.DbCommands, 'upgrade') def test_db_upgrade(self, upgrade): self._main_test_helper(['glance.cmd.manage', 'db', 'upgrade'], manage.DbCommands.upgrade) @mock.patch.object(manage.DbCommands, 'version_control') def test_db_version_control(self, version_control): self._main_test_helper(['glance.cmd.manage', 'db', 'version_control'], manage.DbCommands.version_control) @mock.patch.object(manage.DbCommands, 'sync') def test_db_sync_version(self, sync): self._main_test_helper(['glance.cmd.manage', 'db', 'sync', 'liberty'], manage.DbCommands.sync, 'liberty') @mock.patch.object(manage.DbCommands, 'upgrade') def test_db_upgrade_version(self, upgrade): self._main_test_helper(['glance.cmd.manage', 
'db', 'upgrade', 'liberty'], manage.DbCommands.upgrade, 'liberty') @mock.patch.object(manage.DbCommands, 'expand') def test_db_expand(self, expand): self._main_test_helper(['glance.cmd.manage', 'db', 'expand'], manage.DbCommands.expand) @mock.patch.object(manage.DbCommands, 'migrate') def test_db_migrate(self, migrate): self._main_test_helper(['glance.cmd.manage', 'db', 'migrate'], manage.DbCommands.migrate) @mock.patch.object(manage.DbCommands, 'contract') def test_db_contract(self, contract): self._main_test_helper(['glance.cmd.manage', 'db', 'contract'], manage.DbCommands.contract) def test_db_metadefs_unload(self): db_metadata.db_unload_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 'unload_metadefs'], db_metadata.db_unload_metadefs, db_api.get_engine()) def test_db_metadefs_load(self): db_metadata.db_load_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 'load_metadefs'], db_metadata.db_load_metadefs, db_api.get_engine(), None, False, False, False) def test_db_metadefs_load_with_specified_path(self): db_metadata.db_load_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 'load_metadefs', '--path', '/mock/'], db_metadata.db_load_metadefs, db_api.get_engine(), '/mock/', False, False, False) def test_db_metadefs_load_prefer_new_with_path(self): db_metadata.db_load_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 'load_metadefs', '--path', '/mock/', '--merge', '--prefer_new'], db_metadata.db_load_metadefs, db_api.get_engine(), '/mock/', True, True, False) def test_db_metadefs_load_prefer_new(self): db_metadata.db_load_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 'load_metadefs', '--merge', '--prefer_new'], db_metadata.db_load_metadefs, db_api.get_engine(), None, True, True, False) def test_db_metadefs_load_overwrite_existing(self): db_metadata.db_load_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 
'load_metadefs', '--merge', '--overwrite'], db_metadata.db_load_metadefs, db_api.get_engine(), None, True, False, True) def test_db_metadefs_load_prefer_new_and_overwrite_existing(self): db_metadata.db_load_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 'load_metadefs', '--merge', '--prefer_new', '--overwrite'], db_metadata.db_load_metadefs, db_api.get_engine(), None, True, True, True) def test_db_metadefs_load_from_path_overwrite_existing(self): db_metadata.db_load_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 'load_metadefs', '--path', '/mock/', '--merge', '--overwrite'], db_metadata.db_load_metadefs, db_api.get_engine(), '/mock/', True, False, True) def test_db_metadefs_export(self): db_metadata.db_export_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 'export_metadefs'], db_metadata.db_export_metadefs, db_api.get_engine(), None) def test_db_metadefs_export_with_specified_path(self): db_metadata.db_export_metadefs = mock.Mock() self._main_test_helper(['glance.cmd.manage', 'db', 'export_metadefs', '--path', '/mock/'], db_metadata.db_export_metadefs, db_api.get_engine(), '/mock/') glance-16.0.0/glance/tests/unit/test_data_migration_framework.py0000666000175100017510000002070213245511421025076 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
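The `TestManage` and `TestLegacyManage` cases above follow one pattern throughout: patch the alembic helper functions, script their successive return values with `side_effect`, and assert on the `SystemExit` that the command raises. A minimal self-contained sketch of that pattern follows; the `DbCommands` class and its `_current_heads` method here are hypothetical stand-ins written for illustration, not Glance's real `glance.cmd.manage.DbCommands`, and the sketch uses the standard-library `unittest.mock` rather than the `mock` package imported above:

```python
import unittest
from unittest import mock


class DbCommands(object):
    """Hypothetical stand-in for glance.cmd.manage.DbCommands."""

    def _current_heads(self):
        raise NotImplementedError  # patched out in the test below

    def check(self):
        # Exit with a distinct code depending on the reported alembic
        # head, mirroring the check() behaviour exercised above.
        if self._current_heads() == ['ocata_contract01']:
            raise SystemExit(3)  # operator must run `db expand` first
        raise SystemExit('Database is up to date. No upgrades needed.')


class TestCheckPattern(unittest.TestCase):
    @mock.patch.object(DbCommands, '_current_heads')
    def test_check_needs_expand(self, mock_heads):
        # A side_effect list yields one value per call, in order.
        mock_heads.side_effect = [['ocata_contract01']]
        with self.assertRaises(SystemExit) as ctx:
            DbCommands().check()
        self.assertEqual(3, ctx.exception.code)
```

One difference worth noting: the suite above is built on testtools, whose `assertRaises` returns the caught exception (hence `exit = self.assertRaises(SystemExit, self.db.check)`); plain `unittest` needs the context-manager form shown here to reach `ctx.exception.code`.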
import mock

from glance.db.sqlalchemy.alembic_migrations import data_migrations
from glance.tests import utils as test_utils


class TestDataMigrationFramework(test_utils.BaseTestCase):

    @mock.patch('glance.db.sqlalchemy.alembic_migrations.data_migrations'
                '._find_migration_modules')
    def test_has_pending_migrations_no_migrations(self, mock_find):
        mock_find.return_value = None
        self.assertFalse(data_migrations.has_pending_migrations(mock.Mock()))

    @mock.patch('glance.db.sqlalchemy.alembic_migrations.data_migrations'
                '._find_migration_modules')
    def test_has_pending_migrations_one_migration_no_pending(self, mock_find):
        mock_migration1 = mock.Mock()
        mock_migration1.has_migrations.return_value = False
        mock_find.return_value = [mock_migration1]
        self.assertFalse(data_migrations.has_pending_migrations(mock.Mock()))

    @mock.patch('glance.db.sqlalchemy.alembic_migrations.data_migrations'
                '._find_migration_modules')
    def test_has_pending_migrations_one_migration_with_pending(self,
                                                               mock_find):
        mock_migration1 = mock.Mock()
        mock_migration1.has_migrations.return_value = True
        mock_find.return_value = [mock_migration1]
        self.assertTrue(data_migrations.has_pending_migrations(mock.Mock()))

    @mock.patch('glance.db.sqlalchemy.alembic_migrations.data_migrations'
                '._find_migration_modules')
    def test_has_pending_migrations_mult_migration_no_pending(self,
                                                              mock_find):
        mock_migration1 = mock.Mock()
        mock_migration1.has_migrations.return_value = False
        mock_migration2 = mock.Mock()
        mock_migration2.has_migrations.return_value = False
        mock_migration3 = mock.Mock()
        mock_migration3.has_migrations.return_value = False
        mock_find.return_value = [mock_migration1, mock_migration2,
                                  mock_migration3]
        self.assertFalse(data_migrations.has_pending_migrations(mock.Mock()))

    @mock.patch('glance.db.sqlalchemy.alembic_migrations.data_migrations'
                '._find_migration_modules')
    def test_has_pending_migrations_mult_migration_one_pending(self,
                                                               mock_find):
        mock_migration1 = mock.Mock()
        mock_migration1.has_migrations.return_value = False
        mock_migration2 = mock.Mock()
        mock_migration2.has_migrations.return_value = True
        mock_migration3 = mock.Mock()
        mock_migration3.has_migrations.return_value = False
        mock_find.return_value = [mock_migration1, mock_migration2,
                                  mock_migration3]
        self.assertTrue(data_migrations.has_pending_migrations(mock.Mock()))

    @mock.patch('glance.db.sqlalchemy.alembic_migrations.data_migrations'
                '._find_migration_modules')
    def test_has_pending_migrations_mult_migration_some_pending(self,
                                                                mock_find):
        mock_migration1 = mock.Mock()
        mock_migration1.has_migrations.return_value = False
        mock_migration2 = mock.Mock()
        mock_migration2.has_migrations.return_value = True
        mock_migration3 = mock.Mock()
        mock_migration3.has_migrations.return_value = False
        mock_migration4 = mock.Mock()
        mock_migration4.has_migrations.return_value = True
        mock_find.return_value = [mock_migration1, mock_migration2,
                                  mock_migration3, mock_migration4]
        self.assertTrue(data_migrations.has_pending_migrations(mock.Mock()))

    @mock.patch('importlib.import_module')
    @mock.patch('pkgutil.iter_modules')
    def test_find_migrations(self, mock_iter, mock_import):
        def fake_iter_modules(blah):
            yield 'blah', 'zebra01', 'blah'
            yield 'blah', 'zebra02', 'blah'
            yield 'blah', 'yellow01', 'blah'
            yield 'blah', 'xray01', 'blah'
            yield 'blah', 'wrinkle01', 'blah'
        mock_iter.side_effect = fake_iter_modules
        zebra1 = mock.Mock()
        zebra1.has_migrations.return_value = mock.Mock()
        zebra1.migrate.return_value = mock.Mock()
        zebra2 = mock.Mock()
        zebra2.has_migrations.return_value = mock.Mock()
        zebra2.migrate.return_value = mock.Mock()
        fake_imported_modules = [zebra1, zebra2]
        mock_import.side_effect = fake_imported_modules
        actual = data_migrations._find_migration_modules('zebra')
        self.assertEqual(2, len(actual))
        self.assertEqual(fake_imported_modules, actual)

    @mock.patch('pkgutil.iter_modules')
    def test_find_migrations_no_migrations(self, mock_iter):
        def fake_iter_modules(blah):
            yield 'blah', 'zebra01', 'blah'
            yield 'blah', 'yellow01', 'blah'
            yield 'blah', 'xray01', 'blah'
            yield 'blah', 'wrinkle01', 'blah'
            yield 'blah', 'victor01', 'blah'
        mock_iter.side_effect = fake_iter_modules
        actual = data_migrations._find_migration_modules('umbrella')
        self.assertEqual(0, len(actual))
        self.assertEqual([], actual)

    def test_run_migrations(self):
        zebra1 = mock.Mock()
        zebra1.has_migrations.return_value = True
        zebra1.migrate.return_value = 100
        zebra2 = mock.Mock()
        zebra2.has_migrations.return_value = True
        zebra2.migrate.return_value = 50
        migrations = [zebra1, zebra2]
        engine = mock.Mock()
        actual = data_migrations._run_migrations(engine, migrations)
        self.assertEqual(150, actual)
        zebra1.has_migrations.assert_called_once_with(engine)
        zebra1.migrate.assert_called_once_with(engine)
        zebra2.has_migrations.assert_called_once_with(engine)
        zebra2.migrate.assert_called_once_with(engine)

    def test_run_migrations_with_one_pending_migration(self):
        zebra1 = mock.Mock()
        zebra1.has_migrations.return_value = False
        zebra1.migrate.return_value = 0
        zebra2 = mock.Mock()
        zebra2.has_migrations.return_value = True
        zebra2.migrate.return_value = 50
        migrations = [zebra1, zebra2]
        engine = mock.Mock()
        actual = data_migrations._run_migrations(engine, migrations)
        self.assertEqual(50, actual)
        zebra1.has_migrations.assert_called_once_with(engine)
        zebra1.migrate.assert_not_called()
        zebra2.has_migrations.assert_called_once_with(engine)
        zebra2.migrate.assert_called_once_with(engine)

    def test_run_migrations_with_no_migrations(self):
        migrations = []
        actual = data_migrations._run_migrations(mock.Mock(), migrations)
        self.assertEqual(0, actual)

    @mock.patch('glance.db.migration.CURRENT_RELEASE', 'zebra')
    @mock.patch('importlib.import_module')
    @mock.patch('pkgutil.iter_modules')
    def test_migrate(self, mock_iter, mock_import):
        def fake_iter_modules(blah):
            yield 'blah', 'zebra01', 'blah'
            yield 'blah', 'zebra02', 'blah'
            yield 'blah', 'yellow01', 'blah'
            yield 'blah', 'xray01', 'blah'
            yield 'blah', 'xray02', 'blah'
        mock_iter.side_effect = fake_iter_modules
        zebra1 = mock.Mock()
        zebra1.has_migrations.return_value = True
        zebra1.migrate.return_value = 100
        zebra2 = mock.Mock()
        zebra2.has_migrations.return_value = True
        zebra2.migrate.return_value = 50
        fake_imported_modules = [zebra1, zebra2]
        mock_import.side_effect = fake_imported_modules
        engine = mock.Mock()
        actual = data_migrations.migrate(engine, 'zebra')
        self.assertEqual(150, actual)
        zebra1.has_migrations.assert_called_once_with(engine)
        zebra1.migrate.assert_called_once_with(engine)
        zebra2.has_migrations.assert_called_once_with(engine)
        zebra2.migrate.assert_called_once_with(engine)

glance-16.0.0/glance/tests/unit/test_db.py

# Copyright 2012 OpenStack Foundation.
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
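The discovery tests above fake `pkgutil.iter_modules` so that module names carry a release prefix (`zebra01`, `zebra02`, ...), then assert that only matching modules are picked up. The filtering they exercise can be sketched as follows; `find_release_modules` is a hypothetical helper written for illustration, not the real `_find_migration_modules` from `data_migrations`:

```python
import pkgutil
from unittest import mock


def find_release_modules(package_path, release):
    # Keep only modules whose name starts with the release prefix,
    # e.g. 'zebra' matches 'zebra01' and 'zebra02'.
    return [name for _, name, _ in pkgutil.iter_modules(package_path)
            if name.startswith(release)]


# pkgutil.iter_modules yields (module_finder, name, ispkg) tuples, so the
# fake below mirrors the 3-tuples the tests above yield.
with mock.patch('pkgutil.iter_modules') as fake_iter:
    fake_iter.return_value = [('blah', 'zebra01', 'blah'),
                              ('blah', 'zebra02', 'blah'),
                              ('blah', 'yellow01', 'blah')]
    assert find_release_modules(['fake/path'],
                                'zebra') == ['zebra01', 'zebra02']
```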
import datetime
import uuid

import mock
from oslo_config import cfg
from oslo_db import exception as db_exc
from oslo_utils import encodeutils
from oslo_utils import timeutils

from glance.common import crypt
from glance.common import exception
import glance.context
import glance.db
from glance.db.sqlalchemy import api
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils

CONF = cfg.CONF
CONF.import_opt('metadata_encryption_key', 'glance.common.config')


@mock.patch('oslo_utils.importutils.import_module')
class TestDbUtilities(test_utils.BaseTestCase):
    def setUp(self):
        super(TestDbUtilities, self).setUp()
        self.config(data_api='silly pants')
        self.api = mock.Mock()

    def test_get_api_calls_configure_if_present(self, import_module):
        import_module.return_value = self.api
        self.assertEqual(glance.db.get_api(), self.api)
        import_module.assert_called_once_with('silly pants')
        self.api.configure.assert_called_once_with()

    def test_get_api_skips_configure_if_missing(self, import_module):
        import_module.return_value = self.api
        del self.api.configure
        self.assertEqual(glance.db.get_api(), self.api)
        import_module.assert_called_once_with('silly pants')
        self.assertFalse(hasattr(self.api, 'configure'))

    def test_get_api_calls_for_v1_api(self, import_module):
        api = glance.db.get_api(v1_mode=True)
        self.assertNotEqual(api, self.api)
        import_module.assert_called_once_with('glance.db.sqlalchemy.api')
        api.configure.assert_called_once_with()


UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
UUID2 = 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc'
UUID3 = '971ec09a-8067-4bc8-a91f-ae3557f1c4c7'
UUID4 = '6bbe7cc2-eae7-4c0f-b50d-a7160b0c6a86'

TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'
TENANT3 = '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8'
TENANT4 = 'c6c87f25-8a94-47ed-8c83-053c25f42df4'

USER1 = '54492ba0-f4df-4e4e-be62-27f4d76b29cf'

UUID1_LOCATION = 'file:///path/to/image'
UUID1_LOCATION_METADATA = {'key': 'value'}
UUID3_LOCATION = 'http://somehost.com/place'

CHECKSUM = '93264c3edf5972c9f1cb309543d38a5c'
CHCKSUM1 = '43264c3edf4972c9f1cb309543d38a55'


def _db_fixture(id, **kwargs):
    obj = {
        'id': id,
        'name': None,
        'is_public': False,
        'properties': {},
        'checksum': None,
        'owner': None,
        'status': 'queued',
        'tags': [],
        'size': None,
        'locations': [],
        'protected': False,
        'disk_format': None,
        'container_format': None,
        'deleted': False,
        'min_ram': None,
        'min_disk': None,
    }
    if 'visibility' in kwargs:
        obj.pop('is_public')
    obj.update(kwargs)
    return obj


def _db_image_member_fixture(image_id, member_id, **kwargs):
    obj = {
        'image_id': image_id,
        'member': member_id,
    }
    obj.update(kwargs)
    return obj


def _db_task_fixture(task_id, type, status, **kwargs):
    obj = {
        'id': task_id,
        'type': type,
        'status': status,
        'input': None,
        'result': None,
        'owner': None,
        'message': None,
        'deleted': False,
        'expires_at': timeutils.utcnow() + datetime.timedelta(days=365)
    }
    obj.update(kwargs)
    return obj


class TestImageRepo(test_utils.BaseTestCase):
    def setUp(self):
        super(TestImageRepo, self).setUp()
        self.db = unit_test_utils.FakeDB(initialize=False)
        self.context = glance.context.RequestContext(
            user=USER1, tenant=TENANT1)
        self.image_repo = glance.db.ImageRepo(self.context, self.db)
        self.image_factory = glance.domain.ImageFactory()
        self._create_images()
        self._create_image_members()

    def _create_images(self):
        self.images = [
            _db_fixture(UUID1, owner=TENANT1, checksum=CHECKSUM,
                        name='1', size=256,
                        is_public=True, status='active',
                        locations=[{'url': UUID1_LOCATION,
                                    'metadata': UUID1_LOCATION_METADATA,
                                    'status': 'active'}]),
            _db_fixture(UUID2, owner=TENANT1, checksum=CHCKSUM1,
                        name='2', size=512, is_public=False),
            _db_fixture(UUID3, owner=TENANT3, checksum=CHCKSUM1,
                        name='3', size=1024, is_public=True,
                        locations=[{'url': UUID3_LOCATION,
                                    'metadata': {},
                                    'status': 'active'}]),
            _db_fixture(UUID4, owner=TENANT4, name='4', size=2048),
        ]
        [self.db.image_create(None, image) for image in self.images]
        self.db.image_tag_set_all(None, UUID1, ['ping', 'pong'])

    def _create_image_members(self):
        self.image_members = [
            _db_image_member_fixture(UUID2, TENANT2),
            _db_image_member_fixture(UUID2, TENANT3, status='accepted'),
        ]
        [self.db.image_member_create(None, image_member)
            for image_member in self.image_members]

    def test_get(self):
        image = self.image_repo.get(UUID1)
        self.assertEqual(UUID1, image.image_id)
        self.assertEqual('1', image.name)
        self.assertEqual(set(['ping', 'pong']), image.tags)
        self.assertEqual('public', image.visibility)
        self.assertEqual('active', image.status)
        self.assertEqual(256, image.size)
        self.assertEqual(TENANT1, image.owner)

    def test_location_value(self):
        image = self.image_repo.get(UUID3)
        self.assertEqual(UUID3_LOCATION, image.locations[0]['url'])

    def test_location_data_value(self):
        image = self.image_repo.get(UUID1)
        self.assertEqual(UUID1_LOCATION, image.locations[0]['url'])
        self.assertEqual(UUID1_LOCATION_METADATA,
                         image.locations[0]['metadata'])

    def test_location_data_exists(self):
        image = self.image_repo.get(UUID2)
        self.assertEqual([], image.locations)

    def test_get_not_found(self):
        fake_uuid = str(uuid.uuid4())
        exc = self.assertRaises(exception.ImageNotFound,
                                self.image_repo.get, fake_uuid)
        self.assertIn(fake_uuid, encodeutils.exception_to_unicode(exc))

    def test_get_forbidden(self):
        self.assertRaises(exception.NotFound, self.image_repo.get, UUID4)

    def test_list(self):
        images = self.image_repo.list()
        image_ids = set([i.image_id for i in images])
        self.assertEqual(set([UUID1, UUID2, UUID3]), image_ids)

    def _do_test_list_status(self, status, expected):
        self.context = glance.context.RequestContext(
            user=USER1, tenant=TENANT3)
        self.image_repo = glance.db.ImageRepo(self.context, self.db)
        images = self.image_repo.list(member_status=status)
        self.assertEqual(expected, len(images))

    def test_list_status(self):
        self._do_test_list_status(None, 3)

    def test_list_status_pending(self):
        self._do_test_list_status('pending', 2)

    def test_list_status_rejected(self):
        self._do_test_list_status('rejected', 2)

    def
test_list_status_all(self): self._do_test_list_status('all', 3) def test_list_with_marker(self): full_images = self.image_repo.list() full_ids = [i.image_id for i in full_images] marked_images = self.image_repo.list(marker=full_ids[0]) actual_ids = [i.image_id for i in marked_images] self.assertEqual(full_ids[1:], actual_ids) def test_list_with_last_marker(self): images = self.image_repo.list() marked_images = self.image_repo.list(marker=images[-1].image_id) self.assertEqual(0, len(marked_images)) def test_limited_list(self): limited_images = self.image_repo.list(limit=2) self.assertEqual(2, len(limited_images)) def test_list_with_marker_and_limit(self): full_images = self.image_repo.list() full_ids = [i.image_id for i in full_images] marked_images = self.image_repo.list(marker=full_ids[0], limit=1) actual_ids = [i.image_id for i in marked_images] self.assertEqual(full_ids[1:2], actual_ids) def test_list_private_images(self): filters = {'visibility': 'private'} images = self.image_repo.list(filters=filters) self.assertEqual(0, len(images)) def test_list_shared_images(self): filters = {'visibility': 'shared'} images = self.image_repo.list(filters=filters) image_ids = set([i.image_id for i in images]) self.assertEqual(set([UUID2]), image_ids) def test_list_with_checksum_filter_single_image(self): filters = {'checksum': CHECKSUM} images = self.image_repo.list(filters=filters) image_ids = list([i.image_id for i in images]) self.assertEqual(1, len(image_ids)) self.assertEqual([UUID1], image_ids) def test_list_with_checksum_filter_multiple_images(self): filters = {'checksum': CHCKSUM1} images = self.image_repo.list(filters=filters) image_ids = list([i.image_id for i in images]) self.assertEqual(2, len(image_ids)) self.assertIn(UUID2, image_ids) self.assertIn(UUID3, image_ids) def test_list_with_wrong_checksum(self): WRONG_CHKSUM = 'd2fd42f979e1ed1aafadc7eb9354bff839c858cd' filters = {'checksum': WRONG_CHKSUM} images = self.image_repo.list(filters=filters) 
self.assertEqual(0, len(images)) def test_list_with_tags_filter_single_tag(self): filters = {'tags': ['ping']} images = self.image_repo.list(filters=filters) image_ids = list([i.image_id for i in images]) self.assertEqual(1, len(image_ids)) self.assertEqual([UUID1], image_ids) def test_list_with_tags_filter_multiple_tags(self): filters = {'tags': ['ping', 'pong']} images = self.image_repo.list(filters=filters) image_ids = list([i.image_id for i in images]) self.assertEqual(1, len(image_ids)) self.assertEqual([UUID1], image_ids) def test_list_with_tags_filter_multiple_tags_and_nonexistent(self): filters = {'tags': ['ping', 'fake']} images = self.image_repo.list(filters=filters) image_ids = list([i.image_id for i in images]) self.assertEqual(0, len(image_ids)) def test_list_with_wrong_tags(self): filters = {'tags': ['fake']} images = self.image_repo.list(filters=filters) self.assertEqual(0, len(images)) def test_list_public_images(self): filters = {'visibility': 'public'} images = self.image_repo.list(filters=filters) image_ids = set([i.image_id for i in images]) self.assertEqual(set([UUID1, UUID3]), image_ids) def test_sorted_list(self): images = self.image_repo.list(sort_key=['size'], sort_dir=['asc']) image_ids = [i.image_id for i in images] self.assertEqual([UUID1, UUID2, UUID3], image_ids) def test_sorted_list_with_multiple_keys(self): temp_id = 'd80a1a6c-bd1f-41c5-90ee-81afedb1d58d' image = _db_fixture(temp_id, owner=TENANT1, checksum=CHECKSUM, name='1', size=1024, is_public=True, status='active', locations=[{'url': UUID1_LOCATION, 'metadata': UUID1_LOCATION_METADATA, 'status': 'active'}]) self.db.image_create(None, image) images = self.image_repo.list(sort_key=['name', 'size'], sort_dir=['asc']) image_ids = [i.image_id for i in images] self.assertEqual([UUID1, temp_id, UUID2, UUID3], image_ids) images = self.image_repo.list(sort_key=['size', 'name'], sort_dir=['asc']) image_ids = [i.image_id for i in images] self.assertEqual([UUID1, UUID2, temp_id, UUID3], 
image_ids) def test_sorted_list_with_multiple_dirs(self): temp_id = 'd80a1a6c-bd1f-41c5-90ee-81afedb1d58d' image = _db_fixture(temp_id, owner=TENANT1, checksum=CHECKSUM, name='1', size=1024, is_public=True, status='active', locations=[{'url': UUID1_LOCATION, 'metadata': UUID1_LOCATION_METADATA, 'status': 'active'}]) self.db.image_create(None, image) images = self.image_repo.list(sort_key=['name', 'size'], sort_dir=['asc', 'desc']) image_ids = [i.image_id for i in images] self.assertEqual([temp_id, UUID1, UUID2, UUID3], image_ids) images = self.image_repo.list(sort_key=['name', 'size'], sort_dir=['desc', 'asc']) image_ids = [i.image_id for i in images] self.assertEqual([UUID3, UUID2, UUID1, temp_id], image_ids) def test_add_image(self): image = self.image_factory.new_image(name='added image') self.assertEqual(image.updated_at, image.created_at) self.image_repo.add(image) retreived_image = self.image_repo.get(image.image_id) self.assertEqual('added image', retreived_image.name) self.assertEqual(image.updated_at, retreived_image.updated_at) def test_save_image(self): image = self.image_repo.get(UUID1) original_update_time = image.updated_at image.name = 'foo' image.tags = ['king', 'kong'] self.image_repo.save(image) current_update_time = image.updated_at self.assertGreater(current_update_time, original_update_time) image = self.image_repo.get(UUID1) self.assertEqual('foo', image.name) self.assertEqual(set(['king', 'kong']), image.tags) self.assertEqual(current_update_time, image.updated_at) def test_save_image_not_found(self): fake_uuid = str(uuid.uuid4()) image = self.image_repo.get(UUID1) image.image_id = fake_uuid exc = self.assertRaises(exception.ImageNotFound, self.image_repo.save, image) self.assertIn(fake_uuid, encodeutils.exception_to_unicode(exc)) def test_remove_image(self): image = self.image_repo.get(UUID1) previous_update_time = image.updated_at self.image_repo.remove(image) self.assertGreater(image.updated_at, previous_update_time) 
self.assertRaises(exception.ImageNotFound, self.image_repo.get, UUID1) def test_remove_image_not_found(self): fake_uuid = str(uuid.uuid4()) image = self.image_repo.get(UUID1) image.image_id = fake_uuid exc = self.assertRaises( exception.ImageNotFound, self.image_repo.remove, image) self.assertIn(fake_uuid, encodeutils.exception_to_unicode(exc)) class TestEncryptedLocations(test_utils.BaseTestCase): def setUp(self): super(TestEncryptedLocations, self).setUp() self.db = unit_test_utils.FakeDB(initialize=False) self.context = glance.context.RequestContext( user=USER1, tenant=TENANT1) self.image_repo = glance.db.ImageRepo(self.context, self.db) self.image_factory = glance.domain.ImageFactory() self.crypt_key = '0123456789abcdef' self.config(metadata_encryption_key=self.crypt_key) self.foo_bar_location = [{'url': 'foo', 'metadata': {}, 'status': 'active'}, {'url': 'bar', 'metadata': {}, 'status': 'active'}] def test_encrypt_locations_on_add(self): image = self.image_factory.new_image(UUID1) image.locations = self.foo_bar_location self.image_repo.add(image) db_data = self.db.image_get(self.context, UUID1) self.assertNotEqual(db_data['locations'], ['foo', 'bar']) decrypted_locations = [crypt.urlsafe_decrypt(self.crypt_key, l['url']) for l in db_data['locations']] self.assertEqual([l['url'] for l in self.foo_bar_location], decrypted_locations) def test_encrypt_locations_on_save(self): image = self.image_factory.new_image(UUID1) self.image_repo.add(image) image.locations = self.foo_bar_location self.image_repo.save(image) db_data = self.db.image_get(self.context, UUID1) self.assertNotEqual(db_data['locations'], ['foo', 'bar']) decrypted_locations = [crypt.urlsafe_decrypt(self.crypt_key, l['url']) for l in db_data['locations']] self.assertEqual([l['url'] for l in self.foo_bar_location], decrypted_locations) def test_decrypt_locations_on_get(self): url_loc = ['ping', 'pong'] orig_locations = [{'url': l, 'metadata': {}, 'status': 'active'} for l in url_loc] encrypted_locs = 
[crypt.urlsafe_encrypt(self.crypt_key, l) for l in url_loc] encrypted_locations = [{'url': l, 'metadata': {}, 'status': 'active'} for l in encrypted_locs] self.assertNotEqual(encrypted_locations, orig_locations) db_data = _db_fixture(UUID1, owner=TENANT1, locations=encrypted_locations) self.db.image_create(None, db_data) image = self.image_repo.get(UUID1) self.assertIn('id', image.locations[0]) self.assertIn('id', image.locations[1]) image.locations[0].pop('id') image.locations[1].pop('id') self.assertEqual(orig_locations, image.locations) def test_decrypt_locations_on_list(self): url_loc = ['ping', 'pong'] orig_locations = [{'url': l, 'metadata': {}, 'status': 'active'} for l in url_loc] encrypted_locs = [crypt.urlsafe_encrypt(self.crypt_key, l) for l in url_loc] encrypted_locations = [{'url': l, 'metadata': {}, 'status': 'active'} for l in encrypted_locs] self.assertNotEqual(encrypted_locations, orig_locations) db_data = _db_fixture(UUID1, owner=TENANT1, locations=encrypted_locations) self.db.image_create(None, db_data) image = self.image_repo.list()[0] self.assertIn('id', image.locations[0]) self.assertIn('id', image.locations[1]) image.locations[0].pop('id') image.locations[1].pop('id') self.assertEqual(orig_locations, image.locations) class TestImageMemberRepo(test_utils.BaseTestCase): def setUp(self): super(TestImageMemberRepo, self).setUp() self.db = unit_test_utils.FakeDB(initialize=False) self.context = glance.context.RequestContext( user=USER1, tenant=TENANT1) self.image_repo = glance.db.ImageRepo(self.context, self.db) self.image_member_factory = glance.domain.ImageMemberFactory() self._create_images() self._create_image_members() image = self.image_repo.get(UUID1) self.image_member_repo = glance.db.ImageMemberRepo(self.context, self.db, image) def _create_images(self): self.images = [ _db_fixture(UUID1, owner=TENANT1, name='1', size=256, status='active'), _db_fixture(UUID2, owner=TENANT1, name='2', size=512, visibility='shared'), ] 
[self.db.image_create(None, image) for image in self.images] self.db.image_tag_set_all(None, UUID1, ['ping', 'pong']) def _create_image_members(self): self.image_members = [ _db_image_member_fixture(UUID1, TENANT2), _db_image_member_fixture(UUID1, TENANT3), ] [self.db.image_member_create(None, image_member) for image_member in self.image_members] def test_list(self): image_members = self.image_member_repo.list() image_member_ids = set([i.member_id for i in image_members]) self.assertEqual(set([TENANT2, TENANT3]), image_member_ids) def test_list_no_members(self): image = self.image_repo.get(UUID2) self.image_member_repo_uuid2 = glance.db.ImageMemberRepo( self.context, self.db, image) image_members = self.image_member_repo_uuid2.list() image_member_ids = set([i.member_id for i in image_members]) self.assertEqual(set([]), image_member_ids) def test_save_image_member(self): image_member = self.image_member_repo.get(TENANT2) image_member.status = 'accepted' self.image_member_repo.save(image_member) image_member_updated = self.image_member_repo.get(TENANT2) self.assertEqual(image_member.id, image_member_updated.id) self.assertEqual('accepted', image_member_updated.status) def test_add_image_member(self): image = self.image_repo.get(UUID1) image_member = self.image_member_factory.new_image_member(image, TENANT4) self.assertIsNone(image_member.id) self.image_member_repo.add(image_member) retreived_image_member = self.image_member_repo.get(TENANT4) self.assertIsNotNone(retreived_image_member.id) self.assertEqual(image_member.image_id, retreived_image_member.image_id) self.assertEqual(image_member.member_id, retreived_image_member.member_id) self.assertEqual('pending', retreived_image_member.status) def test_add_duplicate_image_member(self): image = self.image_repo.get(UUID1) image_member = self.image_member_factory.new_image_member(image, TENANT4) self.assertIsNone(image_member.id) self.image_member_repo.add(image_member) retreived_image_member = 
self.image_member_repo.get(TENANT4) self.assertIsNotNone(retreived_image_member.id) self.assertEqual(image_member.image_id, retreived_image_member.image_id) self.assertEqual(image_member.member_id, retreived_image_member.member_id) self.assertEqual('pending', retreived_image_member.status) self.assertRaises(exception.Duplicate, self.image_member_repo.add, image_member) def test_get_image_member(self): image = self.image_repo.get(UUID1) image_member = self.image_member_factory.new_image_member(image, TENANT4) self.assertIsNone(image_member.id) self.image_member_repo.add(image_member) member = self.image_member_repo.get(image_member.member_id) self.assertEqual(member.id, image_member.id) self.assertEqual(member.image_id, image_member.image_id) self.assertEqual(member.member_id, image_member.member_id) self.assertEqual('pending', member.status) def test_get_nonexistent_image_member(self): fake_image_member_id = 'fake' self.assertRaises(exception.NotFound, self.image_member_repo.get, fake_image_member_id) def test_remove_image_member(self): image_member = self.image_member_repo.get(TENANT2) self.image_member_repo.remove(image_member) self.assertRaises(exception.NotFound, self.image_member_repo.get, TENANT2) def test_remove_image_member_does_not_exist(self): fake_uuid = str(uuid.uuid4()) image = self.image_repo.get(UUID2) fake_member = glance.domain.ImageMemberFactory().new_image_member( image, TENANT4) fake_member.id = fake_uuid exc = self.assertRaises(exception.NotFound, self.image_member_repo.remove, fake_member) self.assertIn(fake_uuid, encodeutils.exception_to_unicode(exc)) class TestTaskRepo(test_utils.BaseTestCase): def setUp(self): super(TestTaskRepo, self).setUp() self.db = unit_test_utils.FakeDB(initialize=False) self.context = glance.context.RequestContext(user=USER1, tenant=TENANT1) self.task_repo = glance.db.TaskRepo(self.context, self.db) self.task_factory = glance.domain.TaskFactory() self.fake_task_input = ('{"import_from": ' 
'"swift://cloud.foo/account/mycontainer/path"' ',"import_from_format": "qcow2"}') self._create_tasks() def _create_tasks(self): self.tasks = [ _db_task_fixture(UUID1, type='import', status='pending', input=self.fake_task_input, result='', owner=TENANT1, message='', ), _db_task_fixture(UUID2, type='import', status='processing', input=self.fake_task_input, result='', owner=TENANT1, message='', ), _db_task_fixture(UUID3, type='import', status='failure', input=self.fake_task_input, result='', owner=TENANT1, message='', ), _db_task_fixture(UUID4, type='import', status='success', input=self.fake_task_input, result='', owner=TENANT2, message='', ), ] [self.db.task_create(None, task) for task in self.tasks] def test_get(self): task = self.task_repo.get(UUID1) self.assertEqual(task.task_id, UUID1) self.assertEqual('import', task.type) self.assertEqual('pending', task.status) self.assertEqual(task.task_input, self.fake_task_input) self.assertEqual('', task.result) self.assertEqual('', task.message) self.assertEqual(task.owner, TENANT1) def test_get_not_found(self): self.assertRaises(exception.NotFound, self.task_repo.get, str(uuid.uuid4())) def test_get_forbidden(self): self.assertRaises(exception.NotFound, self.task_repo.get, UUID4) def test_list(self): tasks = self.task_repo.list() task_ids = set([i.task_id for i in tasks]) self.assertEqual(set([UUID1, UUID2, UUID3]), task_ids) def test_list_with_type(self): filters = {'type': 'import'} tasks = self.task_repo.list(filters=filters) task_ids = set([i.task_id for i in tasks]) self.assertEqual(set([UUID1, UUID2, UUID3]), task_ids) def test_list_with_status(self): filters = {'status': 'failure'} tasks = self.task_repo.list(filters=filters) task_ids = set([i.task_id for i in tasks]) self.assertEqual(set([UUID3]), task_ids) def test_list_with_marker(self): full_tasks = self.task_repo.list() full_ids = [i.task_id for i in full_tasks] marked_tasks = self.task_repo.list(marker=full_ids[0]) actual_ids = [i.task_id for i in 
marked_tasks] self.assertEqual(full_ids[1:], actual_ids) def test_list_with_last_marker(self): tasks = self.task_repo.list() marked_tasks = self.task_repo.list(marker=tasks[-1].task_id) self.assertEqual(0, len(marked_tasks)) def test_limited_list(self): limited_tasks = self.task_repo.list(limit=2) self.assertEqual(2, len(limited_tasks)) def test_list_with_marker_and_limit(self): full_tasks = self.task_repo.list() full_ids = [i.task_id for i in full_tasks] marked_tasks = self.task_repo.list(marker=full_ids[0], limit=1) actual_ids = [i.task_id for i in marked_tasks] self.assertEqual(full_ids[1:2], actual_ids) def test_sorted_list(self): tasks = self.task_repo.list(sort_key='status', sort_dir='desc') task_ids = [i.task_id for i in tasks] self.assertEqual([UUID2, UUID1, UUID3], task_ids) def test_add_task(self): task_type = 'import' task = self.task_factory.new_task(task_type, None, task_input=self.fake_task_input) self.assertEqual(task.updated_at, task.created_at) self.task_repo.add(task) retrieved_task = self.task_repo.get(task.task_id) self.assertEqual(task.updated_at, retrieved_task.updated_at) self.assertEqual(self.fake_task_input, retrieved_task.task_input) def test_save_task(self): task = self.task_repo.get(UUID1) original_update_time = task.updated_at self.task_repo.save(task) current_update_time = task.updated_at self.assertGreater(current_update_time, original_update_time) task = self.task_repo.get(UUID1) self.assertEqual(current_update_time, task.updated_at) def test_remove_task(self): task = self.task_repo.get(UUID1) self.task_repo.remove(task) self.assertRaises(exception.NotFound, self.task_repo.get, task.task_id) class RetryOnDeadlockTestCase(test_utils.BaseTestCase): def test_raise_deadlock(self): class TestException(Exception): pass self.attempts = 3 def _mock_get_session(): def _raise_exceptions(): self.attempts -= 1 if self.attempts <= 0: raise TestException("Exit") raise db_exc.DBDeadlock("Fake Exception") return _raise_exceptions with 
mock.patch.object(api, 'get_session') as sess: sess.side_effect = _mock_get_session() try: api._image_update(None, {}, 'fake-id') except TestException: self.assertEqual(3, sess.call_count) # Test retry on image destroy if db deadlock occurs self.attempts = 3 with mock.patch.object(api, 'get_session') as sess: sess.side_effect = _mock_get_session() try: api.image_destroy(None, 'fake-id') except TestException: self.assertEqual(3, sess.call_count) glance-16.0.0/glance/tests/unit/test_domain.py0000666000175100017510000005366213245511421021321 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation. # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime
import uuid

import mock
from oslo_config import cfg
import oslo_utils.importutils

import glance.async
from glance.async import taskflow_executor
from glance.common import exception
from glance.common import timeutils
from glance import domain
import glance.tests.utils as test_utils

CONF = cfg.CONF

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'

TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'


class TestImageFactory(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImageFactory, self).setUp()
        self.image_factory = domain.ImageFactory()

    def test_minimal_new_image(self):
        image = self.image_factory.new_image()
        self.assertIsNotNone(image.image_id)
        self.assertIsNotNone(image.created_at)
        self.assertEqual(image.created_at, image.updated_at)
        self.assertEqual('queued', image.status)
        self.assertEqual('shared', image.visibility)
        self.assertIsNone(image.owner)
        self.assertIsNone(image.name)
        self.assertIsNone(image.size)
        self.assertEqual(0, image.min_disk)
        self.assertEqual(0, image.min_ram)
        self.assertFalse(image.protected)
        self.assertIsNone(image.disk_format)
        self.assertIsNone(image.container_format)
        self.assertEqual({}, image.extra_properties)
        self.assertEqual(set([]), image.tags)

    def test_new_image(self):
        image = self.image_factory.new_image(
            image_id=UUID1, name='image-1',
            min_disk=256, owner=TENANT1)
        self.assertEqual(UUID1, image.image_id)
        self.assertIsNotNone(image.created_at)
        self.assertEqual(image.created_at, image.updated_at)
        self.assertEqual('queued', image.status)
        self.assertEqual('shared', image.visibility)
        self.assertEqual(TENANT1, image.owner)
        self.assertEqual('image-1', image.name)
        self.assertIsNone(image.size)
        self.assertEqual(256, image.min_disk)
        self.assertEqual(0, image.min_ram)
        self.assertFalse(image.protected)
        self.assertIsNone(image.disk_format)
        self.assertIsNone(image.container_format)
        self.assertEqual({}, image.extra_properties)
        self.assertEqual(set([]), image.tags)

    def test_new_image_with_extra_properties_and_tags(self):
        extra_properties = {'foo': 'bar'}
        tags = ['one', 'two']
        image = self.image_factory.new_image(
            image_id=UUID1, name='image-1',
            extra_properties=extra_properties, tags=tags)
        self.assertEqual(UUID1, image.image_id, UUID1)
        self.assertIsNotNone(image.created_at)
        self.assertEqual(image.created_at, image.updated_at)
        self.assertEqual('queued', image.status)
        self.assertEqual('shared', image.visibility)
        self.assertIsNone(image.owner)
        self.assertEqual('image-1', image.name)
        self.assertIsNone(image.size)
        self.assertEqual(0, image.min_disk)
        self.assertEqual(0, image.min_ram)
        self.assertFalse(image.protected)
        self.assertIsNone(image.disk_format)
        self.assertIsNone(image.container_format)
        self.assertEqual({'foo': 'bar'}, image.extra_properties)
        self.assertEqual(set(['one', 'two']), image.tags)

    def test_new_image_read_only_property(self):
        self.assertRaises(exception.ReadonlyProperty,
                          self.image_factory.new_image, image_id=UUID1,
                          name='image-1', size=256)

    def test_new_image_unexpected_property(self):
        self.assertRaises(TypeError,
                          self.image_factory.new_image, image_id=UUID1,
                          image_name='name-1')

    def test_new_image_reserved_property(self):
        extra_properties = {'deleted': True}
        self.assertRaises(exception.ReservedProperty,
                          self.image_factory.new_image, image_id=UUID1,
                          extra_properties=extra_properties)

    def test_new_image_for_is_public(self):
        extra_prop = {'is_public': True}
        new_image = self.image_factory.new_image(image_id=UUID1,
                                                 extra_properties=extra_prop)
        self.assertEqual(True, new_image.extra_properties['is_public'])


class TestImage(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImage, self).setUp()
        self.image_factory = domain.ImageFactory()
        self.image = self.image_factory.new_image(
            container_format='bear', disk_format='rawr')

    def test_extra_properties(self):
        self.image.extra_properties = {'foo': 'bar'}
        self.assertEqual({'foo': 'bar'}, self.image.extra_properties)

    def test_extra_properties_assign(self):
        self.image.extra_properties['foo'] = 'bar'
        self.assertEqual({'foo': 'bar'}, self.image.extra_properties)

    def test_delete_extra_properties(self):
        self.image.extra_properties = {'foo': 'bar'}
        self.assertEqual({'foo': 'bar'}, self.image.extra_properties)
        del self.image.extra_properties['foo']
        self.assertEqual({}, self.image.extra_properties)

    def test_visibility_enumerated(self):
        self.image.visibility = 'public'
        self.image.visibility = 'private'
        self.image.visibility = 'shared'
        self.image.visibility = 'community'
        self.assertRaises(ValueError, setattr,
                          self.image, 'visibility', 'ellison')

    def test_tags_always_a_set(self):
        self.image.tags = ['a', 'b', 'c']
        self.assertEqual(set(['a', 'b', 'c']), self.image.tags)

    def test_delete_protected_image(self):
        self.image.protected = True
        self.assertRaises(exception.ProtectedImageDelete, self.image.delete)

    def test_status_saving(self):
        self.image.status = 'saving'
        self.assertEqual('saving', self.image.status)

    def test_set_incorrect_status(self):
        self.image.status = 'saving'
        self.image.status = 'killed'
        self.assertRaises(
            exception.InvalidImageStatusTransition,
            setattr, self.image, 'status', 'delet')

    def test_status_saving_without_disk_format(self):
        self.image.disk_format = None
        self.assertRaises(ValueError, setattr,
                          self.image, 'status', 'saving')

    def test_status_saving_without_container_format(self):
        self.image.container_format = None
        self.assertRaises(ValueError, setattr,
                          self.image, 'status', 'saving')

    def test_status_active_without_disk_format(self):
        self.image.disk_format = None
        self.assertRaises(ValueError, setattr,
                          self.image, 'status', 'active')

    def test_status_active_without_container_format(self):
        self.image.container_format = None
        self.assertRaises(ValueError, setattr,
                          self.image, 'status', 'active')

    def test_delayed_delete(self):
        self.config(delayed_delete=True)
        self.image.status = 'active'
        self.image.locations = [{'url': 'http://foo.bar/not.exists',
                                 'metadata': {}}]
        self.assertEqual('active', self.image.status)
        self.image.delete()
        self.assertEqual('pending_delete', self.image.status)


class TestImageMember(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImageMember, self).setUp()
        self.image_member_factory = domain.ImageMemberFactory()
        self.image_factory = domain.ImageFactory()
        self.image = self.image_factory.new_image()
        self.image_member = self.image_member_factory.new_image_member(
            image=self.image, member_id=TENANT1)

    def test_status_enumerated(self):
        self.image_member.status = 'pending'
        self.image_member.status = 'accepted'
        self.image_member.status = 'rejected'
        self.assertRaises(ValueError, setattr,
                          self.image_member, 'status', 'ellison')


class TestImageMemberFactory(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImageMemberFactory, self).setUp()
        self.image_member_factory = domain.ImageMemberFactory()
        self.image_factory = domain.ImageFactory()

    def test_minimal_new_image_member(self):
        member_id = 'fake-member-id'
        image = self.image_factory.new_image(
            image_id=UUID1, name='image-1',
            min_disk=256, owner=TENANT1)
        image_member = self.image_member_factory.new_image_member(image,
                                                                  member_id)
        self.assertEqual(image_member.image_id, image.image_id)
        self.assertIsNotNone(image_member.created_at)
        self.assertEqual(image_member.created_at, image_member.updated_at)
        self.assertEqual('pending', image_member.status)
        self.assertIsNotNone(image_member.member_id)


class TestExtraProperties(test_utils.BaseTestCase):

    def test_getitem(self):
        a_dict = {'foo': 'bar', 'snitch': 'golden'}
        extra_properties = domain.ExtraProperties(a_dict)
        self.assertEqual('bar', extra_properties['foo'])
        self.assertEqual('golden', extra_properties['snitch'])

    def test_getitem_with_no_items(self):
        extra_properties = domain.ExtraProperties()
        self.assertRaises(KeyError, extra_properties.__getitem__, 'foo')

    def test_setitem(self):
        a_dict = {'foo': 'bar', 'snitch': 'golden'}
        extra_properties = domain.ExtraProperties(a_dict)
        extra_properties['foo'] = 'baz'
        self.assertEqual('baz', extra_properties['foo'])

    def test_delitem(self):
        a_dict = {'foo': 'bar', 'snitch': 'golden'}
        extra_properties = domain.ExtraProperties(a_dict)
        del extra_properties['foo']
        self.assertRaises(KeyError, extra_properties.__getitem__, 'foo')
        self.assertEqual('golden', extra_properties['snitch'])

    def test_len_with_zero_items(self):
        extra_properties = domain.ExtraProperties()
        self.assertEqual(0, len(extra_properties))

    def test_len_with_non_zero_items(self):
        extra_properties = domain.ExtraProperties()
        extra_properties['foo'] = 'bar'
        extra_properties['snitch'] = 'golden'
        self.assertEqual(2, len(extra_properties))

    def test_eq_with_a_dict(self):
        a_dict = {'foo': 'bar', 'snitch': 'golden'}
        extra_properties = domain.ExtraProperties(a_dict)
        ref_extra_properties = {'foo': 'bar', 'snitch': 'golden'}
        self.assertEqual(ref_extra_properties, extra_properties)

    def test_eq_with_an_object_of_ExtraProperties(self):
        a_dict = {'foo': 'bar', 'snitch': 'golden'}
        extra_properties = domain.ExtraProperties(a_dict)
        ref_extra_properties = domain.ExtraProperties()
        ref_extra_properties['snitch'] = 'golden'
        ref_extra_properties['foo'] = 'bar'
        self.assertEqual(ref_extra_properties, extra_properties)

    def test_eq_with_uneqal_dict(self):
        a_dict = {'foo': 'bar', 'snitch': 'golden'}
        extra_properties = domain.ExtraProperties(a_dict)
        ref_extra_properties = {'boo': 'far', 'gnitch': 'solden'}
        self.assertNotEqual(ref_extra_properties, extra_properties)

    def test_eq_with_unequal_ExtraProperties_object(self):
        a_dict = {'foo': 'bar', 'snitch': 'golden'}
        extra_properties = domain.ExtraProperties(a_dict)
        ref_extra_properties = domain.ExtraProperties()
        ref_extra_properties['gnitch'] = 'solden'
        ref_extra_properties['boo'] = 'far'
        self.assertNotEqual(ref_extra_properties, extra_properties)

    def test_eq_with_incompatible_object(self):
        a_dict = {'foo': 'bar', 'snitch': 'golden'}
        extra_properties = domain.ExtraProperties(a_dict)
        random_list = ['foo', 'bar']
        self.assertNotEqual(random_list, extra_properties)


class TestTaskFactory(test_utils.BaseTestCase):

    def setUp(self):
        super(TestTaskFactory, self).setUp()
        self.task_factory = domain.TaskFactory()

    def test_new_task(self):
        task_type = 'import'
        owner = TENANT1
        task_input = 'input'
        task = self.task_factory.new_task(task_type, owner,
                                          task_input=task_input,
                                          result='test_result',
                                          message='test_message')
        self.assertIsNotNone(task.task_id)
        self.assertIsNotNone(task.created_at)
        self.assertEqual(task_type, task.type)
        self.assertEqual(task.created_at, task.updated_at)
        self.assertEqual('pending', task.status)
        self.assertIsNone(task.expires_at)
        self.assertEqual(owner, task.owner)
        self.assertEqual(task_input, task.task_input)
        self.assertEqual('test_message', task.message)
        self.assertEqual('test_result', task.result)

    def test_new_task_invalid_type(self):
        task_type = 'blah'
        owner = TENANT1
        self.assertRaises(
            exception.InvalidTaskType,
            self.task_factory.new_task,
            task_type,
            owner,
        )


class TestTask(test_utils.BaseTestCase):

    def setUp(self):
        super(TestTask, self).setUp()
        self.task_factory = domain.TaskFactory()
        task_type = 'import'
        owner = TENANT1
        task_ttl = CONF.task.task_time_to_live
        self.task = self.task_factory.new_task(task_type,
                                               owner,
                                               task_time_to_live=task_ttl)

    def test_task_invalid_status(self):
        task_id = str(uuid.uuid4())
        status = 'blah'
        self.assertRaises(
            exception.InvalidTaskStatus,
            domain.Task,
            task_id,
            task_type='import',
            status=status,
            owner=None,
            expires_at=None,
            created_at=timeutils.utcnow(),
            updated_at=timeutils.utcnow(),
            task_input=None,
            message=None,
            result=None
        )

    def test_validate_status_transition_from_pending(self):
        self.task.begin_processing()
        self.assertEqual('processing', self.task.status)

    def test_validate_status_transition_from_processing_to_success(self):
        self.task.begin_processing()
        self.task.succeed('')
        self.assertEqual('success', self.task.status)

    def test_validate_status_transition_from_processing_to_failure(self):
        self.task.begin_processing()
        self.task.fail('')
        self.assertEqual('failure', self.task.status)

    def test_invalid_status_transitions_from_pending(self):
        # test do not allow transition from pending to success
        self.assertRaises(
            exception.InvalidTaskStatusTransition,
            self.task.succeed,
            ''
        )

    def test_invalid_status_transitions_from_success(self):
        # test do not allow transition from success to processing
        self.task.begin_processing()
        self.task.succeed('')
        self.assertRaises(
            exception.InvalidTaskStatusTransition,
            self.task.begin_processing
        )
        # test do not allow transition from success to failure
        self.assertRaises(
            exception.InvalidTaskStatusTransition,
            self.task.fail,
            ''
        )

    def test_invalid_status_transitions_from_failure(self):
        # test do not allow transition from failure to processing
        self.task.begin_processing()
        self.task.fail('')
        self.assertRaises(
            exception.InvalidTaskStatusTransition,
            self.task.begin_processing
        )
        # test do not allow transition from failure to success
        self.assertRaises(
            exception.InvalidTaskStatusTransition,
            self.task.succeed,
            ''
        )

    def test_begin_processing(self):
        self.task.begin_processing()
        self.assertEqual('processing', self.task.status)

    @mock.patch.object(timeutils, 'utcnow')
    def test_succeed(self, mock_utcnow):
        mock_utcnow.return_value = datetime.datetime.utcnow()
        self.task.begin_processing()
        self.task.succeed('{"location": "file://home"}')
        self.assertEqual('success', self.task.status)
        self.assertEqual('{"location": "file://home"}', self.task.result)
        self.assertEqual(u'', self.task.message)
        expected = (timeutils.utcnow() +
                    datetime.timedelta(hours=CONF.task.task_time_to_live))
        self.assertEqual(
            expected,
            self.task.expires_at
        )

    @mock.patch.object(timeutils, 'utcnow')
    def test_fail(self, mock_utcnow):
        mock_utcnow.return_value = datetime.datetime.utcnow()
        self.task.begin_processing()
        self.task.fail('{"message": "connection failed"}')
        self.assertEqual('failure', self.task.status)
        self.assertEqual('{"message": "connection failed"}',
                         self.task.message)
        self.assertIsNone(self.task.result)
        expected = (timeutils.utcnow() +
                    datetime.timedelta(hours=CONF.task.task_time_to_live))
        self.assertEqual(
            expected,
            self.task.expires_at
        )
@mock.patch.object(glance.async.TaskExecutor, 'begin_processing') def test_run(self, mock_begin_processing): executor = glance.async.TaskExecutor(context=mock.ANY, task_repo=mock.ANY, image_repo=mock.ANY, image_factory=mock.ANY) self.task.run(executor) mock_begin_processing.assert_called_once_with(self.task.task_id) class TestTaskStub(test_utils.BaseTestCase): def setUp(self): super(TestTaskStub, self).setUp() self.task_id = str(uuid.uuid4()) self.task_type = 'import' self.owner = TENANT1 self.task_ttl = CONF.task.task_time_to_live def test_task_stub_init(self): self.task_factory = domain.TaskFactory() task = domain.TaskStub( self.task_id, self.task_type, 'status', self.owner, 'expires_at', 'created_at', 'updated_at' ) self.assertEqual(self.task_id, task.task_id) self.assertEqual(self.task_type, task.type) self.assertEqual(self.owner, task.owner) self.assertEqual('status', task.status) self.assertEqual('expires_at', task.expires_at) self.assertEqual('created_at', task.created_at) self.assertEqual('updated_at', task.updated_at) def test_task_stub_get_status(self): status = 'pending' task = domain.TaskStub( self.task_id, self.task_type, status, self.owner, 'expires_at', 'created_at', 'updated_at' ) self.assertEqual(status, task.status) class TestTaskExecutorFactory(test_utils.BaseTestCase): def setUp(self): super(TestTaskExecutorFactory, self).setUp() self.task_repo = mock.Mock() self.image_repo = mock.Mock() self.image_factory = mock.Mock() def test_init(self): task_executor_factory = domain.TaskExecutorFactory(self.task_repo, self.image_repo, self.image_factory) self.assertEqual(self.task_repo, task_executor_factory.task_repo) def test_new_task_executor(self): task_executor_factory = domain.TaskExecutorFactory(self.task_repo, self.image_repo, self.image_factory) context = mock.Mock() with mock.patch.object(oslo_utils.importutils, 'import_class') as mock_import_class: mock_executor = mock.Mock() mock_import_class.return_value = mock_executor 
task_executor_factory.new_task_executor(context) mock_executor.assert_called_once_with(context, self.task_repo, self.image_repo, self.image_factory) def test_new_task_executor_error(self): task_executor_factory = domain.TaskExecutorFactory(self.task_repo, self.image_repo, self.image_factory) context = mock.Mock() with mock.patch.object(oslo_utils.importutils, 'import_class') as mock_import_class: mock_import_class.side_effect = ImportError self.assertRaises(ImportError, task_executor_factory.new_task_executor, context) def test_new_task_eventlet_backwards_compatibility(self): context = mock.MagicMock() self.config(task_executor='eventlet', group='task') task_executor_factory = domain.TaskExecutorFactory(self.task_repo, self.image_repo, self.image_factory) # NOTE(flaper87): "eventlet" executor. short name to avoid > 79. te_evnt = task_executor_factory.new_task_executor(context) self.assertIsInstance(te_evnt, taskflow_executor.TaskExecutor) glance-16.0.0/glance/tests/unit/v2/test_registry_client.py # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for Glance Registry's client. These tests are temporary and will be removed once the registry driver tests are added.
""" import copy import datetime import os import uuid from mock import patch from six.moves import reload_module from glance.common import config from glance.common import exception from glance.common import timeutils from glance import context from glance.db.sqlalchemy import api as db_api from glance.i18n import _ from glance.registry.api import v2 as rserver import glance.registry.client.v2.api as rapi from glance.registry.client.v2.api import client as rclient from glance.tests.unit import base from glance.tests import utils as test_utils _gen_uuid = lambda: str(uuid.uuid4()) UUID1 = str(uuid.uuid4()) UUID2 = str(uuid.uuid4()) # NOTE(bcwaldon): needed to init config_dir cli opt config.parse_args(args=[]) class TestRegistryV2Client(base.IsolatedUnitTest, test_utils.RegistryAPIMixIn): """Test proper actions made against a registry service. Test for both valid and invalid requests. """ # Registry server to user # in the stub. registry = rserver def setUp(self): """Establish a clean test environment""" super(TestRegistryV2Client, self).setUp() db_api.get_engine() self.context = context.RequestContext(is_admin=True) uuid1_time = timeutils.utcnow() uuid2_time = uuid1_time + datetime.timedelta(seconds=5) self.FIXTURES = [ self.get_extra_fixture( id=UUID1, name='fake image #1', visibility='shared', disk_format='ami', container_format='ami', size=13, virtual_size=26, properties={'type': 'kernel'}, location="swift://user:passwd@acct/container/obj.tar.0", created_at=uuid1_time), self.get_extra_fixture(id=UUID2, name='fake image #2', properties={}, size=19, virtual_size=38, location="file:///tmp/glance-tests/2", created_at=uuid2_time)] self.destroy_fixtures() self.create_fixtures() self.client = rclient.RegistryClient("0.0.0.0") def tearDown(self): """Clear the test environment""" super(TestRegistryV2Client, self).tearDown() self.destroy_fixtures() def test_image_get_index(self): """Test correct set of public image returned""" images = self.client.image_get_all() 
self.assertEqual(2, len(images)) def test_create_image_with_null_min_disk_min_ram(self): UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', min_disk=None, min_ram=None) db_api.image_create(self.context, extra_fixture) image = self.client.image_get(image_id=UUID3) self.assertEqual(0, image["min_ram"]) self.assertEqual(0, image["min_disk"]) def test_get_index_sort_name_asc(self): """Tests that the registry API returns list of public images. Must be sorted alphabetically by name in ascending order. """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf') db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='xyz') db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(sort_key=['name'], sort_dir=['asc']) self.assertEqualImages(images, (UUID3, UUID1, UUID2, UUID4), unjsonify=False) def test_get_index_sort_status_desc(self): """Tests that the registry API returns list of public images. Must be sorted alphabetically by status in descending order. """ uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10) UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', status='queued') db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='xyz', created_at=uuid4_time) db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(sort_key=['status'], sort_dir=['desc']) self.assertEqualImages(images, (UUID3, UUID4, UUID2, UUID1), unjsonify=False) def test_get_index_sort_disk_format_asc(self): """Tests that the registry API returns list of public images. Must be sorted alphabetically by disk_format in ascending order.
""" UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', disk_format='ami', container_format='ami') db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='xyz', disk_format='vdi') db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(sort_key=['disk_format'], sort_dir=['asc']) self.assertEqualImages(images, (UUID1, UUID3, UUID4, UUID2), unjsonify=False) def test_get_index_sort_container_format_desc(self): """Tests that the registry API returns list of public images. Must be sorted alphabetically by container_format in descending order. """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', disk_format='ami', container_format='ami') db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='xyz', disk_format='iso', container_format='bare') db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(sort_key=['container_format'], sort_dir=['desc']) self.assertEqualImages(images, (UUID2, UUID4, UUID3, UUID1), unjsonify=False) def test_get_index_sort_size_asc(self): """Tests that the registry API returns list of public images. Must be sorted by size in ascending order. """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', disk_format='ami', container_format='ami', size=100, virtual_size=200) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='asdf', disk_format='iso', container_format='bare', size=2, virtual_size=4) db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(sort_key=['size'], sort_dir=['asc']) self.assertEqualImages(images, (UUID4, UUID1, UUID2, UUID3), unjsonify=False) def test_get_index_sort_created_at_asc(self): """Tests that the registry API returns list of public images. 
Must be sorted by created_at in ascending order. """ uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10) uuid3_time = uuid4_time + datetime.timedelta(seconds=5) UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, created_at=uuid3_time) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, created_at=uuid4_time) db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(sort_key=['created_at'], sort_dir=['asc']) self.assertEqualImages(images, (UUID1, UUID2, UUID4, UUID3), unjsonify=False) def test_get_index_sort_updated_at_desc(self): """Tests that the registry API returns list of public images. Must be sorted by updated_at in descending order. """ uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10) uuid3_time = uuid4_time + datetime.timedelta(seconds=5) UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, created_at=None, updated_at=uuid3_time) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, created_at=None, updated_at=uuid4_time) db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(sort_key=['updated_at'], sort_dir=['desc']) self.assertEqualImages(images, (UUID3, UUID4, UUID2, UUID1), unjsonify=False) def test_get_image_details_sort_multiple_keys(self): """ Tests that a detailed call returns list of public images sorted by name-size and size-name in ascending order. 
""" UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', size=19) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name=u'xyz', size=20) db_api.image_create(self.context, extra_fixture) UUID5 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID5, name=u'asdf', size=20) db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(sort_key=['name', 'size'], sort_dir=['asc']) self.assertEqualImages(images, (UUID3, UUID5, UUID1, UUID2, UUID4), unjsonify=False) images = self.client.image_get_all(sort_key=['size', 'name'], sort_dir=['asc']) self.assertEqualImages(images, (UUID1, UUID3, UUID2, UUID5, UUID4), unjsonify=False) def test_get_image_details_sort_multiple_dirs(self): """ Tests that a detailed call returns list of public images sorted by name-size and size-name in ascending and descending orders. """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', size=19) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='xyz', size=20) db_api.image_create(self.context, extra_fixture) UUID5 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID5, name='asdf', size=20) db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(sort_key=['name', 'size'], sort_dir=['asc', 'desc']) self.assertEqualImages(images, (UUID5, UUID3, UUID1, UUID2, UUID4), unjsonify=False) images = self.client.image_get_all(sort_key=['name', 'size'], sort_dir=['desc', 'asc']) self.assertEqualImages(images, (UUID4, UUID2, UUID1, UUID3, UUID5), unjsonify=False) images = self.client.image_get_all(sort_key=['size', 'name'], sort_dir=['asc', 'desc']) self.assertEqualImages(images, (UUID1, UUID2, UUID3, UUID4, UUID5), unjsonify=False) images = self.client.image_get_all(sort_key=['size', 'name'], sort_dir=['desc', 'asc']) self.assertEqualImages(images, (UUID5, UUID4, UUID3, UUID2, 
UUID1), unjsonify=False) def test_image_get_index_marker(self): """Test correct set of images returned with marker param.""" uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10) uuid3_time = uuid4_time + datetime.timedelta(seconds=5) UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='new name! #123', status='saving', created_at=uuid3_time) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='new name! #125', status='saving', created_at=uuid4_time) db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(marker=UUID3) self.assertEqualImages(images, (UUID4, UUID2, UUID1), unjsonify=False) def test_image_get_index_limit(self): """Test correct number of images returned with limit param.""" extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #123', status='saving') db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #125', status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(limit=2) self.assertEqual(2, len(images)) def test_image_get_index_marker_limit(self): """Test correct set of images returned with marker/limit params.""" uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10) uuid3_time = uuid4_time + datetime.timedelta(seconds=5) UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='new name! #123', status='saving', created_at=uuid3_time) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='new name! 
#125', status='saving', created_at=uuid4_time) db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(marker=UUID4, limit=1) self.assertEqualImages(images, (UUID2,), unjsonify=False) def test_image_get_index_limit_None(self): """Test correct set of images returned with limit param == None.""" extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #123', status='saving') db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #125', status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(limit=None) self.assertEqual(4, len(images)) def test_image_get_index_by_name(self): """Test correct set of public, name-filtered image returned. This is just a sanity check, we test the details call more in-depth. """ extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #123') db_api.image_create(self.context, extra_fixture) images = self.client.image_get_all(filters={'name': 'new name! #123'}) self.assertEqual(1, len(images)) for image in images: self.assertEqual('new name! 
#123', image['name']) def test_image_get_is_public_v2(self): """Tests that a detailed call can be filtered by a property""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving', properties={'is_public': 'avalue'}) context = copy.copy(self.context) db_api.image_create(context, extra_fixture) filters = {'is_public': 'avalue'} images = self.client.image_get_all(filters=filters) self.assertEqual(1, len(images)) for image in images: self.assertEqual('avalue', image['properties'][0]['value']) def test_image_get(self): """Tests that the detailed info about an image is returned""" fixture = self.get_fixture(id=UUID1, name='fake image #1', visibility='shared', size=13, virtual_size=26, disk_format='ami', container_format='ami') data = self.client.image_get(image_id=UUID1) for k, v in fixture.items(): el = data[k] self.assertEqual(v, data[k], "Failed v != data[k] where v = %(v)s and " "k = %(k)s and data[k] = %(el)s" % dict(v=v, k=k, el=el)) def test_image_get_non_existing(self): """Tests that NotFound is raised when getting a non-existing image""" self.assertRaises(exception.NotFound, self.client.image_get, image_id=_gen_uuid()) def test_image_create_basic(self): """Tests that we can add image metadata and that the new id is returned""" fixture = self.get_fixture() new_image = self.client.image_create(values=fixture) # Test all other attributes set data = self.client.image_get(image_id=new_image['id']) for k, v in fixture.items(): self.assertEqual(v, data[k]) # Test status was updated properly self.assertIn('status', data) self.assertEqual('active', data['status']) def test_image_create_with_properties(self): """Tests that we can add image metadata with properties""" fixture = self.get_fixture(location="file:///tmp/glance-tests/2", properties={'distro': 'Ubuntu 10.04 LTS'}) new_image = self.client.image_create(values=fixture) self.assertIn('properties', new_image) self.assertEqual(new_image['properties'][0]['value'], fixture['properties']['distro']) del fixture['location']
del fixture['properties'] for k, v in fixture.items(): self.assertEqual(v, new_image[k]) # Test status was updated properly self.assertIn('status', new_image.keys()) self.assertEqual('active', new_image['status']) def test_image_create_already_exists(self): """Tests proper exception is raised if image with ID already exists""" fixture = self.get_fixture(id=UUID2, location="file:///tmp/glance-tests/2") self.assertRaises(exception.Duplicate, self.client.image_create, values=fixture) def test_image_create_with_bad_status(self): """Tests proper exception is raised if a bad status is set""" fixture = self.get_fixture(status='bad status', location="file:///tmp/glance-tests/2") self.assertRaises(exception.Invalid, self.client.image_create, values=fixture) def test_image_update(self): """Tests that the registry API updates the image""" fixture = {'name': 'fake public image #2', 'disk_format': 'vmdk', 'status': 'saving'} self.assertTrue(self.client.image_update(image_id=UUID2, values=fixture)) # Test all other attributes set data = self.client.image_get(image_id=UUID2) for k, v in fixture.items(): self.assertEqual(v, data[k]) def test_image_update_conflict(self): """Tests that the registry API updates the image""" next_state = 'saving' fixture = {'name': 'fake public image #2', 'disk_format': 'vmdk', 'status': next_state} image = self.client.image_get(image_id=UUID2) current = image['status'] self.assertEqual('active', current) # image is in 'active' state so this should cause a failure. 
from_state = 'saving' self.assertRaises(exception.Conflict, self.client.image_update, image_id=UUID2, values=fixture, from_state=from_state) try: self.client.image_update(image_id=UUID2, values=fixture, from_state=from_state) except exception.Conflict as exc: msg = (_('cannot transition from %(current)s to ' '%(next)s in update (wanted ' 'from_state=%(from)s)') % {'current': current, 'next': next_state, 'from': from_state}) self.assertEqual(str(exc), msg) def test_image_update_with_invalid_min_disk(self): """Tests that the registry API updates the image""" next_state = 'saving' fixture = {'name': 'fake image', 'disk_format': 'vmdk', 'min_disk': 2 ** 31 + 1, 'status': next_state} image = self.client.image_get(image_id=UUID2) current = image['status'] self.assertEqual('active', current) # image is in 'active' state so this should cause a failure. from_state = 'saving' self.assertRaises(exception.Invalid, self.client.image_update, image_id=UUID2, values=fixture, from_state=from_state) def test_image_update_with_invalid_min_ram(self): """Tests that the registry API updates the image""" next_state = 'saving' fixture = {'name': 'fake image', 'disk_format': 'vmdk', 'min_ram': 2 ** 31 + 1, 'status': next_state} image = self.client.image_get(image_id=UUID2) current = image['status'] self.assertEqual('active', current) # image is in 'active' state so this should cause a failure. 
from_state = 'saving' self.assertRaises(exception.Invalid, self.client.image_update, image_id=UUID2, values=fixture, from_state=from_state) def _test_image_update_not_existing(self): """Tests non existing image update doesn't work""" fixture = self.get_fixture(status='bad status') self.assertRaises(exception.NotFound, self.client.image_update, image_id=_gen_uuid(), values=fixture) def test_image_destroy(self): """Tests that image metadata is deleted properly""" # Grab the original number of images orig_num_images = len(self.client.image_get_all()) # Delete image #2 image = self.FIXTURES[1] deleted_image = self.client.image_destroy(image_id=image['id']) self.assertTrue(deleted_image) self.assertEqual(image['id'], deleted_image['id']) self.assertTrue(deleted_image['deleted']) self.assertTrue(deleted_image['deleted_at']) # Verify one less image filters = {'deleted': False} new_num_images = len(self.client.image_get_all(filters=filters)) self.assertEqual(new_num_images, orig_num_images - 1) def test_image_destroy_not_existing(self): """Tests cannot delete non-existing image""" self.assertRaises(exception.NotFound, self.client.image_destroy, image_id=_gen_uuid()) def test_image_get_members(self): """Tests getting image members""" memb_list = self.client.image_member_find(image_id=UUID2) num_members = len(memb_list) self.assertEqual(0, num_members) def test_image_get_members_not_existing(self): """Tests getting non-existent image members""" self.assertRaises(exception.NotFound, self.client.image_get_members, image_id=_gen_uuid()) def test_image_member_find(self): """Tests getting member images""" memb_list = self.client.image_member_find(member='pattieblack') num_members = len(memb_list) self.assertEqual(0, num_members) def test_image_member_find_include_deleted(self): """Tests getting image members including the deleted member""" values = dict(image_id=UUID2, member='pattieblack') # create a member member = self.client.image_member_create(values=values) memb_list = 
self.client.image_member_find(member='pattieblack') memb_list2 = self.client.image_member_find(member='pattieblack', include_deleted=True) self.assertEqual(1, len(memb_list)) self.assertEqual(1, len(memb_list2)) # delete the member self.client.image_member_delete(memb_id=member['id']) memb_list = self.client.image_member_find(member='pattieblack') memb_list2 = self.client.image_member_find(member='pattieblack', include_deleted=True) self.assertEqual(0, len(memb_list)) self.assertEqual(1, len(memb_list2)) # create it again member = self.client.image_member_create(values=values) memb_list = self.client.image_member_find(member='pattieblack') memb_list2 = self.client.image_member_find(member='pattieblack', include_deleted=True) self.assertEqual(1, len(memb_list)) self.assertEqual(2, len(memb_list2)) def test_add_update_members(self): """Tests updating image members""" values = dict(image_id=UUID2, member='pattieblack') member = self.client.image_member_create(values=values) self.assertTrue(member) values['member'] = 'pattieblack2' self.assertTrue(self.client.image_member_update(memb_id=member['id'], values=values)) def test_add_delete_member(self): """Tests deleting image members""" values = dict(image_id=UUID2, member='pattieblack') member = self.client.image_member_create(values=values) self.client.image_member_delete(memb_id=member['id']) memb_list = self.client.image_member_find(member='pattieblack') self.assertEqual(0, len(memb_list)) class TestRegistryV2ClientApi(base.IsolatedUnitTest): """Test proper actions made against a registry service. Test for both valid and invalid requests. 
""" def setUp(self): """Establish a clean test environment""" super(TestRegistryV2ClientApi, self).setUp() reload_module(rapi) def test_configure_registry_client_not_using_use_user_token(self): self.config(use_user_token=False) with patch.object(rapi, 'configure_registry_admin_creds') as mock_rapi: rapi.configure_registry_client() mock_rapi.assert_called_once_with() def _get_fake_config_creds(self, auth_url='auth_url', strategy='keystone'): return { 'user': 'user', 'password': 'password', 'username': 'user', 'tenant': 'tenant', 'auth_url': auth_url, 'strategy': strategy, 'region': 'region' } def test_configure_registry_admin_creds(self): expected = self._get_fake_config_creds(auth_url=None, strategy='configured_strategy') self.config(admin_user=expected['user']) self.config(admin_password=expected['password']) self.config(admin_tenant_name=expected['tenant']) self.config(auth_strategy=expected['strategy']) self.config(auth_region=expected['region']) self.stubs.Set(os, 'getenv', lambda x: None) self.assertIsNone(rapi._CLIENT_CREDS) rapi.configure_registry_admin_creds() self.assertEqual(expected, rapi._CLIENT_CREDS) def test_configure_registry_admin_creds_with_auth_url(self): expected = self._get_fake_config_creds() self.config(admin_user=expected['user']) self.config(admin_password=expected['password']) self.config(admin_tenant_name=expected['tenant']) self.config(auth_url=expected['auth_url']) self.config(auth_strategy='test_strategy') self.config(auth_region=expected['region']) self.assertIsNone(rapi._CLIENT_CREDS) rapi.configure_registry_admin_creds() self.assertEqual(expected, rapi._CLIENT_CREDS) glance-16.0.0/glance/tests/unit/v2/test_registry_api.py0000666000175100017510000016424513245511421023102 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import uuid from oslo_serialization import jsonutils import routes import six from six.moves import http_client as http import webob import glance.api.common import glance.common.config from glance.common import timeutils import glance.context from glance.db.sqlalchemy import api as db_api from glance.db.sqlalchemy import models as db_models from glance.registry.api import v2 as rserver from glance.tests.unit import base from glance.tests import utils as test_utils _gen_uuid = lambda: str(uuid.uuid4()) UUID1 = _gen_uuid() UUID2 = _gen_uuid() class TestRegistryRPC(base.IsolatedUnitTest): def setUp(self): super(TestRegistryRPC, self).setUp() self.mapper = routes.Mapper() self.api = test_utils.FakeAuthMiddleware(rserver.API(self.mapper), is_admin=True) uuid1_time = timeutils.utcnow() uuid2_time = uuid1_time + datetime.timedelta(seconds=5) self.FIXTURES = [ {'id': UUID1, 'name': 'fake image #1', 'status': 'active', 'disk_format': 'ami', 'container_format': 'ami', 'visibility': 'shared', 'created_at': uuid1_time, 'updated_at': uuid1_time, 'deleted_at': None, 'deleted': False, 'checksum': None, 'min_disk': 0, 'min_ram': 0, 'size': 13, 'locations': [{'url': "file:///%s/%s" % (self.test_dir, UUID1), 'metadata': {}, 'status': 'active'}], 'properties': {'type': 'kernel'}}, {'id': UUID2, 'name': 'fake image #2', 'status': 'active', 'disk_format': 'vhd', 'container_format': 'ovf', 'visibility': 'public', 'created_at': uuid2_time, 'updated_at': uuid2_time, 'deleted_at': None, 'deleted': False, 'checksum': None, 'min_disk': 5, 'min_ram': 256, 'size': 19, 
'locations': [{'url': "file:///%s/%s" % (self.test_dir, UUID2), 'metadata': {}, 'status': 'active'}], 'properties': {}}] self.context = glance.context.RequestContext(is_admin=True) db_api.get_engine() self.destroy_fixtures() self.create_fixtures() def tearDown(self): """Clear the test environment""" super(TestRegistryRPC, self).tearDown() self.destroy_fixtures() def create_fixtures(self): for fixture in self.FIXTURES: db_api.image_create(self.context, fixture) # We write a fake image file to the filesystem with open("%s/%s" % (self.test_dir, fixture['id']), 'wb') as image: image.write(b"chunk00000remainder") image.flush() def destroy_fixtures(self): # Easiest to just drop the models and re-create them... db_models.unregister_models(db_api.get_engine()) db_models.register_models(db_api.get_engine()) def _compare_images_and_uuids(self, uuids, images): self.assertListEqual(uuids, [image['id'] for image in images]) def test_show(self): """Tests that registry API endpoint returns the expected image.""" fixture = {'id': UUID2, 'name': 'fake image #2', 'size': 19, 'min_ram': 256, 'min_disk': 5, 'checksum': None} req = webob.Request.blank('/rpc') req.method = "POST" cmd = [{ 'command': 'image_get', 'kwargs': {'image_id': UUID2}, }] req.body = jsonutils.dump_as_bytes(cmd) res = req.get_response(self.api) self.assertEqual(http.OK, res.status_int) res_dict = jsonutils.loads(res.body)[0] image = res_dict for k, v in six.iteritems(fixture): self.assertEqual(v, image[k]) def test_show_unknown(self): """Tests the registry API endpoint returns 404 for an unknown id.""" req = webob.Request.blank('/rpc') req.method = "POST" cmd = [{ 'command': 'image_get', 'kwargs': {'image_id': _gen_uuid()}, }] req.body = jsonutils.dump_as_bytes(cmd) res = req.get_response(self.api) res_dict = jsonutils.loads(res.body)[0] self.assertEqual('glance.common.exception.ImageNotFound', res_dict["_error"]["cls"]) def test_get_index(self): """Tests that the image_get_all command returns list of images.""" 
        fixture = {'id': UUID2, 'name': 'fake image #2', 'size': 19,
                   'checksum': None}
        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'filters': fixture},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(1, len(images))

        for k, v in six.iteritems(fixture):
            self.assertEqual(v, images[0][k])

    def test_get_index_marker(self):
        """Tests that the registry API returns list of public images.

        Must conform to a marker query param.
        """
        uuid5_time = timeutils.utcnow() + datetime.timedelta(seconds=10)
        uuid4_time = uuid5_time + datetime.timedelta(seconds=5)
        uuid3_time = uuid4_time + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 19,
                         'checksum': None,
                         'created_at': uuid3_time, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 20,
                         'checksum': None,
                         'created_at': uuid4_time, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        UUID5 = _gen_uuid()
        extra_fixture = {'id': UUID5, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 20,
                         'checksum': None,
                         'created_at': uuid5_time, 'updated_at': uuid5_time}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'marker': UUID4, "is_public": True},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        # should be sorted by created_at desc, id desc
        # page should start after marker 4
        uuid_list = [UUID5, UUID2]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_marker_and_name_asc(self):
        """Test marker and null name ascending

        Tests that the registry API returns 200 when a marker and a null
        name are combined in ascending order
        """
        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': None, 'size': 19, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'marker': UUID3, 'sort_key': ['name'],
                       'sort_dir': ['asc']},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(2, len(images))

    def test_get_index_marker_and_name_desc(self):
        """Test marker and null name descending

        Tests that the registry API returns 200 when a marker and a null
        name are combined in descending order
        """
        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': None, 'size': 19, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'marker': UUID3, 'sort_key': ['name'],
                       'sort_dir': ['desc']},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(0, len(images))

    def test_get_index_marker_and_disk_format_asc(self):
        """Test marker and null disk format ascending

        Tests that the registry API returns 200 when a marker and a null
        disk_format are combined in ascending order
        """
        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': None, 'container_format': 'ovf',
                         'name': 'Fake image', 'size': 19, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'marker': UUID3, 'sort_key': ['disk_format'],
                       'sort_dir': ['asc']},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(2, len(images))

    def test_get_index_marker_and_disk_format_desc(self):
        """Test marker and null disk format descending

        Tests that the registry API returns 200 when a marker and a null
        disk_format are combined in descending order
        """
        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': None, 'container_format': 'ovf',
                         'name': 'Fake image', 'size': 19, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'marker': UUID3, 'sort_key': ['disk_format'],
                       'sort_dir': ['desc']},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(0, len(images))

    def test_get_index_marker_and_container_format_asc(self):
        """Test marker and null container format ascending

        Tests that the registry API returns 200 when a marker and a null
        container_format are combined in ascending order
        """
        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': None,
                         'name': 'Fake image', 'size': 19, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'marker': UUID3, 'sort_key': ['container_format'],
                       'sort_dir': ['asc']},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(2, len(images))

    def test_get_index_marker_and_container_format_desc(self):
        """Test marker and null container format descending

        Tests that the registry API returns 200 when a marker and a null
        container_format are combined in descending order
        """
        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': None,
                         'name': 'Fake image', 'size': 19, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'marker': UUID3, 'sort_key': ['container_format'],
                       'sort_dir': ['desc']},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(0, len(images))

    def test_get_index_unknown_marker(self):
        """Tests the registry API returns a NotFound with unknown marker."""
        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'marker': _gen_uuid()},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        result = jsonutils.loads(res.body)[0]

        self.assertIn("_error", result)
        self.assertIn("NotFound", result["_error"]["cls"])

    def test_get_index_limit(self):
        """Tests that the registry API returns list of public images.

        Must conform to a limit query param.
        """
        uuid3_time = timeutils.utcnow() + datetime.timedelta(seconds=10)
        uuid4_time = uuid3_time + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 19,
                         'checksum': None,
                         'created_at': uuid3_time, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 20,
                         'checksum': None,
                         'created_at': uuid4_time, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'limit': 1},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(http.OK, res.status_int)

        self._compare_images_and_uuids([UUID4], images)

    def test_get_index_limit_marker(self):
        """Tests that the registry API returns list of public images.

        Must conform to limit and marker query params.
        """
        uuid3_time = timeutils.utcnow() + datetime.timedelta(seconds=10)
        uuid4_time = uuid3_time + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 19,
                         'checksum': None,
                         'created_at': uuid3_time, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        extra_fixture = {'id': _gen_uuid(), 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 20,
                         'checksum': None,
                         'created_at': uuid4_time, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'marker': UUID3, 'limit': 1},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        res_dict = jsonutils.loads(res.body)[0]
        self.assertEqual(http.OK, res.status_int)

        images = res_dict
        self._compare_images_and_uuids([UUID2], images)

    def test_get_index_filter_name(self):
        """Tests that the registry API returns list of public images.

        Uses a specific name. This is really a sanity check, filtering is
        tested more in-depth using /images/detail
        """
        extra_fixture = {'id': _gen_uuid(), 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 19,
                         'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        extra_fixture = {'id': _gen_uuid(), 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 20,
                         'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'filters': {'name': 'new name! #123'}},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        res_dict = jsonutils.loads(res.body)[0]
        self.assertEqual(http.OK, res.status_int)

        images = res_dict
        self.assertEqual(2, len(images))

        for image in images:
            self.assertEqual('new name! #123', image['name'])

    def test_get_index_filter_on_user_defined_properties(self):
        """Tests that the registry API returns list of public images.

        Uses specific user-defined properties.
        """
        properties = {'distro': 'ubuntu', 'arch': 'i386', 'type': 'kernel'}
        extra_id = _gen_uuid()
        extra_fixture = {'id': extra_id, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'image-extra-1', 'size': 19,
                         'properties': properties, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        # testing with a common property.
        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'filters': {'type': 'kernel'}},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(2, len(images))
        self.assertEqual(extra_id, images[0]['id'])
        self.assertEqual(UUID1, images[1]['id'])

        # testing with a non-existent value for a common property.
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'filters': {'type': 'random'}},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(0, len(images))

        # testing with a non-existent property.
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'filters': {'poo': 'random'}},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(0, len(images))

        # testing with multiple existing properties.
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'filters': {'type': 'kernel', 'distro': 'ubuntu'}},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(1, len(images))
        self.assertEqual(extra_id, images[0]['id'])

        # testing with multiple existing properties but non-existent values.
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'filters': {'type': 'random', 'distro': 'random'}},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(0, len(images))

        # testing with multiple non-existing properties.
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'filters': {'typo': 'random', 'poo': 'random'}},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(0, len(images))

        # testing with one existing property and the other non-existing.
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'filters': {'type': 'kernel', 'poo': 'random'}},
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        images = jsonutils.loads(res.body)[0]
        self.assertEqual(0, len(images))

    def test_get_index_sort_default_created_at_desc(self):
        """Tests that the registry API returns list of public images.

        Must conform to a default sort key/dir.
        """
        uuid5_time = timeutils.utcnow() + datetime.timedelta(seconds=10)
        uuid4_time = uuid5_time + datetime.timedelta(seconds=5)
        uuid3_time = uuid4_time + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 19,
                         'checksum': None,
                         'created_at': uuid3_time, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 20,
                         'checksum': None,
                         'created_at': uuid4_time, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        UUID5 = _gen_uuid()
        extra_fixture = {'id': UUID5, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 20,
                         'checksum': None,
                         'created_at': uuid5_time, 'updated_at': uuid5_time}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        res_dict = jsonutils.loads(res.body)[0]
        self.assertEqual(http.OK, res.status_int)

        images = res_dict
        # (flaper87) registry's v1 forced is_public to True
        # when no value was specified. This is not
        # the default behaviour anymore.
        uuid_list = [UUID3, UUID4, UUID5, UUID2, UUID1]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_sort_name_asc(self):
        """Tests that the registry API returns list of public images.

        Must be sorted alphabetically by name in ascending order.
        """
        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'asdf', 'size': 19, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'xyz', 'size': 20, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        UUID5 = _gen_uuid()
        extra_fixture = {'id': UUID5, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': None, 'size': 20, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['name'], 'sort_dir': ['asc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID5, UUID3, UUID1, UUID2, UUID4]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_sort_status_desc(self):
        """Tests that the registry API returns list of public images.

        Must be sorted alphabetically by status in descending order.
        """
        uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'queued',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'asdf', 'size': 19, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'xyz', 'size': 20, 'checksum': None,
                         'created_at': uuid4_time, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['status'], 'sort_dir': ['asc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID1, UUID2, UUID4, UUID3]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_sort_disk_format_asc(self):
        """Tests that the registry API returns list of public images.

        Must be sorted alphabetically by disk_format in ascending order.
        """
        uuid3_time = timeutils.utcnow() + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'ami', 'container_format': 'ami',
                         'name': 'asdf', 'size': 19, 'checksum': None,
                         'created_at': uuid3_time, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vdi', 'container_format': 'ovf',
                         'name': 'xyz', 'size': 20, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['disk_format'], 'sort_dir': ['asc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID1, UUID3, UUID4, UUID2]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_sort_container_format_desc(self):
        """Tests that the registry API returns list of public images.

        Must be sorted alphabetically by container_format in descending
        order.
        """
        uuid3_time = timeutils.utcnow() + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'ami', 'container_format': 'ami',
                         'name': 'asdf', 'size': 19, 'checksum': None,
                         'created_at': uuid3_time, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'iso', 'container_format': 'bare',
                         'name': 'xyz', 'size': 20, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['container_format'],
                       'sort_dir': ['desc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID2, UUID4, UUID3, UUID1]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_sort_size_asc(self):
        """Tests that the registry API returns list of public images.

        Must be sorted by size in ascending order.
        """
        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'ami', 'container_format': 'ami',
                         'name': 'asdf', 'size': 100, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'iso', 'container_format': 'bare',
                         'name': 'xyz', 'size': 2, 'checksum': None}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['size'], 'sort_dir': ['asc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID4, UUID1, UUID2, UUID3]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_sort_created_at_asc(self):
        """Tests that the registry API returns list of public images.

        Must be sorted by created_at in ascending order.
        """
        uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10)
        uuid3_time = uuid4_time + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 19,
                         'checksum': None,
                         'created_at': uuid3_time, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 20,
                         'checksum': None,
                         'created_at': uuid4_time, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['created_at'], 'sort_dir': ['asc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID1, UUID2, UUID4, UUID3]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_sort_updated_at_desc(self):
        """Tests that the registry API returns list of public images.

        Must be sorted by updated_at in descending order.
        """
        uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10)
        uuid3_time = uuid4_time + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 19,
                         'checksum': None,
                         'created_at': None, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'new name! #123', 'size': 20,
                         'checksum': None,
                         'created_at': None, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['updated_at'], 'sort_dir': ['desc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID3, UUID4, UUID2, UUID1]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_sort_multiple_keys_one_sort_dir(self):
        """
        Tests that the registry API returns list of public images sorted by
        name-size and size-name with ascending sort direction.
        """
        uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10)
        uuid3_time = uuid4_time + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'asdf', 'size': 19, 'checksum': None,
                         'created_at': None, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'xyz', 'size': 20, 'checksum': None,
                         'created_at': None, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        UUID5 = _gen_uuid()
        extra_fixture = {'id': UUID5, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'asdf', 'size': 20, 'checksum': None,
                         'created_at': None, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['name', 'size'],
                       'sort_dir': ['asc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID3, UUID5, UUID1, UUID2, UUID4]
        self._compare_images_and_uuids(uuid_list, images)

        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['size', 'name'],
                       'sort_dir': ['asc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID1, UUID3, UUID2, UUID5, UUID4]
        self._compare_images_and_uuids(uuid_list, images)

    def test_get_index_sort_multiple_keys_multiple_sort_dirs(self):
        """
        Tests that the registry API returns list of public images sorted by
        name-size and size-name with ascending and descending directions.
        """
        uuid4_time = timeutils.utcnow() + datetime.timedelta(seconds=10)
        uuid3_time = uuid4_time + datetime.timedelta(seconds=5)

        UUID3 = _gen_uuid()
        extra_fixture = {'id': UUID3, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'asdf', 'size': 19, 'checksum': None,
                         'created_at': None, 'updated_at': uuid3_time}

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = {'id': UUID4, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'xyz', 'size': 20, 'checksum': None,
                         'created_at': None, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        UUID5 = _gen_uuid()
        extra_fixture = {'id': UUID5, 'status': 'active',
                         'visibility': 'public',
                         'disk_format': 'vhd', 'container_format': 'ovf',
                         'name': 'asdf', 'size': 20, 'checksum': None,
                         'created_at': None, 'updated_at': uuid4_time}

        db_api.image_create(self.context, extra_fixture)

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['name', 'size'],
                       'sort_dir': ['desc', 'asc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID4, UUID2, UUID1, UUID3, UUID5]
        self._compare_images_and_uuids(uuid_list, images)

        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['size', 'name'],
                       'sort_dir': ['desc', 'asc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID5, UUID4, UUID3, UUID2, UUID1]
        self._compare_images_and_uuids(uuid_list, images)

        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['name', 'size'],
                       'sort_dir': ['asc', 'desc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID5, UUID3, UUID1, UUID2, UUID4]
        self._compare_images_and_uuids(uuid_list, images)

        cmd = [{
            'command': 'image_get_all',
            'kwargs': {'sort_key': ['size', 'name'],
                       'sort_dir': ['asc', 'desc']}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        images = res_dict
        uuid_list = [UUID1, UUID2, UUID3, UUID4, UUID5]
        self._compare_images_and_uuids(uuid_list, images)

    def test_create_image(self):
        """Tests that the registry API creates the image"""
        fixture = {'name': 'fake public image',
                   'status': 'active',
                   'visibility': 'public',
                   'disk_format': 'vhd',
                   'container_format': 'ovf'}

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_create',
            'kwargs': {'values': fixture}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        for k, v in six.iteritems(fixture):
            self.assertEqual(v, res_dict[k])

        # Test status was updated properly
        self.assertEqual('active', res_dict['status'])

    def test_create_image_with_min_disk(self):
        """Tests that the registry API creates the image"""
        fixture = {'name': 'fake public image',
                   'visibility': 'public',
                   'status': 'active',
                   'min_disk': 5,
                   'disk_format': 'vhd',
                   'container_format': 'ovf'}

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_create',
            'kwargs': {'values': fixture}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        self.assertEqual(fixture['min_disk'], res_dict['min_disk'])

    def test_create_image_with_min_ram(self):
        """Tests that the registry API creates the image"""
        fixture = {'name': 'fake public image',
                   'visibility': 'public',
                   'status': 'active',
                   'min_ram': 256,
                   'disk_format': 'vhd',
                   'container_format': 'ovf'}

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_create',
            'kwargs': {'values': fixture}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        self.assertEqual(fixture['min_ram'], res_dict['min_ram'])

    def test_create_image_with_min_ram_default(self):
        """Tests that the registry API creates the image"""
        fixture = {'name': 'fake public image',
                   'status': 'active',
                   'visibility': 'public',
                   'disk_format': 'vhd',
                   'container_format': 'ovf'}

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_create',
            'kwargs': {'values': fixture}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        self.assertEqual(0, res_dict['min_ram'])

    def test_create_image_with_min_disk_default(self):
        """Tests that the registry API creates the image"""
        fixture = {'name': 'fake public image',
                   'status': 'active',
                   'visibility': 'public',
                   'disk_format': 'vhd',
                   'container_format': 'ovf'}

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_create',
            'kwargs': {'values': fixture}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        self.assertEqual(0, res_dict['min_disk'])

    def test_update_image(self):
        """Tests that the registry API updates the image"""
        fixture = {'name': 'fake public image #2',
                   'min_disk': 5,
                   'min_ram': 256,
                   'disk_format': 'raw'}

        req = webob.Request.blank('/rpc')
        req.method = "POST"
        cmd = [{
            'command': 'image_update',
            'kwargs': {'values': fixture,
                       'image_id': UUID2}
        }]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        self.assertEqual(http.OK, res.status_int)
        res_dict = jsonutils.loads(res.body)[0]

        self.assertNotEqual(res_dict['created_at'], res_dict['updated_at'])

        for k, v in six.iteritems(fixture):
            self.assertEqual(v, res_dict[k])

    def _send_request(self, command, kwargs, method):
        req = webob.Request.blank('/rpc')
        req.method = method
        cmd = [{'command': command, 'kwargs': kwargs}]
        req.body = jsonutils.dump_as_bytes(cmd)
        res = req.get_response(self.api)
        res_dict = jsonutils.loads(res.body)[0]
        return res.status_int, res_dict

    def _expect_fail(self, command, kwargs, error_cls, method='POST'):
        # on any exception status_int is always 200, so have to check _error
        # dict
        code, res_dict = self._send_request(command, kwargs, method)
        self.assertIn('_error', res_dict)
        self.assertEqual(error_cls, res_dict['_error']['cls'])
        return res_dict

    def _expect_ok(self, command, kwargs, method, expected_status=http.OK):
        code, res_dict = self._send_request(command, kwargs, method)
        self.assertEqual(expected_status, code)
        return res_dict

    def test_create_image_bad_name(self):
        fixture = {'name': u'A bad name \U0001fff2', 'status': 'queued'}
        self._expect_fail('image_create',
                          {'values': fixture},
                          'glance.common.exception.Invalid')

    def test_create_image_bad_location(self):
        fixture = {'status': 'queued',
                   'locations': [{'url': u'file:///tmp/tests/\U0001fee2',
                                  'metadata': {},
                                  'status': 'active'}]}
        self._expect_fail('image_create',
{'values': fixture}, 'glance.common.exception.Invalid') def test_create_image_bad_property(self): fixture = {'status': 'queued', 'properties': {'ok key': u' bad value \U0001f2aa'}} self._expect_fail('image_create', {'values': fixture}, 'glance.common.exception.Invalid') fixture = {'status': 'queued', 'properties': {u'invalid key \U00010020': 'ok value'}} self._expect_fail('image_create', {'values': fixture}, 'glance.common.exception.Invalid') def test_update_image_bad_tag(self): self._expect_fail('image_tag_create', {'value': u'\U0001fff2', 'image_id': UUID2}, 'glance.common.exception.Invalid') def test_update_image_bad_name(self): fixture = {'name': u'A bad name \U0001fff2'} self._expect_fail('image_update', {'values': fixture, 'image_id': UUID1}, 'glance.common.exception.Invalid') def test_update_image_bad_location(self): fixture = {'locations': [{'url': u'file:///tmp/glance-tests/\U0001fee2', 'metadata': {}, 'status': 'active'}]} self._expect_fail('image_update', {'values': fixture, 'image_id': UUID1}, 'glance.common.exception.Invalid') def test_update_bad_property(self): fixture = {'properties': {'ok key': u' bad value \U0001f2aa'}} self._expect_fail('image_update', {'values': fixture, 'image_id': UUID2}, 'glance.common.exception.Invalid') fixture = {'properties': {u'invalid key \U00010020': 'ok value'}} self._expect_fail('image_update', {'values': fixture, 'image_id': UUID2}, 'glance.common.exception.Invalid') def test_delete_image(self): """Tests that the registry API deletes the image""" # Grab the original number of images req = webob.Request.blank('/rpc') req.method = "POST" cmd = [{ 'command': 'image_get_all', 'kwargs': {'filters': {'deleted': False}} }] req.body = jsonutils.dump_as_bytes(cmd) res = req.get_response(self.api) res_dict = jsonutils.loads(res.body)[0] self.assertEqual(http.OK, res.status_int) orig_num_images = len(res_dict) # Delete image #2 cmd = [{ 'command': 'image_destroy', 'kwargs': {'image_id': UUID2} }] req.body = 
jsonutils.dump_as_bytes(cmd) res = req.get_response(self.api) self.assertEqual(http.OK, res.status_int) # Verify one less image cmd = [{ 'command': 'image_get_all', 'kwargs': {'filters': {'deleted': False}} }] req.body = jsonutils.dump_as_bytes(cmd) res = req.get_response(self.api) res_dict = jsonutils.loads(res.body)[0] self.assertEqual(http.OK, res.status_int) new_num_images = len(res_dict) self.assertEqual(new_num_images, orig_num_images - 1) def test_delete_image_response(self): """Tests that the registry API delete returns the image metadata""" image = self.FIXTURES[0] req = webob.Request.blank('/rpc') req.method = 'POST' cmd = [{ 'command': 'image_destroy', 'kwargs': {'image_id': image['id']} }] req.body = jsonutils.dump_as_bytes(cmd) res = req.get_response(self.api) self.assertEqual(http.OK, res.status_int) deleted_image = jsonutils.loads(res.body)[0] self.assertEqual(image['id'], deleted_image['id']) self.assertTrue(deleted_image['deleted']) self.assertTrue(deleted_image['deleted_at']) def test_get_image_members(self): """Tests members listing for existing images.""" req = webob.Request.blank('/rpc') req.method = 'POST' cmd = [{ 'command': 'image_member_find', 'kwargs': {'image_id': UUID2} }] req.body = jsonutils.dump_as_bytes(cmd) res = req.get_response(self.api) self.assertEqual(http.OK, res.status_int) memb_list = jsonutils.loads(res.body)[0] self.assertEqual(0, len(memb_list)) glance-16.0.0/glance/tests/unit/v2/test_metadef_resources.py0000666000175100017510000026110713245511421024073 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_serialization import jsonutils import webob from glance.api.v2 import metadef_namespaces as namespaces from glance.api.v2 import metadef_objects as objects from glance.api.v2 import metadef_properties as properties from glance.api.v2 import metadef_resource_types as resource_types from glance.api.v2 import metadef_tags as tags import glance.gateway from glance.tests.unit import base import glance.tests.unit.utils as unit_test_utils DATETIME = datetime.datetime(2012, 5, 16, 15, 27, 36, 325355) ISOTIME = '2012-05-16T15:27:36Z' NAMESPACE1 = 'Namespace1' NAMESPACE2 = 'Namespace2' NAMESPACE3 = 'Namespace3' NAMESPACE4 = 'Namespace4' NAMESPACE5 = 'Namespace5' NAMESPACE6 = 'Namespace6' PROPERTY1 = 'Property1' PROPERTY2 = 'Property2' PROPERTY3 = 'Property3' PROPERTY4 = 'Property4' OBJECT1 = 'Object1' OBJECT2 = 'Object2' OBJECT3 = 'Object3' RESOURCE_TYPE1 = 'ResourceType1' RESOURCE_TYPE2 = 'ResourceType2' RESOURCE_TYPE3 = 'ResourceType3' RESOURCE_TYPE4 = 'ResourceType4' TAG1 = 'Tag1' TAG2 = 'Tag2' TAG3 = 'Tag3' TAG4 = 'Tag4' TAG5 = 'Tag5' TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df' TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81' TENANT3 = '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8' TENANT4 = 'c6c87f25-8a94-47ed-8c83-053c25f42df4' PREFIX1 = 'pref' def _db_namespace_fixture(namespace, **kwargs): obj = { 'namespace': namespace, 'display_name': None, 'description': None, 'visibility': 'public', 'protected': False, 'owner': None, } obj.update(kwargs) return obj def _db_property_fixture(name, **kwargs): obj = { 'name': name, 'json_schema': 
{"type": "string", "title": "title"}, } obj.update(kwargs) return obj def _db_object_fixture(name, **kwargs): obj = { 'name': name, 'description': None, 'json_schema': {}, 'required': '[]', } obj.update(kwargs) return obj def _db_resource_type_fixture(name, **kwargs): obj = { 'name': name, 'protected': False, } obj.update(kwargs) return obj def _db_tag_fixture(name, **kwargs): obj = { 'name': name } obj.update(kwargs) return obj def _db_tags_fixture(tag_names=None): tag_list = [] if not tag_names: tag_names = [TAG1, TAG2, TAG3] for tag_name in tag_names: tag = tags.MetadefTag() tag.name = tag_name tag_list.append(tag) return tag_list def _db_namespace_resource_type_fixture(name, **kwargs): obj = { 'name': name, 'properties_target': None, 'prefix': None, } obj.update(kwargs) return obj class TestMetadefsControllers(base.IsolatedUnitTest): def setUp(self): super(TestMetadefsControllers, self).setUp() self.db = unit_test_utils.FakeDB(initialize=False) self.policy = unit_test_utils.FakePolicyEnforcer() self.notifier = unit_test_utils.FakeNotifier() self._create_namespaces() self._create_properties() self._create_objects() self._create_resource_types() self._create_namespaces_resource_types() self._create_tags() self.namespace_controller = namespaces.NamespaceController( self.db, self.policy, self.notifier) self.property_controller = properties.NamespacePropertiesController( self.db, self.policy, self.notifier) self.object_controller = objects.MetadefObjectsController( self.db, self.policy, self.notifier) self.rt_controller = resource_types.ResourceTypeController( self.db, self.policy, self.notifier) self.tag_controller = tags.TagsController( self.db, self.policy, self.notifier) self.deserializer = objects.RequestDeserializer() self.property_deserializer = properties.RequestDeserializer() def _create_namespaces(self): req = unit_test_utils.get_fake_request() self.namespaces = [ _db_namespace_fixture(NAMESPACE1, owner=TENANT1, visibility='private', protected=True), 
_db_namespace_fixture(NAMESPACE2, owner=TENANT2, visibility='private'), _db_namespace_fixture(NAMESPACE3, owner=TENANT3), _db_namespace_fixture(NAMESPACE5, owner=TENANT4), _db_namespace_fixture(NAMESPACE6, owner=TENANT4), ] [self.db.metadef_namespace_create(req.context, namespace) for namespace in self.namespaces] def _create_properties(self): req = unit_test_utils.get_fake_request() self.properties = [ (NAMESPACE3, _db_property_fixture(PROPERTY1)), (NAMESPACE3, _db_property_fixture(PROPERTY2)), (NAMESPACE1, _db_property_fixture(PROPERTY1)), (NAMESPACE6, _db_property_fixture(PROPERTY4)), ] [self.db.metadef_property_create(req.context, namespace, property) for namespace, property in self.properties] def _create_objects(self): req = unit_test_utils.get_fake_request() self.objects = [ (NAMESPACE3, _db_object_fixture(OBJECT1)), (NAMESPACE3, _db_object_fixture(OBJECT2)), (NAMESPACE1, _db_object_fixture(OBJECT1)), ] [self.db.metadef_object_create(req.context, namespace, object) for namespace, object in self.objects] def _create_resource_types(self): req = unit_test_utils.get_fake_request() self.resource_types = [ _db_resource_type_fixture(RESOURCE_TYPE1), _db_resource_type_fixture(RESOURCE_TYPE2), _db_resource_type_fixture(RESOURCE_TYPE4), ] [self.db.metadef_resource_type_create(req.context, resource_type) for resource_type in self.resource_types] def _create_tags(self): req = unit_test_utils.get_fake_request() self.tags = [ (NAMESPACE3, _db_tag_fixture(TAG1)), (NAMESPACE3, _db_tag_fixture(TAG2)), (NAMESPACE1, _db_tag_fixture(TAG1)), ] [self.db.metadef_tag_create(req.context, namespace, tag) for namespace, tag in self.tags] def _create_namespaces_resource_types(self): req = unit_test_utils.get_fake_request(is_admin=True) self.ns_resource_types = [ (NAMESPACE1, _db_namespace_resource_type_fixture(RESOURCE_TYPE1)), (NAMESPACE3, _db_namespace_resource_type_fixture(RESOURCE_TYPE1)), (NAMESPACE2, _db_namespace_resource_type_fixture(RESOURCE_TYPE1)), (NAMESPACE2, 
_db_namespace_resource_type_fixture(RESOURCE_TYPE2)), (NAMESPACE6, _db_namespace_resource_type_fixture(RESOURCE_TYPE4, prefix=PREFIX1)), ] [self.db.metadef_resource_type_association_create(req.context, namespace, ns_resource_type) for namespace, ns_resource_type in self.ns_resource_types] def assertNotificationLog(self, expected_event_type, expected_payloads): events = [{'type': expected_event_type, 'payload': payload} for payload in expected_payloads] self.assertNotificationsLog(events) def assertNotificationsLog(self, expected_events): output_logs = self.notifier.get_logs() expected_logs_count = len(expected_events) self.assertEqual(expected_logs_count, len(output_logs)) for output_log, event in zip(output_logs, expected_events): self.assertEqual('INFO', output_log['notification_type']) self.assertEqual(event['type'], output_log['event_type']) self.assertDictContainsSubset(event['payload'], output_log['payload']) self.notifier.log = [] def test_namespace_index(self): request = unit_test_utils.get_fake_request() output = self.namespace_controller.index(request) output = output.to_dict() self.assertEqual(4, len(output['namespaces'])) actual = set([namespace.namespace for namespace in output['namespaces']]) expected = set([NAMESPACE1, NAMESPACE3, NAMESPACE5, NAMESPACE6]) self.assertEqual(expected, actual) def test_namespace_index_admin(self): request = unit_test_utils.get_fake_request(is_admin=True) output = self.namespace_controller.index(request) output = output.to_dict() self.assertEqual(5, len(output['namespaces'])) actual = set([namespace.namespace for namespace in output['namespaces']]) expected = set([NAMESPACE1, NAMESPACE2, NAMESPACE3, NAMESPACE5, NAMESPACE6]) self.assertEqual(expected, actual) def test_namespace_index_visibility_public(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) filters = {'visibility': 'public'} output = self.namespace_controller.index(request, filters=filters) output = output.to_dict() self.assertEqual(3, 
len(output['namespaces'])) actual = set([namespace.namespace for namespace in output['namespaces']]) expected = set([NAMESPACE3, NAMESPACE5, NAMESPACE6]) self.assertEqual(expected, actual) def test_namespace_index_resource_type(self): request = unit_test_utils.get_fake_request() filters = {'resource_types': [RESOURCE_TYPE1]} output = self.namespace_controller.index(request, filters=filters) output = output.to_dict() self.assertEqual(2, len(output['namespaces'])) actual = set([namespace.namespace for namespace in output['namespaces']]) expected = set([NAMESPACE1, NAMESPACE3]) self.assertEqual(expected, actual) def test_namespace_show(self): request = unit_test_utils.get_fake_request() output = self.namespace_controller.show(request, NAMESPACE1) output = output.to_dict() self.assertEqual(NAMESPACE1, output['namespace']) self.assertEqual(TENANT1, output['owner']) self.assertTrue(output['protected']) self.assertEqual('private', output['visibility']) def test_namespace_show_with_related_resources(self): request = unit_test_utils.get_fake_request() output = self.namespace_controller.show(request, NAMESPACE3) output = output.to_dict() self.assertEqual(NAMESPACE3, output['namespace']) self.assertEqual(TENANT3, output['owner']) self.assertFalse(output['protected']) self.assertEqual('public', output['visibility']) self.assertEqual(2, len(output['properties'])) actual = set([property for property in output['properties']]) expected = set([PROPERTY1, PROPERTY2]) self.assertEqual(expected, actual) self.assertEqual(2, len(output['objects'])) actual = set([object.name for object in output['objects']]) expected = set([OBJECT1, OBJECT2]) self.assertEqual(expected, actual) self.assertEqual(1, len(output['resource_type_associations'])) actual = set([rt.name for rt in output['resource_type_associations']]) expected = set([RESOURCE_TYPE1]) self.assertEqual(expected, actual) def test_namespace_show_with_property_prefix(self): request = unit_test_utils.get_fake_request() rt = 
resource_types.ResourceTypeAssociation() rt.name = RESOURCE_TYPE2 rt.prefix = 'pref' rt = self.rt_controller.create(request, rt, NAMESPACE3) object = objects.MetadefObject() object.name = OBJECT3 object.required = [] property = properties.PropertyType() property.name = PROPERTY2 property.type = 'string' property.title = 'title' object.properties = {'prop1': property} object = self.object_controller.create(request, object, NAMESPACE3) self.assertNotificationsLog([ { 'type': 'metadef_resource_type.create', 'payload': { 'namespace': NAMESPACE3, 'name': RESOURCE_TYPE2, 'prefix': 'pref', 'properties_target': None, } }, { 'type': 'metadef_object.create', 'payload': { 'name': OBJECT3, 'namespace': NAMESPACE3, 'properties': [{ 'name': 'prop1', 'additionalItems': None, 'confidential': None, 'title': u'title', 'default': None, 'pattern': None, 'enum': None, 'maximum': None, 'minItems': None, 'minimum': None, 'maxItems': None, 'minLength': None, 'uniqueItems': None, 'maxLength': None, 'items': None, 'type': u'string', 'description': None }], 'required': [], 'description': None, } } ]) filters = {'resource_type': RESOURCE_TYPE2} output = self.namespace_controller.show(request, NAMESPACE3, filters) output = output.to_dict() [self.assertTrue(property_name.startswith(rt.prefix)) for property_name in output['properties'].keys()] for object in output['objects']: [self.assertTrue(property_name.startswith(rt.prefix)) for property_name in object.properties.keys()] @mock.patch('glance.api.v2.metadef_namespaces.LOG') def test_cleanup_namespace_success(self, mock_log): fake_gateway = glance.gateway.Gateway(db_api=self.db, notifier=self.notifier, policy_enforcer=self.policy) req = unit_test_utils.get_fake_request() ns_factory = fake_gateway.get_metadef_namespace_factory( req.context) ns_repo = fake_gateway.get_metadef_namespace_repo(req.context) namespace = namespaces.Namespace() namespace.namespace = 'FakeNamespace' new_namespace = ns_factory.new_namespace(**namespace.to_dict()) 
        ns_repo.add(new_namespace)
        self.namespace_controller._cleanup_namespace(ns_repo, namespace, True)
        mock_log.debug.assert_called_with(
            "Cleaned up namespace %(namespace)s ",
            {'namespace': namespace.namespace})

    @mock.patch('glance.api.v2.metadef_namespaces.LOG')
    @mock.patch('glance.api.authorization.MetadefNamespaceRepoProxy.remove')
    def test_cleanup_namespace_exception(self, mock_remove, mock_log):
        mock_remove.side_effect = Exception(u'Mock remove was called')
        fake_gateway = glance.gateway.Gateway(db_api=self.db,
                                              notifier=self.notifier,
                                              policy_enforcer=self.policy)
        req = unit_test_utils.get_fake_request()
        ns_factory = fake_gateway.get_metadef_namespace_factory(
            req.context)
        ns_repo = fake_gateway.get_metadef_namespace_repo(req.context)
        namespace = namespaces.Namespace()
        namespace.namespace = 'FakeNamespace'
        new_namespace = ns_factory.new_namespace(**namespace.to_dict())
        ns_repo.add(new_namespace)
        self.namespace_controller._cleanup_namespace(ns_repo, namespace, True)
        called_msg = 'Failed to delete namespace %(namespace)s.' \
                     'Exception: %(exception)s'
        called_args = {'exception': u'Mock remove was called',
                       'namespace': u'FakeNamespace'}
        mock_log.error.assert_called_with((called_msg, called_args))
        mock_remove.assert_called_once_with(mock.ANY)

    def test_namespace_show_non_existing(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.show,
                          request, 'FakeName')

    def test_namespace_show_non_visible(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.show,
                          request, NAMESPACE2)

    def test_namespace_delete(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        self.namespace_controller.delete(request, NAMESPACE2)
        self.assertNotificationLog("metadef_namespace.delete",
                                   [{'namespace': NAMESPACE2}])
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.show,
                          request, NAMESPACE2)

    def test_namespace_delete_notification_disabled(self):
        self.config(disabled_notifications=["metadef_namespace.delete"])
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        self.namespace_controller.delete(request, NAMESPACE2)
        self.assertNotificationsLog([])
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.show,
                          request, NAMESPACE2)

    def test_namespace_delete_notification_group_disabled(self):
        self.config(disabled_notifications=["metadef_namespace"])
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        self.namespace_controller.delete(request, NAMESPACE2)
        self.assertNotificationsLog([])
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.show,
                          request, NAMESPACE2)

    def test_namespace_delete_notification_create_disabled(self):
        self.config(disabled_notifications=["metadef_namespace.create"])
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        self.namespace_controller.delete(request, NAMESPACE2)
        self.assertNotificationLog("metadef_namespace.delete",
                                   [{'namespace': NAMESPACE2}])
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.show,
                          request, NAMESPACE2)

    def test_namespace_delete_non_existing(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.delete,
                          request, 'FakeName')
        self.assertNotificationsLog([])

    def test_namespace_delete_non_visible(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.delete,
                          request, NAMESPACE2)
        self.assertNotificationsLog([])

    def test_namespace_delete_non_visible_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.namespace_controller.delete(request, NAMESPACE2)
        self.assertNotificationLog("metadef_namespace.delete",
                                   [{'namespace': NAMESPACE2}])
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.show,
                          request, NAMESPACE2)

    def test_namespace_delete_protected(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.namespace_controller.delete,
                          request, NAMESPACE1)
        self.assertNotificationsLog([])

    def test_namespace_delete_protected_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.namespace_controller.delete,
                          request, NAMESPACE1)
        self.assertNotificationsLog([])

    def test_namespace_delete_with_contents(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        self.namespace_controller.delete(request, NAMESPACE3)
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.show,
                          request, NAMESPACE3)
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.object_controller.show,
                          request, NAMESPACE3, OBJECT1)
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.property_controller.show,
                          request, NAMESPACE3, OBJECT1)

    def test_namespace_delete_properties(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        self.namespace_controller.delete_properties(request, NAMESPACE3)
        output = self.property_controller.index(request, NAMESPACE3)
        output = output.to_dict()
        self.assertEqual(0, len(output['properties']))
        self.assertNotificationLog("metadef_namespace.delete_properties",
                                   [{'namespace': NAMESPACE3}])

    def test_namespace_delete_properties_other_owner(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.namespace_controller.delete_properties,
                          request, NAMESPACE3)
        self.assertNotificationsLog([])

    def test_namespace_delete_properties_other_owner_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.namespace_controller.delete_properties(request, NAMESPACE3)
        output = self.property_controller.index(request, NAMESPACE3)
        output = output.to_dict()
        self.assertEqual(0, len(output['properties']))
        self.assertNotificationLog("metadef_namespace.delete_properties",
                                   [{'namespace': NAMESPACE3}])

    def test_namespace_non_existing_delete_properties(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.delete_properties,
                          request, NAMESPACE4)
        self.assertNotificationsLog([])

    def test_namespace_delete_objects(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        self.namespace_controller.delete_objects(request, NAMESPACE3)
        output = self.object_controller.index(request, NAMESPACE3)
        output = output.to_dict()
        self.assertEqual(0, len(output['objects']))
        self.assertNotificationLog("metadef_namespace.delete_objects",
                                   [{'namespace': NAMESPACE3}])

    def test_namespace_delete_objects_other_owner(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.namespace_controller.delete_objects,
                          request, NAMESPACE3)
        self.assertNotificationsLog([])

    def test_namespace_delete_objects_other_owner_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.namespace_controller.delete_objects(request, NAMESPACE3)
        output = self.object_controller.index(request, NAMESPACE3)
        output = output.to_dict()
        self.assertEqual(0, len(output['objects']))
        self.assertNotificationLog("metadef_namespace.delete_objects",
                                   [{'namespace': NAMESPACE3}])

    def test_namespace_non_existing_delete_objects(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.delete_objects,
                          request, NAMESPACE4)
        self.assertNotificationsLog([])

    def test_namespace_delete_tags(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        self.namespace_controller.delete_tags(request, NAMESPACE3)
        output = self.tag_controller.index(request, NAMESPACE3)
        output = output.to_dict()
        self.assertEqual(0, len(output['tags']))
        self.assertNotificationLog("metadef_namespace.delete_tags",
                                   [{'namespace': NAMESPACE3}])

    def test_namespace_delete_tags_other_owner(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.namespace_controller.delete_tags,
                          request, NAMESPACE3)
        self.assertNotificationsLog([])

    def test_namespace_delete_tags_other_owner_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.namespace_controller.delete_tags(request, NAMESPACE3)
        output = self.tag_controller.index(request, NAMESPACE3)
        output = output.to_dict()
        self.assertEqual(0, len(output['tags']))
        self.assertNotificationLog("metadef_namespace.delete_tags",
                                   [{'namespace': NAMESPACE3}])

    def test_namespace_non_existing_delete_tags(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.delete_tags,
                          request, NAMESPACE4)
        self.assertNotificationsLog([])

    def test_namespace_create(self):
        request = unit_test_utils.get_fake_request()
        namespace = namespaces.Namespace()
        namespace.namespace = NAMESPACE4
        namespace = self.namespace_controller.create(request, namespace)
        self.assertEqual(NAMESPACE4, namespace.namespace)
        self.assertNotificationLog("metadef_namespace.create",
                                   [{'namespace': NAMESPACE4}])
        namespace = self.namespace_controller.show(request, NAMESPACE4)
        self.assertEqual(NAMESPACE4, namespace.namespace)
    def test_namespace_create_with_4byte_character(self):
        request = unit_test_utils.get_fake_request()
        namespace = namespaces.Namespace()
        namespace.namespace = u'\U0001f693'
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.namespace_controller.create,
                          request, namespace)

    def test_namespace_create_duplicate(self):
        request = unit_test_utils.get_fake_request()
        namespace = namespaces.Namespace()
        namespace.namespace = 'new-namespace'
        new_ns = self.namespace_controller.create(request, namespace)
        self.assertEqual('new-namespace', new_ns.namespace)
        self.assertRaises(webob.exc.HTTPConflict,
                          self.namespace_controller.create,
                          request, namespace)

    def test_namespace_create_different_owner(self):
        request = unit_test_utils.get_fake_request()
        namespace = namespaces.Namespace()
        namespace.namespace = NAMESPACE4
        namespace.owner = TENANT4
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.namespace_controller.create,
                          request, namespace)
        self.assertNotificationsLog([])

    def test_namespace_create_different_owner_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        namespace = namespaces.Namespace()
        namespace.namespace = NAMESPACE4
        namespace.owner = TENANT4
        namespace = self.namespace_controller.create(request, namespace)
        self.assertEqual(NAMESPACE4, namespace.namespace)
        self.assertNotificationLog("metadef_namespace.create",
                                   [{'namespace': NAMESPACE4}])
        namespace = self.namespace_controller.show(request, NAMESPACE4)
        self.assertEqual(NAMESPACE4, namespace.namespace)

    def test_namespace_create_with_related_resources(self):
        request = unit_test_utils.get_fake_request()
        namespace = namespaces.Namespace()
        namespace.namespace = NAMESPACE4

        prop1 = properties.PropertyType()
        prop1.type = 'string'
        prop1.title = 'title'
        prop2 = properties.PropertyType()
        prop2.type = 'string'
        prop2.title = 'title'
        namespace.properties = {PROPERTY1: prop1, PROPERTY2: prop2}

        object1 = objects.MetadefObject()
        object1.name = OBJECT1
        object1.required = []
        object1.properties = {}
        object2 = objects.MetadefObject()
        object2.name = OBJECT2
        object2.required = []
        object2.properties = {}
        namespace.objects = [object1, object2]

        output = self.namespace_controller.create(request, namespace)
        self.assertEqual(NAMESPACE4, namespace.namespace)
        output = output.to_dict()

        self.assertEqual(2, len(output['properties']))
        actual = set([property for property in output['properties']])
        expected = set([PROPERTY1, PROPERTY2])
        self.assertEqual(expected, actual)

        self.assertEqual(2, len(output['objects']))
        actual = set([object.name for object in output['objects']])
        expected = set([OBJECT1, OBJECT2])
        self.assertEqual(expected, actual)

        output = self.namespace_controller.show(request, NAMESPACE4)
        self.assertEqual(NAMESPACE4, namespace.namespace)
        output = output.to_dict()

        self.assertEqual(2, len(output['properties']))
        actual = set([property for property in output['properties']])
        expected = set([PROPERTY1, PROPERTY2])
        self.assertEqual(expected, actual)

        self.assertEqual(2, len(output['objects']))
        actual = set([object.name for object in output['objects']])
        expected = set([OBJECT1, OBJECT2])
        self.assertEqual(expected, actual)

        self.assertNotificationsLog([
            {
                'type': 'metadef_namespace.create',
                'payload': {
                    'namespace': NAMESPACE4,
                    'owner': TENANT1,
                }
            },
            {
                'type': 'metadef_object.create',
                'payload': {
                    'namespace': NAMESPACE4,
                    'name': OBJECT1,
                    'properties': [],
                }
            },
            {
                'type': 'metadef_object.create',
                'payload': {
                    'namespace': NAMESPACE4,
                    'name': OBJECT2,
                    'properties': [],
                }
            },
            {
                'type': 'metadef_property.create',
                'payload': {
                    'namespace': NAMESPACE4,
                    'type': 'string',
                    'title': 'title',
                }
            },
            {
                'type': 'metadef_property.create',
                'payload': {
                    'namespace': NAMESPACE4,
                    'type': 'string',
                    'title': 'title',
                }
            }
        ])

    def test_namespace_create_conflict(self):
        request = unit_test_utils.get_fake_request()
        namespace = namespaces.Namespace()
        namespace.namespace = NAMESPACE1
        self.assertRaises(webob.exc.HTTPConflict,
                          self.namespace_controller.create,
                          request, namespace)
        self.assertNotificationsLog([])

    def test_namespace_update(self):
        request = unit_test_utils.get_fake_request()
        namespace = self.namespace_controller.show(request, NAMESPACE1)
        namespace.protected = False
        namespace = self.namespace_controller.update(request, namespace,
                                                     NAMESPACE1)
        self.assertFalse(namespace.protected)
        self.assertNotificationLog("metadef_namespace.update", [
            {'namespace': NAMESPACE1, 'protected': False}
        ])
        namespace = self.namespace_controller.show(request, NAMESPACE1)
        self.assertFalse(namespace.protected)

    def test_namespace_update_non_existing(self):
        request = unit_test_utils.get_fake_request()
        namespace = namespaces.Namespace()
        namespace.namespace = NAMESPACE4
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.update,
                          request, namespace, NAMESPACE4)
        self.assertNotificationsLog([])

    def test_namespace_update_non_visible(self):
        request = unit_test_utils.get_fake_request()
        namespace = namespaces.Namespace()
        namespace.namespace = NAMESPACE2
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.update,
                          request, namespace, NAMESPACE2)
        self.assertNotificationsLog([])

    def test_namespace_update_non_visible_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        namespace = self.namespace_controller.show(request, NAMESPACE2)
        namespace.protected = False
        namespace = self.namespace_controller.update(request, namespace,
                                                     NAMESPACE2)
        self.assertFalse(namespace.protected)
        self.assertNotificationLog("metadef_namespace.update", [
            {'namespace': NAMESPACE2, 'protected': False}
        ])
        namespace = self.namespace_controller.show(request, NAMESPACE2)
        self.assertFalse(namespace.protected)

    def test_namespace_update_name(self):
        request = unit_test_utils.get_fake_request()
        namespace = self.namespace_controller.show(request, NAMESPACE1)
        namespace.namespace = NAMESPACE4
        namespace = self.namespace_controller.update(request, namespace,
                                                     NAMESPACE1)
        self.assertEqual(NAMESPACE4, namespace.namespace)
        self.assertNotificationLog("metadef_namespace.update", [
            {'namespace': NAMESPACE4, 'namespace_old': NAMESPACE1}
        ])
        namespace = self.namespace_controller.show(request, NAMESPACE4)
        self.assertEqual(NAMESPACE4, namespace.namespace)
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.namespace_controller.show,
                          request, NAMESPACE1)

    def test_namespace_update_with_4byte_character(self):
        request = unit_test_utils.get_fake_request()
        namespace = self.namespace_controller.show(request, NAMESPACE1)
        namespace.namespace = u'\U0001f693'
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.namespace_controller.update,
                          request, namespace, NAMESPACE1)

    def test_namespace_update_name_conflict(self):
        request = unit_test_utils.get_fake_request()
        namespace = self.namespace_controller.show(request, NAMESPACE1)
        namespace.namespace = NAMESPACE2
        self.assertRaises(webob.exc.HTTPConflict,
                          self.namespace_controller.update,
                          request, namespace, NAMESPACE1)
        self.assertNotificationsLog([])

    def test_property_index(self):
        request = unit_test_utils.get_fake_request()
        output = self.property_controller.index(request, NAMESPACE3)
        self.assertEqual(2, len(output.properties))
        actual = set([property for property in output.properties])
        expected = set([PROPERTY1, PROPERTY2])
        self.assertEqual(expected, actual)

    def test_property_index_empty(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        output = self.property_controller.index(request, NAMESPACE2)
        self.assertEqual(0, len(output.properties))

    def test_property_index_non_existing_namespace(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.property_controller.index,
                          request, NAMESPACE4)

    def test_property_show(self):
        request = unit_test_utils.get_fake_request()
        output = self.property_controller.show(request, NAMESPACE3, PROPERTY1)
        self.assertEqual(PROPERTY1, output.name)

    def test_property_show_specific_resource_type(self):
        request = unit_test_utils.get_fake_request()
        output = self.property_controller.show(
            request, NAMESPACE6, ''.join([PREFIX1, PROPERTY4]),
            filters={'resource_type': RESOURCE_TYPE4})
        self.assertEqual(PROPERTY4, output.name)

    def test_property_show_prefix_mismatch(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.property_controller.show,
                          request, NAMESPACE6, PROPERTY4,
                          filters={'resource_type': RESOURCE_TYPE4})

    def test_property_show_non_existing_resource_type(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.property_controller.show,
                          request, NAMESPACE2, PROPERTY1,
                          filters={'resource_type': 'test'})

    def test_property_show_non_existing(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.property_controller.show,
                          request, NAMESPACE2, PROPERTY1)

    def test_property_show_non_visible(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.property_controller.show,
                          request, NAMESPACE1, PROPERTY1)

    def test_property_show_non_visible_admin(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2,
                                                   is_admin=True)
        output = self.property_controller.show(request, NAMESPACE1, PROPERTY1)
        self.assertEqual(PROPERTY1, output.name)

    def test_property_delete(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        self.property_controller.delete(request, NAMESPACE3, PROPERTY1)
        self.assertNotificationLog("metadef_property.delete",
                                   [{'name': PROPERTY1,
                                     'namespace': NAMESPACE3}])
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.property_controller.show,
                          request, NAMESPACE3, PROPERTY1)

    def test_property_delete_disabled_notification(self):
        self.config(disabled_notifications=["metadef_property.delete"])
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        self.property_controller.delete(request, NAMESPACE3, PROPERTY1)
        self.assertNotificationsLog([])
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.property_controller.show,
                          request, NAMESPACE3, PROPERTY1)

    def test_property_delete_other_owner(self):
        request =
unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPForbidden, self.property_controller.delete, request, NAMESPACE3, PROPERTY1) self.assertNotificationsLog([]) def test_property_delete_other_owner_admin(self): request = unit_test_utils.get_fake_request(is_admin=True) self.property_controller.delete(request, NAMESPACE3, PROPERTY1) self.assertNotificationLog("metadef_property.delete", [{'name': PROPERTY1, 'namespace': NAMESPACE3}]) self.assertRaises(webob.exc.HTTPNotFound, self.property_controller.show, request, NAMESPACE3, PROPERTY1) def test_property_delete_non_existing(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.property_controller.delete, request, NAMESPACE5, PROPERTY2) self.assertNotificationsLog([]) def test_property_delete_non_existing_namespace(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.property_controller.delete, request, NAMESPACE4, PROPERTY1) self.assertNotificationsLog([]) def test_property_delete_non_visible(self): request = unit_test_utils.get_fake_request(tenant=TENANT2) self.assertRaises(webob.exc.HTTPNotFound, self.property_controller.delete, request, NAMESPACE1, PROPERTY1) self.assertNotificationsLog([]) def test_property_delete_admin_protected(self): request = unit_test_utils.get_fake_request(is_admin=True) self.assertRaises(webob.exc.HTTPForbidden, self.property_controller.delete, request, NAMESPACE1, PROPERTY1) self.assertNotificationsLog([]) def test_property_create(self): request = unit_test_utils.get_fake_request() property = properties.PropertyType() property.name = PROPERTY2 property.type = 'string' property.title = 'title' property = self.property_controller.create(request, NAMESPACE1, property) self.assertEqual(PROPERTY2, property.name) self.assertEqual('string', property.type) self.assertEqual('title', property.title) self.assertNotificationLog("metadef_property.create", [{'name': PROPERTY2, 'namespace': 
NAMESPACE1}]) property = self.property_controller.show(request, NAMESPACE1, PROPERTY2) self.assertEqual(PROPERTY2, property.name) self.assertEqual('string', property.type) self.assertEqual('title', property.title) def test_property_create_overlimit_name(self): request = unit_test_utils.get_fake_request('/metadefs/namespaces/' 'Namespace3/' 'properties') request.body = jsonutils.dump_as_bytes({ 'name': 'a' * 81, 'type': 'string', 'title': 'fake'}) exc = self.assertRaises(webob.exc.HTTPBadRequest, self.property_deserializer.create, request) self.assertIn("Failed validating 'maxLength' in " "schema['properties']['name']", exc.explanation) def test_property_create_with_4byte_character(self): request = unit_test_utils.get_fake_request() property = properties.PropertyType() property.name = u'\U0001f693' property.type = 'string' property.title = 'title' self.assertRaises(webob.exc.HTTPBadRequest, self.property_controller.create, request, NAMESPACE1, property) def test_property_create_with_operators(self): request = unit_test_utils.get_fake_request() property = properties.PropertyType() property.name = PROPERTY2 property.type = 'string' property.title = 'title' property.operators = [''] property = self.property_controller.create(request, NAMESPACE1, property) self.assertEqual(PROPERTY2, property.name) self.assertEqual('string', property.type) self.assertEqual('title', property.title) self.assertEqual([''], property.operators) property = self.property_controller.show(request, NAMESPACE1, PROPERTY2) self.assertEqual(PROPERTY2, property.name) self.assertEqual('string', property.type) self.assertEqual('title', property.title) self.assertEqual([''], property.operators) def test_property_create_conflict(self): request = unit_test_utils.get_fake_request() property = properties.PropertyType() property.name = PROPERTY1 property.type = 'string' property.title = 'title' self.assertRaises(webob.exc.HTTPConflict, self.property_controller.create, request, NAMESPACE1, property) 
self.assertNotificationsLog([]) def test_property_create_non_visible_namespace(self): request = unit_test_utils.get_fake_request(tenant=TENANT2) property = properties.PropertyType() property.name = PROPERTY1 property.type = 'string' property.title = 'title' self.assertRaises(webob.exc.HTTPForbidden, self.property_controller.create, request, NAMESPACE1, property) self.assertNotificationsLog([]) def test_property_create_non_visible_namespace_admin(self): request = unit_test_utils.get_fake_request(tenant=TENANT2, is_admin=True) property = properties.PropertyType() property.name = PROPERTY2 property.type = 'string' property.title = 'title' property = self.property_controller.create(request, NAMESPACE1, property) self.assertEqual(PROPERTY2, property.name) self.assertEqual('string', property.type) self.assertEqual('title', property.title) self.assertNotificationLog("metadef_property.create", [{'name': PROPERTY2, 'namespace': NAMESPACE1}]) property = self.property_controller.show(request, NAMESPACE1, PROPERTY2) self.assertEqual(PROPERTY2, property.name) self.assertEqual('string', property.type) self.assertEqual('title', property.title) def test_property_create_non_existing_namespace(self): request = unit_test_utils.get_fake_request() property = properties.PropertyType() property.name = PROPERTY1 property.type = 'string' property.title = 'title' self.assertRaises(webob.exc.HTTPNotFound, self.property_controller.create, request, NAMESPACE4, property) self.assertNotificationsLog([]) def test_property_create_duplicate(self): request = unit_test_utils.get_fake_request() property = properties.PropertyType() property.name = 'new-property' property.type = 'string' property.title = 'title' new_property = self.property_controller.create(request, NAMESPACE1, property) self.assertEqual('new-property', new_property.name) self.assertRaises(webob.exc.HTTPConflict, self.property_controller.create, request, NAMESPACE1, property) def test_property_update(self): request = 
unit_test_utils.get_fake_request(tenant=TENANT3) property = self.property_controller.show(request, NAMESPACE3, PROPERTY1) property.name = PROPERTY1 property.type = 'string123' property.title = 'title123' property = self.property_controller.update(request, NAMESPACE3, PROPERTY1, property) self.assertEqual(PROPERTY1, property.name) self.assertEqual('string123', property.type) self.assertEqual('title123', property.title) self.assertNotificationLog("metadef_property.update", [ { 'name': PROPERTY1, 'namespace': NAMESPACE3, 'type': 'string123', 'title': 'title123', } ]) property = self.property_controller.show(request, NAMESPACE3, PROPERTY1) self.assertEqual(PROPERTY1, property.name) self.assertEqual('string123', property.type) self.assertEqual('title123', property.title) def test_property_update_name(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) property = self.property_controller.show(request, NAMESPACE3, PROPERTY1) property.name = PROPERTY3 property.type = 'string' property.title = 'title' property = self.property_controller.update(request, NAMESPACE3, PROPERTY1, property) self.assertEqual(PROPERTY3, property.name) self.assertEqual('string', property.type) self.assertEqual('title', property.title) self.assertNotificationLog("metadef_property.update", [ { 'name': PROPERTY3, 'name_old': PROPERTY1, 'namespace': NAMESPACE3, 'type': 'string', 'title': 'title', } ]) property = self.property_controller.show(request, NAMESPACE3, PROPERTY2) self.assertEqual(PROPERTY2, property.name) self.assertEqual('string', property.type) self.assertEqual('title', property.title) def test_property_update_conflict(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) property = self.property_controller.show(request, NAMESPACE3, PROPERTY1) property.name = PROPERTY2 property.type = 'string' property.title = 'title' self.assertRaises(webob.exc.HTTPConflict, self.property_controller.update, request, NAMESPACE3, PROPERTY1, property) self.assertNotificationsLog([]) 
def test_property_update_with_overlimit_name(self): request = unit_test_utils.get_fake_request() request.body = jsonutils.dump_as_bytes({ 'name': 'a' * 81, 'type': 'string', 'title': 'fake'}) exc = self.assertRaises(webob.exc.HTTPBadRequest, self.property_deserializer.create, request) self.assertIn("Failed validating 'maxLength' in " "schema['properties']['name']", exc.explanation) def test_property_update_with_4byte_character(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) property = self.property_controller.show(request, NAMESPACE3, PROPERTY1) property.name = u'\U0001f693' property.type = 'string' property.title = 'title' self.assertRaises(webob.exc.HTTPBadRequest, self.property_controller.update, request, NAMESPACE3, PROPERTY1, property) def test_property_update_non_existing(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) property = properties.PropertyType() property.name = PROPERTY1 property.type = 'string' property.title = 'title' self.assertRaises(webob.exc.HTTPNotFound, self.property_controller.update, request, NAMESPACE5, PROPERTY1, property) self.assertNotificationsLog([]) def test_property_update_namespace_non_existing(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) property = properties.PropertyType() property.name = PROPERTY1 property.type = 'string' property.title = 'title' self.assertRaises(webob.exc.HTTPNotFound, self.property_controller.update, request, NAMESPACE4, PROPERTY1, property) self.assertNotificationsLog([]) def test_object_index(self): request = unit_test_utils.get_fake_request() output = self.object_controller.index(request, NAMESPACE3) output = output.to_dict() self.assertEqual(2, len(output['objects'])) actual = set([object.name for object in output['objects']]) expected = set([OBJECT1, OBJECT2]) self.assertEqual(expected, actual) def test_object_index_zero_limit(self): request = unit_test_utils.get_fake_request('/metadefs/namespaces/' 'Namespace3/' 'objects?limit=0') 
self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_object_index_empty(self): request = unit_test_utils.get_fake_request() output = self.object_controller.index(request, NAMESPACE5) output = output.to_dict() self.assertEqual(0, len(output['objects'])) def test_object_index_non_existing_namespace(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.index, request, NAMESPACE4) def test_object_show(self): request = unit_test_utils.get_fake_request() output = self.object_controller.show(request, NAMESPACE3, OBJECT1) self.assertEqual(OBJECT1, output.name) def test_object_show_non_existing(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.show, request, NAMESPACE5, OBJECT1) def test_object_show_non_visible(self): request = unit_test_utils.get_fake_request(tenant=TENANT2) self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.show, request, NAMESPACE1, OBJECT1) def test_object_show_non_visible_admin(self): request = unit_test_utils.get_fake_request(tenant=TENANT2, is_admin=True) output = self.object_controller.show(request, NAMESPACE1, OBJECT1) self.assertEqual(OBJECT1, output.name) def test_object_delete(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) self.object_controller.delete(request, NAMESPACE3, OBJECT1) self.assertNotificationLog("metadef_object.delete", [{'name': OBJECT1, 'namespace': NAMESPACE3}]) self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.show, request, NAMESPACE3, OBJECT1) def test_object_delete_disabled_notification(self): self.config(disabled_notifications=["metadef_object.delete"]) request = unit_test_utils.get_fake_request(tenant=TENANT3) self.object_controller.delete(request, NAMESPACE3, OBJECT1) self.assertNotificationsLog([]) self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.show, request, NAMESPACE3, OBJECT1) def 
test_object_delete_other_owner(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPForbidden, self.object_controller.delete, request, NAMESPACE3, OBJECT1) self.assertNotificationsLog([]) def test_object_delete_other_owner_admin(self): request = unit_test_utils.get_fake_request(is_admin=True) self.object_controller.delete(request, NAMESPACE3, OBJECT1) self.assertNotificationLog("metadef_object.delete", [{'name': OBJECT1, 'namespace': NAMESPACE3}]) self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.show, request, NAMESPACE3, OBJECT1) def test_object_delete_non_existing(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.delete, request, NAMESPACE5, OBJECT1) self.assertNotificationsLog([]) def test_object_delete_non_existing_namespace(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.delete, request, NAMESPACE4, OBJECT1) self.assertNotificationsLog([]) def test_object_delete_non_visible(self): request = unit_test_utils.get_fake_request(tenant=TENANT2) self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.delete, request, NAMESPACE1, OBJECT1) self.assertNotificationsLog([]) def test_object_delete_admin_protected(self): request = unit_test_utils.get_fake_request(is_admin=True) self.assertRaises(webob.exc.HTTPForbidden, self.object_controller.delete, request, NAMESPACE1, OBJECT1) self.assertNotificationsLog([]) def test_object_create(self): request = unit_test_utils.get_fake_request() object = objects.MetadefObject() object.name = OBJECT2 object.required = [] object.properties = {} object = self.object_controller.create(request, object, NAMESPACE1) self.assertEqual(OBJECT2, object.name) self.assertEqual([], object.required) self.assertEqual({}, object.properties) self.assertNotificationLog("metadef_object.create", [{'name': OBJECT2, 'namespace': NAMESPACE1, 'properties': []}]) object = 
self.object_controller.show(request, NAMESPACE1, OBJECT2) self.assertEqual(OBJECT2, object.name) self.assertEqual([], object.required) self.assertEqual({}, object.properties) def test_object_create_invalid_properties(self): request = unit_test_utils.get_fake_request('/metadefs/namespaces/' 'Namespace3/' 'objects') body = { "name": "My Object", "description": "object1 description.", "properties": { "property1": { "type": "integer", "title": "property", "description": "property description", "test-key": "test-value", } } } request.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create, request) def test_object_create_overlimit_name(self): request = unit_test_utils.get_fake_request('/metadefs/namespaces/' 'Namespace3/' 'objects') request.body = jsonutils.dump_as_bytes({'name': 'a' * 81}) exc = self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create, request) self.assertIn("Failed validating 'maxLength' in " "schema['properties']['name']", exc.explanation) def test_object_create_duplicate(self): request = unit_test_utils.get_fake_request() object = objects.MetadefObject() object.name = 'New-Object' object.required = [] object.properties = {} new_obj = self.object_controller.create(request, object, NAMESPACE3) self.assertEqual('New-Object', new_obj.name) self.assertRaises(webob.exc.HTTPConflict, self.object_controller.create, request, object, NAMESPACE3) def test_object_create_conflict(self): request = unit_test_utils.get_fake_request() object = objects.MetadefObject() object.name = OBJECT1 object.required = [] object.properties = {} self.assertRaises(webob.exc.HTTPConflict, self.object_controller.create, request, object, NAMESPACE1) self.assertNotificationsLog([]) def test_object_create_with_4byte_character(self): request = unit_test_utils.get_fake_request() object = objects.MetadefObject() object.name = u'\U0001f693' object.required = [] object.properties = {} self.assertRaises(webob.exc.HTTPBadRequest, 
self.object_controller.create, request, object, NAMESPACE1) def test_object_create_non_existing_namespace(self): request = unit_test_utils.get_fake_request() object = objects.MetadefObject() object.name = PROPERTY1 object.required = [] object.properties = {} self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.create, request, object, NAMESPACE4) self.assertNotificationsLog([]) def test_object_create_non_visible_namespace(self): request = unit_test_utils.get_fake_request(tenant=TENANT2) object = objects.MetadefObject() object.name = OBJECT1 object.required = [] object.properties = {} self.assertRaises(webob.exc.HTTPForbidden, self.object_controller.create, request, object, NAMESPACE1) self.assertNotificationsLog([]) def test_object_create_non_visible_namespace_admin(self): request = unit_test_utils.get_fake_request(tenant=TENANT2, is_admin=True) object = objects.MetadefObject() object.name = OBJECT2 object.required = [] object.properties = {} object = self.object_controller.create(request, object, NAMESPACE1) self.assertEqual(OBJECT2, object.name) self.assertEqual([], object.required) self.assertEqual({}, object.properties) self.assertNotificationLog("metadef_object.create", [{'name': OBJECT2, 'namespace': NAMESPACE1}]) object = self.object_controller.show(request, NAMESPACE1, OBJECT2) self.assertEqual(OBJECT2, object.name) self.assertEqual([], object.required) self.assertEqual({}, object.properties) def test_object_create_missing_properties(self): request = unit_test_utils.get_fake_request() object = objects.MetadefObject() object.name = OBJECT2 object.required = [] object = self.object_controller.create(request, object, NAMESPACE1) self.assertEqual(OBJECT2, object.name) self.assertEqual([], object.required) self.assertNotificationLog("metadef_object.create", [{'name': OBJECT2, 'namespace': NAMESPACE1, 'properties': []}]) object = self.object_controller.show(request, NAMESPACE1, OBJECT2) self.assertEqual(OBJECT2, object.name) self.assertEqual([], 
object.required) self.assertEqual({}, object.properties) def test_object_update(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) object = self.object_controller.show(request, NAMESPACE3, OBJECT1) object.name = OBJECT1 object.description = 'description' object = self.object_controller.update(request, object, NAMESPACE3, OBJECT1) self.assertEqual(OBJECT1, object.name) self.assertEqual('description', object.description) self.assertNotificationLog("metadef_object.update", [ { 'name': OBJECT1, 'namespace': NAMESPACE3, 'description': 'description', } ]) property = self.object_controller.show(request, NAMESPACE3, OBJECT1) self.assertEqual(OBJECT1, property.name) self.assertEqual('description', object.description) def test_object_update_name(self): request = unit_test_utils.get_fake_request() object = self.object_controller.show(request, NAMESPACE1, OBJECT1) object.name = OBJECT2 object = self.object_controller.update(request, object, NAMESPACE1, OBJECT1) self.assertEqual(OBJECT2, object.name) self.assertNotificationLog("metadef_object.update", [ { 'name': OBJECT2, 'name_old': OBJECT1, 'namespace': NAMESPACE1, } ]) object = self.object_controller.show(request, NAMESPACE1, OBJECT2) self.assertEqual(OBJECT2, object.name) def test_object_update_with_4byte_character(self): request = unit_test_utils.get_fake_request() object = self.object_controller.show(request, NAMESPACE1, OBJECT1) object.name = u'\U0001f693' self.assertRaises(webob.exc.HTTPBadRequest, self.object_controller.update, request, object, NAMESPACE1, OBJECT1) def test_object_update_with_overlimit_name(self): request = unit_test_utils.get_fake_request() request.body = jsonutils.dump_as_bytes( {"properties": {}, "name": "a" * 81, "required": []}) exc = self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) self.assertIn("Failed validating 'maxLength' in " "schema['properties']['name']", exc.explanation) def test_object_update_conflict(self): request = 
unit_test_utils.get_fake_request(tenant=TENANT3) object = self.object_controller.show(request, NAMESPACE3, OBJECT1) object.name = OBJECT2 self.assertRaises(webob.exc.HTTPConflict, self.object_controller.update, request, object, NAMESPACE3, OBJECT1) self.assertNotificationsLog([]) def test_object_update_non_existing(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) object = objects.MetadefObject() object.name = OBJECT1 object.required = [] object.properties = {} self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.update, request, object, NAMESPACE5, OBJECT1) self.assertNotificationsLog([]) def test_object_update_namespace_non_existing(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) object = objects.MetadefObject() object.name = OBJECT1 object.required = [] object.properties = {} self.assertRaises(webob.exc.HTTPNotFound, self.object_controller.update, request, object, NAMESPACE4, OBJECT1) self.assertNotificationsLog([]) def test_resource_type_index(self): request = unit_test_utils.get_fake_request() output = self.rt_controller.index(request) self.assertEqual(3, len(output.resource_types)) actual = set([rtype.name for rtype in output.resource_types]) expected = set([RESOURCE_TYPE1, RESOURCE_TYPE2, RESOURCE_TYPE4]) self.assertEqual(expected, actual) def test_resource_type_show(self): request = unit_test_utils.get_fake_request() output = self.rt_controller.show(request, NAMESPACE3) self.assertEqual(1, len(output.resource_type_associations)) actual = set([rt.name for rt in output.resource_type_associations]) expected = set([RESOURCE_TYPE1]) self.assertEqual(expected, actual) def test_resource_type_show_empty(self): request = unit_test_utils.get_fake_request() output = self.rt_controller.show(request, NAMESPACE5) self.assertEqual(0, len(output.resource_type_associations)) def test_resource_type_show_non_visible(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, 
self.rt_controller.show, request, NAMESPACE2) def test_resource_type_show_non_visible_admin(self): request = unit_test_utils.get_fake_request(tenant=TENANT2, is_admin=True) output = self.rt_controller.show(request, NAMESPACE2) self.assertEqual(2, len(output.resource_type_associations)) actual = set([rt.name for rt in output.resource_type_associations]) expected = set([RESOURCE_TYPE1, RESOURCE_TYPE2]) self.assertEqual(expected, actual) def test_resource_type_show_non_existing_namespace(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.rt_controller.show, request, NAMESPACE4) def test_resource_type_association_delete(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) self.rt_controller.delete(request, NAMESPACE3, RESOURCE_TYPE1) self.assertNotificationLog("metadef_resource_type.delete", [{'name': RESOURCE_TYPE1, 'namespace': NAMESPACE3}]) output = self.rt_controller.show(request, NAMESPACE3) self.assertEqual(0, len(output.resource_type_associations)) def test_resource_type_association_delete_disabled_notification(self): self.config(disabled_notifications=["metadef_resource_type.delete"]) request = unit_test_utils.get_fake_request(tenant=TENANT3) self.rt_controller.delete(request, NAMESPACE3, RESOURCE_TYPE1) self.assertNotificationsLog([]) output = self.rt_controller.show(request, NAMESPACE3) self.assertEqual(0, len(output.resource_type_associations)) def test_resource_type_association_delete_other_owner(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPForbidden, self.rt_controller.delete, request, NAMESPACE3, RESOURCE_TYPE1) self.assertNotificationsLog([]) def test_resource_type_association_delete_other_owner_admin(self): request = unit_test_utils.get_fake_request(is_admin=True) self.rt_controller.delete(request, NAMESPACE3, RESOURCE_TYPE1) self.assertNotificationLog("metadef_resource_type.delete", [{'name': RESOURCE_TYPE1, 'namespace': NAMESPACE3}]) output = 
self.rt_controller.show(request, NAMESPACE3) self.assertEqual(0, len(output.resource_type_associations)) def test_resource_type_association_delete_non_existing(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.rt_controller.delete, request, NAMESPACE1, RESOURCE_TYPE2) self.assertNotificationsLog([]) def test_resource_type_association_delete_non_existing_namespace(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.rt_controller.delete, request, NAMESPACE4, RESOURCE_TYPE1) self.assertNotificationsLog([]) def test_resource_type_association_delete_non_visible(self): request = unit_test_utils.get_fake_request(tenant=TENANT3) self.assertRaises(webob.exc.HTTPNotFound, self.rt_controller.delete, request, NAMESPACE1, RESOURCE_TYPE1) self.assertNotificationsLog([]) def test_resource_type_association_delete_protected_admin(self): request = unit_test_utils.get_fake_request(is_admin=True) self.assertRaises(webob.exc.HTTPForbidden, self.rt_controller.delete, request, NAMESPACE1, RESOURCE_TYPE1) self.assertNotificationsLog([]) def test_resource_type_association_create(self): request = unit_test_utils.get_fake_request() rt = resource_types.ResourceTypeAssociation() rt.name = RESOURCE_TYPE2 rt.prefix = 'pref' rt = self.rt_controller.create(request, rt, NAMESPACE1) self.assertEqual(RESOURCE_TYPE2, rt.name) self.assertEqual('pref', rt.prefix) self.assertNotificationLog("metadef_resource_type.create", [{'name': RESOURCE_TYPE2, 'namespace': NAMESPACE1}]) output = self.rt_controller.show(request, NAMESPACE1) self.assertEqual(2, len(output.resource_type_associations)) actual = set([x.name for x in output.resource_type_associations]) expected = set([RESOURCE_TYPE1, RESOURCE_TYPE2]) self.assertEqual(expected, actual) def test_resource_type_association_create_conflict(self): request = unit_test_utils.get_fake_request() rt = resource_types.ResourceTypeAssociation() rt.name = RESOURCE_TYPE1 
        rt.prefix = 'pref'
        self.assertRaises(webob.exc.HTTPConflict, self.rt_controller.create,
                          request, rt, NAMESPACE1)
        self.assertNotificationsLog([])

    def test_resource_type_association_create_non_existing_namespace(self):
        request = unit_test_utils.get_fake_request()
        rt = resource_types.ResourceTypeAssociation()
        rt.name = RESOURCE_TYPE1
        rt.prefix = 'pref'
        self.assertRaises(webob.exc.HTTPNotFound, self.rt_controller.create,
                          request, rt, NAMESPACE4)
        self.assertNotificationsLog([])

    def test_resource_type_association_create_non_existing_resource_type(self):
        request = unit_test_utils.get_fake_request()
        rt = resource_types.ResourceTypeAssociation()
        rt.name = RESOURCE_TYPE3
        rt.prefix = 'pref'
        self.assertRaises(webob.exc.HTTPNotFound, self.rt_controller.create,
                          request, rt, NAMESPACE1)
        self.assertNotificationsLog([])

    def test_resource_type_association_create_non_visible_namespace(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        rt = resource_types.ResourceTypeAssociation()
        rt.name = RESOURCE_TYPE2
        rt.prefix = 'pref'
        self.assertRaises(webob.exc.HTTPForbidden, self.rt_controller.create,
                          request, rt, NAMESPACE1)
        self.assertNotificationsLog([])

    def test_resource_type_association_create_non_visible_namesp_admin(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2,
                                                   is_admin=True)
        rt = resource_types.ResourceTypeAssociation()
        rt.name = RESOURCE_TYPE2
        rt.prefix = 'pref'
        rt = self.rt_controller.create(request, rt, NAMESPACE1)
        self.assertEqual(RESOURCE_TYPE2, rt.name)
        self.assertEqual('pref', rt.prefix)
        self.assertNotificationLog("metadef_resource_type.create",
                                   [{'name': RESOURCE_TYPE2,
                                     'namespace': NAMESPACE1}])
        output = self.rt_controller.show(request, NAMESPACE1)
        self.assertEqual(2, len(output.resource_type_associations))
        actual = set([x.name for x in output.resource_type_associations])
        expected = set([RESOURCE_TYPE1, RESOURCE_TYPE2])
        self.assertEqual(expected, actual)

    def test_tag_index(self):
        request = unit_test_utils.get_fake_request()
        output = self.tag_controller.index(request, NAMESPACE3)
        output = output.to_dict()
        self.assertEqual(2, len(output['tags']))
        actual = set([tag.name for tag in output['tags']])
        expected = set([TAG1, TAG2])
        self.assertEqual(expected, actual)

    def test_tag_index_empty(self):
        request = unit_test_utils.get_fake_request()
        output = self.tag_controller.index(request, NAMESPACE5)
        output = output.to_dict()
        self.assertEqual(0, len(output['tags']))

    def test_tag_index_non_existing_namespace(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound, self.tag_controller.index,
                          request, NAMESPACE4)

    def test_tag_show(self):
        request = unit_test_utils.get_fake_request()
        output = self.tag_controller.show(request, NAMESPACE3, TAG1)
        self.assertEqual(TAG1, output.name)

    def test_tag_show_non_existing(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound, self.tag_controller.show,
                          request, NAMESPACE5, TAG1)

    def test_tag_show_non_visible(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        self.assertRaises(webob.exc.HTTPNotFound, self.tag_controller.show,
                          request, NAMESPACE1, TAG1)

    def test_tag_show_non_visible_admin(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2,
                                                   is_admin=True)
        output = self.tag_controller.show(request, NAMESPACE1, TAG1)
        self.assertEqual(TAG1, output.name)

    def test_tag_delete(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        self.tag_controller.delete(request, NAMESPACE3, TAG1)
        self.assertNotificationLog("metadef_tag.delete",
                                   [{'name': TAG1, 'namespace': NAMESPACE3}])
        self.assertRaises(webob.exc.HTTPNotFound, self.tag_controller.show,
                          request, NAMESPACE3, TAG1)

    def test_tag_delete_disabled_notification(self):
        self.config(disabled_notifications=["metadef_tag.delete"])
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        self.tag_controller.delete(request, NAMESPACE3, TAG1)
        self.assertNotificationsLog([])
        self.assertRaises(webob.exc.HTTPNotFound, self.tag_controller.show,
                          request, NAMESPACE3, TAG1)

    def test_tag_delete_other_owner(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.tag_controller.delete,
                          request, NAMESPACE3, TAG1)
        self.assertNotificationsLog([])

    def test_tag_delete_other_owner_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.tag_controller.delete(request, NAMESPACE3, TAG1)
        self.assertNotificationLog("metadef_tag.delete",
                                   [{'name': TAG1, 'namespace': NAMESPACE3}])
        self.assertRaises(webob.exc.HTTPNotFound, self.tag_controller.show,
                          request, NAMESPACE3, TAG1)

    def test_tag_delete_non_existing(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound, self.tag_controller.delete,
                          request, NAMESPACE5, TAG1)
        self.assertNotificationsLog([])

    def test_tag_delete_non_existing_namespace(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound, self.tag_controller.delete,
                          request, NAMESPACE4, TAG1)
        self.assertNotificationsLog([])

    def test_tag_delete_non_visible(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        self.assertRaises(webob.exc.HTTPNotFound, self.tag_controller.delete,
                          request, NAMESPACE1, TAG1)
        self.assertNotificationsLog([])

    def test_tag_delete_admin_protected(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.assertRaises(webob.exc.HTTPForbidden, self.tag_controller.delete,
                          request, NAMESPACE1, TAG1)
        self.assertNotificationsLog([])

    def test_tag_create(self):
        request = unit_test_utils.get_fake_request()
        tag = self.tag_controller.create(request, NAMESPACE1, TAG2)
        self.assertEqual(TAG2, tag.name)
        self.assertNotificationLog("metadef_tag.create",
                                   [{'name': TAG2, 'namespace': NAMESPACE1}])
        tag = self.tag_controller.show(request, NAMESPACE1, TAG2)
        self.assertEqual(TAG2, tag.name)

    def test_tag_create_overlimit_name(self):
        request = unit_test_utils.get_fake_request()
        exc = self.assertRaises(webob.exc.HTTPBadRequest,
                                self.tag_controller.create,
                                request, NAMESPACE1, 'a' * 81)
        self.assertIn("Failed validating 'maxLength' in "
                      "schema['properties']['name']", exc.explanation)

    def test_tag_create_with_4byte_character(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.tag_controller.create,
                          request, NAMESPACE1, u'\U0001f693')

    def test_tag_create_tags(self):
        request = unit_test_utils.get_fake_request()
        metadef_tags = tags.MetadefTags()
        metadef_tags.tags = _db_tags_fixture()
        output = self.tag_controller.create_tags(
            request, metadef_tags, NAMESPACE1)
        output = output.to_dict()
        self.assertEqual(3, len(output['tags']))
        actual = set([tag.name for tag in output['tags']])
        expected = set([TAG1, TAG2, TAG3])
        self.assertEqual(expected, actual)
        self.assertNotificationLog(
            "metadef_tag.create",
            [
                {'name': TAG1, 'namespace': NAMESPACE1},
                {'name': TAG2, 'namespace': NAMESPACE1},
                {'name': TAG3, 'namespace': NAMESPACE1},
            ]
        )

    def test_tag_create_duplicate_tags(self):
        request = unit_test_utils.get_fake_request()
        metadef_tags = tags.MetadefTags()
        metadef_tags.tags = _db_tags_fixture([TAG4, TAG5, TAG4])
        self.assertRaises(
            webob.exc.HTTPConflict,
            self.tag_controller.create_tags,
            request, metadef_tags, NAMESPACE1)
        self.assertNotificationsLog([])

    def test_tag_create_duplicate_with_pre_existing_tags(self):
        request = unit_test_utils.get_fake_request()
        metadef_tags = tags.MetadefTags()
        metadef_tags.tags = _db_tags_fixture([TAG1, TAG2, TAG3])
        output = self.tag_controller.create_tags(
            request, metadef_tags, NAMESPACE1)
        output = output.to_dict()
        self.assertEqual(3, len(output['tags']))
        actual = set([tag.name for tag in output['tags']])
        expected = set([TAG1, TAG2, TAG3])
        self.assertEqual(expected, actual)
        self.assertNotificationLog(
            "metadef_tag.create",
            [
                {'name': TAG1, 'namespace': NAMESPACE1},
                {'name': TAG2, 'namespace': NAMESPACE1},
                {'name': TAG3, 'namespace': NAMESPACE1},
            ]
        )
        metadef_tags = tags.MetadefTags()
        metadef_tags.tags = _db_tags_fixture([TAG4, TAG5, TAG4])
        self.assertRaises(
            webob.exc.HTTPConflict,
            self.tag_controller.create_tags,
            request, metadef_tags, NAMESPACE1)
        self.assertNotificationsLog([])
        output = self.tag_controller.index(request, NAMESPACE1)
        output = output.to_dict()
        self.assertEqual(3, len(output['tags']))
        actual = set([tag.name for tag in output['tags']])
        expected = set([TAG1, TAG2, TAG3])
        self.assertEqual(expected, actual)

    def test_tag_create_conflict(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPConflict,
                          self.tag_controller.create,
                          request, NAMESPACE1, TAG1)
        self.assertNotificationsLog([])

    def test_tag_create_non_existing_namespace(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.tag_controller.create,
                          request, NAMESPACE4, TAG1)
        self.assertNotificationsLog([])

    def test_tag_create_non_visible_namespace(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.tag_controller.create,
                          request, NAMESPACE1, TAG1)
        self.assertNotificationsLog([])

    def test_tag_create_non_visible_namespace_admin(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT2,
                                                   is_admin=True)
        tag = self.tag_controller.create(request, NAMESPACE1, TAG2)
        self.assertEqual(TAG2, tag.name)
        self.assertNotificationLog("metadef_tag.create",
                                   [{'name': TAG2, 'namespace': NAMESPACE1}])
        tag = self.tag_controller.show(request, NAMESPACE1, TAG2)
        self.assertEqual(TAG2, tag.name)

    def test_tag_update(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        tag = self.tag_controller.show(request, NAMESPACE3, TAG1)
        tag.name = TAG3
        tag = self.tag_controller.update(request, tag, NAMESPACE3, TAG1)
        self.assertEqual(TAG3, tag.name)
        self.assertNotificationLog("metadef_tag.update", [
            {'name': TAG3, 'namespace': NAMESPACE3}
        ])
        property = self.tag_controller.show(request, NAMESPACE3, TAG3)
        self.assertEqual(TAG3, property.name)

    def test_tag_update_name(self):
        request = unit_test_utils.get_fake_request()
        tag = self.tag_controller.show(request, NAMESPACE1, TAG1)
        tag.name = TAG2
        tag = self.tag_controller.update(request, tag, NAMESPACE1, TAG1)
        self.assertEqual(TAG2, tag.name)
        self.assertNotificationLog("metadef_tag.update", [
            {'name': TAG2, 'name_old': TAG1, 'namespace': NAMESPACE1}
        ])
        tag = self.tag_controller.show(request, NAMESPACE1, TAG2)
        self.assertEqual(TAG2, tag.name)

    def test_tag_update_with_4byte_character(self):
        request = unit_test_utils.get_fake_request()
        tag = self.tag_controller.show(request, NAMESPACE1, TAG1)
        tag.name = u'\U0001f693'
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.tag_controller.update,
                          request, tag, NAMESPACE1, TAG1)

    def test_tag_update_with_name_overlimit(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes(
            {"properties": {}, "name": "a" * 81, "required": []})
        exc = self.assertRaises(webob.exc.HTTPBadRequest,
                                self.deserializer.update, request)
        self.assertIn("Failed validating 'maxLength' in "
                      "schema['properties']['name']", exc.explanation)

    def test_tag_update_conflict(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        tag = self.tag_controller.show(request, NAMESPACE3, TAG1)
        tag.name = TAG2
        self.assertRaises(webob.exc.HTTPConflict,
                          self.tag_controller.update,
                          request, tag, NAMESPACE3, TAG1)
        self.assertNotificationsLog([])

    def test_tag_update_non_existing(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        tag = tags.MetadefTag()
        tag.name = TAG1
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.tag_controller.update,
                          request, tag, NAMESPACE5, TAG1)
        self.assertNotificationsLog([])

    def test_tag_update_namespace_non_existing(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        tag = tags.MetadefTag()
        tag.name = TAG1
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.tag_controller.update,
                          request, tag, NAMESPACE4, TAG1)
        self.assertNotificationsLog([])


class TestMetadefNamespaceResponseSerializers(base.IsolatedUnitTest):

    def setUp(self):
        super(TestMetadefNamespaceResponseSerializers, self).setUp()
        self.serializer = namespaces.ResponseSerializer(schema={})
        self.response = mock.Mock()
        self.result = mock.Mock()

    def test_delete_tags(self):
        self.serializer.delete_tags(self.response, self.result)
        self.assertEqual(204, self.response.status_int)

glance-16.0.0/glance/tests/unit/v2/test_schemas_resource.py

# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glance.api.v2.schemas
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils


class TestSchemasController(test_utils.BaseTestCase):

    def setUp(self):
        super(TestSchemasController, self).setUp()
        self.controller = glance.api.v2.schemas.Controller()

    def test_image(self):
        req = unit_test_utils.get_fake_request()
        output = self.controller.image(req)
        self.assertEqual('image', output['name'])
        expected = set(['status', 'name', 'tags', 'checksum', 'created_at',
                        'disk_format', 'updated_at', 'visibility', 'self',
                        'file', 'container_format', 'schema', 'id', 'size',
                        'direct_url', 'min_ram', 'min_disk', 'protected',
                        'locations', 'owner', 'virtual_size'])
        self.assertEqual(expected, set(output['properties'].keys()))

    def test_images(self):
        req = unit_test_utils.get_fake_request()
        output = self.controller.images(req)
        self.assertEqual('images', output['name'])
        expected = set(['images', 'schema', 'first', 'next'])
        self.assertEqual(expected, set(output['properties'].keys()))
        expected = set(['{schema}', '{first}', '{next}'])
        actual = set([link['href'] for link in output['links']])
        self.assertEqual(expected, actual)

    def test_member(self):
        req = unit_test_utils.get_fake_request()
        output = self.controller.member(req)
        self.assertEqual('member', output['name'])
        expected = set(['status', 'created_at', 'updated_at', 'image_id',
                        'member_id', 'schema'])
        self.assertEqual(expected, set(output['properties'].keys()))

    def test_members(self):
        req = unit_test_utils.get_fake_request()
        output = self.controller.members(req)
        self.assertEqual('members', output['name'])
        expected = set(['schema', 'members'])
        self.assertEqual(expected, set(output['properties'].keys()))

glance-16.0.0/glance/tests/unit/v2/test_image_data_resource.py

# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import uuid

from cursive import exception as cursive_exception
import glance_store
from glance_store._drivers import filesystem
import mock
import six
from six.moves import http_client as http
import webob

import glance.api.policy
import glance.api.v2.image_data
from glance.common import exception
from glance.common import wsgi
from glance.tests.unit import base
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils


class Raise(object):
    def __init__(self, exc):
        self.exc = exc

    def __call__(self, *args, **kwargs):
        raise self.exc


class FakeImage(object):
    def __init__(self, image_id=None, data=None, checksum=None, size=0,
                 virtual_size=0, locations=None, container_format='bear',
                 disk_format='rawr', status=None):
        self.image_id = image_id
        self.data = data
        self.checksum = checksum
        self.size = size
        self.virtual_size = virtual_size
        self.locations = locations
        self.container_format = container_format
        self.disk_format = disk_format
        self._status = status

    @property
    def status(self):
        return self._status

    @status.setter
    def status(self, value):
        if isinstance(self._status, BaseException):
            raise self._status
        else:
            self._status = value

    def get_data(self, offset=0, chunk_size=None):
        if chunk_size:
            return self.data[offset:offset + chunk_size]
        return self.data[offset:]

    def set_data(self, data, size=None):
        self.data = ''.join(data)
        self.size = size
        self.status = 'modified-by-fake'


class FakeImageRepo(object):
    def __init__(self, result=None):
        self.result = result

    def get(self, image_id):
        if isinstance(self.result, BaseException):
            raise self.result
        else:
            return self.result

    def save(self, image, from_state=None):
        self.saved_image = image


class FakeGateway(object):
    def __init__(self, db=None, store=None, notifier=None, policy=None,
                 repo=None):
        self.db = db
        self.store = store
        self.notifier = notifier
        self.policy = policy
        self.repo = repo

    def get_repo(self, context):
        return self.repo


class TestImagesController(base.StoreClearingUnitTest):
    def setUp(self):
        super(TestImagesController, self).setUp()

        self.config(debug=True)
        self.image_repo = FakeImageRepo()
        db = unit_test_utils.FakeDB()
        policy = unit_test_utils.FakePolicyEnforcer()
        notifier = unit_test_utils.FakeNotifier()
        store = unit_test_utils.FakeStoreAPI()
        self.controller = glance.api.v2.image_data.ImageDataController()
        self.controller.gateway = FakeGateway(db, store, notifier, policy,
                                              self.image_repo)

    def test_download(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd',
                          locations=[{'url': 'http://example.com/image',
                                      'metadata': {}, 'status': 'active'}])
        self.image_repo.result = image
        image = self.controller.download(request, unit_test_utils.UUID1)
        self.assertEqual('abcd', image.image_id)

    def test_download_deactivated(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd',
                          status='deactivated',
                          locations=[{'url': 'http://example.com/image',
                                      'metadata': {}, 'status': 'active'}])
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.download,
                          request, str(uuid.uuid4()))

    def test_download_no_location(self):
        # NOTE(mclaren): NoContent will be raised by the ResponseSerializer
        # That's tested below.
        request = unit_test_utils.get_fake_request()
        self.image_repo.result = FakeImage('abcd')
        image = self.controller.download(request, unit_test_utils.UUID2)
        self.assertEqual('abcd', image.image_id)

    def test_download_non_existent_image(self):
        request = unit_test_utils.get_fake_request()
        self.image_repo.result = exception.NotFound()
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.download,
                          request, str(uuid.uuid4()))

    def test_download_forbidden(self):
        request = unit_test_utils.get_fake_request()
        self.image_repo.result = exception.Forbidden()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.download,
                          request, str(uuid.uuid4()))

    def test_download_ok_when_get_image_location_forbidden(self):
        class ImageLocations(object):
            def __len__(self):
                raise exception.Forbidden()

        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd')
        self.image_repo.result = image
        image.locations = ImageLocations()
        image = self.controller.download(request, unit_test_utils.UUID1)
        self.assertEqual('abcd', image.image_id)

    def test_upload(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd')
        self.image_repo.result = image
        self.controller.upload(request, unit_test_utils.UUID2, 'YYYY', 4)
        self.assertEqual('YYYY', image.data)
        self.assertEqual(4, image.size)

    def test_upload_status(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd')
        self.image_repo.result = image
        insurance = {'called': False}

        def read_data():
            insurance['called'] = True
            self.assertEqual('saving', self.image_repo.saved_image.status)
            yield 'YYYY'

        self.controller.upload(request, unit_test_utils.UUID2,
                               read_data(), None)
        self.assertTrue(insurance['called'])
        self.assertEqual('modified-by-fake',
                         self.image_repo.saved_image.status)

    def test_upload_no_size(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd')
        self.image_repo.result = image
        self.controller.upload(request, unit_test_utils.UUID2, 'YYYY', None)
        self.assertEqual('YYYY', image.data)
        self.assertIsNone(image.size)

    @mock.patch.object(glance.api.policy.Enforcer, 'enforce')
    def test_upload_image_forbidden(self, mock_enforce):
        request = unit_test_utils.get_fake_request()
        mock_enforce.side_effect = exception.Forbidden
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.upload,
                          request, unit_test_utils.UUID2, 'YYYY', 4)
        mock_enforce.assert_called_once_with(request.context, "upload_image",
                                             {})

    def test_upload_invalid(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd')
        image.status = ValueError()
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPBadRequest, self.controller.upload,
                          request, unit_test_utils.UUID1, 'YYYY', 4)

    def test_upload_with_expired_token(self):
        def side_effect(image, from_state=None):
            if from_state == 'saving':
                raise exception.NotAuthenticated()

        mocked_save = mock.Mock(side_effect=side_effect)
        mocked_delete = mock.Mock()
        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd')
        image.delete = mocked_delete
        self.image_repo.result = image
        self.image_repo.save = mocked_save
        self.assertRaises(webob.exc.HTTPUnauthorized, self.controller.upload,
                          request, unit_test_utils.UUID1, 'YYYY', 4)
        self.assertEqual(3, mocked_save.call_count)
        mocked_delete.assert_called_once_with()

    def test_upload_non_existent_image_during_save_initiates_deletion(self):
        def fake_save_not_found(self, from_state=None):
            raise exception.ImageNotFound()

        def fake_save_conflict(self, from_state=None):
            raise exception.Conflict()

        for fun in [fake_save_not_found, fake_save_conflict]:
            request = unit_test_utils.get_fake_request()
            image = FakeImage('abcd', locations=['http://example.com/image'])
            self.image_repo.result = image
            self.image_repo.save = fun
            image.delete = mock.Mock()
            self.assertRaises(webob.exc.HTTPGone, self.controller.upload,
                              request, str(uuid.uuid4()), 'ABC', 3)
            self.assertTrue(image.delete.called)

    def test_upload_non_existent_image_raises_image_not_found_exception(self):
        def fake_save(self, from_state=None):
            raise exception.ImageNotFound()

        def fake_delete():
            raise exception.ImageNotFound()

        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd', locations=['http://example.com/image'])
        self.image_repo.result = image
        self.image_repo.save = fake_save
        image.delete = fake_delete
        self.assertRaises(webob.exc.HTTPGone, self.controller.upload,
                          request, str(uuid.uuid4()), 'ABC', 3)

    def test_upload_non_existent_image_raises_store_not_found_exception(self):
        def fake_save(self, from_state=None):
            raise glance_store.NotFound()

        def fake_delete():
            raise exception.ImageNotFound()

        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd', locations=['http://example.com/image'])
        self.image_repo.result = image
        self.image_repo.save = fake_save
        image.delete = fake_delete
        self.assertRaises(webob.exc.HTTPGone, self.controller.upload,
                          request, str(uuid.uuid4()), 'ABC', 3)

    def test_upload_non_existent_image_before_save(self):
        request = unit_test_utils.get_fake_request()
        self.image_repo.result = exception.NotFound()
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.upload,
                          request, str(uuid.uuid4()), 'ABC', 3)

    def test_upload_data_exists(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage()
        exc = exception.InvalidImageStatusTransition(cur_status='active',
                                                     new_status='queued')
        image.set_data = Raise(exc)
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPConflict, self.controller.upload,
                          request, unit_test_utils.UUID1, 'YYYY', 4)

    def test_upload_storage_full(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage()
        image.set_data = Raise(glance_store.StorageFull)
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.upload,
                          request, unit_test_utils.UUID2, 'YYYYYYY', 7)

    def test_upload_signature_verification_fails(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage()
        image.set_data = Raise(cursive_exception.SignatureVerificationError)
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPBadRequest, self.controller.upload,
                          request, unit_test_utils.UUID1, 'YYYY', 4)
        self.assertEqual('killed', self.image_repo.saved_image.status)

    def test_image_size_limit_exceeded(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage()
        image.set_data = Raise(exception.ImageSizeLimitExceeded)
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.upload,
                          request, unit_test_utils.UUID1, 'YYYYYYY', 7)

    def test_upload_storage_quota_full(self):
        request = unit_test_utils.get_fake_request()
        self.image_repo.result = exception.StorageQuotaFull("message")
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.upload,
                          request, unit_test_utils.UUID1, 'YYYYYYY', 7)

    def test_upload_storage_forbidden(self):
        request = unit_test_utils.get_fake_request(user=unit_test_utils.USER2)
        image = FakeImage()
        image.set_data = Raise(exception.Forbidden)
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.upload,
                          request, unit_test_utils.UUID2, 'YY', 2)

    def test_upload_storage_internal_error(self):
        request = unit_test_utils.get_fake_request()
        self.image_repo.result = exception.ServerError()
        self.assertRaises(exception.ServerError, self.controller.upload,
                          request, unit_test_utils.UUID1, 'ABC', 3)

    def test_upload_storage_write_denied(self):
        request = unit_test_utils.get_fake_request(user=unit_test_utils.USER3)
        image = FakeImage()
        image.set_data = Raise(glance_store.StorageWriteDenied)
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPServiceUnavailable,
                          self.controller.upload,
                          request, unit_test_utils.UUID2, 'YY', 2)

    def test_upload_storage_store_disabled(self):
        """Test that uploading an image file raises StoreDisabled exception"""
        request = unit_test_utils.get_fake_request(user=unit_test_utils.USER3)
        image = FakeImage()
        image.set_data = Raise(glance_store.StoreAddDisabled)
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPGone, self.controller.upload,
                          request, unit_test_utils.UUID2, 'YY', 2)

    @mock.patch("glance.common.trust_auth.TokenRefresher")
    def test_upload_with_trusts(self, mock_refresher):
        """Test that uploading with registry correctly uses trusts"""
        # initialize trust environment
        self.config(data_api='glance.db.registry.api')
        refresher = mock.MagicMock()
        mock_refresher.return_value = refresher
        refresher.refresh_token.return_value = "fake_token"
        # request an image upload
        request = unit_test_utils.get_fake_request()
        request.environ['keystone.token_auth'] = mock.MagicMock()
        request.environ['keystone.token_info'] = {
            'token': {
                'roles': [{'name': 'FakeRole', 'id': 'FakeID'}]
            }
        }
        image = FakeImage('abcd')
        self.image_repo.result = image
        mock_fake_save = mock.Mock()
        mock_fake_save.side_effect = [None, exception.NotAuthenticated, None]
        temp_save = FakeImageRepo.save
        # mocking save to raise NotAuthenticated on the second call
        FakeImageRepo.save = mock_fake_save
        self.controller.upload(request, unit_test_utils.UUID2, 'YYYY', 4)
        # check image data
        self.assertEqual('YYYY', image.data)
        self.assertEqual(4, image.size)
        FakeImageRepo.save = temp_save
        # check that token has been correctly acquired and deleted
        mock_refresher.assert_called_once_with(
            request.environ['keystone.token_auth'],
            request.context.tenant, ['FakeRole'])
        refresher.refresh_token.assert_called_once_with()
        refresher.release_resources.assert_called_once_with()
        self.assertEqual("fake_token", request.context.auth_token)

    @mock.patch("glance.common.trust_auth.TokenRefresher")
    def test_upload_with_trusts_fails(self, mock_refresher):
        """Test upload with registry if trust was not successfully created"""
        # initialize trust environment
        self.config(data_api='glance.db.registry.api')
        mock_refresher().side_effect = Exception()
        # request an image upload
        request = unit_test_utils.get_fake_request()
        image = FakeImage('abcd')
        self.image_repo.result = image
        self.controller.upload(request, unit_test_utils.UUID2, 'YYYY', 4)
        # check image data
        self.assertEqual('YYYY', image.data)
        self.assertEqual(4, image.size)
        # check that the token has not been updated
        self.assertEqual(0, mock_refresher().refresh_token.call_count)

    def _test_upload_download_prepare_notification(self):
        request = unit_test_utils.get_fake_request()
        self.controller.upload(request, unit_test_utils.UUID2, 'YYYY', 4)
        output = self.controller.download(request, unit_test_utils.UUID2)
        output_log = self.notifier.get_logs()
        prepare_payload = output['meta'].copy()
        prepare_payload['checksum'] = None
        prepare_payload['size'] = None
        prepare_payload['virtual_size'] = None
        prepare_payload['location'] = None
        prepare_payload['status'] = 'queued'
        del prepare_payload['updated_at']
        prepare_log = {
            'notification_type': "INFO",
            'event_type': "image.prepare",
            'payload': prepare_payload,
        }
        self.assertEqual(3, len(output_log))
        prepare_updated_at = output_log[0]['payload']['updated_at']
        del output_log[0]['payload']['updated_at']
        self.assertLessEqual(prepare_updated_at, output['meta']['updated_at'])
        self.assertEqual(prepare_log, output_log[0])

    def _test_upload_download_upload_notification(self):
        request = unit_test_utils.get_fake_request()
        self.controller.upload(request, unit_test_utils.UUID2, 'YYYY', 4)
        output = self.controller.download(request, unit_test_utils.UUID2)
        output_log = self.notifier.get_logs()
        upload_payload = output['meta'].copy()
        upload_log = {
            'notification_type': "INFO",
            'event_type': "image.upload",
            'payload': upload_payload,
        }
        self.assertEqual(3, len(output_log))
        self.assertEqual(upload_log, output_log[1])

    def _test_upload_download_activate_notification(self):
        request = unit_test_utils.get_fake_request()
        self.controller.upload(request, unit_test_utils.UUID2, 'YYYY', 4)
        output = self.controller.download(request, unit_test_utils.UUID2)
        output_log = self.notifier.get_logs()
        activate_payload = output['meta'].copy()
        activate_log = {
            'notification_type': "INFO",
            'event_type': "image.activate",
            'payload': activate_payload,
        }
        self.assertEqual(3, len(output_log))
        self.assertEqual(activate_log, output_log[2])

    def test_restore_image_when_upload_failed(self):
        request = unit_test_utils.get_fake_request()
        image = FakeImage('fake')
        image.set_data = Raise(glance_store.StorageWriteDenied)
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPServiceUnavailable,
                          self.controller.upload,
                          request, unit_test_utils.UUID2, 'ZZZ', 3)
        self.assertEqual('queued', self.image_repo.saved_image.status)

    @mock.patch.object(filesystem.Store, 'add')
    def test_restore_image_when_staging_failed(self, mock_store_add):
        mock_store_add.side_effect = glance_store.StorageWriteDenied()
        request = unit_test_utils.get_fake_request()
        image_id = str(uuid.uuid4())
        image = FakeImage('fake')
        self.image_repo.result = image
        self.assertRaises(webob.exc.HTTPServiceUnavailable,
                          self.controller.stage,
                          request, image_id, 'YYYYYYY', 7)
        self.assertEqual('queued', self.image_repo.saved_image.status)

    def test_stage(self):
        image_id = str(uuid.uuid4())
        request = unit_test_utils.get_fake_request()
        image = FakeImage(image_id=image_id)
        self.image_repo.result = image
        with mock.patch.object(filesystem.Store, 'add'):
            self.controller.stage(request, image_id, 'YYYY', 4)
        self.assertEqual('uploading', image.status)
        self.assertEqual(0, image.size)

    def test_image_already_on_staging(self):
        image_id = str(uuid.uuid4())
        request = unit_test_utils.get_fake_request()
        image = FakeImage(image_id=image_id)
        self.image_repo.result = image
        with mock.patch.object(filesystem.Store, 'add') as mock_store_add:
            self.controller.stage(request, image_id, 'YYYY', 4)
            self.assertEqual('uploading', image.status)
            mock_store_add.side_effect = glance_store.Duplicate()
            self.assertEqual(0, image.size)
            self.assertRaises(webob.exc.HTTPConflict,
                              self.controller.stage, request, image_id,
                              'YYYY', 4)

    @mock.patch.object(glance_store.driver.Store, 'configure')
    def test_image_stage_raises_bad_store_uri(self, mock_store_configure):
        mock_store_configure.side_effect = AttributeError()
        image_id = str(uuid.uuid4())
        request = unit_test_utils.get_fake_request()
        self.assertRaises(exception.BadStoreUri, self.controller.stage,
                          request, image_id, 'YYYY', 4)

    @mock.patch.object(filesystem.Store, 'add')
    def test_image_stage_raises_storage_full(self, mock_store_add):
        mock_store_add.side_effect = glance_store.StorageFull()
        image_id = str(uuid.uuid4())
        request = unit_test_utils.get_fake_request()
        image = FakeImage(image_id=image_id)
        self.image_repo.result = image
        with mock.patch.object(self.controller, "_unstage"):
            self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                              self.controller.stage, request, image_id,
                              'YYYYYYY', 7)

    @mock.patch.object(filesystem.Store, 'add')
    def test_image_stage_raises_storage_quota_full(self, mock_store_add):
        mock_store_add.side_effect = exception.StorageQuotaFull("message")
        image_id = str(uuid.uuid4())
        request = unit_test_utils.get_fake_request()
        image = FakeImage(image_id=image_id)
        self.image_repo.result = image
        with mock.patch.object(self.controller, "_unstage"):
            self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                              self.controller.stage, request, image_id,
                              'YYYYYYY', 7)

    @mock.patch.object(filesystem.Store, 'add')
    def test_image_stage_raises_storage_write_denied(self, mock_store_add):
        mock_store_add.side_effect = glance_store.StorageWriteDenied()
        image_id = str(uuid.uuid4())
        request = unit_test_utils.get_fake_request()
        image = FakeImage(image_id=image_id)
        self.image_repo.result = image
        with mock.patch.object(self.controller, "_unstage"):
            self.assertRaises(webob.exc.HTTPServiceUnavailable,
                              self.controller.stage, request, image_id,
                              'YYYYYYY', 7)

    def test_image_stage_raises_internal_error(self):
        image_id = str(uuid.uuid4())
        request = unit_test_utils.get_fake_request()
        self.image_repo.result = exception.ServerError()
        self.assertRaises(exception.ServerError, self.controller.stage,
                          request, image_id, 'YYYYYYY', 7)

    def test_image_stage_non_existent_image(self):
        request = unit_test_utils.get_fake_request()
        self.image_repo.result = exception.NotFound()
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.stage,
                          request, str(uuid.uuid4()), 'ABC', 3)

    @mock.patch.object(filesystem.Store, 'add')
    def test_image_stage_raises_image_size_exceeded(self, mock_store_add):
        mock_store_add.side_effect = exception.ImageSizeLimitExceeded()
        image_id = str(uuid.uuid4())
        request = unit_test_utils.get_fake_request()
        image = FakeImage(image_id=image_id)
        self.image_repo.result = image
        with mock.patch.object(self.controller, "_unstage"):
            self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                              self.controller.stage, request, image_id,
                              'YYYYYYY', 7)

    @mock.patch.object(filesystem.Store, 'add')
    def test_image_stage_invalid_image_transition(self, mock_store_add):
        image_id = str(uuid.uuid4())
        request = unit_test_utils.get_fake_request()
        image = FakeImage(image_id=image_id)
        self.image_repo.result = image
        self.controller.stage(request, image_id, 'YYYY', 4)
        self.assertEqual('uploading', image.status)
        self.assertEqual(0, image.size)
        # try staging again
        mock_store_add.side_effect = exception.InvalidImageStatusTransition(
            cur_status='uploading', new_status='uploading')
        self.assertRaises(webob.exc.HTTPConflict, self.controller.stage,
                          request, image_id, 'YYYY', 4)


class TestImageDataDeserializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImageDataDeserializer, self).setUp()
        self.deserializer = glance.api.v2.image_data.RequestDeserializer()

    def test_upload(self):
        request = unit_test_utils.get_fake_request()
        request.headers['Content-Type'] = 'application/octet-stream'
        request.body = b'YYY'
        request.headers['Content-Length'] = 3
        output = self.deserializer.upload(request)
        data = output.pop('data')
        self.assertEqual(b'YYY', data.read())
        expected = {'size': 3}
        self.assertEqual(expected, output)

    def test_upload_chunked(self):
        request = unit_test_utils.get_fake_request()
        request.headers['Content-Type'] = 'application/octet-stream'
        # If we use body_file, webob assumes we want to do a chunked upload,
        # ignoring the Content-Length header
        request.body_file = six.StringIO('YYY')
        output = self.deserializer.upload(request)
        data = output.pop('data')
        self.assertEqual('YYY', data.read())
        expected = {'size': None}
        self.assertEqual(expected, output)

    def test_upload_chunked_with_content_length(self):
        request = unit_test_utils.get_fake_request()
        request.headers['Content-Type'] = 'application/octet-stream'
        request.body_file = six.BytesIO(b'YYY')
        # The deserializer shouldn't care if the Content-Length is
        # set when the user is attempting to send chunked data.
        request.headers['Content-Length'] = 3
        output = self.deserializer.upload(request)
        data = output.pop('data')
        self.assertEqual(b'YYY', data.read())
        expected = {'size': 3}
        self.assertEqual(expected, output)

    def test_upload_with_incorrect_content_length(self):
        request = unit_test_utils.get_fake_request()
        request.headers['Content-Type'] = 'application/octet-stream'
        # The deserializer shouldn't care if the Content-Length and
        # actual request body length differ. That job is left up
        # to the controller
        request.body = b'YYY'
        request.headers['Content-Length'] = 4
        output = self.deserializer.upload(request)
        data = output.pop('data')
        self.assertEqual(b'YYY', data.read())
        expected = {'size': 4}
        self.assertEqual(expected, output)

    def test_upload_wrong_content_type(self):
        request = unit_test_utils.get_fake_request()
        request.headers['Content-Type'] = 'application/json'
        request.body = b'YYYYY'
        self.assertRaises(webob.exc.HTTPUnsupportedMediaType,
                          self.deserializer.upload, request)

        request = unit_test_utils.get_fake_request()
        request.headers['Content-Type'] = 'application/octet-st'
        request.body = b'YYYYY'
        self.assertRaises(webob.exc.HTTPUnsupportedMediaType,
                          self.deserializer.upload, request)

    def test_stage(self):
        self.config(enable_image_import=True)
        req = unit_test_utils.get_fake_request()
        req.headers['Content-Type'] = 'application/octet-stream'
        req.headers['Content-Length'] = 4
        req.body_file = six.BytesIO(b'YYYY')
        output = self.deserializer.stage(req)
        data = output.pop('data')
        self.assertEqual(b'YYYY', data.read())

    def test_stage_if_image_import_is_disabled(self):
        self.config(enable_image_import=False)
        req = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.deserializer.stage, req)

    def test_stage_raises_invalid_content_type(self):
        # TODO(abhishekk): change this when import methods are
        # listed in the config file
        self.config(enable_image_import=True)
        req = unit_test_utils.get_fake_request()
        req.headers['Content-Type'] = 'application/json'
        self.assertRaises(webob.exc.HTTPUnsupportedMediaType,
                          self.deserializer.stage, req)


class TestImageDataSerializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImageDataSerializer, self).setUp()
        self.serializer = glance.api.v2.image_data.ResponseSerializer()

    def test_download(self):
        request = wsgi.Request.blank('/')
        request.environ = {}
        response = webob.Response()
        response.request = request
        image = FakeImage(size=3, data=[b'Z', b'Z', b'Z'])
        self.serializer.download(response, image)
        self.assertEqual(b'ZZZ', response.body)
        self.assertEqual('3', response.headers['Content-Length'])
        self.assertNotIn('Content-MD5', response.headers)
        self.assertEqual('application/octet-stream',
                         response.headers['Content-Type'])

    def test_range_requests_for_image_downloads(self):
        """
        Test partial download 'Range' requests for images
        (random image access)
        """
        def download_successful_Range(d_range):
            request = wsgi.Request.blank('/')
            request.environ = {}
            request.headers['Range'] = d_range
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=[b'X', b'Y', b'Z'])
            self.serializer.download(response, image)
            self.assertEqual(206, response.status_code)
            self.assertEqual('2', response.headers['Content-Length'])
            self.assertEqual('bytes 1-2/3',
                             response.headers['Content-Range'])
            self.assertEqual(b'YZ', response.body)

        download_successful_Range('bytes=1-2')
        download_successful_Range('bytes=1-')
        download_successful_Range('bytes=1-3')
        download_successful_Range('bytes=-2')
        download_successful_Range('bytes=1-100')

        def full_image_download_w_range(d_range):
            request = wsgi.Request.blank('/')
            request.environ = {}
            request.headers['Range'] = d_range
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=[b'X', b'Y', b'Z'])
            self.serializer.download(response, image)
            self.assertEqual(206, response.status_code)
            self.assertEqual('3', response.headers['Content-Length'])
            self.assertEqual('bytes 0-2/3',
                             response.headers['Content-Range'])
            self.assertEqual(b'XYZ', response.body)

        full_image_download_w_range('bytes=0-')
        full_image_download_w_range('bytes=0-2')
        full_image_download_w_range('bytes=0-3')
        full_image_download_w_range('bytes=-3')
        full_image_download_w_range('bytes=-4')
        full_image_download_w_range('bytes=0-100')
        full_image_download_w_range('bytes=-100')

        def download_failures_Range(d_range):
            request = wsgi.Request.blank('/')
            request.environ = {}
            request.headers['Range'] = d_range
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=[b'Z', b'Z', b'Z'])
            self.assertRaises(webob.exc.HTTPRequestRangeNotSatisfiable,
                              self.serializer.download,
                              response, image)
            return

        download_failures_Range('bytes=4-1')
        download_failures_Range('bytes=4-')
        download_failures_Range('bytes=3-')
        download_failures_Range('bytes=1')
        download_failures_Range('bytes=100')
        download_failures_Range('bytes=100-')
        download_failures_Range('bytes=')

    def test_multi_range_requests_raises_bad_request_error(self):
        request = wsgi.Request.blank('/')
        request.environ = {}
        request.headers['Range'] = 'bytes=0-0,-1'
        response = webob.Response()
        response.request = request
        image = FakeImage(size=3, data=[b'Z', b'Z', b'Z'])
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.serializer.download,
                          response, image)

    def test_download_failure_with_valid_range(self):
        with mock.patch.object(glance.api.policy.ImageProxy,
                               'get_data') as mock_get_data:
            mock_get_data.side_effect = glance_store.NotFound(image="image")
            request = wsgi.Request.blank('/')
            request.environ = {}
            request.headers['Range'] = 'bytes=1-2'
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=[b'Z', b'Z', b'Z'])
            image.get_data = mock_get_data
            self.assertRaises(webob.exc.HTTPNoContent,
                              self.serializer.download,
                              response, image)

    def test_content_range_requests_for_image_downloads(self):
        """
        Even though Content-Range is incorrect on requests, we support it
        for backward compatibility with clients written for pre-Pike
        Glance. The following tests exercise 'Content-Range' requests,
        for which we have to ensure that we prevent regression.
        """
        def download_successful_ContentRange(d_range):
            request = wsgi.Request.blank('/')
            request.environ = {}
            request.headers['Content-Range'] = d_range
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=[b'X', b'Y', b'Z'])
            self.serializer.download(response, image)
            self.assertEqual(206, response.status_code)
            self.assertEqual('2', response.headers['Content-Length'])
            self.assertEqual('bytes 1-2/3',
                             response.headers['Content-Range'])
            self.assertEqual(b'YZ', response.body)

        download_successful_ContentRange('bytes 1-2/3')
        download_successful_ContentRange('bytes 1-2/*')

        def download_failures_ContentRange(d_range):
            request = wsgi.Request.blank('/')
            request.environ = {}
            request.headers['Content-Range'] = d_range
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=[b'Z', b'Z', b'Z'])
            self.assertRaises(webob.exc.HTTPRequestRangeNotSatisfiable,
                              self.serializer.download,
                              response, image)
            return

        download_failures_ContentRange('bytes -3/3')
        download_failures_ContentRange('bytes 1-/3')
        download_failures_ContentRange('bytes 1-3/3')
        download_failures_ContentRange('bytes 1-4/3')
        download_failures_ContentRange('bytes 1-4/*')
        download_failures_ContentRange('bytes 4-1/3')
        download_failures_ContentRange('bytes 4-1/*')
        download_failures_ContentRange('bytes 4-8/*')
        download_failures_ContentRange('bytes 4-8/10')
        download_failures_ContentRange('bytes 4-8/3')

    def test_download_failure_with_valid_content_range(self):
        with mock.patch.object(glance.api.policy.ImageProxy,
                               'get_data') as mock_get_data:
            mock_get_data.side_effect = glance_store.NotFound(image="image")
            request = wsgi.Request.blank('/')
            request.environ = {}
            request.headers['Content-Range'] = 'bytes %s-%s/3' % (1, 2)
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=[b'Z', b'Z', b'Z'])
            image.get_data = mock_get_data
            self.assertRaises(webob.exc.HTTPNoContent,
                              self.serializer.download,
                              response, image)

    def test_download_with_checksum(self):
        request = wsgi.Request.blank('/')
        request.environ = {}
        response = webob.Response()
        response.request = request
        checksum = '0745064918b49693cca64d6b6a13d28a'
        image = FakeImage(size=3, checksum=checksum,
                          data=[b'Z', b'Z', b'Z'])
        self.serializer.download(response, image)
        self.assertEqual(b'ZZZ', response.body)
        self.assertEqual('3', response.headers['Content-Length'])
        self.assertEqual(checksum, response.headers['Content-MD5'])
        self.assertEqual('application/octet-stream',
                         response.headers['Content-Type'])

    def test_download_forbidden(self):
        """Make sure the serializer can return 403 forbidden error
        instead of 500 internal server error.
        """
        def get_data(*args, **kwargs):
            raise exception.Forbidden()

        self.stubs.Set(glance.api.policy.ImageProxy, 'get_data', get_data)
        request = wsgi.Request.blank('/')
        request.environ = {}
        response = webob.Response()
        response.request = request
        image = FakeImage(size=3, data=iter('ZZZ'))
        image.get_data = get_data
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.serializer.download,
                          response, image)

    def test_download_no_content(self):
        """Test image download returns HTTPNoContent.

        Make sure that the serializer returns a 204 No Content error
        when image data is not available at the specified location.
        """
        with mock.patch.object(glance.api.policy.ImageProxy,
                               'get_data') as mock_get_data:
            mock_get_data.side_effect = glance_store.NotFound(image="image")
            request = wsgi.Request.blank('/')
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=iter('ZZZ'))
            image.get_data = mock_get_data
            self.assertRaises(webob.exc.HTTPNoContent,
                              self.serializer.download,
                              response, image)

    def test_download_service_unavailable(self):
        """Test image download returns HTTPServiceUnavailable."""
        with mock.patch.object(glance.api.policy.ImageProxy,
                               'get_data') as mock_get_data:
            mock_get_data.side_effect = (
                glance_store.RemoteServiceUnavailable())
            request = wsgi.Request.blank('/')
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=iter('ZZZ'))
            image.get_data = mock_get_data
            self.assertRaises(webob.exc.HTTPServiceUnavailable,
                              self.serializer.download,
                              response, image)

    def test_download_store_get_not_support(self):
        """Test image download returns HTTPBadRequest.

        Make sure that the serializer returns a 400 Bad Request error
        when getting images from this store is not supported at the
        specified location.
        """
        with mock.patch.object(glance.api.policy.ImageProxy,
                               'get_data') as mock_get_data:
            mock_get_data.side_effect = glance_store.StoreGetNotSupported()
            request = wsgi.Request.blank('/')
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=iter('ZZZ'))
            image.get_data = mock_get_data
            self.assertRaises(webob.exc.HTTPBadRequest,
                              self.serializer.download,
                              response, image)

    def test_download_store_random_get_not_support(self):
        """Test image download returns HTTPBadRequest.

        Make sure that the serializer returns a 400 Bad Request error
        when random access to images in this store is not supported at
        the specified location.
        """
        with mock.patch.object(glance.api.policy.ImageProxy,
                               'get_data') as m_get_data:
            err = glance_store.StoreRandomGetNotSupported(offset=0,
                                                          chunk_size=0)
            m_get_data.side_effect = err
            request = wsgi.Request.blank('/')
            response = webob.Response()
            response.request = request
            image = FakeImage(size=3, data=iter('ZZZ'))
            image.get_data = m_get_data
            self.assertRaises(webob.exc.HTTPBadRequest,
                              self.serializer.download,
                              response, image)

    def test_upload(self):
        request = webob.Request.blank('/')
        request.environ = {}
        response = webob.Response()
        response.request = request
        self.serializer.upload(response, {})
        self.assertEqual(http.NO_CONTENT, response.status_int)
        self.assertEqual('0', response.headers['Content-Length'])

    def test_stage(self):
        request = webob.Request.blank('/')
        request.environ = {}
        response = webob.Response()
        response.request = request
        self.serializer.stage(response, {})
        self.assertEqual(http.NO_CONTENT, response.status_int)
        self.assertEqual('0', response.headers['Content-Length'])
glance-16.0.0/glance/tests/unit/v2/__init__.py
glance-16.0.0/glance/tests/unit/v2/test_images_resource.py
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import eventlet
import uuid

import glance_store as store
import mock
from oslo_serialization import jsonutils
import six
from six.moves import http_client as http
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
import testtools
import webob

import glance.api.v2.image_actions
import glance.api.v2.images
from glance.common import exception
from glance import domain
import glance.schema
from glance.tests.unit import base
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils

DATETIME = datetime.datetime(2012, 5, 16, 15, 27, 36, 325355)
ISOTIME = '2012-05-16T15:27:36Z'

BASE_URI = unit_test_utils.BASE_URI

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
UUID2 = 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc'
UUID3 = '971ec09a-8067-4bc8-a91f-ae3557f1c4c7'
UUID4 = '6bbe7cc2-eae7-4c0f-b50d-a7160b0c6a86'

TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'
TENANT3 = '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8'
TENANT4 = 'c6c87f25-8a94-47ed-8c83-053c25f42df4'

CHKSUM = '93264c3edf5972c9f1cb309543d38a5c'
CHKSUM1 = '43254c3edf6972c9f1cb309543d38a8c'


def _db_fixture(id, **kwargs):
    obj = {
        'id': id,
        'name': None,
        'visibility': 'shared',
        'properties': {},
        'checksum': None,
        'owner': None,
        'status': 'queued',
        'tags': [],
        'size': None,
        'virtual_size': None,
        'locations': [],
        'protected': False,
        'disk_format': None,
        'container_format': None,
        'deleted': False,
        'min_ram': None,
        'min_disk': None,
    }
    obj.update(kwargs)
    return obj


def _domain_fixture(id, **kwargs):
    properties = {
        'image_id': id,
        'name': None,
        'visibility': 'private',
        'checksum': None,
        'owner': None,
        'status': 'queued',
        'size': None,
        'virtual_size': None,
        'locations': [],
        'protected': False,
        'disk_format': None,
        'container_format': None,
        'min_ram': None,
        'min_disk': None,
        'tags': [],
    }
    properties.update(kwargs)
    return glance.domain.Image(**properties)


def _db_image_member_fixture(image_id, member_id, **kwargs):
    obj = {
        'image_id': image_id,
        'member': member_id,
    }
    obj.update(kwargs)
    return obj


class TestImagesController(base.IsolatedUnitTest):

    def setUp(self):
        super(TestImagesController, self).setUp()
        self.db = unit_test_utils.FakeDB(initialize=False)
        self.policy = unit_test_utils.FakePolicyEnforcer()
        self.notifier = unit_test_utils.FakeNotifier()
        self.store = unit_test_utils.FakeStoreAPI()
        for i in range(1, 4):
            self.store.data['%s/fake_location_%i' % (BASE_URI, i)] = ('Z', 1)
        self.store_utils = unit_test_utils.FakeStoreUtils(self.store)
        self._create_images()
        self._create_image_members()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                self.policy,
                                                                self.notifier,
                                                                self.store)
        self.action_controller = (glance.api.v2.image_actions.
                                  ImageActionsController(self.db,
                                                         self.policy,
                                                         self.notifier,
                                                         self.store))
        self.controller.gateway.store_utils = self.store_utils
        store.create_stores()

    def _create_images(self):
        self.images = [
            _db_fixture(UUID1, owner=TENANT1, checksum=CHKSUM,
                        name='1', size=256, virtual_size=1024,
                        visibility='public',
                        locations=[{'url': '%s/%s' % (BASE_URI, UUID1),
                                    'metadata': {}, 'status': 'active'}],
                        disk_format='raw',
                        container_format='bare',
                        status='active'),
            _db_fixture(UUID2, owner=TENANT1, checksum=CHKSUM1,
                        name='2', size=512, virtual_size=2048,
                        visibility='public',
                        disk_format='raw',
                        container_format='bare',
                        status='active',
                        tags=['redhat', '64bit', 'power'],
                        properties={'hypervisor_type': 'kvm', 'foo': 'bar',
                                    'bar': 'foo'}),
            _db_fixture(UUID3, owner=TENANT3, checksum=CHKSUM1,
                        name='3', size=512, virtual_size=2048,
                        visibility='public',
                        tags=['windows', '64bit', 'x86']),
            _db_fixture(UUID4, owner=TENANT4, name='4',
                        size=1024, virtual_size=3072),
        ]
        [self.db.image_create(None, image) for image in self.images]

        self.db.image_tag_set_all(None, UUID1, ['ping', 'pong'])

    def _create_image_members(self):
        self.image_members = [
            _db_image_member_fixture(UUID4, TENANT2),
            _db_image_member_fixture(UUID4, TENANT3,
                                     status='accepted'),
        ]
        [self.db.image_member_create(None, image_member)
         for image_member in self.image_members]

    def test_index(self):
        self.config(limit_param_default=1, api_limit_max=3)
        request = unit_test_utils.get_fake_request()
        output = self.controller.index(request)
        self.assertEqual(1, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID3])
        self.assertEqual(expected, actual)

    def test_index_member_status_accepted(self):
        self.config(limit_param_default=5, api_limit_max=5)
        request = unit_test_utils.get_fake_request(tenant=TENANT2)
        output = self.controller.index(request)
        self.assertEqual(3, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID1, UUID2, UUID3])
        # can see only the public image
        self.assertEqual(expected, actual)

        request = unit_test_utils.get_fake_request(tenant=TENANT3)
        output = self.controller.index(request)
        self.assertEqual(4, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID1, UUID2, UUID3, UUID4])
        self.assertEqual(expected, actual)

    def test_index_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        output = self.controller.index(request)
        self.assertEqual(4, len(output['images']))

    def test_index_admin_deleted_images_hidden(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.controller.delete(request, UUID1)
        output = self.controller.index(request)
        self.assertEqual(3, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID2, UUID3, UUID4])
        self.assertEqual(expected, actual)

    def test_index_return_parameters(self):
        self.config(limit_param_default=1, api_limit_max=3)
        request = unit_test_utils.get_fake_request()
        output = self.controller.index(request, marker=UUID3, limit=1,
                                       sort_key=['created_at'],
                                       sort_dir=['desc'])
        self.assertEqual(1, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID2])
        self.assertEqual(actual, expected)
        self.assertEqual(UUID2, output['next_marker'])

    def test_index_next_marker(self):
        self.config(limit_param_default=1, api_limit_max=3)
        request = unit_test_utils.get_fake_request()
        output = self.controller.index(request, marker=UUID3, limit=2)
        self.assertEqual(2, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID2, UUID1])
        self.assertEqual(expected, actual)
        self.assertEqual(UUID1, output['next_marker'])

    def test_index_no_next_marker(self):
        self.config(limit_param_default=1, api_limit_max=3)
        request = unit_test_utils.get_fake_request()
        output = self.controller.index(request, marker=UUID1, limit=2)
        self.assertEqual(0, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([])
        self.assertEqual(expected, actual)
        self.assertNotIn('next_marker', output)

    def test_index_with_id_filter(self):
        request = unit_test_utils.get_fake_request('/images?id=%s' % UUID1)
        output = self.controller.index(request, filters={'id': UUID1})
        self.assertEqual(1, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID1])
        self.assertEqual(expected, actual)

    def test_index_with_checksum_filter_single_image(self):
        req = unit_test_utils.get_fake_request(
            '/images?checksum=%s' % CHKSUM)
        output = self.controller.index(req, filters={'checksum': CHKSUM})
        self.assertEqual(1, len(output['images']))
        actual = list([image.image_id for image in output['images']])
        expected = [UUID1]
        self.assertEqual(expected, actual)

    def test_index_with_checksum_filter_multiple_images(self):
        req = unit_test_utils.get_fake_request(
            '/images?checksum=%s' % CHKSUM1)
        output = self.controller.index(req, filters={'checksum': CHKSUM1})
        self.assertEqual(2, len(output['images']))
        actual = list([image.image_id for image in output['images']])
        expected = [UUID3, UUID2]
        self.assertEqual(expected, actual)

    def test_index_with_non_existent_checksum(self):
        req = unit_test_utils.get_fake_request(
            '/images?checksum=236231827')
        output = self.controller.index(req, filters={'checksum': '236231827'})
        self.assertEqual(0, len(output['images']))

    def test_index_size_max_filter(self):
        request = unit_test_utils.get_fake_request('/images?size_max=512')
        output = self.controller.index(request, filters={'size_max': 512})
        self.assertEqual(3, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID1, UUID2, UUID3])
        self.assertEqual(expected, actual)

    def test_index_size_min_filter(self):
        request = unit_test_utils.get_fake_request('/images?size_min=512')
        output = self.controller.index(request, filters={'size_min': 512})
        self.assertEqual(2, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID2, UUID3])
        self.assertEqual(expected, actual)

    def test_index_size_range_filter(self):
        path = '/images?size_min=512&size_max=512'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       filters={'size_min': 512,
                                                'size_max': 512})
        self.assertEqual(2, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID2, UUID3])
        self.assertEqual(expected, actual)

    def test_index_virtual_size_max_filter(self):
        ref = '/images?virtual_size_max=2048'
        request = unit_test_utils.get_fake_request(ref)
        output = self.controller.index(request,
                                       filters={'virtual_size_max': 2048})
        self.assertEqual(3, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID1, UUID2, UUID3])
        self.assertEqual(expected, actual)

    def test_index_virtual_size_min_filter(self):
        ref = '/images?virtual_size_min=2048'
        request = unit_test_utils.get_fake_request(ref)
        output = self.controller.index(request,
                                       filters={'virtual_size_min': 2048})
        self.assertEqual(2, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID2, UUID3])
        self.assertEqual(expected, actual)

    def test_index_virtual_size_range_filter(self):
        path = '/images?virtual_size_min=512&virtual_size_max=2048'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       filters={'virtual_size_min': 2048,
                                                'virtual_size_max': 2048})
        self.assertEqual(2, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID2, UUID3])
        self.assertEqual(expected, actual)

    def test_index_with_invalid_max_range_filter_value(self):
        request = unit_test_utils.get_fake_request('/images?size_max=blah')
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.index,
                          request,
                          filters={'size_max': 'blah'})

    def test_index_with_filters_return_many(self):
        path = '/images?status=queued'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request, filters={'status': 'queued'})
        self.assertEqual(1, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID3])
        self.assertEqual(expected, actual)

    def test_index_with_nonexistent_name_filter(self):
        request = unit_test_utils.get_fake_request('/images?name=%s' % 'blah')
        images = self.controller.index(request,
                                       filters={'name': 'blah'})['images']
        self.assertEqual(0, len(images))

    def test_index_with_non_default_is_public_filter(self):
        private_uuid = str(uuid.uuid4())
        new_image = _db_fixture(private_uuid,
                                visibility='private',
                                owner=TENANT3)
        self.db.image_create(None, new_image)

        path = '/images?visibility=private'
        request = unit_test_utils.get_fake_request(path, is_admin=True)
        output = self.controller.index(request,
                                       filters={'visibility': 'private'})
        self.assertEqual(1, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([private_uuid])
        self.assertEqual(expected, actual)

        path = '/images?visibility=shared'
        request = unit_test_utils.get_fake_request(path, is_admin=True)
        output = self.controller.index(request,
                                       filters={'visibility': 'shared'})
        self.assertEqual(1, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID4])
        self.assertEqual(expected, actual)

    def test_index_with_many_filters(self):
        url = '/images?status=queued&name=3'
        request = unit_test_utils.get_fake_request(url)
        output = self.controller.index(request,
                                       filters={
                                           'status': 'queued',
                                           'name': '3',
                                       })
        self.assertEqual(1, len(output['images']))
        actual = set([image.image_id for image in output['images']])
        expected = set([UUID3])
        self.assertEqual(expected, actual)

    def test_index_with_marker(self):
        self.config(limit_param_default=1, api_limit_max=3)
        path = '/images'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request, marker=UUID3)
        actual = set([image.image_id for image in output['images']])
        self.assertEqual(1, len(actual))
        self.assertIn(UUID2, actual)

    def test_index_with_limit(self):
        path = '/images'
        limit = 2
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request, limit=limit)
        actual = set([image.image_id for image in output['images']])
        self.assertEqual(limit, len(actual))
        self.assertIn(UUID3, actual)
        self.assertIn(UUID2, actual)

    def test_index_greater_than_limit_max(self):
        self.config(limit_param_default=1, api_limit_max=3)
        path = '/images'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request, limit=4)
        actual = set([image.image_id for image in output['images']])
        self.assertEqual(3, len(actual))
        self.assertNotIn(output['next_marker'], output)

    def test_index_default_limit(self):
        self.config(limit_param_default=1, api_limit_max=3)
        path = '/images'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request)
        actual = set([image.image_id for image in output['images']])
        self.assertEqual(1, len(actual))

    def test_index_with_sort_dir(self):
        path = '/images'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request, sort_dir=['asc'], limit=3)
        actual = [image.image_id for image in output['images']]
        self.assertEqual(3, len(actual))
        self.assertEqual(UUID1, actual[0])
        self.assertEqual(UUID2, actual[1])
        self.assertEqual(UUID3, actual[2])

    def test_index_with_sort_key(self):
        path = '/images'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request, sort_key=['created_at'],
                                       limit=3)
        actual = [image.image_id for image in output['images']]
        self.assertEqual(3, len(actual))
        self.assertEqual(UUID3, actual[0])
        self.assertEqual(UUID2, actual[1])
        self.assertEqual(UUID1, actual[2])

    def test_index_with_multiple_sort_keys(self):
        path = '/images'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       sort_key=['created_at', 'name'],
                                       limit=3)
        actual = [image.image_id for image in output['images']]
        self.assertEqual(3, len(actual))
        self.assertEqual(UUID3, actual[0])
        self.assertEqual(UUID2, actual[1])
        self.assertEqual(UUID1, actual[2])

    def test_index_with_marker_not_found(self):
        fake_uuid = str(uuid.uuid4())
        path = '/images'
        request = unit_test_utils.get_fake_request(path)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.index, request, marker=fake_uuid)

    def test_index_invalid_sort_key(self):
        path = '/images'
        request = unit_test_utils.get_fake_request(path)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.index, request, sort_key=['foo'])

    def test_index_zero_images(self):
        self.db.reset()
        request = unit_test_utils.get_fake_request()
        output = self.controller.index(request)
        self.assertEqual([], output['images'])

    def test_index_with_tags(self):
        path = '/images?tag=64bit'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request, filters={'tags': ['64bit']})
        actual = [image.tags for image in output['images']]
        self.assertEqual(2, len(actual))
        self.assertIn('64bit', actual[0])
        self.assertIn('64bit', actual[1])

    def test_index_with_multi_tags(self):
        path = '/images?tag=power&tag=64bit'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       filters={'tags': ['power', '64bit']})
        actual = [image.tags for image in output['images']]
        self.assertEqual(1, len(actual))
        self.assertIn('64bit', actual[0])
        self.assertIn('power', actual[0])

    def test_index_with_multi_tags_and_nonexistent(self):
        path = '/images?tag=power&tag=fake'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       filters={'tags': ['power', 'fake']})
        actual = [image.tags for image in output['images']]
        self.assertEqual(0, len(actual))

    def test_index_with_tags_and_properties(self):
        path = '/images?tag=64bit&hypervisor_type=kvm'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       filters={'tags': ['64bit'],
                                                'hypervisor_type': 'kvm'})
        tags = [image.tags for image in output['images']]
        properties = [image.extra_properties for image in output['images']]
        self.assertEqual(len(tags), len(properties))
        self.assertIn('64bit', tags[0])
        self.assertEqual('kvm', properties[0]['hypervisor_type'])

    def test_index_with_multiple_properties(self):
        path = '/images?foo=bar&hypervisor_type=kvm'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       filters={'foo': 'bar',
                                                'hypervisor_type': 'kvm'})
        properties = [image.extra_properties for image in output['images']]
        self.assertEqual('kvm', properties[0]['hypervisor_type'])
        self.assertEqual('bar', properties[0]['foo'])

    def test_index_with_core_and_extra_property(self):
        path = '/images?disk_format=raw&foo=bar'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       filters={'foo': 'bar',
                                                'disk_format': 'raw'})
        properties = [image.extra_properties for image in output['images']]
        self.assertEqual(1, len(output['images']))
        self.assertEqual('raw', output['images'][0].disk_format)
        self.assertEqual('bar', properties[0]['foo'])

    def test_index_with_nonexistent_properties(self):
        path = '/images?abc=xyz&pudding=banana'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       filters={'abc': 'xyz',
                                                'pudding': 'banana'})
        self.assertEqual(0, len(output['images']))

    def test_index_with_non_existent_tags(self):
        path = '/images?tag=fake'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request,
                                       filters={'tags': ['fake']})
        actual = [image.tags for image in output['images']]
        self.assertEqual(0, len(actual))

    def test_show(self):
        request = unit_test_utils.get_fake_request()
        output = self.controller.show(request, image_id=UUID2)
        self.assertEqual(UUID2, output.image_id)
        self.assertEqual('2', output.name)

    def test_show_deleted_properties(self):
        """Ensure that the api filters out deleted image properties."""

        # get the image properties into the odd state
        image = {
            'id': str(uuid.uuid4()),
            'status': 'active',
            'properties': {'poo': 'bear'},
        }
        self.db.image_create(None, image)
        self.db.image_update(None, image['id'],
                             {'properties': {'yin': 'yang'}},
                             purge_props=True)

        request = unit_test_utils.get_fake_request()
        output = self.controller.show(request, image['id'])
        self.assertEqual('yang', output.extra_properties['yin'])

    def test_show_non_existent(self):
        request = unit_test_utils.get_fake_request()
        image_id = str(uuid.uuid4())
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.show,
                          request, image_id)

    def test_show_deleted_image_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.controller.delete(request, UUID1)
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.show,
                          request, UUID1)

    def test_show_not_allowed(self):
        request = unit_test_utils.get_fake_request()
        self.assertEqual(TENANT1, request.context.tenant)
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.show,
                          request, UUID4)

    def test_image_import_raises_conflict(self):
        request = unit_test_utils.get_fake_request()
        # NOTE(abhishekk): Due to
        # https://bugs.launchpad.net/glance/+bug/1712463 taskflow is not
        # executing. Once it is fixed, instead of mocking the spawn_n
        # method we should mock the execute method of the _ImportToStore
        # task.
        with mock.patch.object(eventlet.GreenPool, 'spawn_n',
                               side_effect=exception.Conflict):
            self.assertRaises(webob.exc.HTTPConflict,
                              self.controller.import_image, request, UUID4,
                              {'method': {'name': 'glance-direct'}})

    def test_image_import_raises_conflict_for_invalid_status_change(self):
        request = unit_test_utils.get_fake_request()
        # NOTE(abhishekk): Due to
        # https://bugs.launchpad.net/glance/+bug/1712463 taskflow is not
        # executing. Once it is fixed, instead of mocking the spawn_n
        # method we should mock the execute method of the _ImportToStore
        # task.
        with mock.patch.object(
                eventlet.GreenPool, 'spawn_n',
                side_effect=exception.InvalidImageStatusTransition):
            self.assertRaises(webob.exc.HTTPConflict,
                              self.controller.import_image, request, UUID4,
                              {'method': {'name': 'glance-direct'}})

    def test_image_import_raises_bad_request(self):
        request = unit_test_utils.get_fake_request()
        # NOTE(abhishekk): Due to
        # https://bugs.launchpad.net/glance/+bug/1712463 taskflow is not
        # executing. Once it is fixed, instead of mocking the spawn_n
        # method we should mock the execute method of the _ImportToStore
        # task.
        with mock.patch.object(eventlet.GreenPool, 'spawn_n',
                               side_effect=ValueError):
            self.assertRaises(webob.exc.HTTPBadRequest,
                              self.controller.import_image, request, UUID4,
                              {'method': {'name': 'glance-direct'}})

    def test_create(self):
        request = unit_test_utils.get_fake_request()
        image = {'name': 'image-1'}
        output = self.controller.create(request, image=image,
                                        extra_properties={}, tags=[])
        self.assertEqual('image-1', output.name)
        self.assertEqual({}, output.extra_properties)
        self.assertEqual(set([]), output.tags)
        self.assertEqual('shared', output.visibility)
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.create', output_log['event_type'])
        self.assertEqual('image-1', output_log['payload']['name'])

    def test_create_disabled_notification(self):
        self.config(disabled_notifications=["image.create"])
        request = unit_test_utils.get_fake_request()
        image = {'name': 'image-1'}
        output = self.controller.create(request, image=image,
                                        extra_properties={}, tags=[])
        self.assertEqual('image-1', output.name)
        self.assertEqual({}, output.extra_properties)
        self.assertEqual(set([]), output.tags)
        self.assertEqual('shared', output.visibility)
        output_logs = self.notifier.get_logs()
        self.assertEqual(0, len(output_logs))

    def test_create_with_properties(self):
        request = unit_test_utils.get_fake_request()
        image_properties = {'foo': 'bar'}
        image = {'name': 'image-1'}
        output = self.controller.create(request, image=image,
                                        extra_properties=image_properties,
                                        tags=[])
        self.assertEqual('image-1', output.name)
        self.assertEqual(image_properties, output.extra_properties)
        self.assertEqual(set([]), output.tags)
        self.assertEqual('shared', output.visibility)
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.create', output_log['event_type'])
        self.assertEqual('image-1', output_log['payload']['name'])

    def test_create_with_too_many_properties(self):
        self.config(image_property_quota=1)
        request = unit_test_utils.get_fake_request()
        image_properties = {'foo': 'bar', 'foo2': 'bar'}
        image = {'name': 'image-1'}
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.create, request, image=image,
                          extra_properties=image_properties, tags=[])

    def test_create_with_bad_min_disk_size(self):
        request = unit_test_utils.get_fake_request()
        image = {'min_disk': -42, 'name': 'image-1'}
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, request, image=image,
                          extra_properties={}, tags=[])

    def test_create_with_bad_min_ram_size(self):
        request = unit_test_utils.get_fake_request()
        image = {'min_ram': -42, 'name': 'image-1'}
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, request, image=image,
                          extra_properties={}, tags=[])

    def test_create_public_image_as_admin(self):
        request = unit_test_utils.get_fake_request()
        image = {'name': 'image-1', 'visibility': 'public'}
        output = self.controller.create(request, image=image,
                                        extra_properties={}, tags=[])
        self.assertEqual('public', output.visibility)
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.create', output_log['event_type'])
        self.assertEqual(output.image_id, output_log['payload']['id'])

    def test_create_dup_id(self):
        request = unit_test_utils.get_fake_request()
        image = {'image_id': UUID4}
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.create, request, image=image,
                          extra_properties={}, tags=[])

    def test_create_duplicate_tags(self):
        request = unit_test_utils.get_fake_request()
        tags = ['ping', 'ping']
        output = self.controller.create(request, image={},
                                        extra_properties={}, tags=tags)
        self.assertEqual(set(['ping']), output.tags)
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.create', output_log['event_type'])
        self.assertEqual(output.image_id, output_log['payload']['id'])

    def test_create_with_too_many_tags(self):
        self.config(image_tag_quota=1)
        request = unit_test_utils.get_fake_request()
        tags = ['ping', 'pong']
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.create,
                          request, image={}, extra_properties={},
                          tags=tags)

    def test_create_with_owner_non_admin(self):
        request = unit_test_utils.get_fake_request()
        request.context.is_admin = False
        image = {'owner': '12345'}
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.create, request, image=image,
                          extra_properties={}, tags=[])

        request = unit_test_utils.get_fake_request()
        request.context.is_admin = False
        image = {'owner': TENANT1}
        output = self.controller.create(request, image=image,
                                        extra_properties={}, tags=[])
        self.assertEqual(TENANT1, output.owner)

    def test_create_with_owner_admin(self):
        request = unit_test_utils.get_fake_request()
        request.context.is_admin = True
        image = {'owner': '12345'}
        output = self.controller.create(request, image=image,
                                        extra_properties={}, tags=[])
        self.assertEqual('12345', output.owner)

    def test_create_with_duplicate_location(self):
        request = unit_test_utils.get_fake_request()
        location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        image = {'name': 'image-1', 'locations': [location, location]}
        self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create,
                          request, image=image, extra_properties={},
                          tags=[])

    def test_create_unexpected_property(self):
        request = unit_test_utils.get_fake_request()
        image_properties = {'unexpected': 'unexpected'}
        image = {'name': 'image-1'}
        with mock.patch.object(domain.ImageFactory, 'new_image',
                               side_effect=TypeError):
            self.assertRaises(webob.exc.HTTPBadRequest,
                              self.controller.create, request, image=image,
                              extra_properties=image_properties, tags=[])

    def test_create_reserved_property(self):
        request = unit_test_utils.get_fake_request()
        image_properties = {'reserved': 'reserved'}
        image = {'name': 'image-1'}
        with mock.patch.object(domain.ImageFactory, 'new_image',
                               side_effect=exception.ReservedProperty(
                                   property='reserved')):
            self.assertRaises(webob.exc.HTTPForbidden,
                              self.controller.create, request, image=image,
                              extra_properties=image_properties, tags=[])

    def test_create_readonly_property(self):
        request = unit_test_utils.get_fake_request()
        image_properties = {'readonly': 'readonly'}
        image = {'name': 'image-1'}
        with mock.patch.object(domain.ImageFactory, 'new_image',
                               side_effect=exception.ReadonlyProperty(
                                   property='readonly')):
            self.assertRaises(webob.exc.HTTPForbidden,
                              self.controller.create, request, image=image,
                              extra_properties=image_properties, tags=[])

    def test_update_no_changes(self):
        request = unit_test_utils.get_fake_request()
        output = self.controller.update(request, UUID1, changes=[])
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(output.created_at, output.updated_at)
        self.assertEqual(2, len(output.tags))
        self.assertIn('ping', output.tags)
        self.assertIn('pong', output.tags)
        output_logs = self.notifier.get_logs()
        # NOTE(markwash): don't send a notification if nothing is updated
        self.assertEqual(0, len(output_logs))

    def test_update_with_bad_min_disk(self):
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'replace', 'path': ['min_disk'], 'value': -42}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1,
                          changes=changes)

    def test_update_with_bad_min_ram(self):
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'replace', 'path': ['min_ram'], 'value': -42}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1,
                          changes=changes)

    def test_update_image_doesnt_exist(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.update,
                          request, str(uuid.uuid4()), changes=[])

    def test_update_deleted_image_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        self.controller.delete(request, UUID1)
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.update,
                          request, UUID1, changes=[])

    def test_update_with_too_many_properties(self):
        self.config(show_multiple_locations=True)
        self.config(user_storage_quota='1')
        new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'add', 'path': ['locations', '-'],
                    'value': new_location}]
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.update,
                          request, UUID1, changes=changes)

    def test_update_replace_base_attribute(self):
        self.db.image_update(None, UUID1, {'properties': {'foo': 'bar'}})
        request = unit_test_utils.get_fake_request()
        request.context.is_admin = True
        changes = [{'op': 'replace', 'path': ['name'], 'value': 'fedora'},
                   {'op': 'replace', 'path': ['owner'], 'value': TENANT3}]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual('fedora', output.name)
        self.assertEqual(TENANT3, output.owner)
        self.assertEqual({'foo': 'bar'}, output.extra_properties)
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_replace_owner_non_admin(self):
        request = unit_test_utils.get_fake_request()
        request.context.is_admin = False
        changes = [{'op': 'replace', 'path': ['owner'], 'value': TENANT3}]
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update, request, UUID1, changes)

    def test_update_replace_tags(self):
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'replace', 'path': ['tags'], 'value': ['king', 'kong']},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(2, len(output.tags))
        self.assertIn('king', output.tags)
        self.assertIn('kong', output.tags)
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_replace_property(self):
        request = unit_test_utils.get_fake_request()
        properties = {'foo': 'bar', 'snitch': 'golden'}
        self.db.image_update(None, UUID1, {'properties': properties})

        output = self.controller.show(request, UUID1)
        self.assertEqual('bar', output.extra_properties['foo'])
        self.assertEqual('golden', output.extra_properties['snitch'])

        changes = [
            {'op': 'replace', 'path': ['foo'], 'value': 'baz'},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual('baz', output.extra_properties['foo'])
        self.assertEqual('golden', output.extra_properties['snitch'])
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_add_too_many_properties(self):
        self.config(image_property_quota=1)
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['foo'], 'value': 'baz'},
            {'op': 'add', 'path': ['snitch'], 'value': 'golden'},
        ]
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_and_remove_too_many_properties(self):
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['foo'], 'value': 'baz'},
            {'op': 'add', 'path': ['snitch'], 'value': 'golden'},
        ]
        self.controller.update(request, UUID1, changes)
        self.config(image_property_quota=1)

        # We must remove two properties to avoid being
        # over the limit of 1 property
        changes = [
            {'op': 'remove', 'path': ['foo']},
            {'op': 'add', 'path': ['fizz'], 'value': 'buzz'},
        ]
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_unlimited_properties(self):
        self.config(image_property_quota=-1)
        request = unit_test_utils.get_fake_request()
        output = self.controller.show(request, UUID1)

        changes = [{'op': 'add', 'path': ['foo'], 'value': 'bar'}]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_format_properties(self):
        statuses_for_immutability = ['active', 'saving', 'killed']
        request = unit_test_utils.get_fake_request(is_admin=True)
        for status in statuses_for_immutability:
            image = {
                'id': str(uuid.uuid4()),
                'status': status,
                'disk_format': 'ari',
                'container_format': 'ari',
            }
            self.db.image_create(None, image)
            changes = [
                {'op': 'replace', 'path': ['disk_format'], 'value': 'ami'},
            ]
            self.assertRaises(webob.exc.HTTPForbidden,
                              self.controller.update,
                              request, image['id'], changes)
            changes = [
                {'op': 'replace',
                 'path': ['container_format'],
                 'value': 'ami'},
            ]
            self.assertRaises(webob.exc.HTTPForbidden,
                              self.controller.update,
                              request, image['id'], changes)
        self.db.image_update(None, image['id'], {'status': 'queued'})

        changes = [
            {'op': 'replace', 'path': ['disk_format'], 'value': 'raw'},
            {'op': 'replace', 'path': ['container_format'], 'value': 'bare'},
        ]
        resp = self.controller.update(request, image['id'], changes)
        self.assertEqual('raw', resp.disk_format)
        self.assertEqual('bare', resp.container_format)

    def test_update_remove_property_while_over_limit(self):
        """Ensure that image properties can be removed.

        Image properties should be able to be removed as long as the image
        has fewer than the limited number of image properties after the
        transaction.
        """
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['foo'], 'value': 'baz'},
            {'op': 'add', 'path': ['snitch'], 'value': 'golden'},
            {'op': 'add', 'path': ['fizz'], 'value': 'buzz'},
        ]
        self.controller.update(request, UUID1, changes)
        self.config(image_property_quota=1)

        # We must remove two properties to avoid being
        # over the limit of 1 property
        changes = [
            {'op': 'remove', 'path': ['foo']},
            {'op': 'remove', 'path': ['snitch']},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(1, len(output.extra_properties))
        self.assertEqual('buzz', output.extra_properties['fizz'])
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_add_and_remove_property_under_limit(self):
        """Ensure that image properties can be removed.

        Image properties should be able to be added and removed
        simultaneously as long as the image has fewer than the limited
        number of image properties after the transaction.
        """
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['foo'], 'value': 'baz'},
            {'op': 'add', 'path': ['snitch'], 'value': 'golden'},
        ]
        self.controller.update(request, UUID1, changes)
        self.config(image_property_quota=1)

        # We must remove two properties to avoid being
        # over the limit of 1 property
        changes = [
            {'op': 'remove', 'path': ['foo']},
            {'op': 'remove', 'path': ['snitch']},
            {'op': 'add', 'path': ['fizz'], 'value': 'buzz'},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(1, len(output.extra_properties))
        self.assertEqual('buzz', output.extra_properties['fizz'])
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_replace_missing_property(self):
        request = unit_test_utils.get_fake_request()

        changes = [
            {'op': 'replace', 'path': 'foo', 'value': 'baz'},
        ]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, request, UUID1, changes)

    def test_prop_protection_with_create_and_permitted_role(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties={},
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['member'])
        changes = [
            {'op': 'add', 'path': ['x_owner_foo'], 'value': 'bar'},
        ]
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertEqual('bar', output.extra_properties['x_owner_foo'])

    def test_prop_protection_with_update_and_permitted_policy(self):
        self.set_property_protections(use_policies=True)
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        request = unit_test_utils.get_fake_request(roles=['spl_role'])
        image = {'name': 'image-1'}
        extra_props = {'spl_creator_policy': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        self.assertEqual(
            'bar', created_image.extra_properties['spl_creator_policy'])

        another_request = unit_test_utils.get_fake_request(roles=['spl_role'])
        changes = [
            {'op': 'replace', 'path': ['spl_creator_policy'],
             'value': 'par'},
        ]
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update, another_request,
                          created_image.image_id, changes)
        another_request = unit_test_utils.get_fake_request(roles=['admin'])
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertEqual('par', output.extra_properties['spl_creator_policy'])

    def test_prop_protection_with_create_with_patch_and_policy(self):
        self.set_property_protections(use_policies=True)
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        request = unit_test_utils.get_fake_request(roles=['spl_role',
                                                          'admin'])
        image = {'name': 'image-1'}
        extra_props = {'spl_default_policy': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
        changes = [
            {'op': 'add', 'path': ['spl_creator_policy'], 'value': 'bar'},
        ]
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update, another_request,
                          created_image.image_id, changes)

        another_request = unit_test_utils.get_fake_request(roles=['spl_role'])
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertEqual('bar', output.extra_properties['spl_creator_policy'])

    def test_prop_protection_with_create_and_unpermitted_role(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties={},
                                               tags=[])
        roles = ['fake_member']
        another_request = unit_test_utils.get_fake_request(roles=roles)
        changes = [
            {'op': 'add', 'path': ['x_owner_foo'], 'value': 'bar'},
        ]
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update, another_request,
                          created_image.image_id, changes)

    def test_prop_protection_with_show_and_permitted_role(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_owner_foo': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['member'])
        output = self.controller.show(another_request,
                                      created_image.image_id)
        self.assertEqual('bar', output.extra_properties['x_owner_foo'])

    def test_prop_protection_with_show_and_unpermitted_role(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['member'])
        image = {'name': 'image-1'}
        extra_props = {'x_owner_foo': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
        output = self.controller.show(another_request,
                                      created_image.image_id)
        self.assertRaises(KeyError,
                          output.extra_properties.__getitem__, 'x_owner_foo')

    def test_prop_protection_with_update_and_permitted_role(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_owner_foo': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['member'])
        changes = [
            {'op': 'replace', 'path': ['x_owner_foo'], 'value': 'baz'},
        ]
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertEqual('baz', output.extra_properties['x_owner_foo'])

    def test_prop_protection_with_update_and_unpermitted_role(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_owner_foo': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
        changes = [
            {'op': 'replace', 'path': ['x_owner_foo'], 'value': 'baz'},
        ]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, another_request,
                          created_image.image_id, changes)

    def test_prop_protection_with_delete_and_permitted_role(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_owner_foo': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['member'])
        changes = [
            {'op': 'remove', 'path': ['x_owner_foo']}
        ]
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertRaises(KeyError,
                          output.extra_properties.__getitem__, 'x_owner_foo')

    def test_prop_protection_with_delete_and_unpermitted_role(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_owner_foo': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
        changes = [
            {'op': 'remove', 'path': ['x_owner_foo']}
        ]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, another_request,
                          created_image.image_id, changes)

    def test_create_protected_prop_case_insensitive(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties={},
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['member'])
        changes = [
            {'op': 'add', 'path': ['x_case_insensitive'], 'value': '1'},
        ]
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertEqual('1', output.extra_properties['x_case_insensitive'])

    def test_read_protected_prop_case_insensitive(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_case_insensitive': '1'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['member'])
        output = self.controller.show(another_request,
                                      created_image.image_id)
        self.assertEqual('1', output.extra_properties['x_case_insensitive'])

    def test_update_protected_prop_case_insensitive(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_case_insensitive': '1'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['member'])
        changes = [
            {'op': 'replace', 'path': ['x_case_insensitive'],
             'value': '2'},
        ]
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertEqual('2', output.extra_properties['x_case_insensitive'])

    def test_delete_protected_prop_case_insensitive(self):
        enforcer = glance.api.policy.Enforcer()
        self.controller = glance.api.v2.images.ImagesController(self.db,
                                                                enforcer,
                                                                self.notifier,
                                                                self.store)
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_case_insensitive': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['member'])
        changes = [
            {'op': 'remove', 'path': ['x_case_insensitive']}
        ]
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertRaises(KeyError,
                          output.extra_properties.__getitem__,
                          'x_case_insensitive')

    def test_create_non_protected_prop(self):
        """Property marked with special char @ creatable by an unknown role"""
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_all_permitted_1': '1'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        self.assertEqual(
            '1', created_image.extra_properties['x_all_permitted_1'])
        another_request = unit_test_utils.get_fake_request(roles=['joe_soap'])
        extra_props = {'x_all_permitted_2': '2'}
        created_image = self.controller.create(another_request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        self.assertEqual(
            '2', created_image.extra_properties['x_all_permitted_2'])

    def test_read_non_protected_prop(self):
        """Property marked with special char @ readable by an unknown role"""
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_all_permitted': '1'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['joe_soap'])
        output = self.controller.show(another_request,
                                      created_image.image_id)
        self.assertEqual('1', output.extra_properties['x_all_permitted'])

    def test_update_non_protected_prop(self):
        """Property marked with special char @ updatable by an unknown role"""
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_all_permitted': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['joe_soap'])
        changes = [
            {'op': 'replace', 'path': ['x_all_permitted'], 'value': 'baz'},
        ]
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertEqual('baz', output.extra_properties['x_all_permitted'])

    def test_delete_non_protected_prop(self):
        """Property marked with special char @ deletable by an unknown role"""
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_all_permitted': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['member'])
        changes = [
            {'op': 'remove', 'path': ['x_all_permitted']}
        ]
        output = self.controller.update(another_request,
                                        created_image.image_id, changes)
        self.assertRaises(KeyError,
                          output.extra_properties.__getitem__,
                          'x_all_permitted')

    def test_create_locked_down_protected_prop(self):
        """Property marked with special char ! creatable by no one"""
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties={},
                                               tags=[])
        roles = ['fake_member']
        another_request = unit_test_utils.get_fake_request(roles=roles)
        changes = [
            {'op': 'add', 'path': ['x_none_permitted'], 'value': 'bar'},
        ]
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update, another_request,
                          created_image.image_id, changes)

    def test_read_locked_down_protected_prop(self):
        """Property marked with special char ! readable by no one"""
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['member'])
        image = {'name': 'image-1'}
        extra_props = {'x_none_read': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
        output = self.controller.show(another_request,
                                      created_image.image_id)
        self.assertRaises(KeyError,
                          output.extra_properties.__getitem__,
                          'x_none_read')

    def test_update_locked_down_protected_prop(self):
        """Property marked with special char ! updatable by no one"""
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_none_update': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
        changes = [
            {'op': 'replace', 'path': ['x_none_update'], 'value': 'baz'},
        ]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, another_request,
                          created_image.image_id, changes)

    def test_delete_locked_down_protected_prop(self):
        """Property marked with special char ! deletable by no one"""
        self.set_property_protections()
        request = unit_test_utils.get_fake_request(roles=['admin'])
        image = {'name': 'image-1'}
        extra_props = {'x_none_delete': 'bar'}
        created_image = self.controller.create(request, image=image,
                                               extra_properties=extra_props,
                                               tags=[])
        another_request = unit_test_utils.get_fake_request(roles=['fake_role'])
        changes = [
            {'op': 'remove', 'path': ['x_none_delete']}
        ]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, another_request,
                          created_image.image_id, changes)

    def test_update_replace_locations_non_empty(self):
        self.config(show_multiple_locations=True)
        new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'replace', 'path': ['locations'],
                    'value': [new_location]}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1, changes)

    def test_update_replace_locations_metadata_update(self):
        self.config(show_multiple_locations=True)
        location = {'url': '%s/%s' % (BASE_URI, UUID1),
                    'metadata': {'a': 1}}
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'replace', 'path': ['locations'],
                    'value': [location]}]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual({'a': 1}, output.locations[0]['metadata'])

    def test_locations_actions_with_locations_invisible(self):
        self.config(show_multiple_locations=False)
        new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'replace', 'path': ['locations'],
                    'value': [new_location]}]
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update, request, UUID1, changes)

    def test_update_replace_locations_invalid(self):
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'replace', 'path': ['locations'], 'value': []}]
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_property(self):
        request = unit_test_utils.get_fake_request()

        changes = [
            {'op': 'add', 'path': ['foo'], 'value': 'baz'},
            {'op': 'add', 'path': ['snitch'], 'value': 'golden'},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual('baz', output.extra_properties['foo'])
        self.assertEqual('golden', output.extra_properties['snitch'])
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_add_base_property_json_schema_version_4(self):
        request = unit_test_utils.get_fake_request()
        changes = [{
            'json_schema_version': 4, 'op': 'add',
            'path': ['name'], 'value': 'fedora'
        }]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_extra_property_json_schema_version_4(self):
        self.db.image_update(None, UUID1, {'properties': {'foo': 'bar'}})
        request = unit_test_utils.get_fake_request()
        changes = [{
            'json_schema_version': 4, 'op': 'add',
            'path': ['foo'], 'value': 'baz'
        }]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_base_property_json_schema_version_10(self):
        request = unit_test_utils.get_fake_request()
        changes = [{
            'json_schema_version': 10, 'op': 'add',
            'path': ['name'], 'value': 'fedora'
        }]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual('fedora', output.name)

    def test_update_add_extra_property_json_schema_version_10(self):
        self.db.image_update(None, UUID1, {'properties': {'foo': 'bar'}})
        request = unit_test_utils.get_fake_request()
        changes = [{
            'json_schema_version': 10, 'op': 'add',
            'path': ['foo'], 'value': 'baz'
        }]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual({'foo': 'baz'}, output.extra_properties)

    def test_update_add_property_already_present_json_schema_version_4(self):
        request = unit_test_utils.get_fake_request()
        properties = {'foo': 'bar'}
        self.db.image_update(None, UUID1, {'properties': properties})

        output = self.controller.show(request, UUID1)
        self.assertEqual('bar', output.extra_properties['foo'])

        changes = [
            {'json_schema_version': 4, 'op': 'add',
             'path': ['foo'], 'value': 'baz'},
        ]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_property_already_present_json_schema_version_10(self):
        request = unit_test_utils.get_fake_request()
        properties = {'foo': 'bar'}
        self.db.image_update(None, UUID1, {'properties': properties})

        output = self.controller.show(request, UUID1)
        self.assertEqual('bar', output.extra_properties['foo'])

        changes = [
            {'json_schema_version': 10, 'op': 'add',
             'path': ['foo'], 'value': 'baz'},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual({'foo': 'baz'}, output.extra_properties)

    def test_update_add_locations(self):
        self.config(show_multiple_locations=True)
        new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'add', 'path': ['locations', '-'],
                    'value': new_location}]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(2, len(output.locations))
        self.assertEqual(new_location,
                         output.locations[1])

    def test_replace_location_possible_on_queued(self):
        self.skipTest('This test is intermittently failing at the gate. '
                      'See bug #1649300')
        self.config(show_multiple_locations=True)
        self.images = [
            _db_fixture('1', owner=TENANT1, checksum=CHKSUM,
                        name='1', is_public=True,
                        disk_format='raw',
                        container_format='bare',
                        status='queued'),
        ]
        self.db.image_create(None, self.images[0])
        request = unit_test_utils.get_fake_request()
        new_location = {'url': '%s/fake_location_1' % BASE_URI,
                        'metadata': {}}
        changes = [{'op': 'replace', 'path': ['locations'],
                    'value': [new_location]}]
        output = self.controller.update(request, '1', changes)
        self.assertEqual('1', output.image_id)
        self.assertEqual(1, len(output.locations))
        self.assertEqual(new_location, output.locations[0])

    def test_add_location_possible_on_queued(self):
        self.skipTest('This test is intermittently failing at the gate. '
                      'See bug #1649300')
        self.config(show_multiple_locations=True)
        self.images = [
            _db_fixture('1', owner=TENANT1, checksum=CHKSUM,
                        name='1', is_public=True,
                        disk_format='raw',
                        container_format='bare',
                        status='queued'),
        ]
        self.db.image_create(None, self.images[0])
        request = unit_test_utils.get_fake_request()
        new_location = {'url': '%s/fake_location_1' % BASE_URI,
                        'metadata': {}}
        changes = [{'op': 'add', 'path': ['locations', '-'],
                    'value': new_location}]
        output = self.controller.update(request, '1', changes)
        self.assertEqual('1', output.image_id)
        self.assertEqual(1, len(output.locations))
        self.assertEqual(new_location, output.locations[0])

    def _test_update_locations_status(self, image_status, update):
        self.config(show_multiple_locations=True)
        self.images = [
            _db_fixture('1', owner=TENANT1, checksum=CHKSUM,
                        name='1',
                        disk_format='raw',
                        container_format='bare',
                        status=image_status),
        ]
        request = unit_test_utils.get_fake_request()
        if image_status == 'deactivated':
            self.db.image_create(request.context, self.images[0])
        else:
            self.db.image_create(None, self.images[0])
        new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        changes = [{'op': update, 'path': ['locations', '-'],
                    'value': new_location}]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, request, '1', changes)

    def test_location_add_not_permitted_status_saving(self):
        self._test_update_locations_status('saving', 'add')

    def test_location_add_not_permitted_status_deactivated(self):
        self._test_update_locations_status('deactivated', 'add')

    def test_location_add_not_permitted_status_deleted(self):
        self._test_update_locations_status('deleted', 'add')

    def test_location_add_not_permitted_status_pending_delete(self):
        self._test_update_locations_status('pending_delete', 'add')

    def test_location_add_not_permitted_status_killed(self):
        self._test_update_locations_status('killed', 'add')

    def test_location_remove_not_permitted_status_saving(self):
        self._test_update_locations_status('saving', 'remove')

    def test_location_remove_not_permitted_status_deactivated(self):
        self._test_update_locations_status('deactivated', 'remove')

    def test_location_remove_not_permitted_status_deleted(self):
        self._test_update_locations_status('deleted', 'remove')

    def test_location_remove_not_permitted_status_pending_delete(self):
        self._test_update_locations_status('pending_delete', 'remove')

    def test_location_remove_not_permitted_status_killed(self):
        self._test_update_locations_status('killed', 'remove')

    def test_location_remove_not_permitted_status_queued(self):
        self._test_update_locations_status('queued', 'remove')

    def test_location_replace_not_permitted_status_saving(self):
        self._test_update_locations_status('saving', 'replace')

    def test_location_replace_not_permitted_status_deactivated(self):
        self._test_update_locations_status('deactivated', 'replace')

    def test_location_replace_not_permitted_status_deleted(self):
        self._test_update_locations_status('deleted', 'replace')

    def test_location_replace_not_permitted_status_pending_delete(self):
        self._test_update_locations_status('pending_delete', 'replace')

    def test_location_replace_not_permitted_status_killed(self):
        self._test_update_locations_status('killed', 'replace')

    def test_update_add_locations_insertion(self):
        self.config(show_multiple_locations=True)
        new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'add', 'path': ['locations', '0'],
                    'value': new_location}]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(2, len(output.locations))
        self.assertEqual(new_location, output.locations[0])

    def test_update_add_locations_list(self):
        self.config(show_multiple_locations=True)
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'add', 'path': ['locations', '-'],
                    'value': {'url': 'foo', 'metadata': {}}}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_locations_invalid(self):
        self.config(show_multiple_locations=True)
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'add', 'path': ['locations', '-'],
                    'value': {'url': 'unknow://foo', 'metadata': {}}}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1, changes)
        changes = [{'op': 'add', 'path': ['locations', None],
                    'value': {'url': 'unknow://foo', 'metadata': {}}}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_duplicate_locations(self):
        self.config(show_multiple_locations=True)
        new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'add', 'path': ['locations', '-'],
                    'value': new_location}]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(2, len(output.locations))
        self.assertEqual(new_location, output.locations[1])
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request,
                          UUID1, changes)

    def test_update_add_too_many_locations(self):
        self.config(show_multiple_locations=True)
        self.config(image_location_quota=1)
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_1' % BASE_URI,
                       'metadata': {}}},
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_2' % BASE_URI,
                       'metadata': {}}},
        ]
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_and_remove_too_many_locations(self):
        self.config(show_multiple_locations=True)
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_1' % BASE_URI,
                       'metadata': {}}},
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_2' % BASE_URI,
                       'metadata': {}}},
        ]
        self.controller.update(request, UUID1, changes)
        self.config(image_location_quota=1)

        # We must remove two locations to avoid being
        # over the limit of 1 location
        changes = [
            {'op': 'remove', 'path': ['locations', '0']},
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_3' % BASE_URI,
                       'metadata': {}}},
        ]
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.update, request, UUID1, changes)

    def test_update_add_unlimited_locations(self):
        self.config(show_multiple_locations=True)
        self.config(image_location_quota=-1)
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_1' % BASE_URI,
                       'metadata': {}}},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_remove_location_while_over_limit(self):
        """Ensure that image locations can be removed.

        Image locations should be able to be removed as long as the image
        has fewer than the limited number of image locations after the
        transaction.
        """
        self.config(show_multiple_locations=True)
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_1' % BASE_URI,
                       'metadata': {}}},
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_2' % BASE_URI,
                       'metadata': {}}},
        ]
        self.controller.update(request, UUID1, changes)
        self.config(image_location_quota=1)
        self.config(show_multiple_locations=True)

        # We must remove two locations to avoid being over
        # the limit of 1 location
        changes = [
            {'op': 'remove', 'path': ['locations', '0']},
            {'op': 'remove', 'path': ['locations', '0']},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(1, len(output.locations))
        self.assertIn('fake_location_2', output.locations[0]['url'])
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_add_and_remove_location_under_limit(self):
        """Ensure that image locations can be removed.

        Image locations should be able to be added and removed simultaneously
        as long as the image has fewer than the limited number of image
        locations after the transaction.
        """
        self.stubs.Set(store, 'get_size_from_backend',
                       unit_test_utils.fake_get_size_from_backend)
        self.config(show_multiple_locations=True)
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_1' % BASE_URI,
                       'metadata': {}}},
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_2' % BASE_URI,
                       'metadata': {}}},
        ]
        self.controller.update(request, UUID1, changes)
        self.config(image_location_quota=2)

        # We must remove two locations so that the image stays
        # within the limit of 2 locations
        changes = [
            {'op': 'remove', 'path': ['locations', '0']},
            {'op': 'remove', 'path': ['locations', '0']},
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location_3' % BASE_URI,
                       'metadata': {}}},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(2, len(output.locations))
        self.assertIn('fake_location_3', output.locations[1]['url'])
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_remove_base_property(self):
        self.db.image_update(None, UUID1, {'properties': {'foo': 'bar'}})
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'remove', 'path': ['name']}]
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update, request, UUID1, changes)

    def test_update_remove_property(self):
        request = unit_test_utils.get_fake_request()
        properties = {'foo': 'bar', 'snitch': 'golden'}
        self.db.image_update(None, UUID1, {'properties': properties})
        output = self.controller.show(request, UUID1)
        self.assertEqual('bar', output.extra_properties['foo'])
        self.assertEqual('golden', output.extra_properties['snitch'])
        changes = [
            {'op': 'remove', 'path': ['snitch']},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual({'foo': 'bar'}, output.extra_properties)
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_remove_missing_property(self):
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'remove', 'path': ['foo']},
        ]
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller.update, request, UUID1, changes)

    def test_update_remove_location(self):
        self.config(show_multiple_locations=True)
        self.stubs.Set(store, 'get_size_from_backend',
                       unit_test_utils.fake_get_size_from_backend)

        request = unit_test_utils.get_fake_request()
        new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        changes = [{'op': 'add', 'path': ['locations', '-'],
                    'value': new_location}]
        self.controller.update(request, UUID1, changes)
        changes = [{'op': 'remove', 'path': ['locations', '0']}]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(1, len(output.locations))
        self.assertEqual('active', output.status)

    def test_update_remove_location_invalid_pos(self):
        self.config(show_multiple_locations=True)
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location' % BASE_URI,
                       'metadata': {}}}]
        self.controller.update(request, UUID1, changes)
        changes = [{'op': 'remove', 'path': ['locations', None]}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1, changes)
        changes = [{'op': 'remove', 'path': ['locations', '-1']}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1, changes)
        changes = [{'op': 'remove', 'path': ['locations', '99']}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1, changes)
        changes = [{'op': 'remove', 'path': ['locations', 'x']}]
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, request, UUID1, changes)

    def test_update_remove_location_store_exception(self):
        self.config(show_multiple_locations=True)

        def fake_delete_image_location_from_backend(self, *args, **kwargs):
            raise Exception('fake_backend_exception')
        self.stubs.Set(self.store_utils,
                       'delete_image_location_from_backend',
                       fake_delete_image_location_from_backend)

        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'add', 'path': ['locations', '-'],
             'value': {'url': '%s/fake_location' % BASE_URI,
                       'metadata': {}}}]
        self.controller.update(request, UUID1, changes)
        changes = [{'op': 'remove', 'path': ['locations', '0']}]
        self.assertRaises(webob.exc.HTTPInternalServerError,
                          self.controller.update, request, UUID1, changes)

    def test_update_multiple_changes(self):
        request = unit_test_utils.get_fake_request()
        properties = {'foo': 'bar', 'snitch': 'golden'}
        self.db.image_update(None, UUID1, {'properties': properties})

        changes = [
            {'op': 'replace', 'path': ['min_ram'], 'value': 128},
            {'op': 'replace', 'path': ['foo'], 'value': 'baz'},
            {'op': 'remove', 'path': ['snitch']},
            {'op': 'add', 'path': ['kb'], 'value': 'dvorak'},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(UUID1, output.image_id)
        self.assertEqual(128, output.min_ram)
        self.addDetail('extra_properties',
                       testtools.content.json_content(
                           jsonutils.dumps(output.extra_properties)))
        self.assertEqual(2, len(output.extra_properties))
        self.assertEqual('baz', output.extra_properties['foo'])
        self.assertEqual('dvorak', output.extra_properties['kb'])
        self.assertNotEqual(output.created_at, output.updated_at)

    def test_update_invalid_operation(self):
        request = unit_test_utils.get_fake_request()
        change = {'op': 'test', 'path': 'options', 'value': 'puts'}
        try:
            self.controller.update(request, UUID1, [change])
        except AttributeError:
            pass  # AttributeError is the desired behavior
        else:
            self.fail('Failed to raise AttributeError on %s' % change)

    def test_update_duplicate_tags(self):
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'replace', 'path': ['tags'], 'value': ['ping', 'ping']},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual(1, len(output.tags))
        self.assertIn('ping', output.tags)
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.update', output_log['event_type'])
        self.assertEqual(UUID1, output_log['payload']['id'])

    def test_update_disabled_notification(self):
        self.config(disabled_notifications=["image.update"])
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'replace', 'path': ['name'], 'value': 'Ping Pong'},
        ]
        output = self.controller.update(request, UUID1, changes)
        self.assertEqual('Ping Pong', output.name)
        output_logs = self.notifier.get_logs()
        self.assertEqual(0, len(output_logs))

    def test_delete(self):
        request = unit_test_utils.get_fake_request()
        self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
        try:
            self.controller.delete(request, UUID1)
            output_logs = self.notifier.get_logs()
            self.assertEqual(1, len(output_logs))
            output_log = output_logs[0]
            self.assertEqual('INFO', output_log['notification_type'])
            self.assertEqual("image.delete", output_log['event_type'])
        except Exception as e:
            self.fail("Delete raised exception: %s" % e)

        deleted_img = self.db.image_get(request.context, UUID1,
                                        force_show_deleted=True)
        self.assertTrue(deleted_img['deleted'])
        self.assertEqual('deleted', deleted_img['status'])
        self.assertNotIn('%s/%s' % (BASE_URI, UUID1), self.store.data)

    def test_delete_with_tags(self):
        request = unit_test_utils.get_fake_request()
        changes = [
            {'op': 'replace', 'path': ['tags'],
             'value': ['many', 'cool', 'new', 'tags']},
        ]
        self.controller.update(request, UUID1, changes)
        self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
        self.controller.delete(request, UUID1)
        output_logs = self.notifier.get_logs()

        # Get `delete` event from logs
        output_delete_logs = [output_log for output_log in output_logs
                              if output_log['event_type'] == 'image.delete']

        self.assertEqual(1, len(output_delete_logs))
        output_log = output_delete_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])

        deleted_img = self.db.image_get(request.context, UUID1,
                                        force_show_deleted=True)
        self.assertTrue(deleted_img['deleted'])
        self.assertEqual('deleted', deleted_img['status'])
        self.assertNotIn('%s/%s' % (BASE_URI, UUID1), self.store.data)

    def test_delete_disabled_notification(self):
        self.config(disabled_notifications=["image.delete"])
        request = unit_test_utils.get_fake_request()
        self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data)
        try:
            self.controller.delete(request, UUID1)
            output_logs = self.notifier.get_logs()
            self.assertEqual(0, len(output_logs))
        except Exception as e:
            self.fail("Delete raised exception: %s" % e)

        deleted_img = self.db.image_get(request.context, UUID1,
                                        force_show_deleted=True)
        self.assertTrue(deleted_img['deleted'])
        self.assertEqual('deleted', deleted_img['status'])
        self.assertNotIn('%s/%s' % (BASE_URI, UUID1), self.store.data)

    def test_delete_queued_updates_status(self):
        """Ensure status of queued image is updated (LP bug #1048851)"""
        request = unit_test_utils.get_fake_request(is_admin=True)
        image = self.db.image_create(request.context, {'status': 'queued'})
        image_id = image['id']
        self.controller.delete(request, image_id)

        image = self.db.image_get(request.context, image_id,
                                  force_show_deleted=True)
        self.assertTrue(image['deleted'])
        self.assertEqual('deleted', image['status'])

    def test_delete_queued_updates_status_delayed_delete(self):
        """Ensure status of queued image is updated (LP bug #1048851).

        Must be set to 'deleted' when delayed_delete is enabled.
""" self.config(delayed_delete=True) request = unit_test_utils.get_fake_request(is_admin=True) image = self.db.image_create(request.context, {'status': 'queued'}) image_id = image['id'] self.controller.delete(request, image_id) image = self.db.image_get(request.context, image_id, force_show_deleted=True) self.assertTrue(image['deleted']) self.assertEqual('deleted', image['status']) def test_delete_not_in_store(self): request = unit_test_utils.get_fake_request() self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data) for k in self.store.data: if UUID1 in k: del self.store.data[k] break self.controller.delete(request, UUID1) deleted_img = self.db.image_get(request.context, UUID1, force_show_deleted=True) self.assertTrue(deleted_img['deleted']) self.assertEqual('deleted', deleted_img['status']) self.assertNotIn('%s/%s' % (BASE_URI, UUID1), self.store.data) def test_delayed_delete(self): self.config(delayed_delete=True) request = unit_test_utils.get_fake_request() self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data) self.controller.delete(request, UUID1) deleted_img = self.db.image_get(request.context, UUID1, force_show_deleted=True) self.assertTrue(deleted_img['deleted']) self.assertEqual('pending_delete', deleted_img['status']) self.assertIn('%s/%s' % (BASE_URI, UUID1), self.store.data) def test_delete_non_existent(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, request, str(uuid.uuid4())) def test_delete_already_deleted_image_admin(self): request = unit_test_utils.get_fake_request(is_admin=True) self.controller.delete(request, UUID1) self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, request, UUID1) def test_delete_not_allowed(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, request, UUID4) def test_delete_in_use(self): def fake_safe_delete_from_backend(self, *args, **kwargs): raise 
store.exceptions.InUseByStore() self.stubs.Set(self.store_utils, 'safe_delete_from_backend', fake_safe_delete_from_backend) request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPConflict, self.controller.delete, request, UUID1) def test_delete_has_snapshot(self): def fake_safe_delete_from_backend(self, *args, **kwargs): raise store.exceptions.HasSnapshot() self.stubs.Set(self.store_utils, 'safe_delete_from_backend', fake_safe_delete_from_backend) request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPConflict, self.controller.delete, request, UUID1) def test_delete_to_unallowed_status(self): # from deactivated to pending-delete self.config(delayed_delete=True) request = unit_test_utils.get_fake_request(is_admin=True) self.action_controller.deactivate(request, UUID1) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, request, UUID1) def test_delete_uploading_status_image(self): """Ensure status of uploading image is updated (LP bug #1733289)""" self.config(enable_image_import=True) request = unit_test_utils.get_fake_request(is_admin=True) image = self.db.image_create(request.context, {'status': 'uploading'}) image_id = image['id'] with mock.patch.object(self.store, 'delete_from_backend') as mock_store: self.controller.delete(request, image_id) # Ensure delete_from_backend is called self.assertEqual(1, mock_store.call_count) image = self.db.image_get(request.context, image_id, force_show_deleted=True) self.assertTrue(image['deleted']) self.assertEqual('deleted', image['status']) def test_index_with_invalid_marker(self): fake_uuid = str(uuid.uuid4()) request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, request, marker=fake_uuid) def test_invalid_locations_op_pos(self): pos = self.controller._get_locations_op_pos(None, 2, True) self.assertIsNone(pos) pos = self.controller._get_locations_op_pos('1', None, True) self.assertIsNone(pos) def 
test_image_import(self): request = unit_test_utils.get_fake_request() output = self.controller.import_image(request, UUID4, {'method': {'name': 'glance-direct'}}) self.assertEqual(UUID4, output) def test_image_import_not_allowed(self): request = unit_test_utils.get_fake_request() # NOTE(abhishekk): For coverage purpose setting tenant to # None. It is not expected to do in normal scenarios. request.context.tenant = None self.assertRaises(webob.exc.HTTPForbidden, self.controller.import_image, request, UUID4, {'method': {'name': 'glance-direct'}}) class TestImagesControllerPolicies(base.IsolatedUnitTest): def setUp(self): super(TestImagesControllerPolicies, self).setUp() self.db = unit_test_utils.FakeDB() self.policy = unit_test_utils.FakePolicyEnforcer() self.controller = glance.api.v2.images.ImagesController(self.db, self.policy) store = unit_test_utils.FakeStoreAPI() self.store_utils = unit_test_utils.FakeStoreUtils(store) def test_index_unauthorized(self): rules = {"get_images": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPForbidden, self.controller.index, request) def test_show_unauthorized(self): rules = {"get_image": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPForbidden, self.controller.show, request, image_id=UUID2) def test_create_image_unauthorized(self): rules = {"add_image": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() image = {'name': 'image-1'} extra_properties = {} tags = [] self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, request, image, extra_properties, tags) def test_create_public_image_unauthorized(self): rules = {"publicize_image": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() image = {'name': 'image-1', 'visibility': 'public'} extra_properties = {} tags = [] self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, 
request, image, extra_properties, tags) def test_create_community_image_unauthorized(self): rules = {"communitize_image": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() image = {'name': 'image-c1', 'visibility': 'community'} extra_properties = {} tags = [] self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, request, image, extra_properties, tags) def test_update_unauthorized(self): rules = {"modify_image": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() changes = [{'op': 'replace', 'path': ['name'], 'value': 'image-2'}] self.assertRaises(webob.exc.HTTPForbidden, self.controller.update, request, UUID1, changes) def test_update_publicize_image_unauthorized(self): rules = {"publicize_image": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() changes = [{'op': 'replace', 'path': ['visibility'], 'value': 'public'}] self.assertRaises(webob.exc.HTTPForbidden, self.controller.update, request, UUID1, changes) def test_update_communitize_image_unauthorized(self): rules = {"communitize_image": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() changes = [{'op': 'replace', 'path': ['visibility'], 'value': 'community'}] self.assertRaises(webob.exc.HTTPForbidden, self.controller.update, request, UUID1, changes) def test_update_depublicize_image_unauthorized(self): rules = {"publicize_image": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() changes = [{'op': 'replace', 'path': ['visibility'], 'value': 'private'}] output = self.controller.update(request, UUID1, changes) self.assertEqual('private', output.visibility) def test_update_decommunitize_image_unauthorized(self): rules = {"communitize_image": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() changes = [{'op': 'replace', 'path': ['visibility'], 'value': 'private'}] output = self.controller.update(request, UUID1, changes) 
        self.assertEqual('private', output.visibility)

    def test_update_get_image_location_unauthorized(self):
        rules = {"get_image_location": False}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'replace', 'path': ['locations'], 'value': []}]
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
                          request, UUID1, changes)

    def test_update_set_image_location_unauthorized(self):
        def fake_delete_image_location_from_backend(self, *args, **kwargs):
            pass

        rules = {"set_image_location": False}
        self.policy.set_rules(rules)
        new_location = {'url': '%s/fake_location' % BASE_URI, 'metadata': {}}
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'add', 'path': ['locations', '-'],
                    'value': new_location}]
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
                          request, UUID1, changes)

    def test_update_delete_image_location_unauthorized(self):
        rules = {"delete_image_location": False}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        changes = [{'op': 'replace', 'path': ['locations'], 'value': []}]
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
                          request, UUID1, changes)

    def test_delete_unauthorized(self):
        rules = {"delete_image": False}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete,
                          request, UUID1)


class TestImagesDeserializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImagesDeserializer, self).setUp()
        self.deserializer = glance.api.v2.images.RequestDeserializer()

    def test_create_minimal(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({})
        output = self.deserializer.create(request)
        expected = {'image': {}, 'extra_properties': {}, 'tags': []}
        self.assertEqual(expected, output)

    def test_create_invalid_id(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({'id': 'gabe'})
        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create,
                          request)

    def test_create_id_to_image_id(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({'id': UUID4})
        output = self.deserializer.create(request)
        expected = {'image': {'image_id': UUID4},
                    'extra_properties': {},
                    'tags': []}
        self.assertEqual(expected, output)

    def test_create_no_body(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create,
                          request)

    def test_create_full(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({
            'id': UUID3,
            'name': 'image-1',
            'visibility': 'public',
            'tags': ['one', 'two'],
            'container_format': 'ami',
            'disk_format': 'ami',
            'min_ram': 128,
            'min_disk': 10,
            'foo': 'bar',
            'protected': True,
        })
        output = self.deserializer.create(request)
        properties = {
            'image_id': UUID3,
            'name': 'image-1',
            'visibility': 'public',
            'container_format': 'ami',
            'disk_format': 'ami',
            'min_ram': 128,
            'min_disk': 10,
            'protected': True,
        }
        self.maxDiff = None
        expected = {'image': properties,
                    'extra_properties': {'foo': 'bar'},
                    'tags': ['one', 'two']}
        self.assertEqual(expected, output)

    def test_create_invalid_property_key(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({
            'id': UUID3,
            'name': 'image-1',
            'visibility': 'public',
            'tags': ['one', 'two'],
            'container_format': 'ami',
            'disk_format': 'ami',
            'min_ram': 128,
            'min_disk': 10,
            'f' * 256: 'bar',
            'protected': True,
        })
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.create, request)

    def test_create_readonly_attributes_forbidden(self):
        bodies = [
            {'direct_url': 'http://example.com'},
            {'self': 'http://example.com'},
            {'file': 'http://example.com'},
            {'schema': 'http://example.com'},
        ]
        for body in bodies:
            request = unit_test_utils.get_fake_request()
            request.body = jsonutils.dump_as_bytes(body)
            self.assertRaises(webob.exc.HTTPForbidden,
                              self.deserializer.create,
request) def _get_fake_patch_request(self, content_type_minor_version=1): request = unit_test_utils.get_fake_request() template = 'application/openstack-images-v2.%d-json-patch' request.content_type = template % content_type_minor_version return request def test_update_empty_body(self): request = self._get_fake_patch_request() request.body = jsonutils.dump_as_bytes([]) output = self.deserializer.update(request) expected = {'changes': []} self.assertEqual(expected, output) def test_update_unsupported_content_type(self): request = unit_test_utils.get_fake_request() request.content_type = 'application/json-patch' request.body = jsonutils.dump_as_bytes([]) try: self.deserializer.update(request) except webob.exc.HTTPUnsupportedMediaType as e: # desired result, but must have correct Accept-Patch header accept_patch = ['application/openstack-images-v2.1-json-patch', 'application/openstack-images-v2.0-json-patch'] expected = ', '.join(sorted(accept_patch)) self.assertEqual(expected, e.headers['Accept-Patch']) else: self.fail('Did not raise HTTPUnsupportedMediaType') def test_update_body_not_a_list(self): bodies = [ {'op': 'add', 'path': '/someprop', 'value': 'somevalue'}, 'just some string', 123, True, False, None, ] for body in bodies: request = self._get_fake_patch_request() request.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_update_invalid_changes(self): changes = [ ['a', 'list', 'of', 'stuff'], 'just some string', 123, True, False, None, {'op': 'invalid', 'path': '/name', 'value': 'fedora'} ] for change in changes: request = self._get_fake_patch_request() request.body = jsonutils.dump_as_bytes([change]) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_update(self): request = self._get_fake_patch_request() body = [ {'op': 'replace', 'path': '/name', 'value': 'fedora'}, {'op': 'replace', 'path': '/tags', 'value': ['king', 'kong']}, {'op': 'replace', 
'path': '/foo', 'value': 'bar'}, {'op': 'add', 'path': '/bebim', 'value': 'bap'}, {'op': 'remove', 'path': '/sparks'}, {'op': 'add', 'path': '/locations/-', 'value': {'url': 'scheme3://path3', 'metadata': {}}}, {'op': 'add', 'path': '/locations/10', 'value': {'url': 'scheme4://path4', 'metadata': {}}}, {'op': 'remove', 'path': '/locations/2'}, {'op': 'replace', 'path': '/locations', 'value': []}, {'op': 'replace', 'path': '/locations', 'value': [{'url': 'scheme5://path5', 'metadata': {}}, {'url': 'scheme6://path6', 'metadata': {}}]}, ] request.body = jsonutils.dump_as_bytes(body) output = self.deserializer.update(request) expected = {'changes': [ {'json_schema_version': 10, 'op': 'replace', 'path': ['name'], 'value': 'fedora'}, {'json_schema_version': 10, 'op': 'replace', 'path': ['tags'], 'value': ['king', 'kong']}, {'json_schema_version': 10, 'op': 'replace', 'path': ['foo'], 'value': 'bar'}, {'json_schema_version': 10, 'op': 'add', 'path': ['bebim'], 'value': 'bap'}, {'json_schema_version': 10, 'op': 'remove', 'path': ['sparks']}, {'json_schema_version': 10, 'op': 'add', 'path': ['locations', '-'], 'value': {'url': 'scheme3://path3', 'metadata': {}}}, {'json_schema_version': 10, 'op': 'add', 'path': ['locations', '10'], 'value': {'url': 'scheme4://path4', 'metadata': {}}}, {'json_schema_version': 10, 'op': 'remove', 'path': ['locations', '2']}, {'json_schema_version': 10, 'op': 'replace', 'path': ['locations'], 'value': []}, {'json_schema_version': 10, 'op': 'replace', 'path': ['locations'], 'value': [{'url': 'scheme5://path5', 'metadata': {}}, {'url': 'scheme6://path6', 'metadata': {}}]}, ]} self.assertEqual(expected, output) def test_update_v2_0_compatibility(self): request = self._get_fake_patch_request(content_type_minor_version=0) body = [ {'replace': '/name', 'value': 'fedora'}, {'replace': '/tags', 'value': ['king', 'kong']}, {'replace': '/foo', 'value': 'bar'}, {'add': '/bebim', 'value': 'bap'}, {'remove': '/sparks'}, {'add': '/locations/-', 'value': 
{'url': 'scheme3://path3', 'metadata': {}}}, {'add': '/locations/10', 'value': {'url': 'scheme4://path4', 'metadata': {}}}, {'remove': '/locations/2'}, {'replace': '/locations', 'value': []}, {'replace': '/locations', 'value': [{'url': 'scheme5://path5', 'metadata': {}}, {'url': 'scheme6://path6', 'metadata': {}}]}, ] request.body = jsonutils.dump_as_bytes(body) output = self.deserializer.update(request) expected = {'changes': [ {'json_schema_version': 4, 'op': 'replace', 'path': ['name'], 'value': 'fedora'}, {'json_schema_version': 4, 'op': 'replace', 'path': ['tags'], 'value': ['king', 'kong']}, {'json_schema_version': 4, 'op': 'replace', 'path': ['foo'], 'value': 'bar'}, {'json_schema_version': 4, 'op': 'add', 'path': ['bebim'], 'value': 'bap'}, {'json_schema_version': 4, 'op': 'remove', 'path': ['sparks']}, {'json_schema_version': 4, 'op': 'add', 'path': ['locations', '-'], 'value': {'url': 'scheme3://path3', 'metadata': {}}}, {'json_schema_version': 4, 'op': 'add', 'path': ['locations', '10'], 'value': {'url': 'scheme4://path4', 'metadata': {}}}, {'json_schema_version': 4, 'op': 'remove', 'path': ['locations', '2']}, {'json_schema_version': 4, 'op': 'replace', 'path': ['locations'], 'value': []}, {'json_schema_version': 4, 'op': 'replace', 'path': ['locations'], 'value': [{'url': 'scheme5://path5', 'metadata': {}}, {'url': 'scheme6://path6', 'metadata': {}}]}, ]} self.assertEqual(expected, output) def test_update_base_attributes(self): request = self._get_fake_patch_request() body = [ {'op': 'replace', 'path': '/name', 'value': 'fedora'}, {'op': 'replace', 'path': '/visibility', 'value': 'public'}, {'op': 'replace', 'path': '/tags', 'value': ['king', 'kong']}, {'op': 'replace', 'path': '/protected', 'value': True}, {'op': 'replace', 'path': '/container_format', 'value': 'bare'}, {'op': 'replace', 'path': '/disk_format', 'value': 'raw'}, {'op': 'replace', 'path': '/min_ram', 'value': 128}, {'op': 'replace', 'path': '/min_disk', 'value': 10}, {'op': 'replace', 
'path': '/locations', 'value': []}, {'op': 'replace', 'path': '/locations', 'value': [{'url': 'scheme5://path5', 'metadata': {}}, {'url': 'scheme6://path6', 'metadata': {}}]} ] request.body = jsonutils.dump_as_bytes(body) output = self.deserializer.update(request) expected = {'changes': [ {'json_schema_version': 10, 'op': 'replace', 'path': ['name'], 'value': 'fedora'}, {'json_schema_version': 10, 'op': 'replace', 'path': ['visibility'], 'value': 'public'}, {'json_schema_version': 10, 'op': 'replace', 'path': ['tags'], 'value': ['king', 'kong']}, {'json_schema_version': 10, 'op': 'replace', 'path': ['protected'], 'value': True}, {'json_schema_version': 10, 'op': 'replace', 'path': ['container_format'], 'value': 'bare'}, {'json_schema_version': 10, 'op': 'replace', 'path': ['disk_format'], 'value': 'raw'}, {'json_schema_version': 10, 'op': 'replace', 'path': ['min_ram'], 'value': 128}, {'json_schema_version': 10, 'op': 'replace', 'path': ['min_disk'], 'value': 10}, {'json_schema_version': 10, 'op': 'replace', 'path': ['locations'], 'value': []}, {'json_schema_version': 10, 'op': 'replace', 'path': ['locations'], 'value': [{'url': 'scheme5://path5', 'metadata': {}}, {'url': 'scheme6://path6', 'metadata': {}}]} ]} self.assertEqual(expected, output) def test_update_disallowed_attributes(self): samples = { 'direct_url': '/a/b/c/d', 'self': '/e/f/g/h', 'file': '/e/f/g/h/file', 'schema': '/i/j/k', } for key, value in samples.items(): request = self._get_fake_patch_request() body = [{'op': 'replace', 'path': '/%s' % key, 'value': value}] request.body = jsonutils.dump_as_bytes(body) try: self.deserializer.update(request) except webob.exc.HTTPForbidden: pass # desired behavior else: self.fail("Updating %s did not result in HTTPForbidden" % key) def test_update_readonly_attributes(self): samples = { 'id': '00000000-0000-0000-0000-000000000000', 'status': 'active', 'checksum': 'abcdefghijklmnopqrstuvwxyz012345', 'size': 9001, 'virtual_size': 9001, 'created_at': ISOTIME, 
'updated_at': ISOTIME, } for key, value in samples.items(): request = self._get_fake_patch_request() body = [{'op': 'replace', 'path': '/%s' % key, 'value': value}] request.body = jsonutils.dump_as_bytes(body) try: self.deserializer.update(request) except webob.exc.HTTPForbidden: pass # desired behavior else: self.fail("Updating %s did not result in HTTPForbidden" % key) def test_update_reserved_attributes(self): samples = { 'deleted': False, 'deleted_at': ISOTIME, } for key, value in samples.items(): request = self._get_fake_patch_request() body = [{'op': 'replace', 'path': '/%s' % key, 'value': value}] request.body = jsonutils.dump_as_bytes(body) try: self.deserializer.update(request) except webob.exc.HTTPForbidden: pass # desired behavior else: self.fail("Updating %s did not result in HTTPForbidden" % key) def test_update_invalid_attributes(self): keys = [ 'noslash', '///twoslash', '/two/ /slash', '/ / ', '/trailingslash/', '/lone~tilde', '/trailingtilde~' ] for key in keys: request = self._get_fake_patch_request() body = [{'op': 'replace', 'path': '%s' % key, 'value': 'dummy'}] request.body = jsonutils.dump_as_bytes(body) try: self.deserializer.update(request) except webob.exc.HTTPBadRequest: pass # desired behavior else: self.fail("Updating %s did not result in HTTPBadRequest" % key) def test_update_pointer_encoding(self): samples = { '/keywith~1slash': [u'keywith/slash'], '/keywith~0tilde': [u'keywith~tilde'], '/tricky~01': [u'tricky~1'], } for encoded, decoded in samples.items(): request = self._get_fake_patch_request() doc = [{'op': 'replace', 'path': '%s' % encoded, 'value': 'dummy'}] request.body = jsonutils.dump_as_bytes(doc) output = self.deserializer.update(request) self.assertEqual(decoded, output['changes'][0]['path']) def test_update_deep_limited_attributes(self): samples = { 'locations/1/2': [], } for key, value in samples.items(): request = self._get_fake_patch_request() body = [{'op': 'replace', 'path': '/%s' % key, 'value': value}] request.body 
= jsonutils.dump_as_bytes(body) try: self.deserializer.update(request) except webob.exc.HTTPBadRequest: pass # desired behavior else: self.fail("Updating %s did not result in HTTPBadRequest" % key) def test_update_v2_1_missing_operations(self): request = self._get_fake_patch_request() body = [{'path': '/colburn', 'value': 'arcata'}] request.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_update_v2_1_missing_value(self): request = self._get_fake_patch_request() body = [{'op': 'replace', 'path': '/colburn'}] request.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_update_v2_1_missing_path(self): request = self._get_fake_patch_request() body = [{'op': 'replace', 'value': 'arcata'}] request.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_update_v2_0_multiple_operations(self): request = self._get_fake_patch_request(content_type_minor_version=0) body = [{'replace': '/foo', 'add': '/bar', 'value': 'snore'}] request.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_update_v2_0_missing_operations(self): request = self._get_fake_patch_request(content_type_minor_version=0) body = [{'value': 'arcata'}] request.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_update_v2_0_missing_value(self): request = self._get_fake_patch_request(content_type_minor_version=0) body = [{'replace': '/colburn'}] request.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_index(self): marker = str(uuid.uuid4()) path = '/images?limit=1&marker=%s&member_status=pending' % marker request = unit_test_utils.get_fake_request(path) expected = {'limit': 1, 
'marker': marker, 'sort_key': ['created_at'], 'sort_dir': ['desc'], 'member_status': 'pending', 'filters': {}} output = self.deserializer.index(request) self.assertEqual(expected, output) def test_index_with_filter(self): name = 'My Little Image' path = '/images?name=%s' % name request = unit_test_utils.get_fake_request(path) output = self.deserializer.index(request) self.assertEqual(name, output['filters']['name']) def test_index_strip_params_from_filters(self): name = 'My Little Image' path = '/images?name=%s' % name request = unit_test_utils.get_fake_request(path) output = self.deserializer.index(request) self.assertEqual(name, output['filters']['name']) self.assertEqual(1, len(output['filters'])) def test_index_with_many_filter(self): name = 'My Little Image' instance_id = str(uuid.uuid4()) path = ('/images?name=%(name)s&id=%(instance_id)s' % {'name': name, 'instance_id': instance_id}) request = unit_test_utils.get_fake_request(path) output = self.deserializer.index(request) self.assertEqual(name, output['filters']['name']) self.assertEqual(instance_id, output['filters']['id']) def test_index_with_filter_and_limit(self): name = 'My Little Image' path = '/images?name=%s&limit=1' % name request = unit_test_utils.get_fake_request(path) output = self.deserializer.index(request) self.assertEqual(name, output['filters']['name']) self.assertEqual(1, output['limit']) def test_index_non_integer_limit(self): request = unit_test_utils.get_fake_request('/images?limit=blah') self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_zero_limit(self): request = unit_test_utils.get_fake_request('/images?limit=0') expected = {'limit': 0, 'sort_key': ['created_at'], 'member_status': 'accepted', 'sort_dir': ['desc'], 'filters': {}} output = self.deserializer.index(request) self.assertEqual(expected, output) def test_index_negative_limit(self): request = unit_test_utils.get_fake_request('/images?limit=-1') 
self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_fraction(self): request = unit_test_utils.get_fake_request('/images?limit=1.1') self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_invalid_status(self): path = '/images?member_status=blah' request = unit_test_utils.get_fake_request(path) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_marker(self): marker = str(uuid.uuid4()) path = '/images?marker=%s' % marker request = unit_test_utils.get_fake_request(path) output = self.deserializer.index(request) self.assertEqual(marker, output.get('marker')) def test_index_marker_not_specified(self): request = unit_test_utils.get_fake_request('/images') output = self.deserializer.index(request) self.assertNotIn('marker', output) def test_index_limit_not_specified(self): request = unit_test_utils.get_fake_request('/images') output = self.deserializer.index(request) self.assertNotIn('limit', output) def test_index_sort_key_id(self): request = unit_test_utils.get_fake_request('/images?sort_key=id') output = self.deserializer.index(request) expected = { 'sort_key': ['id'], 'sort_dir': ['desc'], 'member_status': 'accepted', 'filters': {} } self.assertEqual(expected, output) def test_index_multiple_sort_keys(self): request = unit_test_utils.get_fake_request('/images?' 'sort_key=name&' 'sort_key=size') output = self.deserializer.index(request) expected = { 'sort_key': ['name', 'size'], 'sort_dir': ['desc'], 'member_status': 'accepted', 'filters': {} } self.assertEqual(expected, output) def test_index_invalid_multiple_sort_keys(self): # blah is an invalid sort key request = unit_test_utils.get_fake_request('/images?' 
'sort_key=name&' 'sort_key=blah') self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_sort_dir_asc(self): request = unit_test_utils.get_fake_request('/images?sort_dir=asc') output = self.deserializer.index(request) expected = { 'sort_key': ['created_at'], 'sort_dir': ['asc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) def test_index_multiple_sort_dirs(self): req_string = ('/images?sort_key=name&sort_dir=asc&' 'sort_key=id&sort_dir=desc') request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 'sort_key': ['name', 'id'], 'sort_dir': ['asc', 'desc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) def test_index_new_sorting_syntax_single_key_default_dir(self): req_string = '/images?sort=name' request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 'sort_key': ['name'], 'sort_dir': ['desc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) def test_index_new_sorting_syntax_single_key_desc_dir(self): req_string = '/images?sort=name:desc' request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 'sort_key': ['name'], 'sort_dir': ['desc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) def test_index_new_sorting_syntax_multiple_keys_default_dir(self): req_string = '/images?sort=name,size' request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 'sort_key': ['name', 'size'], 'sort_dir': ['desc', 'desc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) def test_index_new_sorting_syntax_multiple_keys_asc_dir(self): req_string = '/images?sort=name:asc,size:asc' request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 
'sort_key': ['name', 'size'], 'sort_dir': ['asc', 'asc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) def test_index_new_sorting_syntax_multiple_keys_different_dirs(self): req_string = '/images?sort=name:desc,size:asc' request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 'sort_key': ['name', 'size'], 'sort_dir': ['desc', 'asc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) def test_index_new_sorting_syntax_multiple_keys_optional_dir(self): req_string = '/images?sort=name:asc,size' request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 'sort_key': ['name', 'size'], 'sort_dir': ['asc', 'desc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) req_string = '/images?sort=name,size:asc' request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 'sort_key': ['name', 'size'], 'sort_dir': ['desc', 'asc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) req_string = '/images?sort=name,id:asc,size' request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 'sort_key': ['name', 'id', 'size'], 'sort_dir': ['desc', 'asc', 'desc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) req_string = '/images?sort=name:asc,id,size:asc' request = unit_test_utils.get_fake_request(req_string) output = self.deserializer.index(request) expected = { 'sort_key': ['name', 'id', 'size'], 'sort_dir': ['asc', 'desc', 'asc'], 'member_status': 'accepted', 'filters': {}} self.assertEqual(expected, output) def test_index_sort_wrong_sort_dirs_number(self): req_string = '/images?sort_key=name&sort_dir=asc&sort_dir=desc' request = unit_test_utils.get_fake_request(req_string) self.assertRaises(webob.exc.HTTPBadRequest, 
self.deserializer.index, request) def test_index_sort_dirs_fewer_than_keys(self): req_string = ('/images?sort_key=name&sort_dir=asc&sort_key=id&' 'sort_dir=asc&sort_key=created_at') request = unit_test_utils.get_fake_request(req_string) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_sort_wrong_sort_dirs_number_without_key(self): req_string = '/images?sort_dir=asc&sort_dir=desc' request = unit_test_utils.get_fake_request(req_string) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_sort_private_key(self): request = unit_test_utils.get_fake_request('/images?sort_key=min_ram') self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_sort_key_invalid_value(self): # blah is an invalid sort key request = unit_test_utils.get_fake_request('/images?sort_key=blah') self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_sort_dir_invalid_value(self): # foo is an invalid sort dir request = unit_test_utils.get_fake_request('/images?sort_dir=foo') self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_new_sorting_syntax_invalid_request(self): # 'blah' is not a supported sorting key req_string = '/images?sort=blah' request = unit_test_utils.get_fake_request(req_string) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) req_string = '/images?sort=name,blah' request = unit_test_utils.get_fake_request(req_string) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) # 'foo' isn't a valid sort direction req_string = '/images?sort=name:foo' request = unit_test_utils.get_fake_request(req_string) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) # 'asc:desc' isn't a valid sort direction req_string = '/images?sort=name:asc:desc' request = unit_test_utils.get_fake_request(req_string) 
self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_combined_sorting_syntax(self): req_string = '/images?sort_dir=name&sort=name' request = unit_test_utils.get_fake_request(req_string) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.index, request) def test_index_with_tag(self): path = '/images?tag=%s&tag=%s' % ('x86', '64bit') request = unit_test_utils.get_fake_request(path) output = self.deserializer.index(request) self.assertEqual(sorted(['x86', '64bit']), sorted(output['filters']['tags'])) def test_image_import(self): self.config(enable_image_import=True) request = unit_test_utils.get_fake_request() import_body = { "method": { "name": "glance-direct" } } request.body = jsonutils.dump_as_bytes(import_body) output = self.deserializer.import_image(request) expected = {"body": import_body} self.assertEqual(expected, output) def test_import_image_disabled(self): self.config(enable_image_import=False) request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPNotFound, self.deserializer.import_image, request) def test_import_image_invalid_body(self): self.config(enable_image_import=True) request = unit_test_utils.get_fake_request() import_body = { "method1": { "name": "glance-direct" } } request.body = jsonutils.dump_as_bytes(import_body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.import_image, request) def test_import_image_invalid_input(self): self.config(enable_image_import=True) request = unit_test_utils.get_fake_request() import_body = { "method": { "abcd": "glance-direct" } } request.body = jsonutils.dump_as_bytes(import_body) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.import_image, request) def test_import_image_invalid_import_method(self): self.config(enable_image_import=True) request = unit_test_utils.get_fake_request() import_body = { "method": { "name": "abcd" } } request.body = jsonutils.dump_as_bytes(import_body) 
self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.import_image, request) class TestImagesDeserializerWithExtendedSchema(test_utils.BaseTestCase): def setUp(self): super(TestImagesDeserializerWithExtendedSchema, self).setUp() self.config(allow_additional_image_properties=False) custom_image_properties = { 'pants': { 'type': 'string', 'enum': ['on', 'off'], }, } schema = glance.api.v2.images.get_schema(custom_image_properties) self.deserializer = glance.api.v2.images.RequestDeserializer(schema) def test_create(self): request = unit_test_utils.get_fake_request() request.body = jsonutils.dump_as_bytes({ 'name': 'image-1', 'pants': 'on' }) output = self.deserializer.create(request) expected = { 'image': {'name': 'image-1'}, 'extra_properties': {'pants': 'on'}, 'tags': [], } self.assertEqual(expected, output) def test_create_bad_data(self): request = unit_test_utils.get_fake_request() request.body = jsonutils.dump_as_bytes({ 'name': 'image-1', 'pants': 'borked' }) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create, request) def test_update(self): request = unit_test_utils.get_fake_request() request.content_type = 'application/openstack-images-v2.1-json-patch' doc = [{'op': 'add', 'path': '/pants', 'value': 'off'}] request.body = jsonutils.dump_as_bytes(doc) output = self.deserializer.update(request) expected = {'changes': [ {'json_schema_version': 10, 'op': 'add', 'path': ['pants'], 'value': 'off'}, ]} self.assertEqual(expected, output) def test_update_bad_data(self): request = unit_test_utils.get_fake_request() request.content_type = 'application/openstack-images-v2.1-json-patch' doc = [{'op': 'add', 'path': '/pants', 'value': 'cutoffs'}] request.body = jsonutils.dump_as_bytes(doc) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) class TestImagesDeserializerWithAdditionalProperties(test_utils.BaseTestCase): def setUp(self): super(TestImagesDeserializerWithAdditionalProperties, self).setUp() 
self.config(allow_additional_image_properties=True) self.deserializer = glance.api.v2.images.RequestDeserializer() def test_create(self): request = unit_test_utils.get_fake_request() request.body = jsonutils.dump_as_bytes({'foo': 'bar'}) output = self.deserializer.create(request) expected = {'image': {}, 'extra_properties': {'foo': 'bar'}, 'tags': []} self.assertEqual(expected, output) def test_create_with_numeric_property(self): request = unit_test_utils.get_fake_request() request.body = jsonutils.dump_as_bytes({'abc': 123}) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create, request) def test_update_with_numeric_property(self): request = unit_test_utils.get_fake_request() request.content_type = 'application/openstack-images-v2.1-json-patch' doc = [{'op': 'add', 'path': '/foo', 'value': 123}] request.body = jsonutils.dump_as_bytes(doc) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_create_with_list_property(self): request = unit_test_utils.get_fake_request() request.body = jsonutils.dump_as_bytes({'foo': ['bar']}) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create, request) def test_update_with_list_property(self): request = unit_test_utils.get_fake_request() request.content_type = 'application/openstack-images-v2.1-json-patch' doc = [{'op': 'add', 'path': '/foo', 'value': ['bar', 'baz']}] request.body = jsonutils.dump_as_bytes(doc) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) def test_update(self): request = unit_test_utils.get_fake_request() request.content_type = 'application/openstack-images-v2.1-json-patch' doc = [{'op': 'add', 'path': '/foo', 'value': 'bar'}] request.body = jsonutils.dump_as_bytes(doc) output = self.deserializer.update(request) change = { 'json_schema_version': 10, 'op': 'add', 'path': ['foo'], 'value': 'bar' } self.assertEqual({'changes': [change]}, output) class 
TestImagesDeserializerNoAdditionalProperties(test_utils.BaseTestCase): def setUp(self): super(TestImagesDeserializerNoAdditionalProperties, self).setUp() self.config(allow_additional_image_properties=False) self.deserializer = glance.api.v2.images.RequestDeserializer() def test_create_with_additional_properties_disallowed(self): self.config(allow_additional_image_properties=False) request = unit_test_utils.get_fake_request() request.body = jsonutils.dump_as_bytes({'foo': 'bar'}) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.create, request) def test_update(self): request = unit_test_utils.get_fake_request() request.content_type = 'application/openstack-images-v2.1-json-patch' doc = [{'op': 'add', 'path': '/foo', 'value': 'bar'}] request.body = jsonutils.dump_as_bytes(doc) self.assertRaises(webob.exc.HTTPBadRequest, self.deserializer.update, request) class TestImagesSerializer(test_utils.BaseTestCase): def setUp(self): super(TestImagesSerializer, self).setUp() self.serializer = glance.api.v2.images.ResponseSerializer() self.fixtures = [ # NOTE(bcwaldon): This first fixture has every property defined _domain_fixture(UUID1, name='image-1', size=1024, virtual_size=3072, created_at=DATETIME, updated_at=DATETIME, owner=TENANT1, visibility='public', container_format='ami', tags=['one', 'two'], disk_format='ami', min_ram=128, min_disk=10, checksum='ca425b88f047ce8ec45ee90e813ada91'), # NOTE(bcwaldon): This second fixture depends on default behavior # and sets most values to None _domain_fixture(UUID2, created_at=DATETIME, updated_at=DATETIME), ] def test_index(self): expected = { 'images': [ { 'id': UUID1, 'name': 'image-1', 'status': 'queued', 'visibility': 'public', 'protected': False, 'tags': set(['one', 'two']), 'size': 1024, 'virtual_size': 3072, 'checksum': 'ca425b88f047ce8ec45ee90e813ada91', 'container_format': 'ami', 'disk_format': 'ami', 'min_ram': 128, 'min_disk': 10, 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID1, 
'file': '/v2/images/%s/file' % UUID1, 'schema': '/v2/schemas/image', 'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df', }, { 'id': UUID2, 'status': 'queued', 'visibility': 'private', 'protected': False, 'tags': set([]), 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID2, 'file': '/v2/images/%s/file' % UUID2, 'schema': '/v2/schemas/image', 'size': None, 'name': None, 'owner': None, 'min_ram': None, 'min_disk': None, 'checksum': None, 'disk_format': None, 'virtual_size': None, 'container_format': None, }, ], 'first': '/v2/images', 'schema': '/v2/schemas/images', } request = webob.Request.blank('/v2/images') response = webob.Response(request=request) result = {'images': self.fixtures} self.serializer.index(response, result) actual = jsonutils.loads(response.body) for image in actual['images']: image['tags'] = set(image['tags']) self.assertEqual(expected, actual) self.assertEqual('application/json', response.content_type) def test_index_next_marker(self): request = webob.Request.blank('/v2/images') response = webob.Response(request=request) result = {'images': self.fixtures, 'next_marker': UUID2} self.serializer.index(response, result) output = jsonutils.loads(response.body) self.assertEqual('/v2/images?marker=%s' % UUID2, output['next']) def test_index_carries_query_parameters(self): url = '/v2/images?limit=10&sort_key=id&sort_dir=asc' request = webob.Request.blank(url) response = webob.Response(request=request) result = {'images': self.fixtures, 'next_marker': UUID2} self.serializer.index(response, result) output = jsonutils.loads(response.body) expected_url = '/v2/images?limit=10&sort_dir=asc&sort_key=id' self.assertEqual(unit_test_utils.sort_url_by_qs_keys(expected_url), unit_test_utils.sort_url_by_qs_keys(output['first'])) expect_next = '/v2/images?limit=10&marker=%s&sort_dir=asc&sort_key=id' self.assertEqual(unit_test_utils.sort_url_by_qs_keys( expect_next % UUID2), unit_test_utils.sort_url_by_qs_keys(output['next'])) def 
test_index_forbidden_get_image_location(self): """Make sure the serializer works fine no matter whether the current user is authorized to get the image location when show_multiple_locations is False. """ class ImageLocations(object): def __len__(self): raise exception.Forbidden() self.config(show_multiple_locations=False) self.config(show_image_direct_url=False) url = '/v2/images?limit=10&sort_key=id&sort_dir=asc' request = webob.Request.blank(url) response = webob.Response(request=request) result = {'images': self.fixtures} self.assertEqual(http.OK, response.status_int) # The image index should work even though the user is forbidden result['images'][0].locations = ImageLocations() self.serializer.index(response, result) self.assertEqual(http.OK, response.status_int) def test_show_full_fixture(self): expected = { 'id': UUID1, 'name': 'image-1', 'status': 'queued', 'visibility': 'public', 'protected': False, 'tags': set(['one', 'two']), 'size': 1024, 'virtual_size': 3072, 'checksum': 'ca425b88f047ce8ec45ee90e813ada91', 'container_format': 'ami', 'disk_format': 'ami', 'min_ram': 128, 'min_disk': 10, 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID1, 'file': '/v2/images/%s/file' % UUID1, 'schema': '/v2/schemas/image', 'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df', } response = webob.Response() self.serializer.show(response, self.fixtures[0]) actual = jsonutils.loads(response.body) actual['tags'] = set(actual['tags']) self.assertEqual(expected, actual) self.assertEqual('application/json', response.content_type) def test_show_minimal_fixture(self): expected = { 'id': UUID2, 'status': 'queued', 'visibility': 'private', 'protected': False, 'tags': [], 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID2, 'file': '/v2/images/%s/file' % UUID2, 'schema': '/v2/schemas/image', 'size': None, 'name': None, 'owner': None, 'min_ram': None, 'min_disk': None, 'checksum': None, 'disk_format': None, 'virtual_size': None, 'container_format': None,
} response = webob.Response() self.serializer.show(response, self.fixtures[1]) self.assertEqual(expected, jsonutils.loads(response.body)) def test_create(self): expected = { 'id': UUID1, 'name': 'image-1', 'status': 'queued', 'visibility': 'public', 'protected': False, 'tags': ['one', 'two'], 'size': 1024, 'virtual_size': 3072, 'checksum': 'ca425b88f047ce8ec45ee90e813ada91', 'container_format': 'ami', 'disk_format': 'ami', 'min_ram': 128, 'min_disk': 10, 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID1, 'file': '/v2/images/%s/file' % UUID1, 'schema': '/v2/schemas/image', 'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df', } response = webob.Response() self.serializer.create(response, self.fixtures[0]) self.assertEqual(http.CREATED, response.status_int) actual = jsonutils.loads(response.body) actual['tags'] = sorted(actual['tags']) self.assertEqual(expected, actual) self.assertEqual('application/json', response.content_type) self.assertEqual('/v2/images/%s' % UUID1, response.location) def test_create_has_import_methods_header(self): # NOTE(rosmaita): enabled_import_methods is defined as type # oslo.config.cfg.ListOpt, so it is stored internally as a list # but is converted to a string for output in the HTTP header header_name = 'OpenStack-image-import-methods' # check multiple methods enabled_methods = ['one', 'two', 'three'] self.config(enabled_import_methods=enabled_methods) response = webob.Response() self.serializer.create(response, self.fixtures[0]) self.assertEqual(http.CREATED, response.status_int) header_value = response.headers.get(header_name) self.assertIsNotNone(header_value) self.assertItemsEqual(enabled_methods, header_value.split(',')) # check single method self.config(enabled_import_methods=['swift-party-time']) response = webob.Response() self.serializer.create(response, self.fixtures[0]) self.assertEqual(http.CREATED, response.status_int) header_value = response.headers.get(header_name) self.assertIsNotNone(header_value) 
self.assertEqual('swift-party-time', header_value) # no header for empty config value self.config(enabled_import_methods=[]) response = webob.Response() self.serializer.create(response, self.fixtures[0]) self.assertEqual(http.CREATED, response.status_int) headers = response.headers.keys() self.assertNotIn(header_name, headers) # TODO(rosmaita): remove this test when the enable_image_import # option is removed def test_create_has_no_import_methods_header(self): header_name = 'OpenStack-image-import-methods' self.config(enable_image_import=False) response = webob.Response() self.serializer.create(response, self.fixtures[0]) self.assertEqual(http.CREATED, response.status_int) headers = response.headers.keys() self.assertNotIn(header_name, headers) def test_update(self): expected = { 'id': UUID1, 'name': 'image-1', 'status': 'queued', 'visibility': 'public', 'protected': False, 'tags': set(['one', 'two']), 'size': 1024, 'virtual_size': 3072, 'checksum': 'ca425b88f047ce8ec45ee90e813ada91', 'container_format': 'ami', 'disk_format': 'ami', 'min_ram': 128, 'min_disk': 10, 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID1, 'file': '/v2/images/%s/file' % UUID1, 'schema': '/v2/schemas/image', 'owner': '6838eb7b-6ded-434a-882c-b344c77fe8df', } response = webob.Response() self.serializer.update(response, self.fixtures[0]) actual = jsonutils.loads(response.body) actual['tags'] = set(actual['tags']) self.assertEqual(expected, actual) self.assertEqual('application/json', response.content_type) def test_import_image(self): response = webob.Response() self.serializer.import_image(response, {}) self.assertEqual(http.ACCEPTED, response.status_int) self.assertEqual('0', response.headers['Content-Length']) class TestImagesSerializerWithUnicode(test_utils.BaseTestCase): def setUp(self): super(TestImagesSerializerWithUnicode, self).setUp() self.serializer = glance.api.v2.images.ResponseSerializer() self.fixtures = [ # NOTE(bcwaldon): This first fixture has 
every property defined _domain_fixture(UUID1, **{ 'name': u'OpenStack\u2122-1', 'size': 1024, 'virtual_size': 3072, 'tags': [u'\u2160', u'\u2161'], 'created_at': DATETIME, 'updated_at': DATETIME, 'owner': TENANT1, 'visibility': 'public', 'container_format': 'ami', 'disk_format': 'ami', 'min_ram': 128, 'min_disk': 10, 'checksum': u'ca425b88f047ce8ec45ee90e813ada91', 'extra_properties': {'lang': u'Fran\u00E7ais', u'dispos\u00E9': u'f\u00E2ch\u00E9'}, }), ] def test_index(self): expected = { u'images': [ { u'id': UUID1, u'name': u'OpenStack\u2122-1', u'status': u'queued', u'visibility': u'public', u'protected': False, u'tags': [u'\u2160', u'\u2161'], u'size': 1024, u'virtual_size': 3072, u'checksum': u'ca425b88f047ce8ec45ee90e813ada91', u'container_format': u'ami', u'disk_format': u'ami', u'min_ram': 128, u'min_disk': 10, u'created_at': six.text_type(ISOTIME), u'updated_at': six.text_type(ISOTIME), u'self': u'/v2/images/%s' % UUID1, u'file': u'/v2/images/%s/file' % UUID1, u'schema': u'/v2/schemas/image', u'lang': u'Fran\u00E7ais', u'dispos\u00E9': u'f\u00E2ch\u00E9', u'owner': u'6838eb7b-6ded-434a-882c-b344c77fe8df', }, ], u'first': u'/v2/images', u'schema': u'/v2/schemas/images', } request = webob.Request.blank('/v2/images') response = webob.Response(request=request) result = {u'images': self.fixtures} self.serializer.index(response, result) actual = jsonutils.loads(response.body) actual['images'][0]['tags'] = sorted(actual['images'][0]['tags']) self.assertEqual(expected, actual) self.assertEqual('application/json', response.content_type) def test_show_full_fixture(self): expected = { u'id': UUID1, u'name': u'OpenStack\u2122-1', u'status': u'queued', u'visibility': u'public', u'protected': False, u'tags': set([u'\u2160', u'\u2161']), u'size': 1024, u'virtual_size': 3072, u'checksum': u'ca425b88f047ce8ec45ee90e813ada91', u'container_format': u'ami', u'disk_format': u'ami', u'min_ram': 128, u'min_disk': 10, u'created_at': six.text_type(ISOTIME), u'updated_at': 
six.text_type(ISOTIME), u'self': u'/v2/images/%s' % UUID1, u'file': u'/v2/images/%s/file' % UUID1, u'schema': u'/v2/schemas/image', u'lang': u'Fran\u00E7ais', u'dispos\u00E9': u'f\u00E2ch\u00E9', u'owner': u'6838eb7b-6ded-434a-882c-b344c77fe8df', } response = webob.Response() self.serializer.show(response, self.fixtures[0]) actual = jsonutils.loads(response.body) actual['tags'] = set(actual['tags']) self.assertEqual(expected, actual) self.assertEqual('application/json', response.content_type) def test_create(self): expected = { u'id': UUID1, u'name': u'OpenStack\u2122-1', u'status': u'queued', u'visibility': u'public', u'protected': False, u'tags': [u'\u2160', u'\u2161'], u'size': 1024, u'virtual_size': 3072, u'checksum': u'ca425b88f047ce8ec45ee90e813ada91', u'container_format': u'ami', u'disk_format': u'ami', u'min_ram': 128, u'min_disk': 10, u'created_at': six.text_type(ISOTIME), u'updated_at': six.text_type(ISOTIME), u'self': u'/v2/images/%s' % UUID1, u'file': u'/v2/images/%s/file' % UUID1, u'schema': u'/v2/schemas/image', u'lang': u'Fran\u00E7ais', u'dispos\u00E9': u'f\u00E2ch\u00E9', u'owner': u'6838eb7b-6ded-434a-882c-b344c77fe8df', } response = webob.Response() self.serializer.create(response, self.fixtures[0]) self.assertEqual(http.CREATED, response.status_int) actual = jsonutils.loads(response.body) actual['tags'] = sorted(actual['tags']) self.assertEqual(expected, actual) self.assertEqual('application/json', response.content_type) self.assertEqual('/v2/images/%s' % UUID1, response.location) def test_update(self): expected = { u'id': UUID1, u'name': u'OpenStack\u2122-1', u'status': u'queued', u'visibility': u'public', u'protected': False, u'tags': set([u'\u2160', u'\u2161']), u'size': 1024, u'virtual_size': 3072, u'checksum': u'ca425b88f047ce8ec45ee90e813ada91', u'container_format': u'ami', u'disk_format': u'ami', u'min_ram': 128, u'min_disk': 10, u'created_at': six.text_type(ISOTIME), u'updated_at': six.text_type(ISOTIME), u'self': u'/v2/images/%s' % 
UUID1, u'file': u'/v2/images/%s/file' % UUID1, u'schema': u'/v2/schemas/image', u'lang': u'Fran\u00E7ais', u'dispos\u00E9': u'f\u00E2ch\u00E9', u'owner': u'6838eb7b-6ded-434a-882c-b344c77fe8df', } response = webob.Response() self.serializer.update(response, self.fixtures[0]) actual = jsonutils.loads(response.body) actual['tags'] = set(actual['tags']) self.assertEqual(expected, actual) self.assertEqual('application/json', response.content_type) class TestImagesSerializerWithExtendedSchema(test_utils.BaseTestCase): def setUp(self): super(TestImagesSerializerWithExtendedSchema, self).setUp() self.config(allow_additional_image_properties=False) custom_image_properties = { 'color': { 'type': 'string', 'enum': ['red', 'green'], }, } schema = glance.api.v2.images.get_schema(custom_image_properties) self.serializer = glance.api.v2.images.ResponseSerializer(schema) props = dict(color='green', mood='grouchy') self.fixture = _domain_fixture( UUID2, name='image-2', owner=TENANT2, checksum='ca425b88f047ce8ec45ee90e813ada91', created_at=DATETIME, updated_at=DATETIME, size=1024, virtual_size=3072, extra_properties=props) def test_show(self): expected = { 'id': UUID2, 'name': 'image-2', 'status': 'queued', 'visibility': 'private', 'protected': False, 'checksum': 'ca425b88f047ce8ec45ee90e813ada91', 'tags': [], 'size': 1024, 'virtual_size': 3072, 'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81', 'color': 'green', 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID2, 'file': '/v2/images/%s/file' % UUID2, 'schema': '/v2/schemas/image', 'min_ram': None, 'min_disk': None, 'disk_format': None, 'container_format': None, } response = webob.Response() self.serializer.show(response, self.fixture) self.assertEqual(expected, jsonutils.loads(response.body)) def test_show_reports_invalid_data(self): self.fixture.extra_properties['color'] = 'invalid' expected = { 'id': UUID2, 'name': 'image-2', 'status': 'queued', 'visibility': 'private', 'protected': False, 'checksum': 
'ca425b88f047ce8ec45ee90e813ada91', 'tags': [], 'size': 1024, 'virtual_size': 3072, 'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81', 'color': 'invalid', 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID2, 'file': '/v2/images/%s/file' % UUID2, 'schema': '/v2/schemas/image', 'min_ram': None, 'min_disk': None, 'disk_format': None, 'container_format': None, } response = webob.Response() self.serializer.show(response, self.fixture) self.assertEqual(expected, jsonutils.loads(response.body)) class TestImagesSerializerWithAdditionalProperties(test_utils.BaseTestCase): def setUp(self): super(TestImagesSerializerWithAdditionalProperties, self).setUp() self.config(allow_additional_image_properties=True) self.fixture = _domain_fixture( UUID2, name='image-2', owner=TENANT2, checksum='ca425b88f047ce8ec45ee90e813ada91', created_at=DATETIME, updated_at=DATETIME, size=1024, virtual_size=3072, extra_properties={'marx': 'groucho'}) def test_show(self): serializer = glance.api.v2.images.ResponseSerializer() expected = { 'id': UUID2, 'name': 'image-2', 'status': 'queued', 'visibility': 'private', 'protected': False, 'checksum': 'ca425b88f047ce8ec45ee90e813ada91', 'marx': 'groucho', 'tags': [], 'size': 1024, 'virtual_size': 3072, 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID2, 'file': '/v2/images/%s/file' % UUID2, 'schema': '/v2/schemas/image', 'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81', 'min_ram': None, 'min_disk': None, 'disk_format': None, 'container_format': None, } response = webob.Response() serializer.show(response, self.fixture) self.assertEqual(expected, jsonutils.loads(response.body)) def test_show_invalid_additional_property(self): """Ensure that the serializer passes through invalid additional properties. It must not complain about, e.g., non-string values.
""" serializer = glance.api.v2.images.ResponseSerializer() self.fixture.extra_properties['marx'] = 123 expected = { 'id': UUID2, 'name': 'image-2', 'status': 'queued', 'visibility': 'private', 'protected': False, 'checksum': 'ca425b88f047ce8ec45ee90e813ada91', 'marx': 123, 'tags': [], 'size': 1024, 'virtual_size': 3072, 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID2, 'file': '/v2/images/%s/file' % UUID2, 'schema': '/v2/schemas/image', 'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81', 'min_ram': None, 'min_disk': None, 'disk_format': None, 'container_format': None, } response = webob.Response() serializer.show(response, self.fixture) self.assertEqual(expected, jsonutils.loads(response.body)) def test_show_with_additional_properties_disabled(self): self.config(allow_additional_image_properties=False) serializer = glance.api.v2.images.ResponseSerializer() expected = { 'id': UUID2, 'name': 'image-2', 'status': 'queued', 'visibility': 'private', 'protected': False, 'checksum': 'ca425b88f047ce8ec45ee90e813ada91', 'tags': [], 'size': 1024, 'virtual_size': 3072, 'owner': '2c014f32-55eb-467d-8fcb-4bd706012f81', 'created_at': ISOTIME, 'updated_at': ISOTIME, 'self': '/v2/images/%s' % UUID2, 'file': '/v2/images/%s/file' % UUID2, 'schema': '/v2/schemas/image', 'min_ram': None, 'min_disk': None, 'disk_format': None, 'container_format': None, } response = webob.Response() serializer.show(response, self.fixture) self.assertEqual(expected, jsonutils.loads(response.body)) class TestImagesSerializerDirectUrl(test_utils.BaseTestCase): def setUp(self): super(TestImagesSerializerDirectUrl, self).setUp() self.serializer = glance.api.v2.images.ResponseSerializer() self.active_image = _domain_fixture( UUID1, name='image-1', visibility='public', status='active', size=1024, virtual_size=3072, created_at=DATETIME, updated_at=DATETIME, locations=[{'id': '1', 'url': 'http://some/fake/location', 'metadata': {}, 'status': 'active'}]) self.queued_image = 
_domain_fixture( UUID2, name='image-2', status='active', created_at=DATETIME, updated_at=DATETIME, checksum='ca425b88f047ce8ec45ee90e813ada91') self.location_data_image_url = 'http://abc.com/somewhere' self.location_data_image_meta = {'key': 98231} self.location_data_image = _domain_fixture( UUID2, name='image-2', status='active', created_at=DATETIME, updated_at=DATETIME, locations=[{'id': '2', 'url': self.location_data_image_url, 'metadata': self.location_data_image_meta, 'status': 'active'}]) def _do_index(self): request = webob.Request.blank('/v2/images') response = webob.Response(request=request) self.serializer.index(response, {'images': [self.active_image, self.queued_image]}) return jsonutils.loads(response.body)['images'] def _do_show(self, image): request = webob.Request.blank('/v2/images') response = webob.Response(request=request) self.serializer.show(response, image) return jsonutils.loads(response.body) def test_index_store_location_enabled(self): self.config(show_image_direct_url=True) images = self._do_index() # NOTE(markwash): ordering sanity check self.assertEqual(UUID1, images[0]['id']) self.assertEqual(UUID2, images[1]['id']) self.assertEqual('http://some/fake/location', images[0]['direct_url']) self.assertNotIn('direct_url', images[1]) def test_index_store_multiple_location_enabled(self): self.config(show_multiple_locations=True) request = webob.Request.blank('/v2/images') response = webob.Response(request=request) self.serializer.index(response, {'images': [self.location_data_image]}) images = jsonutils.loads(response.body)['images'] location = images[0]['locations'][0] self.assertEqual(location['url'], self.location_data_image_url) self.assertEqual(location['metadata'], self.location_data_image_meta) def test_index_store_location_explicitly_disabled(self): self.config(show_image_direct_url=False) images = self._do_index() self.assertNotIn('direct_url', images[0]) self.assertNotIn('direct_url', images[1]) def test_show_location_enabled(self): 
self.config(show_image_direct_url=True) image = self._do_show(self.active_image) self.assertEqual('http://some/fake/location', image['direct_url']) def test_show_location_enabled_but_not_set(self): self.config(show_image_direct_url=True) image = self._do_show(self.queued_image) self.assertNotIn('direct_url', image) def test_show_location_explicitly_disabled(self): self.config(show_image_direct_url=False) image = self._do_show(self.active_image) self.assertNotIn('direct_url', image) class TestImageSchemaFormatConfiguration(test_utils.BaseTestCase): def test_default_disk_formats(self): schema = glance.api.v2.images.get_schema() expected = [None, 'ami', 'ari', 'aki', 'vhd', 'vhdx', 'vmdk', 'raw', 'qcow2', 'vdi', 'iso', 'ploop'] actual = schema.properties['disk_format']['enum'] self.assertEqual(expected, actual) def test_custom_disk_formats(self): self.config(disk_formats=['gabe'], group="image_format") schema = glance.api.v2.images.get_schema() expected = [None, 'gabe'] actual = schema.properties['disk_format']['enum'] self.assertEqual(expected, actual) def test_default_container_formats(self): schema = glance.api.v2.images.get_schema() expected = [None, 'ami', 'ari', 'aki', 'bare', 'ovf', 'ova', 'docker'] actual = schema.properties['container_format']['enum'] self.assertEqual(expected, actual) def test_custom_container_formats(self): self.config(container_formats=['mark'], group="image_format") schema = glance.api.v2.images.get_schema() expected = [None, 'mark'] actual = schema.properties['container_format']['enum'] self.assertEqual(expected, actual) class TestImageSchemaDeterminePropertyBasis(test_utils.BaseTestCase): def test_custom_property_marked_as_non_base(self): self.config(allow_additional_image_properties=False) custom_image_properties = { 'pants': { 'type': 'string', }, } schema = glance.api.v2.images.get_schema(custom_image_properties) self.assertFalse(schema.properties['pants'].get('is_base', True)) def test_base_property_marked_as_base(self): schema = 
glance.api.v2.images.get_schema() self.assertTrue(schema.properties['disk_format'].get('is_base', True)) glance-16.0.0/glance/tests/unit/v2/test_image_members_resource.py # Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import glance_store from oslo_config import cfg from oslo_serialization import jsonutils from six.moves import http_client as http import webob import glance.api.v2.image_members import glance.tests.unit.utils as unit_test_utils import glance.tests.utils as test_utils DATETIME = datetime.datetime(2012, 5, 16, 15, 27, 36, 325355) ISOTIME = '2012-05-16T15:27:36Z' CONF = cfg.CONF BASE_URI = unit_test_utils.BASE_URI UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d' UUID2 = 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc' UUID3 = '971ec09a-8067-4bc8-a91f-ae3557f1c4c7' UUID4 = '6bbe7cc2-eae7-4c0f-b50d-a7160b0c6a86' UUID5 = '3eee7cc2-eae7-4c0f-b50d-a7160b0c62ed' TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df' TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81' TENANT3 = '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8' TENANT4 = 'c6c87f25-8a94-47ed-8c83-053c25f42df4' def _db_fixture(id, **kwargs): obj = { 'id': id, 'name': None, 'visibility': 'shared', 'properties': {}, 'checksum': None, 'owner': None, 'status': 'queued', 'tags': [], 'size': None, 'locations': [], 'protected': False, 'disk_format': None, 'container_format': None, 'deleted': 
False, 'min_ram': None, 'min_disk': None, } obj.update(kwargs) return obj def _db_image_member_fixture(image_id, member_id, **kwargs): obj = { 'image_id': image_id, 'member': member_id, 'status': 'pending', } obj.update(kwargs) return obj def _domain_fixture(id, **kwargs): properties = { 'id': id, } properties.update(kwargs) return glance.domain.ImageMembership(**properties) class TestImageMembersController(test_utils.BaseTestCase): def setUp(self): super(TestImageMembersController, self).setUp() self.db = unit_test_utils.FakeDB(initialize=False) self.store = unit_test_utils.FakeStoreAPI() self.policy = unit_test_utils.FakePolicyEnforcer() self.notifier = unit_test_utils.FakeNotifier() self._create_images() self._create_image_members() self.controller = glance.api.v2.image_members.ImageMembersController( self.db, self.policy, self.notifier, self.store) glance_store.register_opts(CONF) self.config(default_store='filesystem', filesystem_store_datadir=self.test_dir, group="glance_store") glance_store.create_stores() def _create_images(self): self.images = [ _db_fixture(UUID1, owner=TENANT1, name='1', size=256, visibility='public', locations=[{'url': '%s/%s' % (BASE_URI, UUID1), 'metadata': {}, 'status': 'active'}]), _db_fixture(UUID2, owner=TENANT1, name='2', size=512), _db_fixture(UUID3, owner=TENANT3, name='3', size=512), _db_fixture(UUID4, owner=TENANT4, name='4', size=1024), _db_fixture(UUID5, owner=TENANT1, name='5', size=1024), ] [self.db.image_create(None, image) for image in self.images] self.db.image_tag_set_all(None, UUID1, ['ping', 'pong']) def _create_image_members(self): self.image_members = [ _db_image_member_fixture(UUID2, TENANT4), _db_image_member_fixture(UUID3, TENANT4), _db_image_member_fixture(UUID3, TENANT2), _db_image_member_fixture(UUID4, TENANT1), ] [self.db.image_member_create(None, image_member) for image_member in self.image_members] def test_index(self): request = unit_test_utils.get_fake_request() output = self.controller.index(request, 
UUID2) self.assertEqual(1, len(output['members'])) actual = set([image_member.member_id for image_member in output['members']]) expected = set([TENANT4]) self.assertEqual(expected, actual) def test_index_no_members(self): request = unit_test_utils.get_fake_request() output = self.controller.index(request, UUID5) self.assertEqual(0, len(output['members'])) self.assertEqual({'members': []}, output) def test_index_member_view(self): # UUID3 is a shared image owned by TENANT3 # UUID3 has members TENANT2 and TENANT4 # When TENANT4 lists members for UUID3, should not see TENANT2 request = unit_test_utils.get_fake_request(tenant=TENANT4) output = self.controller.index(request, UUID3) self.assertEqual(1, len(output['members'])) actual = set([image_member.member_id for image_member in output['members']]) expected = set([TENANT4]) self.assertEqual(expected, actual) def test_index_private_image(self): request = unit_test_utils.get_fake_request(tenant=TENANT2) self.assertRaises(webob.exc.HTTPNotFound, self.controller.index, request, UUID5) def test_index_public_image(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPForbidden, self.controller.index, request, UUID1) def test_index_private_image_visible_members_admin(self): request = unit_test_utils.get_fake_request(is_admin=True) output = self.controller.index(request, UUID4) self.assertEqual(1, len(output['members'])) actual = set([image_member.member_id for image_member in output['members']]) expected = set([TENANT1]) self.assertEqual(expected, actual) def test_index_allowed_by_get_members_policy(self): rules = {"get_members": True} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() output = self.controller.index(request, UUID2) self.assertEqual(1, len(output['members'])) def test_index_forbidden_by_get_members_policy(self): rules = {"get_members": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPForbidden, 
self.controller.index, request, image_id=UUID2) def test_show(self): request = unit_test_utils.get_fake_request(tenant=TENANT1) output = self.controller.show(request, UUID2, TENANT4) expected = self.image_members[0] self.assertEqual(expected['image_id'], output.image_id) self.assertEqual(expected['member'], output.member_id) self.assertEqual(expected['status'], output.status) def test_show_by_member(self): request = unit_test_utils.get_fake_request(tenant=TENANT4) output = self.controller.show(request, UUID2, TENANT4) expected = self.image_members[0] self.assertEqual(expected['image_id'], output.image_id) self.assertEqual(expected['member'], output.member_id) self.assertEqual(expected['status'], output.status) def test_show_forbidden(self): request = unit_test_utils.get_fake_request(tenant=TENANT2) self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, request, UUID2, TENANT4) def test_show_not_found(self): # one member should not be able to view status of another member # of the same image request = unit_test_utils.get_fake_request(tenant=TENANT2) self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, request, UUID3, TENANT4) def test_create(self): request = unit_test_utils.get_fake_request() image_id = UUID2 member_id = TENANT3 output = self.controller.create(request, image_id=image_id, member_id=member_id) self.assertEqual(UUID2, output.image_id) self.assertEqual(TENANT3, output.member_id) def test_create_allowed_by_add_policy(self): rules = {"add_member": True} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() output = self.controller.create(request, image_id=UUID2, member_id=TENANT3) self.assertEqual(UUID2, output.image_id) self.assertEqual(TENANT3, output.member_id) def test_create_forbidden_by_add_policy(self): rules = {"add_member": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, request, image_id=UUID2, 
member_id=TENANT3) def test_create_duplicate_member(self): request = unit_test_utils.get_fake_request() image_id = UUID2 member_id = TENANT3 output = self.controller.create(request, image_id=image_id, member_id=member_id) self.assertEqual(UUID2, output.image_id) self.assertEqual(TENANT3, output.member_id) self.assertRaises(webob.exc.HTTPConflict, self.controller.create, request, image_id=image_id, member_id=member_id) def test_create_overlimit(self): self.config(image_member_quota=0) request = unit_test_utils.get_fake_request() image_id = UUID2 member_id = TENANT3 self.assertRaises(webob.exc.HTTPRequestEntityTooLarge, self.controller.create, request, image_id=image_id, member_id=member_id) def test_create_unlimited(self): self.config(image_member_quota=-1) request = unit_test_utils.get_fake_request() image_id = UUID2 member_id = TENANT3 output = self.controller.create(request, image_id=image_id, member_id=member_id) self.assertEqual(UUID2, output.image_id) self.assertEqual(TENANT3, output.member_id) def test_member_create_raises_bad_request_for_unicode_value(self): request = unit_test_utils.get_fake_request() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, request, image_id=UUID5, member_id=u'\U0001f693') def test_update_done_by_member(self): request = unit_test_utils.get_fake_request(tenant=TENANT4) image_id = UUID2 member_id = TENANT4 output = self.controller.update(request, image_id=image_id, member_id=member_id, status='accepted') self.assertEqual(UUID2, output.image_id) self.assertEqual(TENANT4, output.member_id) self.assertEqual('accepted', output.status) def test_update_done_by_member_forbidden_by_policy(self): rules = {"modify_member": False} self.policy.set_rules(rules) request = unit_test_utils.get_fake_request(tenant=TENANT4) self.assertRaises(webob.exc.HTTPForbidden, self.controller.update, request, image_id=UUID2, member_id=TENANT4, status='accepted') def test_update_done_by_member_allowed_by_policy(self): rules = {"modify_member": 
                 True}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request(tenant=TENANT4)
        output = self.controller.update(request, image_id=UUID2,
                                        member_id=TENANT4,
                                        status='accepted')
        self.assertEqual(UUID2, output.image_id)
        self.assertEqual(TENANT4, output.member_id)
        self.assertEqual('accepted', output.status)

    def test_update_done_by_owner(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT1)
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
                          request, UUID2, TENANT4, status='accepted')

    def test_update_non_existent_image(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT1)
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.update,
                          request, '123', TENANT4, status='accepted')

    def test_update_invalid_status(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT4)
        self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
                          request, UUID2, TENANT4, status='accept')

    def test_create_private_image(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.create,
                          request, UUID4, TENANT2)

    def test_create_public_image(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.create,
                          request, UUID1, TENANT2)

    def test_create_image_does_not_exist(self):
        request = unit_test_utils.get_fake_request()
        image_id = 'fake-image-id'
        member_id = TENANT3
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.create,
                          request, image_id=image_id, member_id=member_id)

    def test_delete(self):
        request = unit_test_utils.get_fake_request()
        member_id = TENANT4
        image_id = UUID2
        res = self.controller.delete(request, image_id, member_id)
        self.assertEqual(b'', res.body)
        self.assertEqual(http.NO_CONTENT, res.status_code)
        found_member = self.db.image_member_find(
            request.context, image_id=image_id, member=member_id)
        self.assertEqual([], found_member)

    def test_delete_by_member(self):
        request = unit_test_utils.get_fake_request(tenant=TENANT4)
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete,
                          request, UUID2, TENANT4)
        request = unit_test_utils.get_fake_request()
        output = self.controller.index(request, UUID2)
        self.assertEqual(1, len(output['members']))
        actual = set([image_member.member_id
                      for image_member in output['members']])
        expected = set([TENANT4])
        self.assertEqual(expected, actual)

    def test_delete_allowed_by_policies(self):
        rules = {"get_member": True, "delete_member": True}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request(tenant=TENANT1)
        output = self.controller.delete(request, image_id=UUID2,
                                        member_id=TENANT4)
        request = unit_test_utils.get_fake_request()
        output = self.controller.index(request, UUID2)
        self.assertEqual(0, len(output['members']))

    def test_delete_forbidden_by_get_member_policy(self):
        rules = {"get_member": False}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request(tenant=TENANT1)
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete,
                          request, UUID2, TENANT4)

    def test_delete_forbidden_by_delete_member_policy(self):
        rules = {"delete_member": False}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request(tenant=TENANT1)
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete,
                          request, UUID2, TENANT4)

    def test_delete_private_image(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete,
                          request, UUID4, TENANT1)

    def test_delete_public_image(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete,
                          request, UUID1, TENANT1)

    def test_delete_image_does_not_exist(self):
        request = unit_test_utils.get_fake_request()
        member_id = TENANT2
        image_id = 'fake-image-id'
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete,
                          request, image_id, member_id)

    def test_delete_member_does_not_exist(self):
        request = unit_test_utils.get_fake_request()
        member_id = 'fake-member-id'
        image_id = UUID2
        found_member = self.db.image_member_find(
            request.context, image_id=image_id, member=member_id)
        self.assertEqual([], found_member)
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete,
                          request, image_id, member_id)


class TestImageMembersSerializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImageMembersSerializer, self).setUp()
        self.serializer = glance.api.v2.image_members.ResponseSerializer()
        self.fixtures = [
            _domain_fixture(id='1', image_id=UUID2, member_id=TENANT1,
                            status='accepted',
                            created_at=DATETIME, updated_at=DATETIME),
            _domain_fixture(id='2', image_id=UUID2, member_id=TENANT2,
                            status='pending',
                            created_at=DATETIME, updated_at=DATETIME),
        ]

    def test_index(self):
        expected = {
            'members': [
                {
                    'image_id': UUID2,
                    'member_id': TENANT1,
                    'status': 'accepted',
                    'created_at': ISOTIME,
                    'updated_at': ISOTIME,
                    'schema': '/v2/schemas/member',
                },
                {
                    'image_id': UUID2,
                    'member_id': TENANT2,
                    'status': 'pending',
                    'created_at': ISOTIME,
                    'updated_at': ISOTIME,
                    'schema': '/v2/schemas/member',
                },
            ],
            'schema': '/v2/schemas/members',
        }
        request = webob.Request.blank('/v2/images/%s/members' % UUID2)
        response = webob.Response(request=request)
        result = {'members': self.fixtures}
        self.serializer.index(response, result)
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)

    def test_show(self):
        expected = {
            'image_id': UUID2,
            'member_id': TENANT1,
            'status': 'accepted',
            'created_at': ISOTIME,
            'updated_at': ISOTIME,
            'schema': '/v2/schemas/member',
        }
        request = webob.Request.blank('/v2/images/%s/members/%s'
                                      % (UUID2, TENANT1))
        response = webob.Response(request=request)
        result = self.fixtures[0]
        self.serializer.show(response, result)
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)

    def test_create(self):
        expected = {'image_id': UUID2,
                    'member_id': TENANT1,
                    'status': 'accepted',
                    'schema': '/v2/schemas/member',
                    'created_at': ISOTIME,
                    'updated_at': ISOTIME}
        request = webob.Request.blank('/v2/images/%s/members/%s'
                                      % (UUID2, TENANT1))
        response = webob.Response(request=request)
        result = self.fixtures[0]
        self.serializer.create(response, result)
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)

    def test_update(self):
        expected = {'image_id': UUID2,
                    'member_id': TENANT1,
                    'status': 'accepted',
                    'schema': '/v2/schemas/member',
                    'created_at': ISOTIME,
                    'updated_at': ISOTIME}
        request = webob.Request.blank('/v2/images/%s/members/%s'
                                      % (UUID2, TENANT1))
        response = webob.Response(request=request)
        result = self.fixtures[0]
        self.serializer.update(response, result)
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)


class TestImagesDeserializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImagesDeserializer, self).setUp()
        self.deserializer = glance.api.v2.image_members.RequestDeserializer()

    def test_create(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({'member': TENANT1})
        output = self.deserializer.create(request)
        expected = {'member_id': TENANT1}
        self.assertEqual(expected, output)

    def test_create_invalid(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({'mem': TENANT1})
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.create, request)

    def test_create_no_body(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.create, request)

    def test_create_member_empty(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({'member': ''})
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.create, request)

    def test_create_list_return_error(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes([TENANT1])
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.create, request)

    def test_update_list_return_error(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes([TENANT1])
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.update, request)

    def test_update(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({'status': 'accepted'})
        output = self.deserializer.update(request)
        expected = {'status': 'accepted'}
        self.assertEqual(expected, output)

    def test_update_invalid(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({'mem': TENANT1})
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.update, request)

    def test_update_no_body(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.update, request)

glance-16.0.0/glance/tests/unit/v2/test_image_tags_resource.py

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from six.moves import http_client as http
import webob

import glance.api.v2.image_tags
from glance.common import exception
from glance.tests.unit import base
import glance.tests.unit.utils as unit_test_utils
import glance.tests.unit.v2.test_image_data_resource as image_data_tests
import glance.tests.utils as test_utils


class TestImageTagsController(base.IsolatedUnitTest):

    def setUp(self):
        super(TestImageTagsController, self).setUp()
        self.db = unit_test_utils.FakeDB()
        self.controller = glance.api.v2.image_tags.Controller(self.db)

    def test_create_tag(self):
        request = unit_test_utils.get_fake_request()
        self.controller.update(request, unit_test_utils.UUID1, 'dink')
        context = request.context
        tags = self.db.image_tag_get_all(context, unit_test_utils.UUID1)
        self.assertEqual(1, len([tag for tag in tags if tag == 'dink']))

    def test_create_too_many_tags(self):
        self.config(image_tag_quota=0)
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.controller.update, request,
                          unit_test_utils.UUID1, 'dink')

    def test_create_duplicate_tag_ignored(self):
        request = unit_test_utils.get_fake_request()
        self.controller.update(request, unit_test_utils.UUID1, 'dink')
        self.controller.update(request, unit_test_utils.UUID1, 'dink')
        context = request.context
        tags = self.db.image_tag_get_all(context, unit_test_utils.UUID1)
        self.assertEqual(1, len([tag for tag in tags if tag == 'dink']))

    def test_update_tag_of_non_existing_image(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.update,
                          request, "abcd", "dink")

    def test_delete_tag_forbidden(self):
        def fake_get(self):
            raise exception.Forbidden()

        image_repo = image_data_tests.FakeImageRepo()
        image_repo.get = fake_get

        def get_fake_repo(self):
            return image_repo

        self.controller.gateway.get_repo = get_fake_repo
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.update,
                          request, unit_test_utils.UUID1, "ping")

    def test_delete_tag(self):
        request = unit_test_utils.get_fake_request()
        self.controller.delete(request, unit_test_utils.UUID1, 'ping')

    def test_delete_tag_not_found(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete,
                          request, unit_test_utils.UUID1, 'what')

    def test_delete_tag_of_non_existing_image(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete,
                          request, "abcd", "dink")


class TestImagesSerializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImagesSerializer, self).setUp()
        self.serializer = glance.api.v2.image_tags.ResponseSerializer()

    def test_create_tag(self):
        response = webob.Response()
        self.serializer.update(response, None)
        self.assertEqual(http.NO_CONTENT, response.status_int)

    def test_delete_tag(self):
        response = webob.Response()
        self.serializer.delete(response, None)
        self.assertEqual(http.NO_CONTENT, response.status_int)

glance-16.0.0/glance/tests/unit/v2/test_image_actions_resource.py

# Copyright 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glance_store as store
import webob

import glance.api.v2.image_actions as image_actions
import glance.context
from glance.tests.unit import base
import glance.tests.unit.utils as unit_test_utils

BASE_URI = unit_test_utils.BASE_URI

USER1 = '54492ba0-f4df-4e4e-be62-27f4d76b29cf'
UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
CHKSUM = '93264c3edf5972c9f1cb309543d38a5c'


def _db_fixture(id, **kwargs):
    obj = {
        'id': id,
        'name': None,
        'visibility': 'shared',
        'properties': {},
        'checksum': None,
        'owner': None,
        'status': 'queued',
        'tags': [],
        'size': None,
        'virtual_size': None,
        'locations': [],
        'protected': False,
        'disk_format': None,
        'container_format': None,
        'deleted': False,
        'min_ram': None,
        'min_disk': None,
    }
    obj.update(kwargs)
    return obj


class TestImageActionsController(base.IsolatedUnitTest):

    def setUp(self):
        super(TestImageActionsController, self).setUp()
        self.db = unit_test_utils.FakeDB(initialize=False)
        self.policy = unit_test_utils.FakePolicyEnforcer()
        self.notifier = unit_test_utils.FakeNotifier()
        self.store = unit_test_utils.FakeStoreAPI()
        for i in range(1, 4):
            self.store.data['%s/fake_location_%i' % (BASE_URI, i)] = ('Z', 1)
        self.store_utils = unit_test_utils.FakeStoreUtils(self.store)
        self.controller = image_actions.ImageActionsController(
            self.db,
            self.policy,
            self.notifier,
            self.store)
        self.controller.gateway.store_utils = self.store_utils
        store.create_stores()

    def _get_fake_context(self, user=USER1, tenant=TENANT1, roles=None,
                          is_admin=False):
        if roles is None:
            roles = ['member']

        kwargs = {
            'user': user,
            'tenant': tenant,
            'roles': roles,
            'is_admin': is_admin,
        }

        context = glance.context.RequestContext(**kwargs)
        return context

    def _create_image(self, status):
        self.images = [
            _db_fixture(UUID1, owner=TENANT1, checksum=CHKSUM, name='1',
                        size=256, virtual_size=1024, visibility='public',
                        locations=[{'url': '%s/%s' % (BASE_URI, UUID1),
                                    'metadata': {}, 'status': 'active'}],
                        disk_format='raw',
                        container_format='bare',
                        status=status),
        ]
        context = self._get_fake_context()
        [self.db.image_create(context, image) for image in self.images]

    def test_deactivate_from_active(self):
        self._create_image('active')
        request = unit_test_utils.get_fake_request()
        self.controller.deactivate(request, UUID1)
        image = self.db.image_get(request.context, UUID1)
        self.assertEqual('deactivated', image['status'])

    def test_deactivate_from_deactivated(self):
        self._create_image('deactivated')
        request = unit_test_utils.get_fake_request()
        self.controller.deactivate(request, UUID1)
        image = self.db.image_get(request.context, UUID1)
        self.assertEqual('deactivated', image['status'])

    def _test_deactivate_from_wrong_status(self, status):
        # deactivate will yield an error if the initial status is anything
        # other than 'active' or 'deactivated'
        self._create_image(status)
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.deactivate,
                          request, UUID1)

    def test_deactivate_from_queued(self):
        self._test_deactivate_from_wrong_status('queued')

    def test_deactivate_from_saving(self):
        self._test_deactivate_from_wrong_status('saving')

    def test_deactivate_from_killed(self):
        self._test_deactivate_from_wrong_status('killed')

    def test_deactivate_from_pending_delete(self):
        self._test_deactivate_from_wrong_status('pending_delete')

    def test_deactivate_from_deleted(self):
        self._test_deactivate_from_wrong_status('deleted')

    def test_reactivate_from_active(self):
        self._create_image('active')
        request = unit_test_utils.get_fake_request()
        self.controller.reactivate(request, UUID1)
        image = self.db.image_get(request.context, UUID1)
        self.assertEqual('active', image['status'])

    def test_reactivate_from_deactivated(self):
        self._create_image('deactivated')
        request = unit_test_utils.get_fake_request()
        self.controller.reactivate(request, UUID1)
        image = self.db.image_get(request.context, UUID1)
        self.assertEqual('active', image['status'])

    def _test_reactivate_from_wrong_status(self, status):
        # reactivate will yield an error if the initial status is anything
        # other than 'active' or 'deactivated'
        self._create_image(status)
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.reactivate,
                          request, UUID1)

    def test_reactivate_from_queued(self):
        self._test_reactivate_from_wrong_status('queued')

    def test_reactivate_from_saving(self):
        self._test_reactivate_from_wrong_status('saving')

    def test_reactivate_from_killed(self):
        self._test_reactivate_from_wrong_status('killed')

    def test_reactivate_from_pending_delete(self):
        self._test_reactivate_from_wrong_status('pending_delete')

    def test_reactivate_from_deleted(self):
        self._test_reactivate_from_wrong_status('deleted')

glance-16.0.0/glance/tests/unit/v2/test_tasks_resource.py

# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import uuid

import mock
from oslo_config import cfg
from oslo_serialization import jsonutils
from six.moves import http_client as http
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
import webob

import glance.api.v2.tasks
from glance.common import timeutils
import glance.domain
import glance.gateway
from glance.tests.unit import base
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
UUID2 = 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc'
UUID3 = '971ec09a-8067-4bc8-a91f-ae3557f1c4c7'
UUID4 = '6bbe7cc2-eae7-4c0f-b50d-a7160b0c6a86'

TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'
TENANT3 = '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8'
TENANT4 = 'c6c87f25-8a94-47ed-8c83-053c25f42df4'

DATETIME = datetime.datetime(2013, 9, 28, 15, 27, 36, 325355)
ISOTIME = '2013-09-28T15:27:36Z'


def _db_fixture(task_id, **kwargs):
    default_datetime = timeutils.utcnow()
    obj = {
        'id': task_id,
        'status': 'pending',
        'type': 'import',
        'input': {},
        'result': None,
        'owner': None,
        'message': None,
        'expires_at': default_datetime + datetime.timedelta(days=365),
        'created_at': default_datetime,
        'updated_at': default_datetime,
        'deleted_at': None,
        'deleted': False
    }
    obj.update(kwargs)
    return obj


def _domain_fixture(task_id, **kwargs):
    default_datetime = timeutils.utcnow()
    task_properties = {
        'task_id': task_id,
        'status': kwargs.get('status', 'pending'),
        'task_type': kwargs.get('type', 'import'),
        'owner': kwargs.get('owner'),
        'expires_at': kwargs.get('expires_at'),
        'created_at': kwargs.get('created_at', default_datetime),
        'updated_at': kwargs.get('updated_at', default_datetime),
        'task_input': kwargs.get('task_input', {}),
        'message': kwargs.get('message'),
        'result': kwargs.get('result')
    }
    task = glance.domain.Task(**task_properties)
    return task


CONF = cfg.CONF
CONF.import_opt('task_time_to_live', 'glance.common.config', group='task')


class TestTasksController(test_utils.BaseTestCase):

    def setUp(self):
        super(TestTasksController, self).setUp()
        self.db = unit_test_utils.FakeDB(initialize=False)
        self.policy = unit_test_utils.FakePolicyEnforcer()
        self.notifier = unit_test_utils.FakeNotifier()
        self.store = unit_test_utils.FakeStoreAPI()
        self._create_tasks()
        self.controller = glance.api.v2.tasks.TasksController(self.db,
                                                              self.policy,
                                                              self.notifier,
                                                              self.store)
        self.gateway = glance.gateway.Gateway(self.db, self.store,
                                              self.notifier, self.policy)

    def _create_tasks(self):
        now = timeutils.utcnow()
        times = [now + datetime.timedelta(seconds=5 * i) for i in range(4)]
        self.tasks = [
            _db_fixture(UUID1, owner=TENANT1,
                        created_at=times[0], updated_at=times[0]),
            # FIXME(venkatesh): change the type to include clone and export
            # once they are included as a valid types under Task domain model.
            _db_fixture(UUID2, owner=TENANT2, type='import',
                        created_at=times[1], updated_at=times[1]),
            _db_fixture(UUID3, owner=TENANT3, type='import',
                        created_at=times[2], updated_at=times[2]),
            _db_fixture(UUID4, owner=TENANT4, type='import',
                        created_at=times[3], updated_at=times[3])]
        [self.db.task_create(None, task) for task in self.tasks]

    def test_index(self):
        self.config(limit_param_default=1, api_limit_max=3)
        request = unit_test_utils.get_fake_request()
        output = self.controller.index(request)
        self.assertEqual(1, len(output['tasks']))
        actual = set([task.task_id for task in output['tasks']])
        expected = set([UUID1])
        self.assertEqual(expected, actual)

    def test_index_admin(self):
        request = unit_test_utils.get_fake_request(is_admin=True)
        output = self.controller.index(request)
        self.assertEqual(4, len(output['tasks']))

    def test_index_return_parameters(self):
        self.config(limit_param_default=1, api_limit_max=4)
        request = unit_test_utils.get_fake_request(is_admin=True)
        output = self.controller.index(request, marker=UUID3, limit=1,
                                       sort_key='created_at', sort_dir='desc')
        self.assertEqual(1, len(output['tasks']))
        actual = set([task.task_id for task in output['tasks']])
        expected = set([UUID2])
        self.assertEqual(expected, actual)
        self.assertEqual(UUID2, output['next_marker'])

    def test_index_next_marker(self):
        self.config(limit_param_default=1, api_limit_max=3)
        request = unit_test_utils.get_fake_request(is_admin=True)
        output = self.controller.index(request, marker=UUID3, limit=2)
        self.assertEqual(2, len(output['tasks']))
        actual = set([task.task_id for task in output['tasks']])
        expected = set([UUID2, UUID1])
        self.assertEqual(expected, actual)
        self.assertEqual(UUID1, output['next_marker'])

    def test_index_no_next_marker(self):
        self.config(limit_param_default=1, api_limit_max=3)
        request = unit_test_utils.get_fake_request(is_admin=True)
        output = self.controller.index(request, marker=UUID1, limit=2)
        self.assertEqual(0, len(output['tasks']))
        actual = set([task.task_id for task in output['tasks']])
        expected = set([])
        self.assertEqual(expected, actual)
        self.assertNotIn('next_marker', output)

    def test_index_with_id_filter(self):
        request = unit_test_utils.get_fake_request('/tasks?id=%s' % UUID1)
        output = self.controller.index(request, filters={'id': UUID1})
        self.assertEqual(1, len(output['tasks']))
        actual = set([task.task_id for task in output['tasks']])
        expected = set([UUID1])
        self.assertEqual(expected, actual)

    def test_index_with_filters_return_many(self):
        path = '/tasks?status=pending'
        request = unit_test_utils.get_fake_request(path, is_admin=True)
        output = self.controller.index(request, filters={'status': 'pending'})
        self.assertEqual(4, len(output['tasks']))
        actual = set([task.task_id for task in output['tasks']])
        expected = set([UUID1, UUID2, UUID3, UUID4])
        self.assertEqual(sorted(expected), sorted(actual))

    def test_index_with_many_filters(self):
        url = '/tasks?status=pending&type=import'
        request = unit_test_utils.get_fake_request(url, is_admin=True)
        output = self.controller.index(request, filters={
            'status': 'pending',
            'type': 'import',
            'owner': TENANT1,
        })
        self.assertEqual(1, len(output['tasks']))
        actual = set([task.task_id for task in output['tasks']])
        expected = set([UUID1])
        self.assertEqual(expected, actual)

    def test_index_with_marker(self):
        self.config(limit_param_default=1, api_limit_max=3)
        path = '/tasks'
        request = unit_test_utils.get_fake_request(path, is_admin=True)
        output = self.controller.index(request, marker=UUID3)
        actual = set([task.task_id for task in output['tasks']])
        self.assertEqual(1, len(actual))
        self.assertIn(UUID2, actual)

    def test_index_with_limit(self):
        path = '/tasks'
        limit = 2
        request = unit_test_utils.get_fake_request(path, is_admin=True)
        output = self.controller.index(request, limit=limit)
        actual = set([task.task_id for task in output['tasks']])
        self.assertEqual(limit, len(actual))

    def test_index_greater_than_limit_max(self):
        self.config(limit_param_default=1, api_limit_max=3)
        path = '/tasks'
        request = unit_test_utils.get_fake_request(path, is_admin=True)
        output = self.controller.index(request, limit=4)
        actual = set([task.task_id for task in output['tasks']])
        self.assertEqual(3, len(actual))
        self.assertNotIn(output['next_marker'], output)

    def test_index_default_limit(self):
        self.config(limit_param_default=1, api_limit_max=3)
        path = '/tasks'
        request = unit_test_utils.get_fake_request(path)
        output = self.controller.index(request)
        actual = set([task.task_id for task in output['tasks']])
        self.assertEqual(1, len(actual))

    def test_index_with_sort_dir(self):
        path = '/tasks'
        request = unit_test_utils.get_fake_request(path, is_admin=True)
        output = self.controller.index(request, sort_dir='asc', limit=3)
        actual = [task.task_id for task in output['tasks']]
        self.assertEqual(3, len(actual))
        self.assertEqual([UUID1, UUID2, UUID3], actual)

    def test_index_with_sort_key(self):
        path = '/tasks'
        request = unit_test_utils.get_fake_request(path, is_admin=True)
        output = self.controller.index(request, sort_key='created_at', limit=3)
        actual = [task.task_id for task in output['tasks']]
        self.assertEqual(3, len(actual))
        self.assertEqual(UUID4, actual[0])
        self.assertEqual(UUID3, actual[1])
        self.assertEqual(UUID2, actual[2])

    def test_index_with_marker_not_found(self):
        fake_uuid = str(uuid.uuid4())
        path = '/tasks'
        request = unit_test_utils.get_fake_request(path)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.index, request, marker=fake_uuid)

    def test_index_with_marker_is_not_like_uuid(self):
        marker = 'INVALID_UUID'
        path = '/tasks'
        request = unit_test_utils.get_fake_request(path)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.index, request, marker=marker)

    def test_index_invalid_sort_key(self):
        path = '/tasks'
        request = unit_test_utils.get_fake_request(path)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.index, request, sort_key='foo')

    def test_index_zero_tasks(self):
        self.db.reset()
        request = unit_test_utils.get_fake_request()
        output = self.controller.index(request)
        self.assertEqual([], output['tasks'])

    def test_get(self):
        request = unit_test_utils.get_fake_request()
        task = self.controller.get(request, task_id=UUID1)
        self.assertEqual(UUID1, task.task_id)
        self.assertEqual('import', task.type)

    def test_get_non_existent(self):
        request = unit_test_utils.get_fake_request()
        task_id = str(uuid.uuid4())
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller.get, request, task_id)

    def test_get_not_allowed(self):
        request = unit_test_utils.get_fake_request()
        self.assertEqual(TENANT1, request.context.tenant)
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller.get, request, UUID4)

    @mock.patch.object(glance.gateway.Gateway, 'get_task_factory')
    @mock.patch.object(glance.gateway.Gateway, 'get_task_executor_factory')
    @mock.patch.object(glance.gateway.Gateway, 'get_task_repo')
    def test_create(self, mock_get_task_repo, mock_get_task_executor_factory,
                    mock_get_task_factory):
        # setup
        request = unit_test_utils.get_fake_request()
        task = {
            "type": "import",
            "input": {
                "import_from": "swift://cloud.foo/myaccount/mycontainer/path",
                "import_from_format": "qcow2",
                "image_properties": {}
            }
        }
        get_task_factory = mock.Mock()
        mock_get_task_factory.return_value = get_task_factory

        new_task = mock.Mock()
        get_task_factory.new_task.return_value = new_task

        new_task.run.return_value = mock.ANY

        get_task_executor_factory = mock.Mock()
        mock_get_task_executor_factory.return_value = get_task_executor_factory
        get_task_executor_factory.new_task_executor.return_value = mock.Mock()

        get_task_repo = mock.Mock()
        mock_get_task_repo.return_value = get_task_repo
        get_task_repo.add.return_value = mock.Mock()

        # call
        self.controller.create(request, task=task)

        # assert
        self.assertEqual(1, get_task_factory.new_task.call_count)
        self.assertEqual(1, get_task_repo.add.call_count)
        self.assertEqual(
            1, get_task_executor_factory.new_task_executor.call_count)

    @mock.patch('glance.common.scripts.utils.get_image_data_iter')
    @mock.patch('glance.common.scripts.utils.validate_location_uri')
    def test_create_with_live_time(self, mock_validate_location_uri,
                                   mock_get_image_data_iter):
        request = unit_test_utils.get_fake_request()
        task = {
            "type": "import",
            "input": {
                "import_from": "http://download.cirros-cloud.net/0.3.4/"
                               "cirros-0.3.4-x86_64-disk.img",
                "import_from_format": "qcow2",
                "image_properties": {
                    "disk_format": "qcow2",
                    "container_format": "bare",
                    "name": "test-task"
                }
            }
        }

        new_task = self.controller.create(request, task=task)
        executor_factory = self.gateway.get_task_executor_factory(
            request.context)
        task_executor = executor_factory.new_task_executor(request.context)
        task_executor.begin_processing(new_task.task_id)
        success_task = self.controller.get(request, new_task.task_id)

        # ignore second and microsecond to avoid flaky runs
        task_live_time = (success_task.expires_at.replace(second=0,
                                                          microsecond=0) -
                          success_task.updated_at.replace(second=0,
                                                          microsecond=0))
        task_live_time_hour = (task_live_time.days * 24 +
                               task_live_time.seconds / 3600)
        self.assertEqual(CONF.task.task_time_to_live, task_live_time_hour)

    def test_create_with_wrong_import_form(self):
        request = unit_test_utils.get_fake_request()
        wrong_import_from = [
            "swift://cloud.foo/myaccount/mycontainer/path",
            "file:///path",
            "cinder://volume-id"
        ]
        executor_factory = self.gateway.get_task_executor_factory(
            request.context)
        task_repo = self.gateway.get_task_repo(request.context)

        for import_from in wrong_import_from:
            task = {
                "type": "import",
                "input": {
                    "import_from": import_from,
                    "import_from_format": "qcow2",
                    "image_properties": {
                        "disk_format": "qcow2",
                        "container_format": "bare",
                        "name": "test-task"
                    }
                }
            }
            new_task = self.controller.create(request, task=task)
            task_executor = executor_factory.new_task_executor(request.context)
            task_executor.begin_processing(new_task.task_id)
            final_task = task_repo.get(new_task.task_id)

            self.assertEqual('failure', final_task.status)
            if import_from.startswith("file:///"):
                msg = ("File based imports are not allowed. Please use a "
                       "non-local source of image data.")
            else:
                supported = ['http', ]
                msg = ("The given uri is not valid. Please specify a "
                       "valid uri from the following list of supported uri "
                       "%(supported)s") % {'supported': supported}
            self.assertEqual(msg, final_task.message)

    def test_create_with_properties_missed(self):
        request = unit_test_utils.get_fake_request()
        executor_factory = self.gateway.get_task_executor_factory(
            request.context)
        task_repo = self.gateway.get_task_repo(request.context)
        task = {
            "type": "import",
            "input": {
                "import_from": "swift://cloud.foo/myaccount/mycontainer/path",
                "import_from_format": "qcow2",
            }
        }
        new_task = self.controller.create(request, task=task)
        task_executor = executor_factory.new_task_executor(request.context)
        task_executor.begin_processing(new_task.task_id)
        final_task = task_repo.get(new_task.task_id)

        self.assertEqual('failure', final_task.status)
        msg = "Input does not contain 'image_properties' field"
        self.assertEqual(msg, final_task.message)

    @mock.patch.object(glance.gateway.Gateway, 'get_task_factory')
    def test_notifications_on_create(self, mock_get_task_factory):
        request = unit_test_utils.get_fake_request()

        new_task = mock.MagicMock(type='import')
        mock_get_task_factory.new_task.return_value = new_task
        new_task.run.return_value = mock.ANY

        task = {
            "type": "import",
            "input": {
                "import_from": "http://cloud.foo/myaccount/mycontainer/path",
                "import_from_format": "qcow2",
                "image_properties": {}
            }
        }
        task = self.controller.create(request, task=task)
        output_logs = [nlog for nlog in self.notifier.get_logs()
                       if nlog['event_type'] == 'task.create']
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('task.create', output_log['event_type'])


class TestTasksControllerPolicies(base.IsolatedUnitTest):

    def setUp(self):
        super(TestTasksControllerPolicies, self).setUp()
        self.db = unit_test_utils.FakeDB()
        self.policy = unit_test_utils.FakePolicyEnforcer()
        self.controller = glance.api.v2.tasks.TasksController(self.db,
                                                              self.policy)

    def test_index_unauthorized(self):
        rules = {"get_tasks": False}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.index,
                          request)

    def test_get_unauthorized(self):
        rules = {"get_task": False}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.get,
                          request, task_id=UUID2)

    def test_access_get_unauthorized(self):
        rules = {"tasks_api_access": False, "get_task": True}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.get,
                          request, task_id=UUID2)

    def test_create_task_unauthorized(self):
        rules = {"add_task": False}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        task = {'type': 'import', 'input': {"import_from": "fake"}}
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.create,
                          request, task)

    def test_delete(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPMethodNotAllowed,
                          self.controller.delete, request, 'fake_id')

    def test_access_delete_unauthorized(self):
        rules = {"tasks_api_access": False}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.delete, request, 'fake_id')


class TestTasksDeserializerPolicies(test_utils.BaseTestCase):
    # NOTE(rosmaita): this is a bit weird, but we check the access
    # policy in the RequestDeserializer for calls that take bodies
    # or query strings because we want to make sure the failure is
    # a 403, not a 400 due to bad request format

    def setUp(self):
        super(TestTasksDeserializerPolicies, self).setUp()
        self.policy = unit_test_utils.FakePolicyEnforcer()
        self.deserializer = glance.api.v2.tasks.RequestDeserializer(
            schema=None, policy_engine=self.policy)

    bad_path = '/tasks?limit=NaN'

    def test_access_index_authorized_bad_query_string(self):
        """Allow access, fail with 400"""
        rules = {"tasks_api_access": True, "get_tasks": True}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request(self.bad_path)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.index, request)

    def test_access_index_unauthorized(self):
        """Disallow access with bad request, fail with 403"""
        rules = {"tasks_api_access": False, "get_tasks": True}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request(self.bad_path)
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.deserializer.index, request)

    bad_task = {'typo': 'import', 'input': {"import_from": "fake"}}

    def test_access_create_authorized_bad_format(self):
        """Allow access, fail with 400"""
        rules = {"tasks_api_access": True, "add_task": True}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes(self.bad_task)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.create, request)

    def test_access_create_unauthorized(self):
        """Disallow access with bad request, fail with 403"""
        rules = {"tasks_api_access": False, "add_task": True}
        self.policy.set_rules(rules)
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes(self.bad_task)
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.deserializer.create, request)


class TestTasksDeserializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestTasksDeserializer, self).setUp()
        self.deserializer = glance.api.v2.tasks.RequestDeserializer()

    def test_create_no_body(self):
        request = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.create, request)

    def test_create(self):
        request = unit_test_utils.get_fake_request()
        request.body = jsonutils.dump_as_bytes({
            'type': 'import',
            'input': {'import_from':
                      'swift://cloud.foo/myaccount/mycontainer/path',
                      'import_from_format': 'qcow2',
                      'image_properties': {'name': 'fake1'}},
        })
        output = self.deserializer.create(request)
        properties = {
            'type': 'import',
            'input': {'import_from':
                      'swift://cloud.foo/myaccount/mycontainer/path',
                      'import_from_format': 'qcow2',
                      'image_properties': {'name': 'fake1'}},
        }
        self.maxDiff = None
        expected = {'task': properties}
        self.assertEqual(expected, output)

    def test_index(self):
        marker = str(uuid.uuid4())
        path = '/tasks?limit=1&marker=%s' % marker
        request = unit_test_utils.get_fake_request(path)
        expected = {'limit': 1,
                    'marker': marker,
                    'sort_key': 'created_at',
                    'sort_dir': 'desc',
                    'filters': {}}
        output = self.deserializer.index(request)
        self.assertEqual(expected, output)

    def test_index_strip_params_from_filters(self):
        type = 'import'
        path = '/tasks?type=%s' % type
        request = unit_test_utils.get_fake_request(path)
        output = self.deserializer.index(request)
        self.assertEqual(type, output['filters']['type'])

    def test_index_with_many_filter(self):
        status = 'success'
        type = 'import'
        path = '/tasks?status=%(status)s&type=%(type)s' % {'status': status,
                                                           'type': type}
        request = unit_test_utils.get_fake_request(path)
        output = self.deserializer.index(request)
        self.assertEqual(status, output['filters']['status'])
        self.assertEqual(type, output['filters']['type'])

    def test_index_with_filter_and_limit(self):
        status = 'success'
        path = '/tasks?status=%s&limit=1' % status
        request = unit_test_utils.get_fake_request(path)
        output = self.deserializer.index(request)
        self.assertEqual(status, output['filters']['status'])
        self.assertEqual(1, output['limit'])

    def test_index_non_integer_limit(self):
        request = unit_test_utils.get_fake_request('/tasks?limit=blah')
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.index, request)

    def test_index_zero_limit(self):
        request = unit_test_utils.get_fake_request('/tasks?limit=0')
        expected = {'limit': 0,
                    'sort_key': 'created_at',
                    'sort_dir': 'desc',
                    'filters': {}}
        output = self.deserializer.index(request)
        self.assertEqual(expected, output)

    def test_index_negative_limit(self):
        path = '/tasks?limit=-1'
        request = unit_test_utils.get_fake_request(path)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.index, request)

    def test_index_fraction(self):
        request = unit_test_utils.get_fake_request('/tasks?limit=1.1')
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.index, request)

    def test_index_invalid_status(self):
        path = '/tasks?status=blah'
        request = unit_test_utils.get_fake_request(path)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.index, request)

    def test_index_marker(self):
        marker = str(uuid.uuid4())
        path = '/tasks?marker=%s' % marker
        request = unit_test_utils.get_fake_request(path)
        output = self.deserializer.index(request)
        self.assertEqual(marker, output.get('marker'))

    def test_index_marker_not_specified(self):
        request = unit_test_utils.get_fake_request('/tasks')
        output = self.deserializer.index(request)
        self.assertNotIn('marker', output)

    def test_index_limit_not_specified(self):
        request = unit_test_utils.get_fake_request('/tasks')
        output = self.deserializer.index(request)
        self.assertNotIn('limit', output)

    def test_index_sort_key_id(self):
        request = unit_test_utils.get_fake_request('/tasks?sort_key=id')
        output = self.deserializer.index(request)
        expected = {
            'sort_key': 'id',
            'sort_dir': 'desc',
            'filters': {}
        }
        self.assertEqual(expected, output)

    def test_index_sort_dir_asc(self):
        request = unit_test_utils.get_fake_request('/tasks?sort_dir=asc')
        output = self.deserializer.index(request)
        expected = {
            'sort_key': 'created_at',
            'sort_dir': 'asc',
            'filters': {}}
        self.assertEqual(expected, output)

    def test_index_sort_dir_bad_value(self):
        request = unit_test_utils.get_fake_request('/tasks?sort_dir=invalid')
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.deserializer.index, request)


class TestTasksSerializer(test_utils.BaseTestCase):

    def setUp(self):
        super(TestTasksSerializer, self).setUp()
        self.serializer = glance.api.v2.tasks.ResponseSerializer()
        self.fixtures = [
            _domain_fixture(UUID1, type='import', status='pending',
                            task_input={'loc': 'fake'}, result={},
                            owner=TENANT1, message='', created_at=DATETIME,
                            updated_at=DATETIME),
            _domain_fixture(UUID2, type='import', status='processing',
                            task_input={'loc': 'bake'}, owner=TENANT2,
                            message='', created_at=DATETIME,
                            updated_at=DATETIME, result={}),
            _domain_fixture(UUID3, type='import', status='success',
                            task_input={'loc': 'foo'}, owner=TENANT3,
                            message='', created_at=DATETIME,
                            updated_at=DATETIME, result={},
                            expires_at=DATETIME),
            _domain_fixture(UUID4, type='import', status='failure',
                            task_input={'loc': 'boo'}, owner=TENANT4,
                            message='', created_at=DATETIME,
                            updated_at=DATETIME, result={},
                            expires_at=DATETIME),
        ]

    def test_index(self):
        expected = {
            'tasks': [
                {
                    'id': UUID1,
                    'type': 'import',
                    'status': 'pending',
                    'owner': TENANT1,
                    'created_at': ISOTIME,
                    'updated_at': ISOTIME,
                    'self': '/v2/tasks/%s' % UUID1,
                    'schema': '/v2/schemas/task',
                },
                {
                    'id': UUID2,
                    'type': 'import',
                    'status': 'processing',
                    'owner': TENANT2,
                    'created_at': ISOTIME,
                    'updated_at': ISOTIME,
                    'self': '/v2/tasks/%s' % UUID2,
                    'schema': '/v2/schemas/task',
                },
                {
                    'id': UUID3,
                    'type': 'import',
                    'status': 'success',
                    'owner': TENANT3,
                    'expires_at': ISOTIME,
                    'created_at': ISOTIME,
                    'updated_at': ISOTIME,
                    'self': '/v2/tasks/%s' % UUID3,
                    'schema': '/v2/schemas/task',
                },
                {
                    'id': UUID4,
                    'type': 'import',
                    'status': 'failure',
                    'owner': TENANT4,
                    'expires_at': ISOTIME,
                    'created_at': ISOTIME,
                    'updated_at': ISOTIME,
                    'self': '/v2/tasks/%s' % UUID4,
                    'schema': '/v2/schemas/task',
                },
            ],
            'first': '/v2/tasks',
            'schema': '/v2/schemas/tasks',
        }
        request = webob.Request.blank('/v2/tasks')
        response = webob.Response(request=request)
        task_fixtures = [f for f in self.fixtures]
        result = {'tasks': task_fixtures}
        self.serializer.index(response, result)
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)

    def test_index_next_marker(self):
        request = webob.Request.blank('/v2/tasks')
        response = webob.Response(request=request)
        task_fixtures = [f for f in self.fixtures]
        result = {'tasks': task_fixtures, 'next_marker': UUID2}
        self.serializer.index(response, result)
        output = jsonutils.loads(response.body)
        self.assertEqual('/v2/tasks?marker=%s' % UUID2, output['next'])

    def test_index_carries_query_parameters(self):
        url = '/v2/tasks?limit=10&sort_key=id&sort_dir=asc'
        request = webob.Request.blank(url)
        response = webob.Response(request=request)
        task_fixtures = [f for f in self.fixtures]
        result = {'tasks': task_fixtures, 'next_marker': UUID2}
        self.serializer.index(response, result)
        output = jsonutils.loads(response.body)

        expected_url = '/v2/tasks?limit=10&sort_dir=asc&sort_key=id'
        self.assertEqual(unit_test_utils.sort_url_by_qs_keys(expected_url),
                         unit_test_utils.sort_url_by_qs_keys(output['first']))
        expect_next = '/v2/tasks?limit=10&marker=%s&sort_dir=asc&sort_key=id'
        self.assertEqual(unit_test_utils.sort_url_by_qs_keys(
                         expect_next % UUID2),
                         unit_test_utils.sort_url_by_qs_keys(output['next']))

    def test_get(self):
        expected = {
            'id': UUID4,
            'type': 'import',
            'status': 'failure',
            'input': {'loc': 'boo'},
            'result': {},
            'owner': TENANT4,
            'message': '',
            'created_at': ISOTIME,
            'updated_at': ISOTIME,
            'expires_at': ISOTIME,
            'self': '/v2/tasks/%s' % UUID4,
            'schema': '/v2/schemas/task',
        }
        response = webob.Response()
        self.serializer.get(response, self.fixtures[3])
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)

    def test_get_ensure_expires_at_not_returned(self):
        expected = {
            'id': UUID1,
            'type': 'import',
            'status': 'pending',
            'input': {'loc': 'fake'},
            'result': {},
            'owner': TENANT1,
            'message': '',
            'created_at': ISOTIME,
            'updated_at': ISOTIME,
            'self': '/v2/tasks/%s' % UUID1,
            'schema': '/v2/schemas/task',
        }
        response = webob.Response()
        self.serializer.get(response, self.fixtures[0])
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)

        expected = {
            'id': UUID2,
            'type': 'import',
            'status': 'processing',
            'input': {'loc': 'bake'},
            'result': {},
            'owner': TENANT2,
            'message': '',
            'created_at': ISOTIME,
            'updated_at': ISOTIME,
            'self': '/v2/tasks/%s' % UUID2,
            'schema': '/v2/schemas/task',
        }
        response = webob.Response()
        self.serializer.get(response, self.fixtures[1])
        actual = jsonutils.loads(response.body)
        self.assertEqual(expected, actual)
        self.assertEqual('application/json', response.content_type)

    def test_create(self):
        response = webob.Response()
        self.serializer.create(response, self.fixtures[3])
        serialized_task = jsonutils.loads(response.body)
        self.assertEqual(http.CREATED, response.status_int)
        self.assertEqual(self.fixtures[3].task_id, serialized_task['id'])
        self.assertEqual(self.fixtures[3].task_input,
                         serialized_task['input'])
        self.assertIn('expires_at', serialized_task)
        self.assertEqual('application/json', response.content_type)

    def test_create_ensure_expires_at_is_not_returned(self):
        response = webob.Response()
        self.serializer.create(response, self.fixtures[0])
        serialized_task = jsonutils.loads(response.body)
        self.assertEqual(http.CREATED,
                         response.status_int)
        self.assertEqual(self.fixtures[0].task_id, serialized_task['id'])
        self.assertEqual(self.fixtures[0].task_input,
                         serialized_task['input'])
        self.assertNotIn('expires_at', serialized_task)
        self.assertEqual('application/json', response.content_type)

        response = webob.Response()
        self.serializer.create(response, self.fixtures[1])
        serialized_task = jsonutils.loads(response.body)
        self.assertEqual(http.CREATED, response.status_int)
        self.assertEqual(self.fixtures[1].task_id, serialized_task['id'])
        self.assertEqual(self.fixtures[1].task_input,
                         serialized_task['input'])
        self.assertNotIn('expires_at', serialized_task)
        self.assertEqual('application/json', response.content_type)

glance-16.0.0/glance/tests/unit/v2/test_discovery_image_import.py

# Copyright (c) 2017 RedHat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import webob.exc

import glance.api.v2.discovery
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils


class TestInfoControllers(test_utils.BaseTestCase):

    def setUp(self):
        super(TestInfoControllers, self).setUp()
        self.controller = glance.api.v2.discovery.InfoController()

    def test_get_import_info_when_import_not_enabled(self):
        """When import not enabled, should return 404 just like v2.5"""
        self.config(enable_image_import=False)
        req = unit_test_utils.get_fake_request()
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller.get_image_import,
                          req)

    def test_get_import_info(self):
        # TODO(rosmaita): change this when import methods are
        # listed in the config file
        import_methods = ['glance-direct', 'web-download']
        self.config(enable_image_import=True)
        req = unit_test_utils.get_fake_request()
        output = self.controller.get_image_import(req)
        self.assertIn('import-methods', output)
        self.assertEqual(import_methods, output['import-methods']['value'])

glance-16.0.0/glance/tests/unit/test_glance_manage.py

# Copyright 2016 OpenStack Foundation.
# Copyright 2016 NTT Data.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_db import exception as db_exception

from glance.cmd import manage
from glance import context
from glance.db.sqlalchemy import api as db_api
import glance.tests.utils as test_utils

TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
USER1 = '54492ba0-f4df-4e4e-be62-27f4d76b29cf'


class DBCommandsTestCase(test_utils.BaseTestCase):

    def setUp(self):
        super(DBCommandsTestCase, self).setUp()
        self.commands = manage.DbCommands()
        self.context = context.RequestContext(
            user=USER1, tenant=TENANT1)

    @mock.patch.object(db_api, 'purge_deleted_rows')
    @mock.patch.object(context, 'get_admin_context')
    def test_purge_command(self, mock_context, mock_db_purge):
        mock_context.return_value = self.context
        self.commands.purge(0, 100)
        mock_db_purge.assert_called_once_with(self.context, 0, 100)

    def test_purge_command_negative_rows(self):
        exit = self.assertRaises(SystemExit, self.commands.purge, 1, -1)
        self.assertEqual("Minimal rows limit is 1.", exit.code)

    def test_purge_invalid_age_in_days(self):
        age_in_days = 'abcd'
        ex = self.assertRaises(SystemExit, self.commands.purge, age_in_days)
        expected = ("Invalid int value for age_in_days: "
                    "%(age_in_days)s") % {'age_in_days': age_in_days}
        self.assertEqual(expected, ex.code)

    def test_purge_negative_age_in_days(self):
        ex = self.assertRaises(SystemExit, self.commands.purge, '-1')
        self.assertEqual("Must supply a non-negative value for age.", ex.code)

    def test_purge_invalid_max_rows(self):
        max_rows = 'abcd'
        ex = self.assertRaises(SystemExit, self.commands.purge, 1, max_rows)
        expected = ("Invalid int value for max_rows: "
                    "%(max_rows)s") % {'max_rows': max_rows}
        self.assertEqual(expected, ex.code)

    @mock.patch.object(db_api, 'purge_deleted_rows')
    @mock.patch.object(context, 'get_admin_context')
    def test_purge_max_rows(self, mock_context, mock_db_purge):
        mock_context.return_value = self.context
        value = (2 ** 31) - 1
        self.commands.purge(age_in_days=1, max_rows=value)
        mock_db_purge.assert_called_once_with(self.context, 1, value)

    def test_purge_command_exceeded_maximum_rows(self):
        # value(2 ** 31) is greater than max_rows(2147483647) by 1.
        value = 2 ** 31
        ex = self.assertRaises(SystemExit, self.commands.purge,
                               age_in_days=1, max_rows=value)
        expected = "'max_rows' value out of range, must not exceed 2147483647."
        self.assertEqual(expected, ex.code)

    @mock.patch('glance.db.sqlalchemy.api.purge_deleted_rows')
    def test_purge_command_fk_constraint_failure(self, purge_deleted_rows):
        purge_deleted_rows.side_effect = db_exception.DBReferenceError(
            'fake_table', 'fake_constraint', 'fake_key', 'fake_key_table')
        exit = self.assertRaises(SystemExit, self.commands.purge, 10, 100)
        self.assertEqual("Purge command failed, check glance-manage logs"
                         " for more details.", exit.code)

glance-16.0.0/glance/tests/unit/fixtures.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Fixtures for Glance unit tests."""
# NOTE(mriedem): This is needed for importing from fixtures.
from __future__ import absolute_import

import warnings

import fixtures as pyfixtures


class WarningsFixture(pyfixtures.Fixture):
    """Filters out warnings during test runs."""

    def setUp(self):
        super(WarningsFixture, self).setUp()
        # NOTE(sdague): Make deprecation warnings only happen once. Otherwise
        # this gets kind of crazy given the way that upstream python libs use
        # this.
        warnings.simplefilter('once', DeprecationWarning)

        # NOTE(sdague): this remains an unresolved item around the way
        # forward on is_admin, the deprecation is definitely really premature.
        warnings.filterwarnings(
            'ignore',
            message='Policy enforcement is depending on the value of is_admin.'
                    ' This key is deprecated. Please update your policy '
                    'file to use the standard policy values.')

        self.addCleanup(warnings.resetwarnings)

glance-16.0.0/glance/tests/unit/test_domain_proxy.py

# Copyright 2013 OpenStack Foundation.
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range

from glance.domain import proxy
import glance.tests.utils as test_utils

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'


class FakeProxy(object):
    def __init__(self, base, *args, **kwargs):
        self.base = base
        self.args = args
        self.kwargs = kwargs


class FakeRepo(object):
    def __init__(self, result=None):
        self.args = None
        self.kwargs = None
        self.result = result

    def fake_method(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs
        return self.result

    get = fake_method
    list = fake_method
    add = fake_method
    save = fake_method
    remove = fake_method


class TestProxyRepoPlain(test_utils.BaseTestCase):
    def setUp(self):
        super(TestProxyRepoPlain, self).setUp()
        self.fake_repo = FakeRepo()
        self.proxy_repo = proxy.Repo(self.fake_repo)

    def _test_method(self, name, base_result, *args, **kwargs):
        self.fake_repo.result = base_result
        method = getattr(self.proxy_repo, name)
        proxy_result = method(*args, **kwargs)
        self.assertEqual(base_result, proxy_result)
        self.assertEqual(args, self.fake_repo.args)
        self.assertEqual(kwargs, self.fake_repo.kwargs)

    def test_get(self):
        self._test_method('get', 'snarf', 'abcd')

    def test_list(self):
        self._test_method('list', ['sniff', 'snarf'], 2, filter='^sn')

    def test_add(self):
        self._test_method('add', 'snuff', 'enough')

    def test_save(self):
        self._test_method('save', 'snuff', 'enough', from_state=None)

    def test_remove(self):
        self._test_method('remove', None, 'flying')


class TestProxyRepoWrapping(test_utils.BaseTestCase):
    def setUp(self):
        super(TestProxyRepoWrapping, self).setUp()
        self.fake_repo = FakeRepo()
        self.proxy_repo = proxy.Repo(self.fake_repo,
                                     item_proxy_class=FakeProxy,
                                     item_proxy_kwargs={'a': 1})

    def _test_method(self, name, base_result, *args, **kwargs):
        self.fake_repo.result = base_result
        method = getattr(self.proxy_repo, name)
        proxy_result = method(*args, **kwargs)
        self.assertIsInstance(proxy_result, FakeProxy)
        self.assertEqual(base_result, proxy_result.base)
        self.assertEqual(0, len(proxy_result.args))
        self.assertEqual({'a': 1}, proxy_result.kwargs)
        self.assertEqual(args, self.fake_repo.args)
        self.assertEqual(kwargs, self.fake_repo.kwargs)

    def test_get(self):
        self.fake_repo.result = 'snarf'
        result = self.proxy_repo.get('some-id')
        self.assertIsInstance(result, FakeProxy)
        self.assertEqual(('some-id',), self.fake_repo.args)
        self.assertEqual({}, self.fake_repo.kwargs)
        self.assertEqual('snarf', result.base)
        self.assertEqual(tuple(), result.args)
        self.assertEqual({'a': 1}, result.kwargs)

    def test_list(self):
        self.fake_repo.result = ['scratch', 'sniff']
        results = self.proxy_repo.list(2, prefix='s')
        self.assertEqual((2,), self.fake_repo.args)
        self.assertEqual({'prefix': 's'}, self.fake_repo.kwargs)
        self.assertEqual(2, len(results))
        for i in range(2):
            self.assertIsInstance(results[i], FakeProxy)
            self.assertEqual(self.fake_repo.result[i], results[i].base)
            self.assertEqual(tuple(), results[i].args)
            self.assertEqual({'a': 1}, results[i].kwargs)

    def _test_method_with_proxied_argument(self, name, result, **kwargs):
        self.fake_repo.result = result
        item = FakeProxy('snoop')
        method = getattr(self.proxy_repo, name)
        proxy_result = method(item)
        self.assertEqual(('snoop',), self.fake_repo.args)
        self.assertEqual(kwargs, self.fake_repo.kwargs)
        if result is None:
            self.assertIsNone(proxy_result)
        else:
            self.assertIsInstance(proxy_result, FakeProxy)
            self.assertEqual(result, proxy_result.base)
            self.assertEqual(tuple(), proxy_result.args)
            self.assertEqual({'a': 1}, proxy_result.kwargs)

    def test_add(self):
        self._test_method_with_proxied_argument('add', 'dog')

    def test_add_with_no_result(self):
        self._test_method_with_proxied_argument('add', None)

    def test_save(self):
        self._test_method_with_proxied_argument('save', 'dog',
                                                from_state=None)

    def test_save_with_no_result(self):
        self._test_method_with_proxied_argument('save', None,
                                                from_state=None)

    def test_remove(self):
        self._test_method_with_proxied_argument('remove', 'dog')

    def test_remove_with_no_result(self):
        self._test_method_with_proxied_argument('remove', None)


class FakeImageFactory(object):
    def __init__(self, result=None):
        self.result = result
        self.kwargs = None

    def new_image(self, **kwargs):
        self.kwargs = kwargs
        return self.result


class TestImageFactory(test_utils.BaseTestCase):
    def setUp(self):
        super(TestImageFactory, self).setUp()
        self.factory = FakeImageFactory()

    def test_proxy_plain(self):
        proxy_factory = proxy.ImageFactory(self.factory)
        self.factory.result = 'eddard'
        image = proxy_factory.new_image(a=1, b='two')
        self.assertEqual('eddard', image)
        self.assertEqual({'a': 1, 'b': 'two'}, self.factory.kwargs)

    def test_proxy_wrapping(self):
        proxy_factory = proxy.ImageFactory(self.factory,
                                           proxy_class=FakeProxy,
                                           proxy_kwargs={'dog': 'bark'})
        self.factory.result = 'stark'
        image = proxy_factory.new_image(a=1, b='two')
        self.assertIsInstance(image, FakeProxy)
        self.assertEqual('stark', image.base)
        self.assertEqual({'a': 1, 'b': 'two'}, self.factory.kwargs)


class FakeImageMembershipFactory(object):
    def __init__(self, result=None):
        self.result = result
        self.image = None
        self.member_id = None

    def new_image_member(self, image, member_id):
        self.image = image
        self.member_id = member_id
        return self.result


class TestImageMembershipFactory(test_utils.BaseTestCase):
    def setUp(self):
        super(TestImageMembershipFactory, self).setUp()
        self.factory = FakeImageMembershipFactory()

    def test_proxy_plain(self):
        proxy_factory = proxy.ImageMembershipFactory(self.factory)
        self.factory.result = 'tyrion'
        membership = proxy_factory.new_image_member('jaime', 'cersei')
        self.assertEqual('tyrion', membership)
        self.assertEqual('jaime', self.factory.image)
        self.assertEqual('cersei', self.factory.member_id)

    def test_proxy_wrapped_membership(self):
        proxy_factory = proxy.ImageMembershipFactory(
            self.factory, proxy_class=FakeProxy, proxy_kwargs={'a': 1})
        self.factory.result = 'tyrion'
        membership = proxy_factory.new_image_member('jaime', 'cersei')
        self.assertIsInstance(membership, FakeProxy)
        self.assertEqual('tyrion', membership.base)
        self.assertEqual({'a': 1}, membership.kwargs)
        self.assertEqual('jaime', self.factory.image)
        self.assertEqual('cersei', self.factory.member_id)

    def test_proxy_wrapped_image(self):
        proxy_factory = proxy.ImageMembershipFactory(
            self.factory, proxy_class=FakeProxy)
        self.factory.result = 'tyrion'
        image = FakeProxy('jaime')
        membership = proxy_factory.new_image_member(image, 'cersei')
        self.assertIsInstance(membership, FakeProxy)
        self.assertIsInstance(self.factory.image, FakeProxy)
        self.assertEqual('cersei', self.factory.member_id)

    def test_proxy_both_wrapped(self):
        class FakeProxy2(FakeProxy):
            pass

        proxy_factory = proxy.ImageMembershipFactory(
            self.factory, proxy_class=FakeProxy, proxy_kwargs={'b': 2})
        self.factory.result = 'tyrion'
        image = FakeProxy2('jaime')
        membership = proxy_factory.new_image_member(image, 'cersei')
        self.assertIsInstance(membership, FakeProxy)
        self.assertEqual('tyrion', membership.base)
        self.assertEqual({'b': 2}, membership.kwargs)
        self.assertIsInstance(self.factory.image, FakeProxy2)
        self.assertEqual('cersei', self.factory.member_id)


class FakeImage(object):
    def __init__(self, result=None):
        self.result = result


class TestTaskFactory(test_utils.BaseTestCase):
    def setUp(self):
        super(TestTaskFactory, self).setUp()
        self.factory = mock.Mock()
        self.fake_type = 'import'
        self.fake_owner = "owner"

    def test_proxy_plain(self):
        proxy_factory = proxy.TaskFactory(self.factory)

        proxy_factory.new_task(
            type=self.fake_type,
            owner=self.fake_owner
        )

        self.factory.new_task.assert_called_once_with(
            type=self.fake_type,
            owner=self.fake_owner
        )

    def test_proxy_wrapping(self):
        proxy_factory = proxy.TaskFactory(
            self.factory,
            task_proxy_class=FakeProxy,
            task_proxy_kwargs={'dog': 'bark'})
        self.factory.new_task.return_value = 'fake_task'

        task = proxy_factory.new_task(
            type=self.fake_type,
            owner=self.fake_owner
        )
        self.factory.new_task.assert_called_once_with(
            type=self.fake_type,
            owner=self.fake_owner
        )
        self.assertIsInstance(task, FakeProxy)
        self.assertEqual('fake_task', task.base)

glance-16.0.0/glance/tests/unit/test_db_metadef.py

# Copyright 2012 OpenStack Foundation.
# Copyright 2014 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_utils import encodeutils

from glance.common import exception
import glance.context
import glance.db
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils

TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'
TENANT3 = '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8'
TENANT4 = 'c6c87f25-8a94-47ed-8c83-053c25f42df4'

USER1 = '54492ba0-f4df-4e4e-be62-27f4d76b29cf'

NAMESPACE1 = 'namespace1'
NAMESPACE2 = 'namespace2'
NAMESPACE3 = 'namespace3'
NAMESPACE4 = 'namespace4'

PROPERTY1 = 'Property1'
PROPERTY2 = 'Property2'
PROPERTY3 = 'Property3'

OBJECT1 = 'Object1'
OBJECT2 = 'Object2'
OBJECT3 = 'Object3'

TAG1 = 'Tag1'
TAG2 = 'Tag2'
TAG3 = 'Tag3'
TAG4 = 'Tag4'
TAG5 = 'Tag5'

RESOURCE_TYPE1 = 'ResourceType1'
RESOURCE_TYPE2 = 'ResourceType2'
RESOURCE_TYPE3 = 'ResourceType3'


def _db_namespace_fixture(**kwargs):
    namespace = {
        'namespace': None,
        'display_name': None,
        'description': None,
        'visibility': True,
        'protected': False,
        'owner': None
    }
    namespace.update(kwargs)
    return namespace
def _db_property_fixture(name, **kwargs):
    property = {
        'name': name,
        'json_schema': {"type": "string", "title": "title"},
    }
    property.update(kwargs)
    return property


def _db_object_fixture(name, **kwargs):
    obj = {
        'name': name,
        'description': None,
        'json_schema': {},
        'required': '[]',
    }
    obj.update(kwargs)
    return obj


def _db_tag_fixture(name, **kwargs):
    obj = {
        'name': name
    }
    obj.update(kwargs)
    return obj


def _db_tags_fixture(names=None):
    tags = []
    if names:
        tag_name_list = names
    else:
        tag_name_list = [TAG1, TAG2, TAG3]
    for tag_name in tag_name_list:
        tags.append(_db_tag_fixture(tag_name))
    return tags


def _db_resource_type_fixture(name, **kwargs):
    obj = {
        'name': name,
        'protected': False,
    }
    obj.update(kwargs)
    return obj


def _db_namespace_resource_type_fixture(name, **kwargs):
    obj = {
        'name': name,
        'properties_target': None,
        'prefix': None,
    }
    obj.update(kwargs)
    return obj


class TestMetadefRepo(test_utils.BaseTestCase):
    def setUp(self):
        super(TestMetadefRepo, self).setUp()
        self.db = unit_test_utils.FakeDB(initialize=False)
        self.context = glance.context.RequestContext(user=USER1,
                                                     tenant=TENANT1)
        self.namespace_repo = glance.db.MetadefNamespaceRepo(self.context,
                                                             self.db)
        self.property_repo = glance.db.MetadefPropertyRepo(self.context,
                                                           self.db)
        self.object_repo = glance.db.MetadefObjectRepo(self.context,
                                                       self.db)
        self.tag_repo = glance.db.MetadefTagRepo(self.context, self.db)
        self.resource_type_repo = glance.db.MetadefResourceTypeRepo(
            self.context, self.db)
        self.namespace_factory = glance.domain.MetadefNamespaceFactory()
        self.property_factory = glance.domain.MetadefPropertyFactory()
        self.object_factory = glance.domain.MetadefObjectFactory()
        self.tag_factory = glance.domain.MetadefTagFactory()
        self.resource_type_factory = glance.domain.MetadefResourceTypeFactory()
        self._create_namespaces()
        self._create_properties()
        self._create_objects()
        self._create_tags()
        self._create_resource_types()

    def _create_namespaces(self):
        self.namespaces = [
            _db_namespace_fixture(namespace=NAMESPACE1,
                                  display_name='1',
                                  description='desc1',
                                  visibility='private',
                                  protected=True,
                                  owner=TENANT1),
            _db_namespace_fixture(namespace=NAMESPACE2,
                                  display_name='2',
                                  description='desc2',
                                  visibility='public',
                                  protected=False,
                                  owner=TENANT1),
            _db_namespace_fixture(namespace=NAMESPACE3,
                                  display_name='3',
                                  description='desc3',
                                  visibility='private',
                                  protected=True,
                                  owner=TENANT3),
            _db_namespace_fixture(namespace=NAMESPACE4,
                                  display_name='4',
                                  description='desc4',
                                  visibility='public',
                                  protected=True,
                                  owner=TENANT3)
        ]
        [self.db.metadef_namespace_create(None, namespace)
         for namespace in self.namespaces]

    def _create_properties(self):
        self.properties = [
            _db_property_fixture(name=PROPERTY1),
            _db_property_fixture(name=PROPERTY2),
            _db_property_fixture(name=PROPERTY3)
        ]
        [self.db.metadef_property_create(self.context, NAMESPACE1, property)
         for property in self.properties]
        [self.db.metadef_property_create(self.context, NAMESPACE4, property)
         for property in self.properties]

    def _create_objects(self):
        self.objects = [
            _db_object_fixture(name=OBJECT1, description='desc1'),
            _db_object_fixture(name=OBJECT2, description='desc2'),
            _db_object_fixture(name=OBJECT3, description='desc3'),
        ]
        [self.db.metadef_object_create(self.context, NAMESPACE1, object)
         for object in self.objects]
        [self.db.metadef_object_create(self.context, NAMESPACE4, object)
         for object in self.objects]

    def _create_tags(self):
        self.tags = [
            _db_tag_fixture(name=TAG1),
            _db_tag_fixture(name=TAG2),
            _db_tag_fixture(name=TAG3),
        ]
        [self.db.metadef_tag_create(self.context, NAMESPACE1, tag)
         for tag in self.tags]
        [self.db.metadef_tag_create(self.context, NAMESPACE4, tag)
         for tag in self.tags]

    def _create_resource_types(self):
        self.resource_types = [
            _db_resource_type_fixture(name=RESOURCE_TYPE1,
                                      protected=False),
            _db_resource_type_fixture(name=RESOURCE_TYPE2,
                                      protected=False),
            _db_resource_type_fixture(name=RESOURCE_TYPE3,
                                      protected=True),
        ]
        [self.db.metadef_resource_type_create(self.context,
resource_type) for resource_type in self.resource_types] def test_get_namespace(self): namespace = self.namespace_repo.get(NAMESPACE1) self.assertEqual(NAMESPACE1, namespace.namespace) self.assertEqual('desc1', namespace.description) self.assertEqual('1', namespace.display_name) self.assertEqual(TENANT1, namespace.owner) self.assertTrue(namespace.protected) self.assertEqual('private', namespace.visibility) def test_get_namespace_not_found(self): fake_namespace = "fake_namespace" exc = self.assertRaises(exception.NotFound, self.namespace_repo.get, fake_namespace) self.assertIn(fake_namespace, encodeutils.exception_to_unicode(exc)) def test_get_namespace_forbidden(self): self.assertRaises(exception.NotFound, self.namespace_repo.get, NAMESPACE3) def test_list_namespace(self): namespaces = self.namespace_repo.list() namespace_names = set([n.namespace for n in namespaces]) self.assertEqual(set([NAMESPACE1, NAMESPACE2, NAMESPACE4]), namespace_names) def test_list_private_namespaces(self): filters = {'visibility': 'private'} namespaces = self.namespace_repo.list(filters=filters) namespace_names = set([n.namespace for n in namespaces]) self.assertEqual(set([NAMESPACE1]), namespace_names) def test_add_namespace(self): # NOTE(pawel-koniszewski): Change db_namespace_fixture to # namespace_factory when namespace primary key in DB # will be changed from Integer to UUID namespace = _db_namespace_fixture(namespace='added_namespace', display_name='fake', description='fake_desc', visibility='public', protected=True, owner=TENANT1) self.assertEqual('added_namespace', namespace['namespace']) self.db.metadef_namespace_create(None, namespace) retrieved_namespace = self.namespace_repo.get(namespace['namespace']) self.assertEqual('added_namespace', retrieved_namespace.namespace) def test_save_namespace(self): namespace = self.namespace_repo.get(NAMESPACE1) namespace.display_name = 'save_name' namespace.description = 'save_desc' self.namespace_repo.save(namespace) namespace = 
self.namespace_repo.get(NAMESPACE1) self.assertEqual('save_name', namespace.display_name) self.assertEqual('save_desc', namespace.description) def test_remove_namespace(self): namespace = self.namespace_repo.get(NAMESPACE1) self.namespace_repo.remove(namespace) self.assertRaises(exception.NotFound, self.namespace_repo.get, NAMESPACE1) def test_remove_namespace_not_found(self): fake_name = 'fake_name' namespace = self.namespace_repo.get(NAMESPACE1) namespace.namespace = fake_name exc = self.assertRaises(exception.NotFound, self.namespace_repo.remove, namespace) self.assertIn(fake_name, encodeutils.exception_to_unicode(exc)) def test_get_property(self): property = self.property_repo.get(NAMESPACE1, PROPERTY1) namespace = self.namespace_repo.get(NAMESPACE1) self.assertEqual(PROPERTY1, property.name) self.assertEqual(namespace.namespace, property.namespace.namespace) def test_get_property_not_found(self): exc = self.assertRaises(exception.NotFound, self.property_repo.get, NAMESPACE2, PROPERTY1) self.assertIn(PROPERTY1, encodeutils.exception_to_unicode(exc)) def test_list_property(self): properties = self.property_repo.list(filters={'namespace': NAMESPACE1}) property_names = set([p.name for p in properties]) self.assertEqual(set([PROPERTY1, PROPERTY2, PROPERTY3]), property_names) def test_list_property_empty_result(self): properties = self.property_repo.list(filters={'namespace': NAMESPACE2}) property_names = set([p.name for p in properties]) self.assertEqual(set([]), property_names) def test_list_property_namespace_not_found(self): exc = self.assertRaises(exception.NotFound, self.property_repo.list, filters={'namespace': 'not-a-namespace'}) self.assertIn('not-a-namespace', encodeutils.exception_to_unicode(exc)) def test_add_property(self): # NOTE(pawel-koniszewski): Change db_property_fixture to # property_factory when property primary key in DB # will be changed from Integer to UUID property = _db_property_fixture(name='added_property') 
        self.assertEqual('added_property', property['name'])
        self.db.metadef_property_create(self.context, NAMESPACE1, property)
        retrieved_property = self.property_repo.get(NAMESPACE1,
                                                    'added_property')
        self.assertEqual('added_property', retrieved_property.name)

    def test_add_property_namespace_forbidden(self):
        # NOTE(pawel-koniszewski): Change db_property_fixture to
        # property_factory when property primary key in DB
        # will be changed from Integer to UUID
        property = _db_property_fixture(name='added_property')
        self.assertEqual('added_property', property['name'])
        self.assertRaises(exception.Forbidden,
                          self.db.metadef_property_create,
                          self.context, NAMESPACE3, property)

    def test_add_property_namespace_not_found(self):
        # NOTE(pawel-koniszewski): Change db_property_fixture to
        # property_factory when property primary key in DB
        # will be changed from Integer to UUID
        property = _db_property_fixture(name='added_property')
        self.assertEqual('added_property', property['name'])
        self.assertRaises(exception.NotFound,
                          self.db.metadef_property_create,
                          self.context, 'not_a_namespace', property)

    def test_save_property(self):
        property = self.property_repo.get(NAMESPACE1, PROPERTY1)
        property.schema = '{"save": "schema"}'
        self.property_repo.save(property)
        property = self.property_repo.get(NAMESPACE1, PROPERTY1)
        self.assertEqual(PROPERTY1, property.name)
        self.assertEqual('{"save": "schema"}', property.schema)

    def test_remove_property(self):
        property = self.property_repo.get(NAMESPACE1, PROPERTY1)
        self.property_repo.remove(property)
        self.assertRaises(exception.NotFound,
                          self.property_repo.get,
                          NAMESPACE1, PROPERTY1)

    def test_remove_property_not_found(self):
        fake_name = 'fake_name'
        property = self.property_repo.get(NAMESPACE1, PROPERTY1)
        property.name = fake_name
        self.assertRaises(exception.NotFound,
                          self.property_repo.remove,
                          property)

    def test_get_object(self):
        object = self.object_repo.get(NAMESPACE1, OBJECT1)
        namespace = self.namespace_repo.get(NAMESPACE1)
        self.assertEqual(OBJECT1, object.name)
        self.assertEqual('desc1', object.description)
        self.assertEqual(['[]'], object.required)
        self.assertEqual({}, object.properties)
        self.assertEqual(namespace.namespace, object.namespace.namespace)

    def test_get_object_not_found(self):
        exc = self.assertRaises(exception.NotFound,
                                self.object_repo.get,
                                NAMESPACE2, OBJECT1)
        self.assertIn(OBJECT1, encodeutils.exception_to_unicode(exc))

    def test_list_object(self):
        objects = self.object_repo.list(filters={'namespace': NAMESPACE1})
        object_names = set([o.name for o in objects])
        self.assertEqual(set([OBJECT1, OBJECT2, OBJECT3]), object_names)

    def test_list_object_empty_result(self):
        objects = self.object_repo.list(filters={'namespace': NAMESPACE2})
        object_names = set([o.name for o in objects])
        self.assertEqual(set([]), object_names)

    def test_list_object_namespace_not_found(self):
        exc = self.assertRaises(exception.NotFound,
                                self.object_repo.list,
                                filters={'namespace': 'not-a-namespace'})
        self.assertIn('not-a-namespace',
                      encodeutils.exception_to_unicode(exc))

    def test_add_object(self):
        # NOTE(pawel-koniszewski): Change db_object_fixture to
        # object_factory when object primary key in DB
        # will be changed from Integer to UUID
        object = _db_object_fixture(name='added_object')
        self.assertEqual('added_object', object['name'])
        self.db.metadef_object_create(self.context, NAMESPACE1, object)
        retrieved_object = self.object_repo.get(NAMESPACE1,
                                                'added_object')
        self.assertEqual('added_object', retrieved_object.name)

    def test_add_object_namespace_forbidden(self):
        # NOTE(pawel-koniszewski): Change db_object_fixture to
        # object_factory when object primary key in DB
        # will be changed from Integer to UUID
        object = _db_object_fixture(name='added_object')
        self.assertEqual('added_object', object['name'])
        self.assertRaises(exception.Forbidden,
                          self.db.metadef_object_create,
                          self.context, NAMESPACE3, object)

    def test_add_object_namespace_not_found(self):
        # NOTE(pawel-koniszewski): Change db_object_fixture to
        # object_factory when object primary key in DB
        # will be changed from Integer to UUID
        object = _db_object_fixture(name='added_object')
        self.assertEqual('added_object', object['name'])
        self.assertRaises(exception.NotFound,
                          self.db.metadef_object_create,
                          self.context, 'not-a-namespace', object)

    def test_save_object(self):
        object = self.object_repo.get(NAMESPACE1, OBJECT1)
        object.required = ['save_req']
        object.description = 'save_desc'
        self.object_repo.save(object)
        object = self.object_repo.get(NAMESPACE1, OBJECT1)
        self.assertEqual(OBJECT1, object.name)
        self.assertEqual(['save_req'], object.required)
        self.assertEqual('save_desc', object.description)

    def test_remove_object(self):
        object = self.object_repo.get(NAMESPACE1, OBJECT1)
        self.object_repo.remove(object)
        self.assertRaises(exception.NotFound,
                          self.object_repo.get,
                          NAMESPACE1, OBJECT1)

    def test_remove_object_not_found(self):
        fake_name = 'fake_name'
        object = self.object_repo.get(NAMESPACE1, OBJECT1)
        object.name = fake_name
        self.assertRaises(exception.NotFound,
                          self.object_repo.remove,
                          object)

    def test_list_resource_type(self):
        resource_type = self.resource_type_repo.list(
            filters={'namespace': NAMESPACE1})
        self.assertEqual(0, len(resource_type))

    def test_get_tag(self):
        tag = self.tag_repo.get(NAMESPACE1, TAG1)
        namespace = self.namespace_repo.get(NAMESPACE1)
        self.assertEqual(TAG1, tag.name)
        self.assertEqual(namespace.namespace, tag.namespace.namespace)

    def test_get_tag_not_found(self):
        exc = self.assertRaises(exception.NotFound,
                                self.tag_repo.get,
                                NAMESPACE2, TAG1)
        self.assertIn(TAG1, encodeutils.exception_to_unicode(exc))

    def test_list_tag(self):
        tags = self.tag_repo.list(filters={'namespace': NAMESPACE1})
        tag_names = set([t.name for t in tags])
        self.assertEqual(set([TAG1, TAG2, TAG3]), tag_names)

    def test_list_tag_empty_result(self):
        tags = self.tag_repo.list(filters={'namespace': NAMESPACE2})
        tag_names = set([t.name for t in tags])
        self.assertEqual(set([]), tag_names)

    def test_list_tag_namespace_not_found(self):
        exc = self.assertRaises(exception.NotFound,
                                self.tag_repo.list,
                                filters={'namespace': 'not-a-namespace'})
        self.assertIn('not-a-namespace',
                      encodeutils.exception_to_unicode(exc))

    def test_add_tag(self):
        # NOTE(pawel-koniszewski): Change db_tag_fixture to
        # tag_factory when tag primary key in DB
        # will be changed from Integer to UUID
        tag = _db_tag_fixture(name='added_tag')
        self.assertEqual('added_tag', tag['name'])
        self.db.metadef_tag_create(self.context, NAMESPACE1, tag)
        retrieved_tag = self.tag_repo.get(NAMESPACE1, 'added_tag')
        self.assertEqual('added_tag', retrieved_tag.name)

    def test_add_tags(self):
        tags = self.tag_repo.list(filters={'namespace': NAMESPACE1})
        tag_names = set([t.name for t in tags])
        self.assertEqual(set([TAG1, TAG2, TAG3]), tag_names)

        tags = _db_tags_fixture([TAG3, TAG4, TAG5])
        self.db.metadef_tag_create_tags(self.context, NAMESPACE1, tags)
        tags = self.tag_repo.list(filters={'namespace': NAMESPACE1})
        tag_names = set([t.name for t in tags])
        self.assertEqual(set([TAG3, TAG4, TAG5]), tag_names)

    def test_add_duplicate_tags_with_pre_existing_tags(self):
        tags = self.tag_repo.list(filters={'namespace': NAMESPACE1})
        tag_names = set([t.name for t in tags])
        self.assertEqual(set([TAG1, TAG2, TAG3]), tag_names)

        tags = _db_tags_fixture([TAG5, TAG4, TAG5])
        self.assertRaises(exception.Duplicate,
                          self.db.metadef_tag_create_tags,
                          self.context, NAMESPACE1, tags)
        tags = self.tag_repo.list(filters={'namespace': NAMESPACE1})
        tag_names = set([t.name for t in tags])
        self.assertEqual(set([TAG1, TAG2, TAG3]), tag_names)

    def test_add_tag_namespace_forbidden(self):
        # NOTE(pawel-koniszewski): Change db_tag_fixture to
        # tag_factory when tag primary key in DB
        # will be changed from Integer to UUID
        tag = _db_tag_fixture(name='added_tag')
        self.assertEqual('added_tag', tag['name'])
        self.assertRaises(exception.Forbidden,
                          self.db.metadef_tag_create,
                          self.context, NAMESPACE3, tag)

    def test_add_tag_namespace_not_found(self):
        # NOTE(pawel-koniszewski): Change db_tag_fixture to
        # tag_factory when tag primary key in DB
        # will be changed from Integer to UUID
        tag = _db_tag_fixture(name='added_tag')
        self.assertEqual('added_tag', tag['name'])
        self.assertRaises(exception.NotFound,
                          self.db.metadef_tag_create,
                          self.context, 'not-a-namespace', tag)

    def test_save_tag(self):
        tag = self.tag_repo.get(NAMESPACE1, TAG1)
        self.tag_repo.save(tag)
        tag = self.tag_repo.get(NAMESPACE1, TAG1)
        self.assertEqual(TAG1, tag.name)

    def test_remove_tag(self):
        tag = self.tag_repo.get(NAMESPACE1, TAG1)
        self.tag_repo.remove(tag)
        self.assertRaises(exception.NotFound,
                          self.tag_repo.get,
                          NAMESPACE1, TAG1)

    def test_remove_tag_not_found(self):
        fake_name = 'fake_name'
        tag = self.tag_repo.get(NAMESPACE1, TAG1)
        tag.name = fake_name
        self.assertRaises(exception.NotFound,
                          self.tag_repo.remove,
                          tag)
glance-16.0.0/glance/tests/unit/test_context.py
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range

from glance import context
from glance.tests.unit import utils as unit_utils
from glance.tests import utils


def _fake_image(owner, is_public):
    return {
        'id': None,
        'owner': owner,
        'visibility': 'public' if is_public else 'shared',
    }


def _fake_membership(can_share=False):
    return {'can_share': can_share}


class TestContext(utils.BaseTestCase):
    def setUp(self):
        super(TestContext, self).setUp()
        self.db_api = unit_utils.FakeDB()

    def do_visible(self, exp_res, img_owner, img_public, **kwargs):
        """
        Perform a context visibility test.  Creates a (fake) image
        with the specified owner and is_public attributes, then
        creates a context with the given keyword arguments and expects
        exp_res as the result of an is_image_visible() call on the
        context.
        """
        img = _fake_image(img_owner, img_public)
        ctx = context.RequestContext(**kwargs)

        self.assertEqual(exp_res, self.db_api.is_image_visible(ctx, img))

    def test_empty_public(self):
        """
        Tests that an empty context (with is_admin set to True) can
        access an image with is_public set to True.
        """
        self.do_visible(True, None, True, is_admin=True)

    def test_empty_public_owned(self):
        """
        Tests that an empty context (with is_admin set to True) can
        access an owned image with is_public set to True.
        """
        self.do_visible(True, 'pattieblack', True, is_admin=True)

    def test_empty_private(self):
        """
        Tests that an empty context (with is_admin set to True) can
        access an image with is_public set to False.
        """
        self.do_visible(True, None, False, is_admin=True)

    def test_empty_private_owned(self):
        """
        Tests that an empty context (with is_admin set to True) can
        access an owned image with is_public set to False.
        """
        self.do_visible(True, 'pattieblack', False, is_admin=True)

    def test_anon_public(self):
        """
        Tests that an anonymous context (with is_admin set to False)
        can access an image with is_public set to True.
        """
        self.do_visible(True, None, True)

    def test_anon_public_owned(self):
        """
        Tests that an anonymous context (with is_admin set to False)
        can access an owned image with is_public set to True.
        """
        self.do_visible(True, 'pattieblack', True)

    def test_anon_private(self):
        """
        Tests that an anonymous context (with is_admin set to False)
        can access an unowned image with is_public set to False.
        """
        self.do_visible(True, None, False)

    def test_anon_private_owned(self):
        """
        Tests that an anonymous context (with is_admin set to False)
        cannot access an owned image with is_public set to False.
        """
        self.do_visible(False, 'pattieblack', False)

    def test_auth_public(self):
        """
        Tests that an authenticated context (with is_admin set to
        False) can access an image with is_public set to True.
        """
        self.do_visible(True, None, True, project_id='froggy')

    def test_auth_public_unowned(self):
        """
        Tests that an authenticated context (with is_admin set to
        False) can access an image (which it does not own) with
        is_public set to True.
        """
        self.do_visible(True, 'pattieblack', True, project_id='froggy')

    def test_auth_public_owned(self):
        """
        Tests that an authenticated context (with is_admin set to
        False) can access an image (which it does own) with is_public
        set to True.
        """
        self.do_visible(True, 'pattieblack', True, project_id='pattieblack')

    def test_auth_private(self):
        """
        Tests that an authenticated context (with is_admin set to
        False) can access an image with is_public set to False.
        """
        self.do_visible(True, None, False, project_id='froggy')

    def test_auth_private_unowned(self):
        """
        Tests that an authenticated context (with is_admin set to
        False) cannot access an image (which it does not own) with
        is_public set to False.
        """
        self.do_visible(False, 'pattieblack', False, project_id='froggy')

    def test_auth_private_owned(self):
        """
        Tests that an authenticated context (with is_admin set to
        False) can access an image (which it does own) with is_public
        set to False.
        """
        self.do_visible(True, 'pattieblack', False, project_id='pattieblack')

    def test_request_id(self):
        contexts = [context.RequestContext().request_id for _ in range(5)]
        # Check for uniqueness -- set() will normalize its argument
        self.assertEqual(5, len(set(contexts)))

    def test_service_catalog(self):
        ctx = context.RequestContext(service_catalog=['foo'])
        self.assertEqual(['foo'], ctx.service_catalog)

    def test_user_identity(self):
        ctx = context.RequestContext(user_id="user",
                                     project_id="tenant",
                                     domain_id="domain",
                                     user_domain_id="user-domain",
                                     project_domain_id="project-domain")
        self.assertEqual('user tenant domain user-domain project-domain',
                         ctx.to_dict()["user_identity"])
glance-16.0.0/glance/tests/unit/base.py
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os

import glance_store as store
from glance_store import location
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_db import options
from oslo_serialization import jsonutils

from glance.tests import stubs
from glance.tests import utils as test_utils

CONF = cfg.CONF


class StoreClearingUnitTest(test_utils.BaseTestCase):

    def setUp(self):
        super(StoreClearingUnitTest, self).setUp()
        # Ensure stores + locations cleared
        location.SCHEME_TO_CLS_MAP = {}

        self._create_stores()
        self.addCleanup(setattr, location, 'SCHEME_TO_CLS_MAP', dict())

    def _create_stores(self, passing_config=True):
        """Create known stores. Mock out sheepdog's subprocess dependency
        on collie.

        :param passing_config: making store driver passes basic configurations.
        :returns: the number of how many store drivers been loaded.
        """
        store.register_opts(CONF)
        self.config(default_store='filesystem',
                    filesystem_store_datadir=self.test_dir,
                    group="glance_store")
        store.create_stores(CONF)


class IsolatedUnitTest(StoreClearingUnitTest):

    """
    Unit test case that establishes a mock environment within
    a testing directory (in isolation)
    """
    registry = None

    def setUp(self):
        super(IsolatedUnitTest, self).setUp()
        options.set_defaults(CONF, connection='sqlite://')
        lockutils.set_defaults(os.path.join(self.test_dir))
        self.config(debug=False)

        self.config(default_store='filesystem',
                    filesystem_store_datadir=self.test_dir,
                    group="glance_store")

        store.create_stores()
        stubs.stub_out_registry_and_store_server(self.stubs,
                                                 self.test_dir,
                                                 registry=self.registry)

    def set_policy_rules(self, rules):
        fap = open(CONF.oslo_policy.policy_file, 'w')
        fap.write(jsonutils.dumps(rules))
        fap.close()
glance-16.0.0/glance/tests/unit/test_notifier.py
# Copyright 2011 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime

import glance_store
import mock
from oslo_config import cfg
import oslo_messaging
import webob

import glance.async
from glance.common import exception
from glance.common import timeutils
import glance.context
from glance import notifier
import glance.tests.unit.utils as unit_test_utils
from glance.tests import utils

DATETIME = datetime.datetime(2012, 5, 16, 15, 27, 36, 325355)
UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
USER1 = '54492ba0-f4df-4e4e-be62-27f4d76b29cf'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'


class ImageStub(glance.domain.Image):
    def get_data(self, offset=0, chunk_size=None):
        return ['01234', '56789']

    def set_data(self, data, size=None):
        for chunk in data:
            pass


class ImageRepoStub(object):
    def remove(self, *args, **kwargs):
        return 'image_from_get'

    def save(self, *args, **kwargs):
        return 'image_from_save'

    def add(self, *args, **kwargs):
        return 'image_from_add'

    def get(self, *args, **kwargs):
        return 'image_from_get'

    def list(self, *args, **kwargs):
        return ['images_from_list']


class ImageMemberRepoStub(object):
    def remove(self, *args, **kwargs):
        return 'image_member_from_remove'

    def save(self, *args, **kwargs):
        return 'image_member_from_save'

    def add(self, *args, **kwargs):
        return 'image_member_from_add'

    def get(self, *args, **kwargs):
        return 'image_member_from_get'

    def list(self, *args, **kwargs):
        return ['image_members_from_list']


class TaskStub(glance.domain.TaskStub):
    def run(self, executor):
        pass


class Task(glance.domain.Task):
    def succeed(self, result):
        pass

    def fail(self, message):
        pass


class TaskRepoStub(object):
    def remove(self, *args, **kwargs):
        return 'task_from_remove'

    def save(self, *args, **kwargs):
        return 'task_from_save'

    def add(self, *args, **kwargs):
        return 'task_from_add'

    def get_task(self, *args, **kwargs):
        return 'task_from_get'

    def list(self, *args, **kwargs):
        return ['tasks_from_list']


class TestNotifier(utils.BaseTestCase):

    @mock.patch.object(oslo_messaging, 'Notifier')
    @mock.patch.object(oslo_messaging, 'get_notification_transport')
    def _test_load_strategy(self, mock_get_transport, mock_notifier,
                            url, driver):
        nfier = notifier.Notifier()
        mock_get_transport.assert_called_with(cfg.CONF)
        self.assertIsNotNone(nfier._transport)
        mock_notifier.assert_called_with(nfier._transport,
                                         publisher_id='image.localhost')
        self.assertIsNotNone(nfier._notifier)

    def test_notifier_load(self):
        self._test_load_strategy(url=None, driver=None)

    @mock.patch.object(oslo_messaging, 'set_transport_defaults')
    def test_set_defaults(self, mock_set_trans_defaults):
        notifier.set_defaults(control_exchange='foo')
        mock_set_trans_defaults.assert_called_with('foo')
        notifier.set_defaults()
        mock_set_trans_defaults.assert_called_with('glance')


class TestImageNotifications(utils.BaseTestCase):
    """Test Image Notifications work"""

    def setUp(self):
        super(TestImageNotifications, self).setUp()
        self.image = ImageStub(
            image_id=UUID1, name='image-1', status='active', size=1024,
            created_at=DATETIME, updated_at=DATETIME, owner=TENANT1,
            visibility='public', container_format='ami',
            virtual_size=2048, tags=['one', 'two'], disk_format='ami',
            min_ram=128, min_disk=10,
            checksum='ca425b88f047ce8ec45ee90e813ada91',
            locations=['http://127.0.0.1'])
        self.context = glance.context.RequestContext(tenant=TENANT2,
                                                     user=USER1)
        self.image_repo_stub = ImageRepoStub()
        self.notifier = unit_test_utils.FakeNotifier()
        self.image_repo_proxy = glance.notifier.ImageRepoProxy(
            self.image_repo_stub, self.context, self.notifier)
        self.image_proxy = glance.notifier.ImageProxy(
            self.image, self.context, self.notifier)

    def test_image_save_notification(self):
        self.image_repo_proxy.save(self.image_proxy)
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.update', output_log['event_type'])
        self.assertEqual(self.image.image_id, output_log['payload']['id'])
        if 'location' in output_log['payload']:
            self.fail('Notification contained location field.')

    def test_image_save_notification_disabled(self):
        self.config(disabled_notifications=["image.update"])
        self.image_repo_proxy.save(self.image_proxy)
        output_logs = self.notifier.get_logs()
        self.assertEqual(0, len(output_logs))

    def test_image_add_notification(self):
        self.image_repo_proxy.add(self.image_proxy)
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.create', output_log['event_type'])
        self.assertEqual(self.image.image_id, output_log['payload']['id'])
        if 'location' in output_log['payload']:
            self.fail('Notification contained location field.')

    def test_image_add_notification_disabled(self):
        self.config(disabled_notifications=["image.create"])
        self.image_repo_proxy.add(self.image_proxy)
        output_logs = self.notifier.get_logs()
        self.assertEqual(0, len(output_logs))

    def test_image_delete_notification(self):
        self.image_repo_proxy.remove(self.image_proxy)
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.delete', output_log['event_type'])
        self.assertEqual(self.image.image_id, output_log['payload']['id'])
        self.assertTrue(output_log['payload']['deleted'])
        if 'location' in output_log['payload']:
            self.fail('Notification contained location field.')

    def test_image_delete_notification_disabled(self):
        self.config(disabled_notifications=['image.delete'])
        self.image_repo_proxy.remove(self.image_proxy)
        output_logs = self.notifier.get_logs()
        self.assertEqual(0, len(output_logs))

    def test_image_get(self):
        image = self.image_repo_proxy.get(UUID1)
        self.assertIsInstance(image, glance.notifier.ImageProxy)
        self.assertEqual('image_from_get', image.repo)

    def test_image_list(self):
        images = self.image_repo_proxy.list()
        self.assertIsInstance(images[0], glance.notifier.ImageProxy)
        self.assertEqual('images_from_list', images[0].repo)

    def test_image_get_data_should_call_next_image_get_data(self):
        with mock.patch.object(self.image, 'get_data') as get_data_mock:
            self.image_proxy.get_data()
            self.assertTrue(get_data_mock.called)

    def test_image_get_data_notification(self):
        self.image_proxy.size = 10
        data = ''.join(self.image_proxy.get_data())
        self.assertEqual('0123456789', data)
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.send', output_log['event_type'])
        self.assertEqual(self.image.image_id,
                         output_log['payload']['image_id'])
        self.assertEqual(TENANT2, output_log['payload']['receiver_tenant_id'])
        self.assertEqual(USER1, output_log['payload']['receiver_user_id'])
        self.assertEqual(10, output_log['payload']['bytes_sent'])
        self.assertEqual(TENANT1, output_log['payload']['owner_id'])

    def test_image_get_data_notification_disabled(self):
        self.config(disabled_notifications=['image.send'])
        self.image_proxy.size = 10
        data = ''.join(self.image_proxy.get_data())
        self.assertEqual('0123456789', data)
        output_logs = self.notifier.get_logs()
        self.assertEqual(0, len(output_logs))

    def test_image_get_data_size_mismatch(self):
        self.image_proxy.size = 11
        list(self.image_proxy.get_data())
        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('ERROR', output_log['notification_type'])
        self.assertEqual('image.send', output_log['event_type'])
        self.assertEqual(self.image.image_id,
                         output_log['payload']['image_id'])

    def test_image_set_data_prepare_notification(self):
        insurance = {'called': False}

        def data_iterator():
            output_logs = self.notifier.get_logs()
            self.assertEqual(1, len(output_logs))
            output_log = output_logs[0]
            self.assertEqual('INFO', output_log['notification_type'])
            self.assertEqual('image.prepare', output_log['event_type'])
            self.assertEqual(self.image.image_id, output_log['payload']['id'])
            yield 'abcd'
            yield 'efgh'
            insurance['called'] = True

        self.image_proxy.set_data(data_iterator(), 8)
        self.assertTrue(insurance['called'])

    def test_image_set_data_prepare_notification_disabled(self):
        insurance = {'called': False}

        def data_iterator():
            output_logs = self.notifier.get_logs()
            self.assertEqual(0, len(output_logs))
            yield 'abcd'
            yield 'efgh'
            insurance['called'] = True

        self.config(disabled_notifications=['image.prepare'])
        self.image_proxy.set_data(data_iterator(), 8)
        self.assertTrue(insurance['called'])

    def test_image_set_data_upload_and_activate_notification(self):
        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            yield 'fghij'

        self.image_proxy.set_data(data_iterator(), 10)

        output_logs = self.notifier.get_logs()
        self.assertEqual(2, len(output_logs))

        output_log = output_logs[0]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.upload', output_log['event_type'])
        self.assertEqual(self.image.image_id, output_log['payload']['id'])

        output_log = output_logs[1]
        self.assertEqual('INFO', output_log['notification_type'])
        self.assertEqual('image.activate', output_log['event_type'])
        self.assertEqual(self.image.image_id, output_log['payload']['id'])

    def test_image_set_data_upload_and_activate_notification_disabled(self):
        insurance = {'called': False}

        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            yield 'fghij'
            insurance['called'] = True

        self.config(disabled_notifications=['image.activate', 'image.upload'])
        self.image_proxy.set_data(data_iterator(), 10)
        self.assertTrue(insurance['called'])

        output_logs = self.notifier.get_logs()
        self.assertEqual(0, len(output_logs))

    def test_image_set_data_storage_full(self):
        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            raise glance_store.StorageFull(message='Modern Major General')

        self.assertRaises(webob.exc.HTTPRequestEntityTooLarge,
                          self.image_proxy.set_data, data_iterator(), 10)

        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('ERROR', output_log['notification_type'])
        self.assertEqual('image.upload', output_log['event_type'])
        self.assertIn('Modern Major General', output_log['payload'])

    def test_image_set_data_value_error(self):
        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            raise ValueError('value wrong')

        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.image_proxy.set_data, data_iterator(), 10)

        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('ERROR', output_log['notification_type'])
        self.assertEqual('image.upload', output_log['event_type'])
        self.assertIn('value wrong', output_log['payload'])

    def test_image_set_data_duplicate(self):
        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            raise exception.Duplicate('Cant have duplicates')

        self.assertRaises(webob.exc.HTTPConflict,
                          self.image_proxy.set_data, data_iterator(), 10)

        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('ERROR', output_log['notification_type'])
        self.assertEqual('image.upload', output_log['event_type'])
        self.assertIn('Cant have duplicates', output_log['payload'])

    def test_image_set_data_storage_write_denied(self):
        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            raise glance_store.StorageWriteDenied(message='The Very Model')

        self.assertRaises(webob.exc.HTTPServiceUnavailable,
                          self.image_proxy.set_data, data_iterator(), 10)

        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('ERROR', output_log['notification_type'])
        self.assertEqual('image.upload', output_log['event_type'])
        self.assertIn('The Very Model', output_log['payload'])

    def test_image_set_data_forbidden(self):
        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            raise exception.Forbidden('Not allowed')

        self.assertRaises(webob.exc.HTTPForbidden,
                          self.image_proxy.set_data, data_iterator(), 10)

        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('ERROR', output_log['notification_type'])
        self.assertEqual('image.upload', output_log['event_type'])
        self.assertIn('Not allowed', output_log['payload'])

    def test_image_set_data_not_found(self):
        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            raise exception.NotFound('Not found')

        self.assertRaises(webob.exc.HTTPNotFound,
                          self.image_proxy.set_data, data_iterator(), 10)

        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('ERROR', output_log['notification_type'])
        self.assertEqual('image.upload', output_log['event_type'])
        self.assertIn('Not found', output_log['payload'])

    def test_image_set_data_HTTP_error(self):
        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            raise webob.exc.HTTPError('Http issue')

        self.assertRaises(webob.exc.HTTPError,
                          self.image_proxy.set_data, data_iterator(), 10)

        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('ERROR', output_log['notification_type'])
        self.assertEqual('image.upload', output_log['event_type'])
        self.assertIn('Http issue', output_log['payload'])

    def test_image_set_data_error(self):
        def data_iterator():
            self.notifier.log = []
            yield 'abcde'
            raise exception.GlanceException('Failed')

        self.assertRaises(exception.GlanceException,
                          self.image_proxy.set_data, data_iterator(), 10)

        output_logs = self.notifier.get_logs()
        self.assertEqual(1, len(output_logs))
        output_log = output_logs[0]
        self.assertEqual('ERROR', output_log['notification_type'])
        self.assertEqual('image.upload', output_log['event_type'])
        self.assertIn('Failed', output_log['payload'])


class TestImageMemberNotifications(utils.BaseTestCase):
    """Test Image Member Notifications work"""

    def setUp(self):
        super(TestImageMemberNotifications, self).setUp()
        self.context = glance.context.RequestContext(tenant=TENANT2,
                                                     user=USER1)
        self.notifier = unit_test_utils.FakeNotifier()

        self.image = ImageStub(
            image_id=UUID1, name='image-1', status='active', size=1024,
            created_at=DATETIME, updated_at=DATETIME, owner=TENANT1,
            visibility='public', container_format='ami',
            tags=['one', 'two'], disk_format='ami', min_ram=128,
            min_disk=10, checksum='ca425b88f047ce8ec45ee90e813ada91',
            locations=['http://127.0.0.1'])
        self.image_member = glance.domain.ImageMembership(
            id=1, image_id=UUID1, member_id=TENANT1, created_at=DATETIME,
            updated_at=DATETIME, status='accepted')

        self.image_member_repo_stub = ImageMemberRepoStub()
        self.image_member_repo_proxy = glance.notifier.ImageMemberRepoProxy(
            self.image_member_repo_stub, self.image,
            self.context, self.notifier)
        self.image_member_proxy = glance.notifier.ImageMemberProxy(
            self.image_member, self.context, self.notifier)

    def _assert_image_member_with_notifier(self, output_log, deleted=False):
        self.assertEqual(self.image_member.member_id,
                         output_log['payload']['member_id'])
        self.assertEqual(self.image_member.image_id,
                         output_log['payload']['image_id'])
        self.assertEqual(self.image_member.status,
                         output_log['payload']['status'])
        self.assertEqual(timeutils.isotime(self.image_member.created_at),
                         output_log['payload']['created_at'])
self.assertEqual(timeutils.isotime(self.image_member.updated_at), output_log['payload']['updated_at']) if deleted: self.assertTrue(output_log['payload']['deleted']) self.assertIsNotNone(output_log['payload']['deleted_at']) else: self.assertFalse(output_log['payload']['deleted']) self.assertIsNone(output_log['payload']['deleted_at']) def test_image_member_add_notification(self): self.image_member_repo_proxy.add(self.image_member_proxy) output_logs = self.notifier.get_logs() self.assertEqual(1, len(output_logs)) output_log = output_logs[0] self.assertEqual('INFO', output_log['notification_type']) self.assertEqual('image.member.create', output_log['event_type']) self._assert_image_member_with_notifier(output_log) def test_image_member_add_notification_disabled(self): self.config(disabled_notifications=['image.member.create']) self.image_member_repo_proxy.add(self.image_member_proxy) output_logs = self.notifier.get_logs() self.assertEqual(0, len(output_logs)) def test_image_member_save_notification(self): self.image_member_repo_proxy.save(self.image_member_proxy) output_logs = self.notifier.get_logs() self.assertEqual(1, len(output_logs)) output_log = output_logs[0] self.assertEqual('INFO', output_log['notification_type']) self.assertEqual('image.member.update', output_log['event_type']) self._assert_image_member_with_notifier(output_log) def test_image_member_save_notification_disabled(self): self.config(disabled_notifications=['image.member.update']) self.image_member_repo_proxy.save(self.image_member_proxy) output_logs = self.notifier.get_logs() self.assertEqual(0, len(output_logs)) def test_image_member_delete_notification(self): self.image_member_repo_proxy.remove(self.image_member_proxy) output_logs = self.notifier.get_logs() self.assertEqual(1, len(output_logs)) output_log = output_logs[0] self.assertEqual('INFO', output_log['notification_type']) self.assertEqual('image.member.delete', output_log['event_type']) self._assert_image_member_with_notifier(output_log, 
deleted=True) def test_image_member_delete_notification_disabled(self): self.config(disabled_notifications=['image.member.delete']) self.image_member_repo_proxy.remove(self.image_member_proxy) output_logs = self.notifier.get_logs() self.assertEqual(0, len(output_logs)) def test_image_member_get(self): image_member = self.image_member_repo_proxy.get(TENANT1) self.assertIsInstance(image_member, glance.notifier.ImageMemberProxy) self.assertEqual('image_member_from_get', image_member.repo) def test_image_member_list(self): image_members = self.image_member_repo_proxy.list() self.assertIsInstance(image_members[0], glance.notifier.ImageMemberProxy) self.assertEqual('image_members_from_list', image_members[0].repo) class TestTaskNotifications(utils.BaseTestCase): """Test Task Notifications work""" def setUp(self): super(TestTaskNotifications, self).setUp() task_input = {"loc": "fake"} self.task_stub = TaskStub( task_id='aaa', task_type='import', status='pending', owner=TENANT2, expires_at=None, created_at=DATETIME, updated_at=DATETIME, ) self.task = Task( task_id='aaa', task_type='import', status='pending', owner=TENANT2, expires_at=None, created_at=DATETIME, updated_at=DATETIME, task_input=task_input, result='res', message='blah' ) self.context = glance.context.RequestContext( tenant=TENANT2, user=USER1 ) self.task_repo_stub = TaskRepoStub() self.notifier = unit_test_utils.FakeNotifier() self.task_repo_proxy = glance.notifier.TaskRepoProxy( self.task_repo_stub, self.context, self.notifier ) self.task_proxy = glance.notifier.TaskProxy( self.task, self.context, self.notifier ) self.task_stub_proxy = glance.notifier.TaskStubProxy( self.task_stub, self.context, self.notifier ) self.patcher = mock.patch.object(timeutils, 'utcnow') mock_utcnow = self.patcher.start() mock_utcnow.return_value = datetime.datetime.utcnow() def tearDown(self): super(TestTaskNotifications, self).tearDown() self.patcher.stop() def test_task_create_notification(self): 
self.task_repo_proxy.add(self.task_stub_proxy) output_logs = self.notifier.get_logs() self.assertEqual(1, len(output_logs)) output_log = output_logs[0] self.assertEqual('INFO', output_log['notification_type']) self.assertEqual('task.create', output_log['event_type']) self.assertEqual(self.task.task_id, output_log['payload']['id']) self.assertEqual( timeutils.isotime(self.task.updated_at), output_log['payload']['updated_at'] ) self.assertEqual( timeutils.isotime(self.task.created_at), output_log['payload']['created_at'] ) if 'location' in output_log['payload']: self.fail('Notification contained location field.') def test_task_create_notification_disabled(self): self.config(disabled_notifications=['task.create']) self.task_repo_proxy.add(self.task_stub_proxy) output_logs = self.notifier.get_logs() self.assertEqual(0, len(output_logs)) def test_task_delete_notification(self): now = timeutils.isotime() self.task_repo_proxy.remove(self.task_stub_proxy) output_logs = self.notifier.get_logs() self.assertEqual(1, len(output_logs)) output_log = output_logs[0] self.assertEqual('INFO', output_log['notification_type']) self.assertEqual('task.delete', output_log['event_type']) self.assertEqual(self.task.task_id, output_log['payload']['id']) self.assertEqual( timeutils.isotime(self.task.updated_at), output_log['payload']['updated_at'] ) self.assertEqual( timeutils.isotime(self.task.created_at), output_log['payload']['created_at'] ) self.assertEqual( now, output_log['payload']['deleted_at'] ) if 'location' in output_log['payload']: self.fail('Notification contained location field.') def test_task_delete_notification_disabled(self): self.config(disabled_notifications=['task.delete']) self.task_repo_proxy.remove(self.task_stub_proxy) output_logs = self.notifier.get_logs() self.assertEqual(0, len(output_logs)) def test_task_run_notification(self): with mock.patch('glance.async.TaskExecutor') as mock_executor: executor = mock_executor.return_value executor._run.return_value = 
mock.Mock() self.task_proxy.run(executor=mock_executor) output_logs = self.notifier.get_logs() self.assertEqual(1, len(output_logs)) output_log = output_logs[0] self.assertEqual('INFO', output_log['notification_type']) self.assertEqual('task.run', output_log['event_type']) self.assertEqual(self.task.task_id, output_log['payload']['id']) def test_task_run_notification_disabled(self): self.config(disabled_notifications=['task.run']) with mock.patch('glance.async.TaskExecutor') as mock_executor: executor = mock_executor.return_value executor._run.return_value = mock.Mock() self.task_proxy.run(executor=mock_executor) output_logs = self.notifier.get_logs() self.assertEqual(0, len(output_logs)) def test_task_processing_notification(self): self.task_proxy.begin_processing() output_logs = self.notifier.get_logs() self.assertEqual(1, len(output_logs)) output_log = output_logs[0] self.assertEqual('INFO', output_log['notification_type']) self.assertEqual('task.processing', output_log['event_type']) self.assertEqual(self.task.task_id, output_log['payload']['id']) def test_task_processing_notification_disabled(self): self.config(disabled_notifications=['task.processing']) self.task_proxy.begin_processing() output_logs = self.notifier.get_logs() self.assertEqual(0, len(output_logs)) def test_task_success_notification(self): self.task_proxy.begin_processing() self.task_proxy.succeed(result=None) output_logs = self.notifier.get_logs() self.assertEqual(2, len(output_logs)) output_log = output_logs[1] self.assertEqual('INFO', output_log['notification_type']) self.assertEqual('task.success', output_log['event_type']) self.assertEqual(self.task.task_id, output_log['payload']['id']) def test_task_success_notification_disabled(self): self.config(disabled_notifications=['task.processing', 'task.success']) self.task_proxy.begin_processing() self.task_proxy.succeed(result=None) output_logs = self.notifier.get_logs() self.assertEqual(0, len(output_logs)) def 
test_task_failure_notification(self): self.task_proxy.fail(message=None) output_logs = self.notifier.get_logs() self.assertEqual(1, len(output_logs)) output_log = output_logs[0] self.assertEqual('INFO', output_log['notification_type']) self.assertEqual('task.failure', output_log['event_type']) self.assertEqual(self.task.task_id, output_log['payload']['id']) def test_task_failure_notification_disabled(self): self.config(disabled_notifications=['task.failure']) self.task_proxy.fail(message=None) output_logs = self.notifier.get_logs() self.assertEqual(0, len(output_logs)) glance-16.0.0/glance/tests/unit/fake_rados.py0000666000175100017510000000611613245511421021101 0ustar zuulzuul00000000000000# Copyright 2013 Canonical Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
class mock_rados(object):

    class ioctx(object):
        def __init__(self, *args, **kwargs):
            pass

        def __enter__(self, *args, **kwargs):
            return self

        def __exit__(self, *args, **kwargs):
            return False

        def close(self, *args, **kwargs):
            pass

    class Rados(object):
        def __init__(self, *args, **kwargs):
            pass

        def __enter__(self, *args, **kwargs):
            return self

        def __exit__(self, *args, **kwargs):
            return False

        def connect(self, *args, **kwargs):
            pass

        def open_ioctx(self, *args, **kwargs):
            return mock_rados.ioctx()

        def shutdown(self, *args, **kwargs):
            pass


class mock_rbd(object):

    class ImageExists(Exception):
        pass

    class ImageBusy(Exception):
        pass

    class ImageNotFound(Exception):
        pass

    class Image(object):
        def __init__(self, *args, **kwargs):
            pass

        def __enter__(self, *args, **kwargs):
            return self

        def __exit__(self, *args, **kwargs):
            pass

        def create_snap(self, *args, **kwargs):
            pass

        def remove_snap(self, *args, **kwargs):
            pass

        def protect_snap(self, *args, **kwargs):
            pass

        def unprotect_snap(self, *args, **kwargs):
            pass

        def read(self, *args, **kwargs):
            raise NotImplementedError()

        def write(self, *args, **kwargs):
            raise NotImplementedError()

        def resize(self, *args, **kwargs):
            raise NotImplementedError()

        def discard(self, offset, length):
            raise NotImplementedError()

        def close(self):
            pass

        def list_snaps(self):
            raise NotImplementedError()

        def parent_info(self):
            raise NotImplementedError()

        def size(self):
            raise NotImplementedError()

    class RBD(object):
        def __init__(self, *args, **kwargs):
            pass

        def __enter__(self, *args, **kwargs):
            return self

        def __exit__(self, *args, **kwargs):
            return False

        def create(self, *args, **kwargs):
            pass

        def remove(self, *args, **kwargs):
            pass

        def list(self, *args, **kwargs):
            raise NotImplementedError()

        def clone(self, *args, **kwargs):
            raise NotImplementedError()

glance-16.0.0/glance/tests/unit/test_misc.py

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import six
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range

from glance.common import crypt
from glance.common import utils
from glance.tests import utils as test_utils


class UtilsTestCase(test_utils.BaseTestCase):

    def test_encryption(self):
        # Check that original plaintext and unencrypted ciphertext match.
        # Check keys of the three allowed lengths.
        key_list = ["1234567890abcdef",
                    "12345678901234567890abcd",
                    "1234567890abcdef1234567890ABCDEF"]
        plaintext_list = ['']
        blocksize = 64
        for i in range(3 * blocksize):
            text = os.urandom(i)
            if six.PY3:
                text = text.decode('latin1')
            plaintext_list.append(text)

        for key in key_list:
            for plaintext in plaintext_list:
                ciphertext = crypt.urlsafe_encrypt(key, plaintext, blocksize)
                self.assertIsInstance(ciphertext, str)
                self.assertNotEqual(ciphertext, plaintext)
                text = crypt.urlsafe_decrypt(key, ciphertext)
                self.assertIsInstance(text, str)
                self.assertEqual(plaintext, text)

    def test_empty_metadata_headers(self):
        """Ensure unset metadata is not encoded in HTTP headers"""
        metadata = {
            'foo': 'bar',
            'snafu': None,
            'bells': 'whistles',
            'unset': None,
            'empty': '',
            'properties': {
                'distro': '',
                'arch': None,
                'user': 'nobody',
            },
        }

        headers = utils.image_meta_to_http_headers(metadata)

        self.assertNotIn('x-image-meta-snafu', headers)
        self.assertNotIn('x-image-meta-unset', headers)
        self.assertNotIn('x-image-meta-property-arch', headers)

        self.assertEqual('bar', headers.get('x-image-meta-foo'))
        self.assertEqual('whistles', headers.get('x-image-meta-bells'))
        self.assertEqual('', headers.get('x-image-meta-empty'))
        self.assertEqual('', headers.get('x-image-meta-property-distro'))
        self.assertEqual('nobody', headers.get('x-image-meta-property-user'))

glance-16.0.0/glance/tests/unit/test_store_image.py

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cursive import exception as cursive_exception
from cursive import signature_utils
import glance_store
import mock

from glance.common import exception
import glance.location
from glance.tests.unit import base as unit_test_base
from glance.tests.unit import utils as unit_test_utils
from glance.tests import utils


BASE_URI = 'http://storeurl.com/container'
UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
UUID2 = '971ec09a-8067-4bc8-a91f-ae3557f1c4c7'
USER1 = '54492ba0-f4df-4e4e-be62-27f4d76b29cf'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'
TENANT3 = '228c6da5-29cd-4d67-9457-ed632e083fc0'


class ImageRepoStub(object):
    def add(self, image):
        return image

    def save(self, image, from_state=None):
        return image


class ImageStub(object):
    def __init__(self, image_id, status=None, locations=None,
                 visibility=None, extra_properties=None):
        self.image_id = image_id
        self.status = status
        self.locations = locations or []
        self.visibility = visibility
        self.size = 1
        self.extra_properties = extra_properties or {}

    def delete(self):
        self.status = 'deleted'

    def get_member_repo(self):
        return FakeMemberRepo(self, [TENANT1, TENANT2])


class ImageFactoryStub(object):
    def new_image(self, image_id=None, name=None, visibility='private',
                  min_disk=0, min_ram=0, protected=False, owner=None,
                  disk_format=None, container_format=None,
                  extra_properties=None, tags=None, **other_args):
        return ImageStub(image_id, visibility=visibility,
                         extra_properties=extra_properties, **other_args)


class FakeMemberRepo(object):
    def __init__(self, image, tenants=None):
        self.image = image
        self.factory = glance.domain.ImageMemberFactory()
        self.tenants = tenants or []

    def list(self, *args, **kwargs):
        return [self.factory.new_image_member(self.image, tenant)
                for tenant in self.tenants]

    def add(self, member):
        self.tenants.append(member.member_id)

    def remove(self, member):
        self.tenants.remove(member.member_id)


class TestStoreImage(utils.BaseTestCase):
    def setUp(self):
        locations = [{'url': '%s/%s' % (BASE_URI, UUID1),
                      'metadata': {}, 'status': 'active'}]
        self.image_stub = ImageStub(UUID1, 'active', locations)
        self.store_api = unit_test_utils.FakeStoreAPI()
        self.store_utils = unit_test_utils.FakeStoreUtils(self.store_api)
        super(TestStoreImage, self).setUp()

    def test_image_delete(self):
        image = glance.location.ImageProxy(self.image_stub, {},
                                           self.store_api, self.store_utils)
        location = image.locations[0]
        self.assertEqual('active', image.status)
        self.store_api.get_from_backend(location['url'], context={})
        image.delete()
        self.assertEqual('deleted', image.status)
        self.assertRaises(glance_store.NotFound,
                          self.store_api.get_from_backend,
                          location['url'], {})

    def test_image_get_data(self):
        image = glance.location.ImageProxy(self.image_stub, {},
                                           self.store_api, self.store_utils)
        self.assertEqual('XXX', image.get_data())

    def test_image_get_data_from_second_location(self):
        def fake_get_from_backend(self, location, offset=0,
                                  chunk_size=None, context=None):
            if UUID1 in location:
                raise Exception('not allow download from %s' % location)
            else:
                return self.data[location]

        image1 = glance.location.ImageProxy(self.image_stub, {},
                                            self.store_api, self.store_utils)
        self.assertEqual('XXX', image1.get_data())
        # Multiple location support
        context = glance.context.RequestContext(user=USER1)
        (image2, image_stub2) = self._add_image(context, UUID2, 'ZZZ', 3)
        location_data = image2.locations[0]
        image1.locations.append(location_data)
        self.assertEqual(2, len(image1.locations))
        self.assertEqual(UUID2, location_data['url'])

        self.stubs.Set(unit_test_utils.FakeStoreAPI, 'get_from_backend',
                       fake_get_from_backend)
        # This time, image1.get_data() returns the data wrapped in a
        # LimitingReader|CooperativeReader pipeline, so peeking under
        # the hood of those objects to get at the underlying string.
        self.assertEqual('ZZZ', image1.get_data().data.fd)

        image1.locations.pop(0)
        self.assertEqual(1, len(image1.locations))
        image2.delete()

    def test_image_set_data(self):
        context = glance.context.RequestContext(user=USER1)
        image_stub = ImageStub(UUID2, status='queued', locations=[])
        image = glance.location.ImageProxy(image_stub, context,
                                           self.store_api, self.store_utils)
        image.set_data('YYYY', 4)
        self.assertEqual(4, image.size)
        # NOTE(markwash): FakeStore returns image_id for location
        self.assertEqual(UUID2, image.locations[0]['url'])
        self.assertEqual('Z', image.checksum)
        self.assertEqual('active', image.status)

    def test_image_set_data_location_metadata(self):
        context = glance.context.RequestContext(user=USER1)
        image_stub = ImageStub(UUID2, status='queued', locations=[])
        loc_meta = {'key': 'value5032'}
        store_api = unit_test_utils.FakeStoreAPI(store_metadata=loc_meta)
        store_utils = unit_test_utils.FakeStoreUtils(store_api)
        image = glance.location.ImageProxy(image_stub, context,
                                           store_api, store_utils)
        image.set_data('YYYY', 4)
        self.assertEqual(4, image.size)
        location_data = image.locations[0]
        self.assertEqual(UUID2, location_data['url'])
        self.assertEqual(loc_meta, location_data['metadata'])
        self.assertEqual('Z', image.checksum)
        self.assertEqual('active', image.status)
        image.delete()
        self.assertEqual(image.status, 'deleted')
        self.assertRaises(glance_store.NotFound,
                          self.store_api.get_from_backend,
                          image.locations[0]['url'], {})

    def test_image_set_data_unknown_size(self):
        context = glance.context.RequestContext(user=USER1)
        image_stub = ImageStub(UUID2, status='queued', locations=[])
        image = glance.location.ImageProxy(image_stub, context,
                                           self.store_api, self.store_utils)
        image.set_data('YYYY', None)
        self.assertEqual(4, image.size)
        # NOTE(markwash): FakeStore returns image_id for location
        self.assertEqual(UUID2, image.locations[0]['url'])
        self.assertEqual('Z', image.checksum)
        self.assertEqual('active', image.status)
        image.delete()
        self.assertEqual(image.status, 'deleted')
        self.assertRaises(glance_store.NotFound,
                          self.store_api.get_from_backend,
                          image.locations[0]['url'], context={})

    @mock.patch('glance.location.LOG')
    def test_image_set_data_valid_signature(self, mock_log):
        context = glance.context.RequestContext(user=USER1)
        extra_properties = {
            'img_signature_certificate_uuid': 'UUID',
            'img_signature_hash_method': 'METHOD',
            'img_signature_key_type': 'TYPE',
            'img_signature': 'VALID'
        }
        image_stub = ImageStub(UUID2, status='queued',
                               extra_properties=extra_properties)
        self.stubs.Set(signature_utils, 'get_verifier',
                       unit_test_utils.fake_get_verifier)
        image = glance.location.ImageProxy(image_stub, context,
                                           self.store_api, self.store_utils)
        image.set_data('YYYY', 4)
        self.assertEqual('active', image.status)
        mock_log.info.assert_called_once_with(
            u'Successfully verified signature for image %s', UUID2)

    def test_image_set_data_invalid_signature(self):
        context = glance.context.RequestContext(user=USER1)
        extra_properties = {
            'img_signature_certificate_uuid': 'UUID',
            'img_signature_hash_method': 'METHOD',
            'img_signature_key_type': 'TYPE',
            'img_signature': 'INVALID'
        }
        image_stub = ImageStub(UUID2, status='queued',
                               extra_properties=extra_properties)
        self.stubs.Set(signature_utils, 'get_verifier',
                       unit_test_utils.fake_get_verifier)
        image = glance.location.ImageProxy(image_stub, context,
                                           self.store_api, self.store_utils)
        self.assertRaises(cursive_exception.SignatureVerificationError,
                          image.set_data,
                          'YYYY', 4)

    def test_image_set_data_invalid_signature_missing_metadata(self):
        context = glance.context.RequestContext(user=USER1)
        extra_properties = {
            'img_signature_hash_method': 'METHOD',
            'img_signature_key_type': 'TYPE',
            'img_signature': 'INVALID'
        }
        image_stub = ImageStub(UUID2, status='queued',
                               extra_properties=extra_properties)
        self.stubs.Set(signature_utils, 'get_verifier',
                       unit_test_utils.fake_get_verifier)
        image = glance.location.ImageProxy(image_stub, context,
                                           self.store_api, self.store_utils)
        image.set_data('YYYY', 4)
        self.assertEqual(UUID2, image.locations[0]['url'])
        self.assertEqual('Z', image.checksum)
        # Image is still active, since invalid signature was ignored
        self.assertEqual('active', image.status)

    def _add_image(self, context, image_id, data, len):
        image_stub = ImageStub(image_id, status='queued', locations=[])
        image = glance.location.ImageProxy(image_stub, context,
                                           self.store_api, self.store_utils)
        image.set_data(data, len)
        self.assertEqual(len, image.size)
        # NOTE(markwash): FakeStore returns image_id for location
        location = {'url': image_id, 'metadata': {}, 'status': 'active'}
        self.assertEqual([location], image.locations)
        self.assertEqual([location], image_stub.locations)
        self.assertEqual('active', image.status)
        return (image, image_stub)

    def test_image_change_append_invalid_location_uri(self):
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)

        location_bad = {'url': 'unknown://location', 'metadata': {}}
        self.assertRaises(exception.BadStoreUri,
                          image1.locations.append, location_bad)

        image1.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())

    def test_image_change_append_invalid_location_metadata(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)
        (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4)

        # Using only one test rule here is enough to make sure
        # 'store.check_location_metadata()' can be triggered
        # in Location proxy layer. Complete test rules for
        # 'store.check_location_metadata()' are covered by the
        # cases within 'TestStoreMetaDataChecker' below.
        location_bad = {'url': UUID3, 'metadata': b"a invalid metadata"}

        self.assertRaises(glance_store.BackendException,
                          image1.locations.append, location_bad)

        image1.delete()
        image2.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())
        self.assertNotIn(UUID3, self.store_api.data.keys())

    def test_image_change_append_locations(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)
        (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4)

        location2 = {'url': UUID2, 'metadata': {}, 'status': 'active'}
        location3 = {'url': UUID3, 'metadata': {}, 'status': 'active'}

        image1.locations.append(location3)

        self.assertEqual([location2, location3], image_stub1.locations)
        self.assertEqual([location2, location3], image1.locations)

        image1.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())
        self.assertNotIn(UUID3, self.store_api.data.keys())

        image2.delete()

    def test_image_change_pop_location(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)
        (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4)

        location2 = {'url': UUID2, 'metadata': {}, 'status': 'active'}
        location3 = {'url': UUID3, 'metadata': {}, 'status': 'active'}

        image1.locations.append(location3)

        self.assertEqual([location2, location3], image_stub1.locations)
        self.assertEqual([location2, location3], image1.locations)

        image1.locations.pop()

        self.assertEqual([location2], image_stub1.locations)
        self.assertEqual([location2], image1.locations)

        image1.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())
        self.assertNotIn(UUID3, self.store_api.data.keys())

        image2.delete()

    def test_image_change_extend_invalid_locations_uri(self):
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)

        location_bad = {'url': 'unknown://location', 'metadata': {}}
        self.assertRaises(exception.BadStoreUri,
                          image1.locations.extend, [location_bad])

        image1.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())

    def test_image_change_extend_invalid_locations_metadata(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)
        (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4)

        location_bad = {'url': UUID3, 'metadata': b"a invalid metadata"}

        self.assertRaises(glance_store.BackendException,
                          image1.locations.extend, [location_bad])

        image1.delete()
        image2.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())
        self.assertNotIn(UUID3, self.store_api.data.keys())

    def test_image_change_extend_locations(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)
        (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4)

        location2 = {'url': UUID2, 'metadata': {}, 'status': 'active'}
        location3 = {'url': UUID3, 'metadata': {}, 'status': 'active'}

        image1.locations.extend([location3])

        self.assertEqual([location2, location3], image_stub1.locations)
        self.assertEqual([location2, location3], image1.locations)

        image1.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())
        self.assertNotIn(UUID3, self.store_api.data.keys())

        image2.delete()

    def test_image_change_remove_location(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)
        (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4)

        location2 = {'url': UUID2, 'metadata': {}, 'status': 'active'}
        location3 = {'url': UUID3, 'metadata': {}, 'status': 'active'}
        location_bad = {'url': 'unknown://location', 'metadata': {}}

        image1.locations.extend([location3])
        image1.locations.remove(location2)

        self.assertEqual([location3], image_stub1.locations)
        self.assertEqual([location3], image1.locations)
        self.assertRaises(ValueError,
                          image1.locations.remove, location_bad)

        image1.delete()
        image2.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())
        self.assertNotIn(UUID3, self.store_api.data.keys())

    def test_image_change_delete_location(self):
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)

        del image1.locations[0]

        self.assertEqual([], image_stub1.locations)
        self.assertEqual(0, len(image1.locations))

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())

        image1.delete()

    def test_image_change_insert_invalid_location_uri(self):
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)

        location_bad = {'url': 'unknown://location', 'metadata': {}}
        self.assertRaises(exception.BadStoreUri,
                          image1.locations.insert, 0, location_bad)

        image1.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())

    def test_image_change_insert_invalid_location_metadata(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)
        (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4)

        location_bad = {'url': UUID3, 'metadata': b"a invalid metadata"}

        self.assertRaises(glance_store.BackendException,
                          image1.locations.insert, 0, location_bad)

        image1.delete()
        image2.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())
        self.assertNotIn(UUID3, self.store_api.data.keys())

    def test_image_change_insert_location(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)
        (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4)

        location2 = {'url': UUID2, 'metadata': {}, 'status': 'active'}
        location3 = {'url': UUID3, 'metadata': {}, 'status': 'active'}

        image1.locations.insert(0, location3)

        self.assertEqual([location3, location2], image_stub1.locations)
        self.assertEqual([location3, location2], image1.locations)

        image1.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())
        self.assertNotIn(UUID3, self.store_api.data.keys())

        image2.delete()

    def test_image_change_delete_locations(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)
        (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4)

        location2 = {'url': UUID2, 'metadata': {}}
        location3 = {'url': UUID3, 'metadata': {}}

        image1.locations.insert(0, location3)
        del image1.locations[0:100]

        self.assertEqual([], image_stub1.locations)
        self.assertEqual(0, len(image1.locations))
        self.assertRaises(exception.BadStoreUri,
                          image1.locations.insert, 0, location2)
        self.assertRaises(exception.BadStoreUri,
                          image2.locations.insert, 0, location3)

        image1.delete()
        image2.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())
        self.assertNotIn(UUID3, self.store_api.data.keys())

    def test_image_change_adding_invalid_location_uri(self):
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        image_stub1 = ImageStub('fake_image_id', status='queued',
                                locations=[])
        image1 = glance.location.ImageProxy(image_stub1, context,
                                            self.store_api, self.store_utils)

        location_bad = {'url': 'unknown://location', 'metadata': {}}

        self.assertRaises(exception.BadStoreUri,
                          image1.locations.__iadd__, [location_bad])
        self.assertEqual([], image_stub1.locations)
        self.assertEqual([], image1.locations)

        image1.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())

    def test_image_change_adding_invalid_location_metadata(self):
        self.assertEqual(2, len(self.store_api.data.keys()))

        context = glance.context.RequestContext(user=USER1)
        (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4)

        image_stub2 = ImageStub('fake_image_id', status='queued',
                                locations=[])
        image2 = glance.location.ImageProxy(image_stub2, context,
                                            self.store_api, self.store_utils)

        location_bad = {'url': UUID2, 'metadata': b"a invalid metadata"}

        self.assertRaises(glance_store.BackendException,
                          image2.locations.__iadd__, [location_bad])
        self.assertEqual([], image_stub2.locations)
        self.assertEqual([], image2.locations)

        image1.delete()
        image2.delete()

        self.assertEqual(2, len(self.store_api.data.keys()))
        self.assertNotIn(UUID2, self.store_api.data.keys())

    def test_image_change_adding_locations(self):
        UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581'
        self.assertEqual(2,
len(self.store_api.data.keys())) context = glance.context.RequestContext(user=USER1) (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4) (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4) image_stub3 = ImageStub('fake_image_id', status='queued', locations=[]) image3 = glance.location.ImageProxy(image_stub3, context, self.store_api, self.store_utils) location2 = {'url': UUID2, 'metadata': {}} location3 = {'url': UUID3, 'metadata': {}} image3.locations += [location2, location3] self.assertEqual([location2, location3], image_stub3.locations) self.assertEqual([location2, location3], image3.locations) image3.delete() self.assertEqual(2, len(self.store_api.data.keys())) self.assertNotIn(UUID2, self.store_api.data.keys()) self.assertNotIn(UUID3, self.store_api.data.keys()) image1.delete() image2.delete() def test_image_get_location_index(self): UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581' self.assertEqual(2, len(self.store_api.data.keys())) context = glance.context.RequestContext(user=USER1) (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4) (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4) image_stub3 = ImageStub('fake_image_id', status='queued', locations=[]) image3 = glance.location.ImageProxy(image_stub3, context, self.store_api, self.store_utils) location2 = {'url': UUID2, 'metadata': {}} location3 = {'url': UUID3, 'metadata': {}} image3.locations += [location2, location3] self.assertEqual(1, image_stub3.locations.index(location3)) image3.delete() self.assertEqual(2, len(self.store_api.data.keys())) self.assertNotIn(UUID2, self.store_api.data.keys()) self.assertNotIn(UUID3, self.store_api.data.keys()) image1.delete() image2.delete() def test_image_get_location_by_index(self): UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581' self.assertEqual(2, len(self.store_api.data.keys())) context = glance.context.RequestContext(user=USER1) (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4) (image2, 
image_stub2) = self._add_image(context, UUID3, 'YYYY', 4) image_stub3 = ImageStub('fake_image_id', status='queued', locations=[]) image3 = glance.location.ImageProxy(image_stub3, context, self.store_api, self.store_utils) location2 = {'url': UUID2, 'metadata': {}} location3 = {'url': UUID3, 'metadata': {}} image3.locations += [location2, location3] self.assertEqual(1, image_stub3.locations.index(location3)) self.assertEqual(location2, image_stub3.locations[0]) image3.delete() self.assertEqual(2, len(self.store_api.data.keys())) self.assertNotIn(UUID2, self.store_api.data.keys()) self.assertNotIn(UUID3, self.store_api.data.keys()) image1.delete() image2.delete() def test_image_checking_location_exists(self): UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581' self.assertEqual(2, len(self.store_api.data.keys())) context = glance.context.RequestContext(user=USER1) (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4) (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4) image_stub3 = ImageStub('fake_image_id', status='queued', locations=[]) image3 = glance.location.ImageProxy(image_stub3, context, self.store_api, self.store_utils) location2 = {'url': UUID2, 'metadata': {}} location3 = {'url': UUID3, 'metadata': {}} location_bad = {'url': 'unknown://location', 'metadata': {}} image3.locations += [location2, location3] self.assertIn(location3, image_stub3.locations) self.assertNotIn(location_bad, image_stub3.locations) image3.delete() self.assertEqual(2, len(self.store_api.data.keys())) self.assertNotIn(UUID2, self.store_api.data.keys()) self.assertNotIn(UUID3, self.store_api.data.keys()) image1.delete() image2.delete() def test_image_reverse_locations_order(self): UUID3 = 'a8a61ec4-d7a3-11e2-8c28-000c29c27581' self.assertEqual(2, len(self.store_api.data.keys())) context = glance.context.RequestContext(user=USER1) (image1, image_stub1) = self._add_image(context, UUID2, 'XXXX', 4) (image2, image_stub2) = self._add_image(context, UUID3, 'YYYY', 4) 
location2 = {'url': UUID2, 'metadata': {}} location3 = {'url': UUID3, 'metadata': {}} image_stub3 = ImageStub('fake_image_id', status='queued', locations=[]) image3 = glance.location.ImageProxy(image_stub3, context, self.store_api, self.store_utils) image3.locations += [location2, location3] image_stub3.locations.reverse() self.assertEqual([location3, location2], image_stub3.locations) self.assertEqual([location3, location2], image3.locations) image3.delete() self.assertEqual(2, len(self.store_api.data.keys())) self.assertNotIn(UUID2, self.store_api.data.keys()) self.assertNotIn(UUID3, self.store_api.data.keys()) image1.delete() image2.delete() class TestStoreImageRepo(utils.BaseTestCase): def setUp(self): super(TestStoreImageRepo, self).setUp() self.store_api = unit_test_utils.FakeStoreAPI() store_utils = unit_test_utils.FakeStoreUtils(self.store_api) self.image_stub = ImageStub(UUID1) self.image = glance.location.ImageProxy(self.image_stub, {}, self.store_api, store_utils) self.image_repo_stub = ImageRepoStub() self.image_repo = glance.location.ImageRepoProxy(self.image_repo_stub, {}, self.store_api, store_utils) patcher = mock.patch("glance.location._get_member_repo_for_store", self.get_fake_member_repo) patcher.start() self.addCleanup(patcher.stop) self.fake_member_repo = FakeMemberRepo(self.image, [TENANT1, TENANT2]) self.image_member_repo = glance.location.ImageMemberRepoProxy( self.fake_member_repo, self.image, {}, self.store_api) def get_fake_member_repo(self, image, context, db_api, store_api): return FakeMemberRepo(self.image, [TENANT1, TENANT2]) def test_add_updates_acls(self): self.image_stub.locations = [{'url': 'foo', 'metadata': {}, 'status': 'active'}, {'url': 'bar', 'metadata': {}, 'status': 'active'}] self.image_stub.visibility = 'public' self.image_repo.add(self.image) self.assertTrue(self.store_api.acls['foo']['public']) self.assertEqual([], self.store_api.acls['foo']['read']) self.assertEqual([], self.store_api.acls['foo']['write']) 
self.assertTrue(self.store_api.acls['bar']['public']) self.assertEqual([], self.store_api.acls['bar']['read']) self.assertEqual([], self.store_api.acls['bar']['write']) def test_add_ignores_acls_if_no_locations(self): self.image_stub.locations = [] self.image_stub.visibility = 'public' self.image_repo.add(self.image) self.assertEqual(0, len(self.store_api.acls)) def test_save_updates_acls(self): self.image_stub.locations = [{'url': 'foo', 'metadata': {}, 'status': 'active'}] self.image_repo.save(self.image) self.assertIn('foo', self.store_api.acls) def test_add_fetches_members_if_private(self): self.image_stub.locations = [{'url': 'glue', 'metadata': {}, 'status': 'active'}] self.image_stub.visibility = 'private' self.image_repo.add(self.image) self.assertIn('glue', self.store_api.acls) acls = self.store_api.acls['glue'] self.assertFalse(acls['public']) self.assertEqual([], acls['write']) self.assertEqual([TENANT1, TENANT2], acls['read']) def test_save_fetches_members_if_private(self): self.image_stub.locations = [{'url': 'glue', 'metadata': {}, 'status': 'active'}] self.image_stub.visibility = 'private' self.image_repo.save(self.image) self.assertIn('glue', self.store_api.acls) acls = self.store_api.acls['glue'] self.assertFalse(acls['public']) self.assertEqual([], acls['write']) self.assertEqual([TENANT1, TENANT2], acls['read']) def test_member_addition_updates_acls(self): self.image_stub.locations = [{'url': 'glug', 'metadata': {}, 'status': 'active'}] self.image_stub.visibility = 'private' membership = glance.domain.ImageMembership( UUID1, TENANT3, None, None, status='accepted') self.image_member_repo.add(membership) self.assertIn('glug', self.store_api.acls) acls = self.store_api.acls['glug'] self.assertFalse(acls['public']) self.assertEqual([], acls['write']) self.assertEqual([TENANT1, TENANT2, TENANT3], acls['read']) def test_member_removal_updates_acls(self): self.image_stub.locations = [{'url': 'glug', 'metadata': {}, 'status': 'active'}] 
self.image_stub.visibility = 'private' membership = glance.domain.ImageMembership( UUID1, TENANT1, None, None, status='accepted') self.image_member_repo.remove(membership) self.assertIn('glug', self.store_api.acls) acls = self.store_api.acls['glug'] self.assertFalse(acls['public']) self.assertEqual([], acls['write']) self.assertEqual([TENANT2], acls['read']) class TestImageFactory(unit_test_base.StoreClearingUnitTest): def setUp(self): super(TestImageFactory, self).setUp() store_api = unit_test_utils.FakeStoreAPI() store_utils = unit_test_utils.FakeStoreUtils(store_api) self.image_factory = glance.location.ImageFactoryProxy( ImageFactoryStub(), glance.context.RequestContext(user=USER1), store_api, store_utils) def test_new_image(self): image = self.image_factory.new_image() self.assertIsNone(image.image_id) self.assertIsNone(image.status) self.assertEqual('private', image.visibility) self.assertEqual([], image.locations) def test_new_image_with_location(self): locations = [{'url': '%s/%s' % (BASE_URI, UUID1), 'metadata': {}}] image = self.image_factory.new_image(locations=locations) self.assertEqual(locations, image.locations) location_bad = {'url': 'unknown://location', 'metadata': {}} self.assertRaises(exception.BadStoreUri, self.image_factory.new_image, locations=[location_bad]) class TestStoreMetaDataChecker(utils.BaseTestCase): def test_empty(self): glance_store.check_location_metadata({}) def test_unicode(self): m = {'key': u'somevalue'} glance_store.check_location_metadata(m) def test_unicode_list(self): m = {'key': [u'somevalue', u'2']} glance_store.check_location_metadata(m) def test_unicode_dict(self): inner = {'key1': u'somevalue', 'key2': u'somevalue'} m = {'topkey': inner} glance_store.check_location_metadata(m) def test_unicode_dict_list(self): inner = {'key1': u'somevalue', 'key2': u'somevalue'} m = {'topkey': inner, 'list': [u'somevalue', u'2'], 'u': u'2'} glance_store.check_location_metadata(m) def test_nested_dict(self): inner = {'key1': 
u'somevalue', 'key2': u'somevalue'}
        inner = {'newkey': inner}
        inner = {'anotherkey': inner}
        m = {'topkey': inner}
        glance_store.check_location_metadata(m)

    def test_simple_bad(self):
        m = {'key1': object()}
        self.assertRaises(glance_store.BackendException,
                          glance_store.check_location_metadata, m)

    def test_list_bad(self):
        m = {'key1': [u'somevalue', object()]}
        self.assertRaises(glance_store.BackendException,
                          glance_store.check_location_metadata, m)

    def test_nested_dict_bad(self):
        inner = {'key1': u'somevalue', 'key2': object()}
        inner = {'newkey': inner}
        inner = {'anotherkey': inner}
        m = {'topkey': inner}
        self.assertRaises(glance_store.BackendException,
                          glance_store.check_location_metadata, m)

glance-16.0.0/glance/tests/unit/test_cached_images.py

# Copyright (C) 2013 Yahoo! Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
import webob

from glance.api import cached_images
from glance.api import policy
from glance.common import exception
from glance import image_cache


class FakePolicyEnforcer(policy.Enforcer):
    def __init__(self):
        self.default_rule = ''
        self.policy_path = ''
        self.policy_file_mtime = None
        self.policy_file_contents = None

    def enforce(self, context, action, target):
        return 'pass'

    def check(rule, target, creds, exc=None, *args, **kwargs):
        return 'pass'

    def _check(self, context, rule, target, *args, **kwargs):
        return 'pass'


class FakeCache(image_cache.ImageCache):
    def __init__(self):
        self.init_driver()
        self.deleted_images = []

    def init_driver(self):
        pass

    def get_cached_images(self):
        return {'id': 'test'}

    def delete_cached_image(self, image_id):
        self.deleted_images.append(image_id)

    def delete_all_cached_images(self):
        self.delete_cached_image(self.get_cached_images().get('id'))
        return 1

    def get_queued_images(self):
        return {'test': 'passed'}

    def queue_image(self, image_id):
        return 'pass'

    def delete_queued_image(self, image_id):
        self.deleted_images.append(image_id)

    def delete_all_queued_images(self):
        self.delete_queued_image('deleted_img')
        return 1


class FakeController(cached_images.Controller):
    def __init__(self):
        self.cache = FakeCache()
        self.policy = FakePolicyEnforcer()


class TestController(testtools.TestCase):
    def test_initialization_without_conf(self):
        self.assertRaises(exception.BadDriverConfiguration,
                          cached_images.Controller)


class TestCachedImages(testtools.TestCase):
    def setUp(self):
        super(TestCachedImages, self).setUp()
        test_controller = FakeController()
        self.controller = test_controller

    def test_get_cached_images(self):
        req = webob.Request.blank('')
        req.context = 'test'
        result = self.controller.get_cached_images(req)
        self.assertEqual({'cached_images': {'id': 'test'}}, result)

    def test_delete_cached_image(self):
        req = webob.Request.blank('')
        req.context = 'test'
        self.controller.delete_cached_image(req, image_id='test')
        self.assertEqual(['test'], self.controller.cache.deleted_images)

    def test_delete_cached_images(self):
        req = webob.Request.blank('')
        req.context = 'test'
        self.assertEqual({'num_deleted': 1},
                         self.controller.delete_cached_images(req))
        self.assertEqual(['test'], self.controller.cache.deleted_images)

    def test_policy_enforce_forbidden(self):
        def fake_enforce(context, action, target):
            raise exception.Forbidden()

        self.controller.policy.enforce = fake_enforce
        req = webob.Request.blank('')
        req.context = 'test'
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.get_cached_images, req)

    def test_get_queued_images(self):
        req = webob.Request.blank('')
        req.context = 'test'
        result = self.controller.get_queued_images(req)
        self.assertEqual({'queued_images': {'test': 'passed'}}, result)

    def test_queue_image(self):
        req = webob.Request.blank('')
        req.context = 'test'
        self.controller.queue_image(req, image_id='test1')

    def test_delete_queued_image(self):
        req = webob.Request.blank('')
        req.context = 'test'
        self.controller.delete_queued_image(req, 'deleted_img')
        self.assertEqual(['deleted_img'],
                         self.controller.cache.deleted_images)

    def test_delete_queued_images(self):
        req = webob.Request.blank('')
        req.context = 'test'
        self.assertEqual({'num_deleted': 1},
                         self.controller.delete_queued_images(req))
        self.assertEqual(['deleted_img'],
                         self.controller.cache.deleted_images)

glance-16.0.0/glance/tests/unit/__init__.py
glance-16.0.0/glance/tests/unit/test_context_middleware.py

# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import webob

from glance.api.middleware import context
import glance.context
from glance.tests.unit import base


class TestContextMiddleware(base.IsolatedUnitTest):
    def _build_request(self, roles=None, identity_status='Confirmed',
                       service_catalog=None):
        req = webob.Request.blank('/')
        req.headers['x-auth-token'] = 'token1'
        req.headers['x-identity-status'] = identity_status
        req.headers['x-user-id'] = 'user1'
        req.headers['x-tenant-id'] = 'tenant1'
        _roles = roles or ['role1', 'role2']
        req.headers['x-roles'] = ','.join(_roles)
        if service_catalog:
            req.headers['x-service-catalog'] = service_catalog
        return req

    def _build_middleware(self):
        return context.ContextMiddleware(None)

    def test_header_parsing(self):
        req = self._build_request()
        self._build_middleware().process_request(req)
        self.assertEqual('token1', req.context.auth_token)
        self.assertEqual('user1', req.context.user)
        self.assertEqual('tenant1', req.context.tenant)
        self.assertEqual(['role1', 'role2'], req.context.roles)

    def test_is_admin_flag(self):
        # is_admin check should look for 'admin' role by default
        req = self._build_request(roles=['admin', 'role2'])
        self._build_middleware().process_request(req)
        self.assertTrue(req.context.is_admin)

        # without the 'admin' role, is_admin should be False
        req = self._build_request()
        self._build_middleware().process_request(req)
        self.assertFalse(req.context.is_admin)

        # if we change the admin_role attribute, we should be able to use it
        req = self._build_request()
        self.config(admin_role='role1')
        self._build_middleware().process_request(req)
        self.assertTrue(req.context.is_admin)

    def test_roles_case_insensitive(self):
        # accept role from request
        req = self._build_request(roles=['Admin', 'role2'])
        self._build_middleware().process_request(req)
        self.assertTrue(req.context.is_admin)

        # accept role from config
        req = self._build_request(roles=['role1'])
        self.config(admin_role='rOLe1')
        self._build_middleware().process_request(req)
        self.assertTrue(req.context.is_admin)

    def test_roles_stripping(self):
        # stripping extra spaces in request
        req = self._build_request(roles=['\trole1'])
        self.config(admin_role='role1')
        self._build_middleware().process_request(req)
        self.assertTrue(req.context.is_admin)

        # stripping extra spaces in config
        req = self._build_request(roles=['\trole1\n'])
        self.config(admin_role=' role1\t')
        self._build_middleware().process_request(req)
        self.assertTrue(req.context.is_admin)

    def test_anonymous_access_enabled(self):
        req = self._build_request(identity_status='Nope')
        self.config(allow_anonymous_access=True)
        middleware = self._build_middleware()
        middleware.process_request(req)
        self.assertIsNone(req.context.auth_token)
        self.assertIsNone(req.context.user)
        self.assertIsNone(req.context.tenant)
        self.assertEqual([], req.context.roles)
        self.assertFalse(req.context.is_admin)
        self.assertTrue(req.context.read_only)

    def test_anonymous_access_defaults_to_disabled(self):
        req = self._build_request(identity_status='Nope')
        middleware = self._build_middleware()
        self.assertRaises(webob.exc.HTTPUnauthorized,
                          middleware.process_request, req)

    def test_service_catalog(self):
        catalog_json = "[{}]"
        req = self._build_request(service_catalog=catalog_json)
        self._build_middleware().process_request(req)
        self.assertEqual([{}], req.context.service_catalog)

    def test_invalid_service_catalog(self):
        catalog_json = "bad json"
        req = self._build_request(service_catalog=catalog_json)
        middleware = self._build_middleware()
        self.assertRaises(webob.exc.HTTPInternalServerError,
                          middleware.process_request, req)

    def test_response(self):
        req = self._build_request()
        req.context = glance.context.RequestContext()
        request_id = req.context.request_id
        resp = webob.Response()
        resp.request = req
        self._build_middleware().process_response(resp)
        self.assertEqual(request_id,
                         resp.headers['x-openstack-request-id'])
        resp_req_id = resp.headers['x-openstack-request-id']
        # Validate that the request-id does not start with 'req-req-'
        if isinstance(resp_req_id, bytes):
            resp_req_id = resp_req_id.decode('utf-8')
        self.assertFalse(resp_req_id.startswith('req-req-'))
        self.assertTrue(resp_req_id.startswith('req-'))


class TestUnauthenticatedContextMiddleware(base.IsolatedUnitTest):
    def test_request(self):
        middleware = context.UnauthenticatedContextMiddleware(None)
        req = webob.Request.blank('/')
        middleware.process_request(req)
        self.assertIsNone(req.context.auth_token)
        self.assertIsNone(req.context.user)
        self.assertIsNone(req.context.tenant)
        self.assertEqual([], req.context.roles)
        self.assertTrue(req.context.is_admin)

    def test_response(self):
        middleware = context.UnauthenticatedContextMiddleware(None)
        req = webob.Request.blank('/')
        req.context = glance.context.RequestContext()
        request_id = req.context.request_id
        resp = webob.Response()
        resp.request = req
        middleware.process_response(resp)
        self.assertEqual(request_id,
                         resp.headers['x-openstack-request-id'])
        resp_req_id = resp.headers['x-openstack-request-id']
        if isinstance(resp_req_id, bytes):
            resp_req_id = resp_req_id.decode('utf-8')
        # Validate that the request-id does not start with 'req-req-'
        self.assertFalse(resp_req_id.startswith('req-req-'))
        self.assertTrue(resp_req_id.startswith('req-'))

glance-16.0.0/glance/tests/unit/test_scrubber.py

# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import uuid

import glance_store
import mock
from mock import patch
from mox3 import mox
from oslo_config import cfg
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range

from glance.common import exception
from glance.db.sqlalchemy import api as db_api
from glance import scrubber
from glance.tests import utils as test_utils

CONF = cfg.CONF


class TestScrubber(test_utils.BaseTestCase):

    def setUp(self):
        super(TestScrubber, self).setUp()
        glance_store.register_opts(CONF)
        self.config(group='glance_store', default_store='file',
                    filesystem_store_datadir=self.test_dir)
        glance_store.create_stores()
        self.mox = mox.Mox()

    def tearDown(self):
        self.mox.UnsetStubs()
        # These globals impact state outside of this test class, kill them.
        scrubber._file_queue = None
        scrubber._db_queue = None
        super(TestScrubber, self).tearDown()

    def _scrubber_cleanup_with_store_delete_exception(self, ex):
        uri = 'file://some/path/%s' % uuid.uuid4()
        id = 'helloworldid'
        scrub = scrubber.Scrubber(glance_store)
        self.mox.StubOutWithMock(glance_store, "delete_from_backend")
        glance_store.delete_from_backend(
            uri, mox.IgnoreArg()).AndRaise(ex)
        self.mox.ReplayAll()
        scrub._scrub_image(id, [(id, '-', uri)])
        self.mox.VerifyAll()

    @mock.patch.object(db_api, "image_get")
    def test_store_delete_successful(self, mock_image_get):
        uri = 'file://some/path/%s' % uuid.uuid4()
        id = 'helloworldid'

        scrub = scrubber.Scrubber(glance_store)
        self.mox.StubOutWithMock(glance_store, "delete_from_backend")
        glance_store.delete_from_backend(uri, mox.IgnoreArg()).AndReturn('')
        self.mox.ReplayAll()
        scrub._scrub_image(id, [(id, '-', uri)])
        self.mox.VerifyAll()

    @mock.patch.object(db_api, "image_get")
    def test_store_delete_store_exceptions(self, mock_image_get):
        # While scrubbing image data, all store exceptions, other than
        # NotFound, cause image scrubbing to fail. Essentially, no attempt
        # would be made to update the status of the image.
        uri = 'file://some/path/%s' % uuid.uuid4()
        id = 'helloworldid'
        ex = glance_store.GlanceStoreException()

        scrub = scrubber.Scrubber(glance_store)
        self.mox.StubOutWithMock(glance_store, "delete_from_backend")
        glance_store.delete_from_backend(
            uri, mox.IgnoreArg()).AndRaise(ex)
        self.mox.ReplayAll()
        scrub._scrub_image(id, [(id, '-', uri)])
        self.mox.VerifyAll()

    @mock.patch.object(db_api, "image_get")
    def test_store_delete_notfound_exception(self, mock_image_get):
        # While scrubbing image data, a NotFound exception is ignored and
        # image scrubbing succeeds
        uri = 'file://some/path/%s' % uuid.uuid4()
        id = 'helloworldid'
        ex = glance_store.NotFound(message='random')

        scrub = scrubber.Scrubber(glance_store)
        self.mox.StubOutWithMock(glance_store, "delete_from_backend")
        glance_store.delete_from_backend(uri, mox.IgnoreArg()).AndRaise(ex)
        self.mox.ReplayAll()
        scrub._scrub_image(id, [(id, '-', uri)])
        self.mox.VerifyAll()

    def test_scrubber_exits(self):
        # Checks that the Scrubber exits when it is not able to fetch jobs
        # from the queue
        scrub_jobs = scrubber.ScrubDBQueue.get_all_locations
        scrub_jobs = mock.MagicMock()
        scrub_jobs.side_effect = exception.NotFound
        scrub = scrubber.Scrubber(glance_store)
        self.assertRaises(exception.FailedToGetScrubberJobs,
                          scrub._get_delete_jobs)


class TestScrubDBQueue(test_utils.BaseTestCase):

    def setUp(self):
        super(TestScrubDBQueue, self).setUp()

    def _create_image_list(self, count):
        images = []
        for x in range(count):
            images.append({'id': x})
        return images

    def test_get_all_images(self):
        scrub_queue = scrubber.ScrubDBQueue()
        images = self._create_image_list(15)
        image_pager = ImagePager(images)

        def make_get_images_detailed(pager):
            def mock_get_images_detailed(ctx, filters, marker=None,
                                         limit=None):
                return pager()
            return mock_get_images_detailed

        with patch.object(db_api, 'image_get_all') as (
                _mock_get_images_detailed):
            _mock_get_images_detailed.side_effect = (
                make_get_images_detailed(image_pager))
            actual = list(scrub_queue._get_all_images())
        self.assertEqual(images, actual)

    def test_get_all_images_paged(self):
        scrub_queue = scrubber.ScrubDBQueue()
        images = self._create_image_list(15)
        image_pager = ImagePager(images, page_size=4)

        def make_get_images_detailed(pager):
            def mock_get_images_detailed(ctx, filters, marker=None,
                                         limit=None):
                return pager()
            return mock_get_images_detailed

        with patch.object(db_api, 'image_get_all') as (
                _mock_get_images_detailed):
            _mock_get_images_detailed.side_effect = (
                make_get_images_detailed(image_pager))
            actual = list(scrub_queue._get_all_images())
        self.assertEqual(images, actual)


class ImagePager(object):
    def __init__(self, images, page_size=0):
        image_count = len(images)
        if page_size == 0 or page_size > image_count:
            page_size = image_count
        self.image_batches = []
        start = 0
        l = len(images)
        while start < l:
            self.image_batches.append(images[start: start + page_size])
            start += page_size
            if (l - start) < page_size:
                page_size = l - start

    def __call__(self):
        if len(self.image_batches) == 0:
            return []
        else:
            return self.image_batches.pop(0)

glance-16.0.0/glance/tests/unit/utils.py

# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cryptography import exceptions as crypto_exception
import glance_store as store
import mock
from oslo_config import cfg
from six.moves import urllib

from glance.common import exception
from glance.common import store_utils
from glance.common import wsgi
import glance.context
import glance.db.simple.api as simple_db


CONF = cfg.CONF

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
UUID2 = '971ec09a-8067-4bc8-a91f-ae3557f1c4c7'

TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'

USER1 = '54492ba0-f4df-4e4e-be62-27f4d76b29cf'
USER2 = '0b3b3006-cb76-4517-ae32-51397e22c754'
USER3 = '2hss8dkl-d8jh-88yd-uhs9-879sdjsd8skd'

BASE_URI = 'http://storeurl.com/container'


def sort_url_by_qs_keys(url):
    # NOTE(kragniz): this only sorts the keys of the query string of a url.
    # For example, an input of '/v2/tasks?sort_key=id&sort_dir=asc&limit=10'
    # returns '/v2/tasks?limit=10&sort_dir=asc&sort_key=id'. This is to
    # prevent non-deterministic ordering of the query string causing
    # problems with unit tests.
    parsed = urllib.parse.urlparse(url)
    queries = urllib.parse.parse_qsl(parsed.query, True)
    sorted_query = sorted(queries, key=lambda x: x[0])
    encoded_sorted_query = urllib.parse.urlencode(sorted_query, True)
    url_parts = (parsed.scheme, parsed.netloc, parsed.path,
                 parsed.params, encoded_sorted_query, parsed.fragment)
    return urllib.parse.urlunparse(url_parts)


def get_fake_request(path='', method='POST', is_admin=False, user=USER1,
                     roles=None, tenant=TENANT1):
    if roles is None:
        roles = ['member']
    req = wsgi.Request.blank(path)
    req.method = method
    kwargs = {
        'user': user,
        'tenant': tenant,
        'roles': roles,
        'is_admin': is_admin,
    }
    req.context = glance.context.RequestContext(**kwargs)
    return req


def fake_get_size_from_backend(uri, context=None):
    return 1


def fake_get_verifier(context, img_signature_certificate_uuid,
                      img_signature_hash_method, img_signature,
                      img_signature_key_type):
    verifier = mock.Mock()
    if (img_signature is not None and img_signature == 'VALID'):
        verifier.verify.return_value = None
    else:
        ex = crypto_exception.InvalidSignature()
        verifier.verify.side_effect = ex
    return verifier


def get_fake_context(user=USER1, tenant=TENANT1, roles=None, is_admin=False):
    if roles is None:
        roles = ['member']
    kwargs = {
        'user': user,
        'tenant': tenant,
        'roles': roles,
        'is_admin': is_admin,
    }
    context = glance.context.RequestContext(**kwargs)
    return context


class FakeDB(object):

    def __init__(self, initialize=True):
        self.reset()
        if initialize:
            self.init_db()

    @staticmethod
    def init_db():
        images = [
            {'id': UUID1, 'owner': TENANT1, 'status': 'queued',
             'locations': [{'url': '%s/%s' % (BASE_URI, UUID1),
                            'metadata': {}, 'status': 'queued'}],
             'disk_format': 'raw', 'container_format': 'bare'},
            {'id': UUID2, 'owner': TENANT1, 'status': 'queued',
             'disk_format': 'raw', 'container_format': 'bare'},
        ]
        [simple_db.image_create(None, image) for image in images]

        members = [
            {'image_id': UUID1, 'member': TENANT1, 'can_share': True},
            {'image_id': UUID1, 'member': TENANT2, 'can_share': False},
        ]
        [simple_db.image_member_create(None, member) for member in members]

        simple_db.image_tag_set_all(None, UUID1, ['ping', 'pong'])

    @staticmethod
    def reset():
        simple_db.reset()

    def __getattr__(self, key):
        return getattr(simple_db, key)


class FakeStoreUtils(object):
    def __init__(self, store_api):
        self.store_api = store_api

    def safe_delete_from_backend(self, context, id, location):
        try:
            del self.store_api.data[location['url']]
        except KeyError:
            pass

    def schedule_delayed_delete_from_backend(self, context, id, location):
        pass

    def delete_image_location_from_backend(self, context, image_id,
                                           location):
        if CONF.delayed_delete:
            self.schedule_delayed_delete_from_backend(context, image_id,
                                                      location)
        else:
            self.safe_delete_from_backend(context, image_id, location)

    def validate_external_location(self, uri):
        if uri and urllib.parse.urlparse(uri).scheme:
            return store_utils.validate_external_location(uri)
        else:
            return True


class FakeStoreAPI(object):
    def __init__(self, store_metadata=None):
        self.data = {
            '%s/%s' % (BASE_URI, UUID1): ('XXX', 3),
            '%s/fake_location' % (BASE_URI): ('YYY', 3)
        }
        self.acls = {}
        if store_metadata is None:
            self.store_metadata = {}
        else:
            self.store_metadata = store_metadata

    def create_stores(self):
        pass

    def set_acls(self, uri, public=False, read_tenants=None,
                 write_tenants=None, context=None):
        if read_tenants is None:
            read_tenants = []
        if write_tenants is None:
            write_tenants = []
        self.acls[uri] = {
            'public': public,
            'read': read_tenants,
            'write': write_tenants,
        }

    def get_from_backend(self, location, offset=0, chunk_size=None,
                         context=None):
        try:
            scheme = location[:location.find('/') - 1]
            if scheme == 'unknown':
                raise store.UnknownScheme(scheme=scheme)
            return self.data[location]
        except KeyError:
            raise store.NotFound(image=location)

    def get_size_from_backend(self, location, context=None):
        return self.get_from_backend(location, context=context)[1]

    def add_to_backend(self, conf, image_id, data, size, scheme=None,
                       context=None, verifier=None):
        store_max_size = 7
current_store_size = 2 for location in self.data.keys(): if image_id in location: raise exception.Duplicate() if not size: # 'data' is a string wrapped in a LimitingReader|CooperativeReader # pipeline, so peek under the hood of those objects to get at the # string itself. size = len(data.data.fd) if (current_store_size + size) > store_max_size: raise exception.StorageFull() if context.user == USER2: raise exception.Forbidden() if context.user == USER3: raise exception.StorageWriteDenied() self.data[image_id] = (data, size) checksum = 'Z' return (image_id, size, checksum, self.store_metadata) def check_location_metadata(self, val, key=''): store.check_location_metadata(val) def delete_from_backend(self, uri, context=None): pass class FakePolicyEnforcer(object): def __init__(self, *_args, **kwargs): self.rules = {} def enforce(self, _ctxt, action, target=None, **kwargs): """Raise Forbidden if a rule for given action is set to false.""" if self.rules.get(action) is False: raise exception.Forbidden() def set_rules(self, rules): self.rules = rules class FakeNotifier(object): def __init__(self, *_args, **kwargs): self.log = [] def _notify(self, event_type, payload, level): log = { 'notification_type': level, 'event_type': event_type, 'payload': payload } self.log.append(log) def warn(self, event_type, payload): self._notify(event_type, payload, 'WARN') def info(self, event_type, payload): self._notify(event_type, payload, 'INFO') def error(self, event_type, payload): self._notify(event_type, payload, 'ERROR') def debug(self, event_type, payload): self._notify(event_type, payload, 'DEBUG') def critical(self, event_type, payload): self._notify(event_type, payload, 'CRITICAL') def get_logs(self): return self.log class FakeGateway(object): def __init__(self, image_factory=None, image_member_factory=None, image_repo=None, task_factory=None, task_repo=None): self.image_factory = image_factory self.image_member_factory = image_member_factory self.image_repo = image_repo 
self.task_factory = task_factory self.task_repo = task_repo def get_image_factory(self, context): return self.image_factory def get_image_member_factory(self, context): return self.image_member_factory def get_repo(self, context): return self.image_repo def get_task_factory(self, context): return self.task_factory def get_task_repo(self, context): return self.task_repo class FakeTask(object): def __init__(self, task_id, type=None, status=None): self.task_id = task_id self.type = type self.message = None self.input = None self._status = status self._executor = None def success(self, result): self.result = result self._status = 'success' def fail(self, message): self.message = message self._status = 'failure' glance-16.0.0/glance/tests/unit/image_cache/0000775000175100017510000000000013245511661020636 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/image_cache/__init__.py0000666000175100017510000000000013245511421022731 0ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/image_cache/drivers/0000775000175100017510000000000013245511661022314 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/image_cache/drivers/test_sqlite.py0000666000175100017510000000230413245511421025221 0ustar zuulzuul00000000000000# Copyright (c) 2017 Huawei Technologies Co., Ltd. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests for the sqlite image_cache driver. 
""" import os import ddt import mock from glance.image_cache.drivers import sqlite from glance.tests import utils @ddt.ddt class TestSqlite(utils.BaseTestCase): @ddt.data(True, False) def test_delete_cached_file(self, throw_not_exists): with mock.patch.object(os, 'unlink') as mock_unlink: if throw_not_exists: mock_unlink.side_effect = OSError((2, 'File not found')) # Should not raise an exception in all cases sqlite.delete_cached_file('/tmp/dummy_file') glance-16.0.0/glance/tests/unit/image_cache/drivers/__init__.py0000666000175100017510000000000013245511421024407 0ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/test_policy.py0000666000175100017510000006063513245511421021347 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections import os.path import mock import oslo_config.cfg import glance.api.policy from glance.common import exception import glance.context from glance.tests.unit import base import glance.tests.unit.utils as unit_test_utils from glance.tests import utils as test_utils UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d' class IterableMock(mock.Mock, collections.Iterable): def __iter__(self): while False: yield None class ImageRepoStub(object): def get(self, *args, **kwargs): return 'image_from_get' def save(self, *args, **kwargs): return 'image_from_save' def add(self, *args, **kwargs): return 'image_from_add' def list(self, *args, **kwargs): return ['image_from_list_0', 'image_from_list_1'] class ImageStub(object): def __init__(self, image_id=None, visibility='private', container_format='bear', disk_format='raw', status='active', extra_properties=None): if extra_properties is None: extra_properties = {} self.image_id = image_id self.visibility = visibility self.container_format = container_format self.disk_format = disk_format self.status = status self.extra_properties = extra_properties self.checksum = 'c2e5db72bd7fd153f53ede5da5a06de3' self.created_at = '2013-09-28T15:27:36Z' self.updated_at = '2013-09-28T15:27:37Z' self.locations = [] self.min_disk = 0 self.min_ram = 0 self.name = 'image_name' self.owner = 'tenant1' self.protected = False self.size = 0 self.virtual_size = 0 self.tags = [] def delete(self): self.status = 'deleted' class ImageFactoryStub(object): def new_image(self, image_id=None, name=None, visibility='private', min_disk=0, min_ram=0, protected=False, owner=None, disk_format=None, container_format=None, extra_properties=None, tags=None, **other_args): self.visibility = visibility return 'new_image' class MemberRepoStub(object): image = None def add(self, image_member): image_member.output = 'member_repo_add' def get(self, *args, **kwargs): return 'member_repo_get' def save(self, image_member, from_state=None): image_member.output = 
'member_repo_save' def list(self, *args, **kwargs): return 'member_repo_list' def remove(self, image_member): image_member.output = 'member_repo_remove' class ImageMembershipStub(object): def __init__(self, output=None): self.output = output class TaskRepoStub(object): def get(self, *args, **kwargs): return 'task_from_get' def add(self, *args, **kwargs): return 'task_from_add' def list(self, *args, **kwargs): return ['task_from_list_0', 'task_from_list_1'] class TaskStub(object): def __init__(self, task_id): self.task_id = task_id self.status = 'pending' def run(self, executor): self.status = 'processing' class TaskFactoryStub(object): def new_task(self, *args): return 'new_task' class TestPolicyEnforcer(base.IsolatedUnitTest): def test_policy_file_default_rules_default_location(self): enforcer = glance.api.policy.Enforcer() context = glance.context.RequestContext(roles=[]) enforcer.enforce(context, 'get_image', {}) def test_policy_file_custom_rules_default_location(self): rules = {"get_image": '!'} self.set_policy_rules(rules) enforcer = glance.api.policy.Enforcer() context = glance.context.RequestContext(roles=[]) self.assertRaises(exception.Forbidden, enforcer.enforce, context, 'get_image', {}) def test_policy_file_custom_location(self): self.config(policy_file=os.path.join(self.test_dir, 'gobble.gobble'), group='oslo_policy') rules = {"get_image": '!'} self.set_policy_rules(rules) enforcer = glance.api.policy.Enforcer() context = glance.context.RequestContext(roles=[]) self.assertRaises(exception.Forbidden, enforcer.enforce, context, 'get_image', {}) def test_policy_file_check(self): self.config(policy_file=os.path.join(self.test_dir, 'gobble.gobble'), group='oslo_policy') rules = {"get_image": '!'} self.set_policy_rules(rules) enforcer = glance.api.policy.Enforcer() context = glance.context.RequestContext(roles=[]) self.assertEqual(False, enforcer.check(context, 'get_image', {})) def test_policy_file_get_image_default_everybody(self): rules = {"default": ''} 
self.set_policy_rules(rules) enforcer = glance.api.policy.Enforcer() context = glance.context.RequestContext(roles=[]) self.assertEqual(True, enforcer.check(context, 'get_image', {})) def test_policy_file_get_image_default_nobody(self): rules = {"default": '!'} self.set_policy_rules(rules) enforcer = glance.api.policy.Enforcer() context = glance.context.RequestContext(roles=[]) self.assertRaises(exception.Forbidden, enforcer.enforce, context, 'get_image', {}) class TestPolicyEnforcerNoFile(base.IsolatedUnitTest): def test_policy_file_specified_but_not_found(self): """Missing defined policy file should result in a default ruleset""" self.config(policy_file='gobble.gobble', group='oslo_policy') enforcer = glance.api.policy.Enforcer() context = glance.context.RequestContext(roles=[]) self.assertRaises(exception.Forbidden, enforcer.enforce, context, 'manage_image_cache', {}) admin_context = glance.context.RequestContext(roles=['admin']) enforcer.enforce(admin_context, 'manage_image_cache', {}) def test_policy_file_default_not_found(self): """Missing default policy file should result in a default ruleset""" def fake_find_file(self, name): return None self.stubs.Set(oslo_config.cfg.ConfigOpts, 'find_file', fake_find_file) enforcer = glance.api.policy.Enforcer() context = glance.context.RequestContext(roles=[]) self.assertRaises(exception.Forbidden, enforcer.enforce, context, 'manage_image_cache', {}) admin_context = glance.context.RequestContext(roles=['admin']) enforcer.enforce(admin_context, 'manage_image_cache', {}) class TestImagePolicy(test_utils.BaseTestCase): def setUp(self): self.image_stub = ImageStub(UUID1) self.image_repo_stub = ImageRepoStub() self.image_factory_stub = ImageFactoryStub() self.policy = mock.Mock() self.policy.enforce = mock.Mock() super(TestImagePolicy, self).setUp() def test_publicize_image_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) 
self.assertRaises(exception.Forbidden, setattr, image, 'visibility', 'public') self.assertEqual('private', image.visibility) self.policy.enforce.assert_called_once_with({}, "publicize_image", image.target) def test_publicize_image_allowed(self): image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) image.visibility = 'public' self.assertEqual('public', image.visibility) self.policy.enforce.assert_called_once_with({}, "publicize_image", image.target) def test_communitize_image_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) self.assertRaises(exception.Forbidden, setattr, image, 'visibility', 'community') self.assertEqual('private', image.visibility) self.policy.enforce.assert_called_once_with({}, "communitize_image", image.target) def test_communitize_image_allowed(self): image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) image.visibility = 'community' self.assertEqual('community', image.visibility) self.policy.enforce.assert_called_once_with({}, "communitize_image", image.target) def test_delete_image_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) self.assertRaises(exception.Forbidden, image.delete) self.assertEqual('active', image.status) self.policy.enforce.assert_called_once_with({}, "delete_image", image.target) def test_delete_image_allowed(self): image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) args = dict(image.target) image.delete() self.assertEqual('deleted', image.status) self.policy.enforce.assert_called_once_with({}, "delete_image", args) def test_get_image_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden image_target = IterableMock() with mock.patch.object(glance.api.policy, 'ImageTarget') as target: target.return_value = image_target image_repo = 
glance.api.policy.ImageRepoProxy(self.image_repo_stub, {}, self.policy) self.assertRaises(exception.Forbidden, image_repo.get, UUID1) self.policy.enforce.assert_called_once_with({}, "get_image", dict(image_target)) def test_get_image_allowed(self): image_target = IterableMock() with mock.patch.object(glance.api.policy, 'ImageTarget') as target: target.return_value = image_target image_repo = glance.api.policy.ImageRepoProxy(self.image_repo_stub, {}, self.policy) output = image_repo.get(UUID1) self.assertIsInstance(output, glance.api.policy.ImageProxy) self.assertEqual('image_from_get', output.image) self.policy.enforce.assert_called_once_with({}, "get_image", dict(image_target)) def test_get_images_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden image_repo = glance.api.policy.ImageRepoProxy(self.image_repo_stub, {}, self.policy) self.assertRaises(exception.Forbidden, image_repo.list) self.policy.enforce.assert_called_once_with({}, "get_images", {}) def test_get_images_allowed(self): image_repo = glance.api.policy.ImageRepoProxy(self.image_repo_stub, {}, self.policy) images = image_repo.list() for i, image in enumerate(images): self.assertIsInstance(image, glance.api.policy.ImageProxy) self.assertEqual('image_from_list_%d' % i, image.image) self.policy.enforce.assert_called_once_with({}, "get_images", {}) def test_modify_image_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden image_repo = glance.api.policy.ImageRepoProxy(self.image_repo_stub, {}, self.policy) image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) self.assertRaises(exception.Forbidden, image_repo.save, image) self.policy.enforce.assert_called_once_with({}, "modify_image", image.target) def test_modify_image_allowed(self): image_repo = glance.api.policy.ImageRepoProxy(self.image_repo_stub, {}, self.policy) image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) image_repo.save(image) 
self.policy.enforce.assert_called_once_with({}, "modify_image", image.target) def test_add_image_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden image_repo = glance.api.policy.ImageRepoProxy(self.image_repo_stub, {}, self.policy) image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) self.assertRaises(exception.Forbidden, image_repo.add, image) self.policy.enforce.assert_called_once_with({}, "add_image", image.target) def test_add_image_allowed(self): image_repo = glance.api.policy.ImageRepoProxy(self.image_repo_stub, {}, self.policy) image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) image_repo.add(image) self.policy.enforce.assert_called_once_with({}, "add_image", image.target) def test_new_image_visibility_public_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden image_factory = glance.api.policy.ImageFactoryProxy( self.image_factory_stub, {}, self.policy) self.assertRaises(exception.Forbidden, image_factory.new_image, visibility='public') self.policy.enforce.assert_called_once_with({}, "publicize_image", {}) def test_new_image_visibility_public_allowed(self): image_factory = glance.api.policy.ImageFactoryProxy( self.image_factory_stub, {}, self.policy) image_factory.new_image(visibility='public') self.policy.enforce.assert_called_once_with({}, "publicize_image", {}) def test_new_image_visibility_community_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden image_factory = glance.api.policy.ImageFactoryProxy( self.image_factory_stub, {}, self.policy) self.assertRaises(exception.Forbidden, image_factory.new_image, visibility='community') self.policy.enforce.assert_called_once_with({}, "communitize_image", {}) def test_new_image_visibility_community_allowed(self): image_factory = glance.api.policy.ImageFactoryProxy( self.image_factory_stub, {}, self.policy) image_factory.new_image(visibility='community') self.policy.enforce.assert_called_once_with({}, 
"communitize_image", {}) def test_image_get_data_policy_enforced_with_target(self): extra_properties = { 'test_key': 'test_4321' } image_stub = ImageStub(UUID1, extra_properties=extra_properties) with mock.patch('glance.api.policy.ImageTarget'): image = glance.api.policy.ImageProxy(image_stub, {}, self.policy) target = image.target self.policy.enforce.side_effect = exception.Forbidden self.assertRaises(exception.Forbidden, image.get_data) self.policy.enforce.assert_called_once_with({}, "download_image", target) class TestMemberPolicy(test_utils.BaseTestCase): def setUp(self): self.policy = mock.Mock() self.policy.enforce = mock.Mock() self.image_stub = ImageStub(UUID1) image = glance.api.policy.ImageProxy(self.image_stub, {}, self.policy) self.member_repo = glance.api.policy.ImageMemberRepoProxy( MemberRepoStub(), image, {}, self.policy) self.target = self.member_repo.target super(TestMemberPolicy, self).setUp() def test_add_member_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden self.assertRaises(exception.Forbidden, self.member_repo.add, '') self.policy.enforce.assert_called_once_with({}, "add_member", self.target) def test_add_member_allowed(self): image_member = ImageMembershipStub() self.member_repo.add(image_member) self.assertEqual('member_repo_add', image_member.output) self.policy.enforce.assert_called_once_with({}, "add_member", self.target) def test_get_member_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden self.assertRaises(exception.Forbidden, self.member_repo.get, '') self.policy.enforce.assert_called_once_with({}, "get_member", self.target) def test_get_member_allowed(self): output = self.member_repo.get('') self.assertEqual('member_repo_get', output) self.policy.enforce.assert_called_once_with({}, "get_member", self.target) def test_modify_member_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden self.assertRaises(exception.Forbidden, self.member_repo.save, '') 
self.policy.enforce.assert_called_once_with({}, "modify_member", self.target) def test_modify_member_allowed(self): image_member = ImageMembershipStub() self.member_repo.save(image_member) self.assertEqual('member_repo_save', image_member.output) self.policy.enforce.assert_called_once_with({}, "modify_member", self.target) def test_get_members_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden self.assertRaises(exception.Forbidden, self.member_repo.list, '') self.policy.enforce.assert_called_once_with({}, "get_members", self.target) def test_get_members_allowed(self): output = self.member_repo.list('') self.assertEqual('member_repo_list', output) self.policy.enforce.assert_called_once_with({}, "get_members", self.target) def test_delete_member_not_allowed(self): self.policy.enforce.side_effect = exception.Forbidden self.assertRaises(exception.Forbidden, self.member_repo.remove, '') self.policy.enforce.assert_called_once_with({}, "delete_member", self.target) def test_delete_member_allowed(self): image_member = ImageMembershipStub() self.member_repo.remove(image_member) self.assertEqual('member_repo_remove', image_member.output) self.policy.enforce.assert_called_once_with({}, "delete_member", self.target) class TestTaskPolicy(test_utils.BaseTestCase): def setUp(self): self.task_stub = TaskStub(UUID1) self.task_repo_stub = TaskRepoStub() self.task_factory_stub = TaskFactoryStub() self.policy = unit_test_utils.FakePolicyEnforcer() super(TestTaskPolicy, self).setUp() def test_get_task_not_allowed(self): rules = {"get_task": False} self.policy.set_rules(rules) task_repo = glance.api.policy.TaskRepoProxy( self.task_repo_stub, {}, self.policy ) self.assertRaises(exception.Forbidden, task_repo.get, UUID1) def test_get_task_allowed(self): rules = {"get_task": True} self.policy.set_rules(rules) task_repo = glance.api.policy.TaskRepoProxy( self.task_repo_stub, {}, self.policy ) task = task_repo.get(UUID1) self.assertIsInstance(task, 
glance.api.policy.TaskProxy) self.assertEqual('task_from_get', task.task) def test_get_tasks_not_allowed(self): rules = {"get_tasks": False} self.policy.set_rules(rules) task_repo = glance.api.policy.TaskStubRepoProxy( self.task_repo_stub, {}, self.policy ) self.assertRaises(exception.Forbidden, task_repo.list) def test_get_tasks_allowed(self): rules = {"get_task": True} self.policy.set_rules(rules) task_repo = glance.api.policy.TaskStubRepoProxy( self.task_repo_stub, {}, self.policy ) tasks = task_repo.list() for i, task in enumerate(tasks): self.assertIsInstance(task, glance.api.policy.TaskStubProxy) self.assertEqual('task_from_list_%d' % i, task.task_stub) def test_add_task_not_allowed(self): rules = {"add_task": False} self.policy.set_rules(rules) task_repo = glance.api.policy.TaskRepoProxy( self.task_repo_stub, {}, self.policy ) task = glance.api.policy.TaskProxy(self.task_stub, {}, self.policy) self.assertRaises(exception.Forbidden, task_repo.add, task) def test_add_task_allowed(self): rules = {"add_task": True} self.policy.set_rules(rules) task_repo = glance.api.policy.TaskRepoProxy( self.task_repo_stub, {}, self.policy ) task = glance.api.policy.TaskProxy(self.task_stub, {}, self.policy) task_repo.add(task) class TestContextPolicyEnforcer(base.IsolatedUnitTest): def _do_test_policy_influence_context_admin(self, policy_admin_role, context_role, context_is_admin, admin_expected): self.config(policy_file=os.path.join(self.test_dir, 'gobble.gobble'), group='oslo_policy') rules = {'context_is_admin': 'role:%s' % policy_admin_role} self.set_policy_rules(rules) enforcer = glance.api.policy.Enforcer() context = glance.context.RequestContext(roles=[context_role], is_admin=context_is_admin, policy_enforcer=enforcer) self.assertEqual(admin_expected, context.is_admin) def test_context_admin_policy_admin(self): self._do_test_policy_influence_context_admin('test_admin', 'test_admin', True, True) def test_context_nonadmin_policy_admin(self): 
        self._do_test_policy_influence_context_admin('test_admin',
                                                     'test_admin',
                                                     False,
                                                     True)

    def test_context_admin_policy_nonadmin(self):
        self._do_test_policy_influence_context_admin('test_admin',
                                                     'demo',
                                                     True,
                                                     True)

    def test_context_nonadmin_policy_nonadmin(self):
        self._do_test_policy_influence_context_admin('test_admin',
                                                     'demo',
                                                     False,
                                                     False)
glance-16.0.0/glance/tests/unit/test_auth.py0000666000175100017510000011423113245511421021001 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
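The task-policy tests in test_policy.py above rely on `FakePolicyEnforcer`, whose rule semantics are: an action is denied only when its rule is explicitly set to `False`, and unset rules default to "allowed". A minimal standalone sketch of that pattern (class names here are illustrative, not Glance's):

```python
class Forbidden(Exception):
    """Raised when a rule explicitly denies an action."""


class RuleEnforcer(object):
    # Mirrors the FakePolicyEnforcer behavior: deny only on an explicit
    # False rule; anything unset (or True) passes.
    def __init__(self):
        self.rules = {}

    def set_rules(self, rules):
        self.rules = rules

    def enforce(self, action):
        if self.rules.get(action) is False:
            raise Forbidden(action)


enforcer = RuleEnforcer()
enforcer.set_rules({"get_task": False, "add_task": True})
enforcer.enforce("add_task")        # allowed: rule is True
enforcer.enforce("unknown_action")  # allowed: no rule set
try:
    enforcer.enforce("get_task")
    denied = False
except Forbidden:
    denied = True
# denied is True: the explicit False rule blocks the action
```

This "default allow" behavior is what lets the tests toggle individual targets like `get_task` without specifying every other rule.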
from oslo_serialization import jsonutils from oslotest import moxstubout from six.moves import http_client as http import webob from glance.api import authorization from glance.common import auth from glance.common import exception from glance.common import timeutils import glance.domain from glance.tests.unit import utils as unittest_utils from glance.tests import utils TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df' TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81' UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d' UUID2 = 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc' class FakeResponse(object): """ Simple class that masks the inconsistency between webob.Response.status_int and httplib.Response.status """ def __init__(self, resp): self.resp = resp def __getitem__(self, key): return self.resp.headers.get(key) @property def status(self): return self.resp.status_int class V2Token(object): def __init__(self): self.tok = self.base_token def add_service_no_type(self): catalog = self.tok['access']['serviceCatalog'] service_type = {"name": "glance_no_type"} catalog.append(service_type) service = catalog[-1] service['endpoints'] = [self.base_endpoint] def add_service(self, s_type, region_list=None): if region_list is None: region_list = [] catalog = self.tok['access']['serviceCatalog'] service_type = {"type": s_type, "name": "glance"} catalog.append(service_type) service = catalog[-1] endpoint_list = [] if region_list == []: endpoint_list.append(self.base_endpoint) else: for region in region_list: endpoint = self.base_endpoint endpoint['region'] = region endpoint_list.append(endpoint) service['endpoints'] = endpoint_list @property def token(self): return self.tok @property def base_endpoint(self): return { "adminURL": "http://localhost:9292", "internalURL": "http://localhost:9292", "publicURL": "http://localhost:9292" } @property def base_token(self): return { "access": { "token": { "expires": "2010-11-23T16:40:53.321584", "id": "5c7f8799-2e54-43e4-851b-31f81871b6c", "tenant": 
{"id": "1", "name": "tenant-ok"} }, "serviceCatalog": [ ], "user": { "id": "2", "roles": [{ "tenantId": "1", "id": "1", "name": "Admin" }], "name": "joeadmin" } } } class TestKeystoneAuthPlugin(utils.BaseTestCase): """Test that the Keystone auth plugin works properly""" def setUp(self): super(TestKeystoneAuthPlugin, self).setUp() mox_fixture = self.useFixture(moxstubout.MoxStubout()) self.stubs = mox_fixture.stubs def test_get_plugin_from_strategy_keystone(self): strategy = auth.get_plugin_from_strategy('keystone') self.assertIsInstance(strategy, auth.KeystoneStrategy) self.assertTrue(strategy.configure_via_auth) def test_get_plugin_from_strategy_keystone_configure_via_auth_false(self): strategy = auth.get_plugin_from_strategy('keystone', configure_via_auth=False) self.assertIsInstance(strategy, auth.KeystoneStrategy) self.assertFalse(strategy.configure_via_auth) def test_required_creds(self): """ Test that plugin created without required credential pieces raises an exception """ bad_creds = [ {}, # missing everything { 'username': 'user1', 'strategy': 'keystone', 'password': 'pass' }, # missing auth_url { 'password': 'pass', 'strategy': 'keystone', 'auth_url': 'http://localhost/v1' }, # missing username { 'username': 'user1', 'strategy': 'keystone', 'auth_url': 'http://localhost/v1' }, # missing password { 'username': 'user1', 'password': 'pass', 'auth_url': 'http://localhost/v1' }, # missing strategy { 'username': 'user1', 'password': 'pass', 'strategy': 'keystone', 'auth_url': 'http://localhost/v2.0/' }, # v2.0: missing tenant { 'username': None, 'password': 'pass', 'auth_url': 'http://localhost/v2.0/' }, # None parameter { 'username': 'user1', 'password': 'pass', 'auth_url': 'http://localhost/v2.0/', 'tenant': None } # None tenant ] for creds in bad_creds: try: plugin = auth.KeystoneStrategy(creds) plugin.authenticate() self.fail("Failed to raise correct exception when supplying " "bad credentials: %r" % creds) except exception.MissingCredentialError: continue 
# Expected def test_invalid_auth_url_v1(self): """ Test that a 400 during authenticate raises exception.AuthBadRequest """ def fake_do_request(*args, **kwargs): resp = webob.Response() resp.status = http.BAD_REQUEST return FakeResponse(resp), "" self.stubs.Set(auth.KeystoneStrategy, '_do_request', fake_do_request) bad_creds = { 'username': 'user1', 'auth_url': 'http://localhost/badauthurl/', 'password': 'pass', 'strategy': 'keystone', 'region': 'RegionOne' } plugin = auth.KeystoneStrategy(bad_creds) self.assertRaises(exception.AuthBadRequest, plugin.authenticate) def test_invalid_auth_url_v2(self): """ Test that a 400 during authenticate raises exception.AuthBadRequest """ def fake_do_request(*args, **kwargs): resp = webob.Response() resp.status = http.BAD_REQUEST return FakeResponse(resp), "" self.stubs.Set(auth.KeystoneStrategy, '_do_request', fake_do_request) bad_creds = { 'username': 'user1', 'auth_url': 'http://localhost/badauthurl/v2.0/', 'password': 'pass', 'tenant': 'tenant1', 'strategy': 'keystone', 'region': 'RegionOne' } plugin = auth.KeystoneStrategy(bad_creds) self.assertRaises(exception.AuthBadRequest, plugin.authenticate) def test_v1_auth(self): """Test v1 auth code paths""" def fake_do_request(cls, url, method, headers=None, body=None): if url.find("2.0") != -1: self.fail("Invalid v1.0 token path (%s)" % url) headers = headers or {} resp = webob.Response() if (headers.get('X-Auth-User') != 'user1' or headers.get('X-Auth-Key') != 'pass'): resp.status = http.UNAUTHORIZED else: resp.status = http.OK resp.headers.update({"x-image-management-url": "example.com"}) return FakeResponse(resp), "" self.stubs.Set(auth.KeystoneStrategy, '_do_request', fake_do_request) unauthorized_creds = [ { 'username': 'wronguser', 'auth_url': 'http://localhost/badauthurl/', 'strategy': 'keystone', 'region': 'RegionOne', 'password': 'pass' }, # wrong username { 'username': 'user1', 'auth_url': 'http://localhost/badauthurl/', 'strategy': 'keystone', 'region': 'RegionOne', 
'password': 'badpass' }, # bad password... ] for creds in unauthorized_creds: try: plugin = auth.KeystoneStrategy(creds) plugin.authenticate() self.fail("Failed to raise NotAuthenticated when supplying " "bad credentials: %r" % creds) except exception.NotAuthenticated: continue # Expected no_strategy_creds = { 'username': 'user1', 'auth_url': 'http://localhost/redirect/', 'password': 'pass', 'region': 'RegionOne' } try: plugin = auth.KeystoneStrategy(no_strategy_creds) plugin.authenticate() self.fail("Failed to raise MissingCredentialError when " "supplying no strategy: %r" % no_strategy_creds) except exception.MissingCredentialError: pass # Expected good_creds = [ { 'username': 'user1', 'auth_url': 'http://localhost/redirect/', 'password': 'pass', 'strategy': 'keystone', 'region': 'RegionOne' } ] for creds in good_creds: plugin = auth.KeystoneStrategy(creds) self.assertIsNone(plugin.authenticate()) self.assertEqual("example.com", plugin.management_url) # Assert it does not update management_url via auth response for creds in good_creds: plugin = auth.KeystoneStrategy(creds, configure_via_auth=False) self.assertIsNone(plugin.authenticate()) self.assertIsNone(plugin.management_url) def test_v2_auth(self): """Test v2 auth code paths""" mock_token = None def fake_do_request(cls, url, method, headers=None, body=None): if (not url.rstrip('/').endswith('v2.0/tokens') or url.count("2.0") != 1): self.fail("Invalid v2.0 token path (%s)" % url) creds = jsonutils.loads(body)['auth'] username = creds['passwordCredentials']['username'] password = creds['passwordCredentials']['password'] tenant = creds['tenantName'] resp = webob.Response() if (username != 'user1' or password != 'pass' or tenant != 'tenant-ok'): resp.status = http.UNAUTHORIZED else: resp.status = http.OK body = mock_token.token return FakeResponse(resp), jsonutils.dumps(body) mock_token = V2Token() mock_token.add_service('image', ['RegionOne']) self.stubs.Set(auth.KeystoneStrategy, '_do_request', fake_do_request) 
unauthorized_creds = [ { 'username': 'wronguser', 'auth_url': 'http://localhost/v2.0', 'password': 'pass', 'tenant': 'tenant-ok', 'strategy': 'keystone', 'region': 'RegionOne' }, # wrong username { 'username': 'user1', 'auth_url': 'http://localhost/v2.0', 'password': 'badpass', 'tenant': 'tenant-ok', 'strategy': 'keystone', 'region': 'RegionOne' }, # bad password... { 'username': 'user1', 'auth_url': 'http://localhost/v2.0', 'password': 'pass', 'tenant': 'carterhayes', 'strategy': 'keystone', 'region': 'RegionOne' }, # bad tenant... ] for creds in unauthorized_creds: try: plugin = auth.KeystoneStrategy(creds) plugin.authenticate() self.fail("Failed to raise NotAuthenticated when supplying " "bad credentials: %r" % creds) except exception.NotAuthenticated: continue # Expected no_region_creds = { 'username': 'user1', 'tenant': 'tenant-ok', 'auth_url': 'http://localhost/redirect/v2.0/', 'password': 'pass', 'strategy': 'keystone' } plugin = auth.KeystoneStrategy(no_region_creds) self.assertIsNone(plugin.authenticate()) self.assertEqual('http://localhost:9292', plugin.management_url) # Add another image service, with a different region mock_token.add_service('image', ['RegionTwo']) try: plugin = auth.KeystoneStrategy(no_region_creds) plugin.authenticate() self.fail("Failed to raise RegionAmbiguity when no region present " "and multiple regions exist: %r" % no_region_creds) except exception.RegionAmbiguity: pass # Expected wrong_region_creds = { 'username': 'user1', 'tenant': 'tenant-ok', 'auth_url': 'http://localhost/redirect/v2.0/', 'password': 'pass', 'strategy': 'keystone', 'region': 'NonExistentRegion' } try: plugin = auth.KeystoneStrategy(wrong_region_creds) plugin.authenticate() self.fail("Failed to raise NoServiceEndpoint when supplying " "wrong region: %r" % wrong_region_creds) except exception.NoServiceEndpoint: pass # Expected no_strategy_creds = { 'username': 'user1', 'tenant': 'tenant-ok', 'auth_url': 'http://localhost/redirect/v2.0/', 'password': 'pass', 
'region': 'RegionOne' } try: plugin = auth.KeystoneStrategy(no_strategy_creds) plugin.authenticate() self.fail("Failed to raise MissingCredentialError when " "supplying no strategy: %r" % no_strategy_creds) except exception.MissingCredentialError: pass # Expected bad_strategy_creds = { 'username': 'user1', 'tenant': 'tenant-ok', 'auth_url': 'http://localhost/redirect/v2.0/', 'password': 'pass', 'region': 'RegionOne', 'strategy': 'keypebble' } try: plugin = auth.KeystoneStrategy(bad_strategy_creds) plugin.authenticate() self.fail("Failed to raise BadAuthStrategy when supplying " "bad auth strategy: %r" % bad_strategy_creds) except exception.BadAuthStrategy: pass # Expected mock_token = V2Token() mock_token.add_service('image', ['RegionOne', 'RegionTwo']) good_creds = [ { 'username': 'user1', 'auth_url': 'http://localhost/v2.0/', 'password': 'pass', 'tenant': 'tenant-ok', 'strategy': 'keystone', 'region': 'RegionOne' }, # auth_url with trailing '/' { 'username': 'user1', 'auth_url': 'http://localhost/v2.0', 'password': 'pass', 'tenant': 'tenant-ok', 'strategy': 'keystone', 'region': 'RegionOne' }, # auth_url without trailing '/' { 'username': 'user1', 'auth_url': 'http://localhost/v2.0', 'password': 'pass', 'tenant': 'tenant-ok', 'strategy': 'keystone', 'region': 'RegionTwo' } # Second region ] for creds in good_creds: plugin = auth.KeystoneStrategy(creds) self.assertIsNone(plugin.authenticate()) self.assertEqual('http://localhost:9292', plugin.management_url) ambiguous_region_creds = { 'username': 'user1', 'auth_url': 'http://localhost/v2.0/', 'password': 'pass', 'tenant': 'tenant-ok', 'strategy': 'keystone', 'region': 'RegionOne' } mock_token = V2Token() # Add two identical services mock_token.add_service('image', ['RegionOne']) mock_token.add_service('image', ['RegionOne']) try: plugin = auth.KeystoneStrategy(ambiguous_region_creds) plugin.authenticate() self.fail("Failed to raise RegionAmbiguity when " "non-unique regions exist: %r" % ambiguous_region_creds) 
except exception.RegionAmbiguity: pass mock_token = V2Token() mock_token.add_service('bad-image', ['RegionOne']) good_creds = { 'username': 'user1', 'auth_url': 'http://localhost/v2.0/', 'password': 'pass', 'tenant': 'tenant-ok', 'strategy': 'keystone', 'region': 'RegionOne' } try: plugin = auth.KeystoneStrategy(good_creds) plugin.authenticate() self.fail("Failed to raise NoServiceEndpoint when bad service " "type encountered") except exception.NoServiceEndpoint: pass mock_token = V2Token() mock_token.add_service_no_type() try: plugin = auth.KeystoneStrategy(good_creds) plugin.authenticate() self.fail("Failed to raise NoServiceEndpoint when bad service " "type encountered") except exception.NoServiceEndpoint: pass try: plugin = auth.KeystoneStrategy(good_creds, configure_via_auth=False) plugin.authenticate() except exception.NoServiceEndpoint: self.fail("NoServiceEndpoint was raised when authenticate " "should not check for endpoint.") class TestEndpoints(utils.BaseTestCase): def setUp(self): super(TestEndpoints, self).setUp() self.service_catalog = [ { 'endpoint_links': [], 'endpoints': [ { 'adminURL': 'http://localhost:8080/', 'region': 'RegionOne', 'internalURL': 'http://internalURL/', 'publicURL': 'http://publicURL/', }, ], 'type': 'object-store', 'name': 'Object Storage Service', } ] def test_get_endpoint_with_custom_server_type(self): endpoint = auth.get_endpoint(self.service_catalog, service_type='object-store') self.assertEqual('http://publicURL/', endpoint) def test_get_endpoint_with_custom_endpoint_type(self): endpoint = auth.get_endpoint(self.service_catalog, service_type='object-store', endpoint_type='internalURL') self.assertEqual('http://internalURL/', endpoint) def test_get_endpoint_raises_with_invalid_service_type(self): self.assertRaises(exception.NoServiceEndpoint, auth.get_endpoint, self.service_catalog, service_type='foo') def test_get_endpoint_raises_with_invalid_endpoint_type(self): self.assertRaises(exception.NoServiceEndpoint, 
auth.get_endpoint, self.service_catalog, service_type='object-store', endpoint_type='foo') def test_get_endpoint_raises_with_invalid_endpoint_region(self): self.assertRaises(exception.NoServiceEndpoint, auth.get_endpoint, self.service_catalog, service_type='object-store', endpoint_region='foo', endpoint_type='internalURL') class TestImageMutability(utils.BaseTestCase): def setUp(self): super(TestImageMutability, self).setUp() self.image_factory = glance.domain.ImageFactory() def _is_mutable(self, tenant, owner, is_admin=False): context = glance.context.RequestContext(tenant=tenant, is_admin=is_admin) image = self.image_factory.new_image(owner=owner) return authorization.is_image_mutable(context, image) def test_admin_everything_mutable(self): self.assertTrue(self._is_mutable(None, None, is_admin=True)) self.assertTrue(self._is_mutable(None, TENANT1, is_admin=True)) self.assertTrue(self._is_mutable(TENANT1, None, is_admin=True)) self.assertTrue(self._is_mutable(TENANT1, TENANT1, is_admin=True)) self.assertTrue(self._is_mutable(TENANT1, TENANT2, is_admin=True)) def test_no_tenant_nothing_mutable(self): self.assertFalse(self._is_mutable(None, None)) self.assertFalse(self._is_mutable(None, TENANT1)) def test_regular_user(self): self.assertFalse(self._is_mutable(TENANT1, None)) self.assertFalse(self._is_mutable(TENANT1, TENANT2)) self.assertTrue(self._is_mutable(TENANT1, TENANT1)) class TestImmutableImage(utils.BaseTestCase): def setUp(self): super(TestImmutableImage, self).setUp() image_factory = glance.domain.ImageFactory() self.context = glance.context.RequestContext(tenant=TENANT1) image = image_factory.new_image( image_id=UUID1, name='Marvin', owner=TENANT1, disk_format='raw', container_format='bare', extra_properties={'foo': 'bar'}, tags=['ping', 'pong'], ) self.image = authorization.ImmutableImageProxy(image, self.context) def _test_change(self, attr, value): self.assertRaises(exception.Forbidden, setattr, self.image, attr, value) 
self.assertRaises(exception.Forbidden, delattr, self.image, attr) def test_change_id(self): self._test_change('image_id', UUID2) def test_change_name(self): self._test_change('name', 'Freddie') def test_change_owner(self): self._test_change('owner', TENANT2) def test_change_min_disk(self): self._test_change('min_disk', 100) def test_change_min_ram(self): self._test_change('min_ram', 1024) def test_change_disk_format(self): self._test_change('disk_format', 'vhd') def test_change_container_format(self): self._test_change('container_format', 'ova') def test_change_visibility(self): self._test_change('visibility', 'public') def test_change_status(self): self._test_change('status', 'active') def test_change_created_at(self): self._test_change('created_at', timeutils.utcnow()) def test_change_updated_at(self): self._test_change('updated_at', timeutils.utcnow()) def test_change_locations(self): self._test_change('locations', ['http://a/b/c']) self.assertRaises(exception.Forbidden, self.image.locations.append, 'http://a/b/c') self.assertRaises(exception.Forbidden, self.image.locations.extend, ['http://a/b/c']) self.assertRaises(exception.Forbidden, self.image.locations.insert, 'foo') self.assertRaises(exception.Forbidden, self.image.locations.pop) self.assertRaises(exception.Forbidden, self.image.locations.remove, 'foo') self.assertRaises(exception.Forbidden, self.image.locations.reverse) self.assertRaises(exception.Forbidden, self.image.locations.sort) self.assertRaises(exception.Forbidden, self.image.locations.__delitem__, 0) self.assertRaises(exception.Forbidden, self.image.locations.__delslice__, 0, 2) self.assertRaises(exception.Forbidden, self.image.locations.__setitem__, 0, 'foo') self.assertRaises(exception.Forbidden, self.image.locations.__setslice__, 0, 2, ['foo', 'bar']) self.assertRaises(exception.Forbidden, self.image.locations.__iadd__, 'foo') self.assertRaises(exception.Forbidden, self.image.locations.__imul__, 2) def test_change_size(self): 
self._test_change('size', 32) def test_change_tags(self): self.assertRaises(exception.Forbidden, delattr, self.image, 'tags') self.assertRaises(exception.Forbidden, setattr, self.image, 'tags', ['king', 'kong']) self.assertRaises(exception.Forbidden, self.image.tags.pop) self.assertRaises(exception.Forbidden, self.image.tags.clear) self.assertRaises(exception.Forbidden, self.image.tags.add, 'king') self.assertRaises(exception.Forbidden, self.image.tags.remove, 'ping') self.assertRaises(exception.Forbidden, self.image.tags.update, set(['king', 'kong'])) self.assertRaises(exception.Forbidden, self.image.tags.intersection_update, set([])) self.assertRaises(exception.Forbidden, self.image.tags.difference_update, set([])) self.assertRaises(exception.Forbidden, self.image.tags.symmetric_difference_update, set([])) def test_change_properties(self): self.assertRaises(exception.Forbidden, delattr, self.image, 'extra_properties') self.assertRaises(exception.Forbidden, setattr, self.image, 'extra_properties', {}) self.assertRaises(exception.Forbidden, self.image.extra_properties.__delitem__, 'foo') self.assertRaises(exception.Forbidden, self.image.extra_properties.__setitem__, 'foo', 'b') self.assertRaises(exception.Forbidden, self.image.extra_properties.__setitem__, 'z', 'j') self.assertRaises(exception.Forbidden, self.image.extra_properties.pop) self.assertRaises(exception.Forbidden, self.image.extra_properties.popitem) self.assertRaises(exception.Forbidden, self.image.extra_properties.setdefault, 'p', 'j') self.assertRaises(exception.Forbidden, self.image.extra_properties.update, {}) def test_delete(self): self.assertRaises(exception.Forbidden, self.image.delete) def test_set_data(self): self.assertRaises(exception.Forbidden, self.image.set_data, 'blah', 4) def test_deactivate_image(self): self.assertRaises(exception.Forbidden, self.image.deactivate) def test_reactivate_image(self): self.assertRaises(exception.Forbidden, self.image.reactivate) def test_get_data(self): 
        class FakeImage(object):
            def get_data(self):
                return 'tiddlywinks'

        image = glance.api.authorization.ImmutableImageProxy(
            FakeImage(), self.context)
        self.assertEqual('tiddlywinks', image.get_data())


class TestImageFactoryProxy(utils.BaseTestCase):
    def setUp(self):
        super(TestImageFactoryProxy, self).setUp()
        factory = glance.domain.ImageFactory()
        self.context = glance.context.RequestContext(tenant=TENANT1)
        self.image_factory = authorization.ImageFactoryProxy(factory,
                                                             self.context)

    def test_default_owner_is_set(self):
        image = self.image_factory.new_image()
        self.assertEqual(TENANT1, image.owner)

    def test_wrong_owner_cannot_be_set(self):
        self.assertRaises(exception.Forbidden,
                          self.image_factory.new_image, owner=TENANT2)

    def test_cannot_set_owner_to_none(self):
        self.assertRaises(exception.Forbidden,
                          self.image_factory.new_image, owner=None)

    def test_admin_can_set_any_owner(self):
        self.context.is_admin = True
        image = self.image_factory.new_image(owner=TENANT2)
        self.assertEqual(TENANT2, image.owner)

    def test_admin_can_set_owner_to_none(self):
        self.context.is_admin = True
        image = self.image_factory.new_image(owner=None)
        self.assertIsNone(image.owner)

    def test_admin_still_gets_default_tenant(self):
        self.context.is_admin = True
        image = self.image_factory.new_image()
        self.assertEqual(TENANT1, image.owner)


class TestImageRepoProxy(utils.BaseTestCase):

    class ImageRepoStub(object):
        def __init__(self, fixtures):
            self.fixtures = fixtures

        def get(self, image_id):
            for f in self.fixtures:
                if f.image_id == image_id:
                    return f
            else:
                raise ValueError(image_id)

        def list(self, *args, **kwargs):
            return self.fixtures

    def setUp(self):
        super(TestImageRepoProxy, self).setUp()
        image_factory = glance.domain.ImageFactory()
        self.fixtures = [
            image_factory.new_image(owner=TENANT1),
            image_factory.new_image(owner=TENANT2, visibility='public'),
            image_factory.new_image(owner=TENANT2),
        ]
        self.context = glance.context.RequestContext(tenant=TENANT1)
        image_repo = self.ImageRepoStub(self.fixtures)
self.image_repo = authorization.ImageRepoProxy(image_repo, self.context) def test_get_mutable_image(self): image = self.image_repo.get(self.fixtures[0].image_id) self.assertEqual(image.image_id, self.fixtures[0].image_id) def test_get_immutable_image(self): image = self.image_repo.get(self.fixtures[1].image_id) self.assertRaises(exception.Forbidden, setattr, image, 'name', 'Vince') def test_list(self): images = self.image_repo.list() self.assertEqual(images[0].image_id, self.fixtures[0].image_id) self.assertRaises(exception.Forbidden, setattr, images[1], 'name', 'Wally') self.assertRaises(exception.Forbidden, setattr, images[2], 'name', 'Calvin') class TestImmutableTask(utils.BaseTestCase): def setUp(self): super(TestImmutableTask, self).setUp() task_factory = glance.domain.TaskFactory() self.context = glance.context.RequestContext(tenant=TENANT2) task_type = 'import' owner = TENANT2 task = task_factory.new_task(task_type, owner) self.task = authorization.ImmutableTaskProxy(task) def _test_change(self, attr, value): self.assertRaises( exception.Forbidden, setattr, self.task, attr, value ) self.assertRaises( exception.Forbidden, delattr, self.task, attr ) def test_change_id(self): self._test_change('task_id', UUID2) def test_change_type(self): self._test_change('type', 'fake') def test_change_status(self): self._test_change('status', 'success') def test_change_owner(self): self._test_change('owner', 'fake') def test_change_expires_at(self): self._test_change('expires_at', 'fake') def test_change_created_at(self): self._test_change('created_at', 'fake') def test_change_updated_at(self): self._test_change('updated_at', 'fake') def test_begin_processing(self): self.assertRaises( exception.Forbidden, self.task.begin_processing ) def test_succeed(self): self.assertRaises( exception.Forbidden, self.task.succeed, 'result' ) def test_fail(self): self.assertRaises( exception.Forbidden, self.task.fail, 'message' ) class TestImmutableTaskStub(utils.BaseTestCase): def 
setUp(self): super(TestImmutableTaskStub, self).setUp() task_factory = glance.domain.TaskFactory() self.context = glance.context.RequestContext(tenant=TENANT2) task_type = 'import' owner = TENANT2 task = task_factory.new_task(task_type, owner) self.task = authorization.ImmutableTaskStubProxy(task) def _test_change(self, attr, value): self.assertRaises( exception.Forbidden, setattr, self.task, attr, value ) self.assertRaises( exception.Forbidden, delattr, self.task, attr ) def test_change_id(self): self._test_change('task_id', UUID2) def test_change_type(self): self._test_change('type', 'fake') def test_change_status(self): self._test_change('status', 'success') def test_change_owner(self): self._test_change('owner', 'fake') def test_change_expires_at(self): self._test_change('expires_at', 'fake') def test_change_created_at(self): self._test_change('created_at', 'fake') def test_change_updated_at(self): self._test_change('updated_at', 'fake') class TestTaskFactoryProxy(utils.BaseTestCase): def setUp(self): super(TestTaskFactoryProxy, self).setUp() factory = glance.domain.TaskFactory() self.context = glance.context.RequestContext(tenant=TENANT1) self.context_owner_is_none = glance.context.RequestContext() self.task_factory = authorization.TaskFactoryProxy( factory, self.context ) self.task_type = 'import' self.task_input = '{"loc": "fake"}' self.owner = 'foo' self.request1 = unittest_utils.get_fake_request(tenant=TENANT1) self.request2 = unittest_utils.get_fake_request(tenant=TENANT2) def test_task_create_default_owner(self): owner = self.request1.context.owner task = self.task_factory.new_task(task_type=self.task_type, owner=owner) self.assertEqual(TENANT1, task.owner) def test_task_create_wrong_owner(self): self.assertRaises(exception.Forbidden, self.task_factory.new_task, task_type=self.task_type, task_input=self.task_input, owner=self.owner) def test_task_create_owner_as_None(self): self.assertRaises(exception.Forbidden, self.task_factory.new_task, 
task_type=self.task_type, task_input=self.task_input, owner=None) def test_task_create_admin_context_owner_as_None(self): self.context.is_admin = True self.assertRaises(exception.Forbidden, self.task_factory.new_task, task_type=self.task_type, task_input=self.task_input, owner=None) class TestTaskRepoProxy(utils.BaseTestCase): class TaskRepoStub(object): def __init__(self, fixtures): self.fixtures = fixtures def get(self, task_id): for f in self.fixtures: if f.task_id == task_id: return f else: raise ValueError(task_id) class TaskStubRepoStub(object): def __init__(self, fixtures): self.fixtures = fixtures def list(self, *args, **kwargs): return self.fixtures def setUp(self): super(TestTaskRepoProxy, self).setUp() task_factory = glance.domain.TaskFactory() task_type = 'import' owner = None self.fixtures = [ task_factory.new_task(task_type, owner), task_factory.new_task(task_type, owner), task_factory.new_task(task_type, owner), ] self.context = glance.context.RequestContext(tenant=TENANT1) task_repo = self.TaskRepoStub(self.fixtures) task_stub_repo = self.TaskStubRepoStub(self.fixtures) self.task_repo = authorization.TaskRepoProxy( task_repo, self.context ) self.task_stub_repo = authorization.TaskStubRepoProxy( task_stub_repo, self.context ) def test_get_mutable_task(self): task = self.task_repo.get(self.fixtures[0].task_id) self.assertEqual(task.task_id, self.fixtures[0].task_id) def test_get_immutable_task(self): task_id = self.fixtures[1].task_id task = self.task_repo.get(task_id) self.assertRaises(exception.Forbidden, setattr, task, 'input', 'foo') def test_list(self): tasks = self.task_stub_repo.list() self.assertEqual(tasks[0].task_id, self.fixtures[0].task_id) self.assertRaises(exception.Forbidden, setattr, tasks[1], 'owner', 'foo') self.assertRaises(exception.Forbidden, setattr, tasks[2], 'owner', 'foo') glance-16.0.0/glance/tests/unit/test_schema.py0000666000175100017510000001346113245511421021303 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack 
Foundation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from glance.common import exception import glance.schema from glance.tests import utils as test_utils class TestBasicSchema(test_utils.BaseTestCase): def setUp(self): super(TestBasicSchema, self).setUp() properties = { 'ham': {'type': 'string'}, 'eggs': {'type': 'string'}, } self.schema = glance.schema.Schema('basic', properties) def test_validate_passes(self): obj = {'ham': 'no', 'eggs': 'scrambled'} self.schema.validate(obj) # No exception raised def test_validate_fails_on_extra_properties(self): obj = {'ham': 'virginia', 'eggs': 'scrambled', 'bacon': 'crispy'} self.assertRaises(exception.InvalidObject, self.schema.validate, obj) def test_validate_fails_on_bad_type(self): obj = {'eggs': 2} self.assertRaises(exception.InvalidObject, self.schema.validate, obj) def test_filter_strips_extra_properties(self): obj = {'ham': 'virginia', 'eggs': 'scrambled', 'bacon': 'crispy'} filtered = self.schema.filter(obj) expected = {'ham': 'virginia', 'eggs': 'scrambled'} self.assertEqual(expected, filtered) def test_merge_properties(self): self.schema.merge_properties({'bacon': {'type': 'string'}}) expected = set(['ham', 'eggs', 'bacon']) actual = set(self.schema.raw()['properties'].keys()) self.assertEqual(expected, actual) def test_merge_conflicting_properties(self): conflicts = {'eggs': {'type': 'integer'}} self.assertRaises(exception.SchemaLoadError, self.schema.merge_properties, conflicts) def 
test_merge_conflicting_but_identical_properties(self): conflicts = {'ham': {'type': 'string'}} self.schema.merge_properties(conflicts) # no exception raised expected = set(['ham', 'eggs']) actual = set(self.schema.raw()['properties'].keys()) self.assertEqual(expected, actual) def test_raw_json_schema(self): expected = { 'name': 'basic', 'properties': { 'ham': {'type': 'string'}, 'eggs': {'type': 'string'}, }, 'additionalProperties': False, } self.assertEqual(expected, self.schema.raw()) class TestBasicSchemaLinks(test_utils.BaseTestCase): def setUp(self): super(TestBasicSchemaLinks, self).setUp() properties = { 'ham': {'type': 'string'}, 'eggs': {'type': 'string'}, } links = [ {'rel': 'up', 'href': '/menu'}, ] self.schema = glance.schema.Schema('basic', properties, links) def test_raw_json_schema(self): expected = { 'name': 'basic', 'properties': { 'ham': {'type': 'string'}, 'eggs': {'type': 'string'}, }, 'links': [ {'rel': 'up', 'href': '/menu'}, ], 'additionalProperties': False, } self.assertEqual(expected, self.schema.raw()) class TestPermissiveSchema(test_utils.BaseTestCase): def setUp(self): super(TestPermissiveSchema, self).setUp() properties = { 'ham': {'type': 'string'}, 'eggs': {'type': 'string'}, } self.schema = glance.schema.PermissiveSchema('permissive', properties) def test_validate_with_additional_properties_allowed(self): obj = {'ham': 'virginia', 'eggs': 'scrambled', 'bacon': 'crispy'} self.schema.validate(obj) # No exception raised def test_validate_rejects_non_string_extra_properties(self): obj = {'ham': 'virginia', 'eggs': 'scrambled', 'grits': 1000} self.assertRaises(exception.InvalidObject, self.schema.validate, obj) def test_filter_passes_extra_properties(self): obj = {'ham': 'virginia', 'eggs': 'scrambled', 'bacon': 'crispy'} filtered = self.schema.filter(obj) self.assertEqual(obj, filtered) def test_raw_json_schema(self): expected = { 'name': 'permissive', 'properties': { 'ham': {'type': 'string'}, 'eggs': {'type': 'string'}, }, 
'additionalProperties': {'type': 'string'}, } self.assertEqual(expected, self.schema.raw()) class TestCollectionSchema(test_utils.BaseTestCase): def test_raw_json_schema(self): item_properties = {'cheese': {'type': 'string'}} item_schema = glance.schema.Schema('mouse', item_properties) collection_schema = glance.schema.CollectionSchema('mice', item_schema) expected = { 'name': 'mice', 'properties': { 'mice': { 'type': 'array', 'items': item_schema.raw(), }, 'first': {'type': 'string'}, 'next': {'type': 'string'}, 'schema': {'type': 'string'}, }, 'links': [ {'rel': 'first', 'href': '{first}'}, {'rel': 'next', 'href': '{next}'}, {'rel': 'describedby', 'href': '{schema}'}, ], } self.assertEqual(expected, collection_schema.raw()) glance-16.0.0/glance/tests/unit/async/0000775000175100017510000000000013245511661017546 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/unit/async/test_async.py0000666000175100017510000000323013245511421022266 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock

import glance.async
import glance.tests.utils as test_utils


class TestTaskExecutor(test_utils.BaseTestCase):

    def setUp(self):
        super(TestTaskExecutor, self).setUp()
        self.context = mock.Mock()
        self.task_repo = mock.Mock()
        self.image_repo = mock.Mock()
        self.image_factory = mock.Mock()
        self.executor = glance.async.TaskExecutor(self.context,
                                                  self.task_repo,
                                                  self.image_repo,
                                                  self.image_factory)

    def test_begin_processing(self):
        # setup
        task_id = mock.ANY
        task_type = mock.ANY
        task = mock.Mock()

        with mock.patch.object(
                glance.async.TaskExecutor, '_run') as mock_run:
            self.task_repo.get.return_value = task
            self.executor.begin_processing(task_id)

        # assert the call
        mock_run.assert_called_once_with(task_id, task_type)
glance-16.0.0/glance/tests/unit/async/__init__.py0000666000175100017510000000000013245511421021641 0ustar zuulzuul00000000000000
glance-16.0.0/glance/tests/unit/async/flows/0000775000175100017510000000000013245511661020700 5ustar zuulzuul00000000000000
glance-16.0.0/glance/tests/unit/async/flows/plugins/0000775000175100017510000000000013245511661022361 5ustar zuulzuul00000000000000
glance-16.0.0/glance/tests/unit/async/flows/plugins/test_inject_image_metadata.py0000666000175100017510000001175313245511421030253 0ustar zuulzuul00000000000000
# Copyright 2018 NTT DATA, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os

import glance_store
from oslo_config import cfg

import glance.async.flows.plugins.inject_image_metadata as inject_metadata
from glance.common import utils
from glance import domain
from glance import gateway
from glance.tests.unit import utils as test_unit_utils
import glance.tests.utils as test_utils

CONF = cfg.CONF

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'


class TestInjectImageMetadataTask(test_utils.BaseTestCase):

    def setUp(self):
        super(TestInjectImageMetadataTask, self).setUp()

        glance_store.register_opts(CONF)
        self.config(default_store='file',
                    stores=['file', 'http'],
                    filesystem_store_datadir=self.test_dir,
                    group="glance_store")
        glance_store.create_stores(CONF)

        self.work_dir = os.path.join(self.test_dir, 'work_dir')
        utils.safe_mkdirs(self.work_dir)
        self.config(work_dir=self.work_dir, group='task')

        self.context = mock.MagicMock()
        self.img_repo = mock.MagicMock()
        self.task_repo = mock.MagicMock()
        self.image_id = mock.MagicMock()

        self.gateway = gateway.Gateway()
        self.task_factory = domain.TaskFactory()
        self.img_factory = self.gateway.get_image_factory(self.context)
        self.image = self.img_factory.new_image(image_id=UUID1,
                                                disk_format='qcow2',
                                                container_format='bare')

        task_input = {
            "import_from": "http://cloud.foo/image.qcow2",
            "import_from_format": "qcow2",
            "image_properties": {'disk_format': 'qcow2',
                                 'container_format': 'bare'}
        }
        task_ttl = CONF.task.task_time_to_live

        self.task_type = 'import'
        self.task = self.task_factory.new_task(self.task_type, TENANT1,
                                               task_time_to_live=task_ttl,
                                               task_input=task_input)

    def test_inject_image_metadata_using_non_admin_user(self):
        context = test_unit_utils.get_fake_context(roles='member')
        inject_image_metadata = inject_metadata._InjectMetadataProperties(
            context, self.task.task_id, self.task_type, self.img_repo,
            self.image_id)

        self.config(inject={"test": "abc"},
                    group='inject_metadata_properties')

        with mock.patch.object(self.img_repo, 'get') as get_mock:
            image = mock.MagicMock(image_id=self.image_id,
                                   extra_properties={"test": "abc"})
            get_mock.return_value = image

            with mock.patch.object(self.img_repo, 'save') as save_mock:
                inject_image_metadata.execute()
                get_mock.assert_called_once_with(self.image_id)
                save_mock.assert_called_once_with(image)
                self.assertEqual({"test": "abc"}, image.extra_properties)

    def test_inject_image_metadata_using_admin_user(self):
        context = test_unit_utils.get_fake_context(roles='admin')
        inject_image_metadata = inject_metadata._InjectMetadataProperties(
            context, self.task.task_id, self.task_type, self.img_repo,
            self.image_id)

        self.config(inject={"test": "abc"},
                    group='inject_metadata_properties')

        inject_image_metadata.execute()

        with mock.patch.object(self.img_repo, 'get') as get_mock:
            get_mock.assert_not_called()

        with mock.patch.object(self.img_repo, 'save') as save_mock:
            save_mock.assert_not_called()

    def test_inject_image_metadata_empty(self):
        context = test_unit_utils.get_fake_context(roles='member')
        inject_image_metadata = inject_metadata._InjectMetadataProperties(
            context, self.task.task_id, self.task_type, self.img_repo,
            self.image_id)

        self.config(inject={}, group='inject_metadata_properties')

        inject_image_metadata.execute()

        with mock.patch.object(self.img_repo, 'get') as get_mock:
            get_mock.assert_not_called()

        with mock.patch.object(self.img_repo, 'save') as save_mock:
            save_mock.assert_not_called()

glance-16.0.0/glance/tests/unit/async/flows/plugins/__init__.py
glance-16.0.0/glance/tests/unit/async/flows/test_ovf_process.py
# Copyright 2015 Intel Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os.path
import shutil
import tarfile
import tempfile

import mock

try:
    from defusedxml.cElementTree import ParseError
except ImportError:
    from defusedxml.ElementTree import ParseError

from glance.async.flows import ovf_process
import glance.tests.utils as test_utils
from oslo_config import cfg


class TestOvfProcessTask(test_utils.BaseTestCase):

    def setUp(self):
        super(TestOvfProcessTask, self).setUp()
        # The glance/tests/var dir containing sample ova packages used
        # by the tests in this class
        self.test_ova_dir = os.path.abspath(os.path.join(
            os.path.dirname(__file__), '../../../', 'var'))
        self.tempdir = tempfile.mkdtemp()
        self.config(work_dir=self.tempdir, group="task")

        # These are the properties that we will extract from the ovf
        # file contained in a ova package
        interested_properties = (
            '{\n'
            '    "cim_pasd": [\n'
            '        "InstructionSetExtensionName",\n'
            '        "ProcessorArchitecture"]\n'
            '}\n')
        self.config_file_name = os.path.join(self.tempdir,
                                             'ovf-metadata.json')
        with open(self.config_file_name, 'w') as config_file:
            config_file.write(interested_properties)

        self.image = mock.Mock()
        self.image.container_format = 'ova'
        self.image.context.is_admin = True

        self.img_repo = mock.Mock()
        self.img_repo.get.return_value = self.image

    def tearDown(self):
        if os.path.exists(self.tempdir):
            shutil.rmtree(self.tempdir)
        super(TestOvfProcessTask, self).tearDown()

    def _copy_ova_to_tmpdir(self, ova_name):
        # Copies an ova package to the tempdir from which
        # it will be read by the system-under-test
        shutil.copy(os.path.join(self.test_ova_dir, ova_name), self.tempdir)
        return os.path.join(self.tempdir, ova_name)

    @mock.patch.object(cfg.ConfigOpts, 'find_file')
    def test_ovf_process_success(self, mock_find_file):
        mock_find_file.return_value = self.config_file_name

        ova_file_path = self._copy_ova_to_tmpdir('testserver.ova')
        ova_uri = 'file://' + ova_file_path

        oprocess = ovf_process._OVF_Process('task_id', 'ovf_proc',
                                            self.img_repo)
        self.assertEqual(ova_uri, oprocess.execute('test_image_id', ova_uri))

        # Note that the extracted disk image is overwritten onto the input ova
        # file
        with open(ova_file_path, 'rb') as disk_image_file:
            content = disk_image_file.read()
        # b'ABCD' is the exact contents of the disk image file
        # testserver-disk1.vmdk contained in the testserver.ova package used
        # by this test
        self.assertEqual(b'ABCD', content)

        # 'DMTF:x86:VT-d' is the value in the testerver.ovf file in the
        # testserver.ova package
        self.image.extra_properties.update.assert_called_once_with(
            {'cim_pasd_InstructionSetExtensionName': 'DMTF:x86:VT-d'})
        self.assertEqual('bare', self.image.container_format)

    @mock.patch.object(cfg.ConfigOpts, 'find_file')
    def test_ovf_process_no_config_file(self, mock_find_file):
        # Mimics a Glance deployment without the ovf-metadata.json file
        mock_find_file.return_value = None

        ova_file_path = self._copy_ova_to_tmpdir('testserver.ova')
        ova_uri = 'file://' + ova_file_path

        oprocess = ovf_process._OVF_Process('task_id', 'ovf_proc',
                                            self.img_repo)
        self.assertEqual(ova_uri, oprocess.execute('test_image_id', ova_uri))

        # Note that the extracted disk image is overwritten onto the input
        # ova file.
        with open(ova_file_path, 'rb') as disk_image_file:
            content = disk_image_file.read()
        # b'ABCD' is the exact contents of the disk image file
        # testserver-disk1.vmdk contained in the testserver.ova package used
        # by this test
        self.assertEqual(b'ABCD', content)

        # No properties must be selected from the ovf file
        self.image.extra_properties.update.assert_called_once_with({})
        self.assertEqual('bare', self.image.container_format)

    @mock.patch.object(cfg.ConfigOpts, 'find_file')
    def test_ovf_process_not_admin(self, mock_find_file):
        mock_find_file.return_value = self.config_file_name

        ova_file_path = self._copy_ova_to_tmpdir('testserver.ova')
        ova_uri = 'file://' + ova_file_path

        self.image.context.is_admin = False

        oprocess = ovf_process._OVF_Process('task_id', 'ovf_proc',
                                            self.img_repo)
        self.assertRaises(RuntimeError, oprocess.execute,
                          'test_image_id', ova_uri)

    def test_extract_ova_not_tar(self):
        # testserver-not-tar.ova package is not in tar format
        ova_file_path = os.path.join(self.test_ova_dir,
                                     'testserver-not-tar.ova')
        iextractor = ovf_process.OVAImageExtractor()
        with open(ova_file_path, 'rb') as ova_file:
            self.assertRaises(tarfile.ReadError, iextractor.extract,
                              ova_file)

    def test_extract_ova_no_disk(self):
        # testserver-no-disk.ova package contains no disk image file
        ova_file_path = os.path.join(self.test_ova_dir,
                                     'testserver-no-disk.ova')
        iextractor = ovf_process.OVAImageExtractor()
        with open(ova_file_path, 'rb') as ova_file:
            self.assertRaises(KeyError, iextractor.extract, ova_file)

    def test_extract_ova_no_ovf(self):
        # testserver-no-ovf.ova package contains no ovf file
        ova_file_path = os.path.join(self.test_ova_dir,
                                     'testserver-no-ovf.ova')
        iextractor = ovf_process.OVAImageExtractor()
        with open(ova_file_path, 'rb') as ova_file:
            self.assertRaises(RuntimeError, iextractor.extract, ova_file)

    def test_extract_ova_bad_ovf(self):
        # testserver-bad-ovf.ova package has an ovf file that contains
        # invalid xml
        ova_file_path = os.path.join(self.test_ova_dir,
                                     'testserver-bad-ovf.ova')
        iextractor = ovf_process.OVAImageExtractor()
        with open(ova_file_path, 'rb') as ova_file:
            self.assertRaises(ParseError, iextractor._parse_OVF, ova_file)

glance-16.0.0/glance/tests/unit/async/flows/test_introspect.py
# Copyright 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json

import mock

import glance_store
from oslo_concurrency import processutils
from oslo_config import cfg

from glance.async.flows import introspect
from glance.async import utils as async_utils
from glance import domain
import glance.tests.utils as test_utils


CONF = cfg.CONF

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'


class TestImportTask(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImportTask, self).setUp()
        self.task_factory = domain.TaskFactory()
        task_input = {
            "import_from": "http://cloud.foo/image.qcow2",
            "import_from_format": "qcow2",
            "image_properties": mock.sentinel.image_properties
        }
        task_ttl = CONF.task.task_time_to_live

        self.task_type = 'import'
        self.task = self.task_factory.new_task(self.task_type, TENANT1,
                                               task_time_to_live=task_ttl,
                                               task_input=task_input)

        self.context = mock.Mock()
        self.img_repo = mock.Mock()
        self.task_repo = mock.Mock()
        self.img_factory = mock.Mock()

        glance_store.register_opts(CONF)
        self.config(default_store='file',
                    stores=['file', 'http'],
                    filesystem_store_datadir=self.test_dir,
                    group="glance_store")
        glance_store.create_stores(CONF)

    def test_introspect_success(self):
        image_create = introspect._Introspect(self.task.task_id,
                                              self.task_type,
                                              self.img_repo)

        self.task_repo.get.return_value = self.task
        image_id = mock.sentinel.image_id
        image = mock.MagicMock(image_id=image_id)
        self.img_repo.get.return_value = image

        with mock.patch.object(processutils, 'execute') as exc_mock:
            result = json.dumps({
                "virtual-size": 10737418240,
                "filename": "/tmp/image.qcow2",
                "cluster-size": 65536,
                "format": "qcow2",
                "actual-size": 373030912,
                "format-specific": {
                    "type": "qcow2",
                    "data": {
                        "compat": "0.10"
                    }
                },
                "dirty-flag": False
            })

            exc_mock.return_value = (result, None)
            image_create.execute(image, '/test/path.qcow2')
            self.assertEqual(10737418240, image.virtual_size)

            # NOTE(hemanthm): Assert that process limits are being applied on
            # "qemu-img info" calls. See bug #1449062 for more details.
            kw_args = exc_mock.call_args[1]
            self.assertIn('prlimit', kw_args)
            self.assertEqual(async_utils.QEMU_IMG_PROC_LIMITS,
                             kw_args.get('prlimit'))

    def test_introspect_no_image(self):
        image_create = introspect._Introspect(self.task.task_id,
                                              self.task_type,
                                              self.img_repo)

        self.task_repo.get.return_value = self.task
        image_id = mock.sentinel.image_id
        image = mock.MagicMock(image_id=image_id, virtual_size=None)
        self.img_repo.get.return_value = image

        # NOTE(flaper87): Don't mock, test the error.
        with mock.patch.object(processutils, 'execute') as exc_mock:
            exc_mock.return_value = (None, "some error")
            # NOTE(flaper87): Pls, read the `OptionalTask._catch_all`
            # docs to know why this is commented.
            # self.assertRaises(RuntimeError,
            #                   image_create.execute,
            #                   image, '/test/path.qcow2')
            image_create.execute(image, '/test/path.qcow2')
            self.assertIsNone(image.virtual_size)

glance-16.0.0/glance/tests/unit/async/flows/__init__.py
glance-16.0.0/glance/tests/unit/async/flows/test_convert.py
# Copyright 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import mock
import os

import glance_store
from oslo_concurrency import processutils
from oslo_config import cfg
import six

from glance.async.flows import convert
from glance.async import taskflow_executor
from glance.common.scripts import utils as script_utils
from glance.common import utils
from glance import domain
from glance import gateway
import glance.tests.utils as test_utils


CONF = cfg.CONF

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'


class TestImportTask(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImportTask, self).setUp()
        self.work_dir = os.path.join(self.test_dir, 'work_dir')
        utils.safe_mkdirs(self.work_dir)
        self.config(work_dir=self.work_dir, group='task')

        self.context = mock.MagicMock()
        self.img_repo = mock.MagicMock()
        self.task_repo = mock.MagicMock()

        self.gateway = gateway.Gateway()
        self.task_factory = domain.TaskFactory()
        self.img_factory = self.gateway.get_image_factory(self.context)
        self.image = self.img_factory.new_image(image_id=UUID1,
                                                disk_format='raw',
                                                container_format='bare')

        task_input = {
            "import_from": "http://cloud.foo/image.raw",
            "import_from_format": "raw",
            "image_properties": {'disk_format': 'qcow2',
                                 'container_format': 'bare'}
        }
        task_ttl = CONF.task.task_time_to_live

        self.task_type = 'import'
        self.task = self.task_factory.new_task(self.task_type, TENANT1,
                                               task_time_to_live=task_ttl,
                                               task_input=task_input)

        glance_store.register_opts(CONF)
        self.config(default_store='file',
                    stores=['file', 'http'],
                    filesystem_store_datadir=self.test_dir,
                    group="glance_store")
        self.config(conversion_format='qcow2',
                    group='taskflow_executor')
        glance_store.create_stores(CONF)

    def test_convert_success(self):
        image_convert = convert._Convert(self.task.task_id,
                                         self.task_type,
                                         self.img_repo)

        self.task_repo.get.return_value = self.task
        image_id = mock.sentinel.image_id
        image = mock.MagicMock(image_id=image_id, virtual_size=None)
        self.img_repo.get.return_value = image

        with mock.patch.object(processutils, 'execute') as exc_mock:
            exc_mock.return_value = ("", None)
            with mock.patch.object(os, 'rename') as rm_mock:
                rm_mock.return_value = None
                image_convert.execute(image, 'file:///test/path.raw')

                # NOTE(hemanthm): Asserting that the source format is passed
                # to qemu-utis to avoid inferring the image format. This
                # shields us from an attack vector described at
                # https://bugs.launchpad.net/glance/+bug/1449062/comments/72
                self.assertIn('-f', exc_mock.call_args[0])

    def test_convert_revert_success(self):
        image_convert = convert._Convert(self.task.task_id,
                                         self.task_type,
                                         self.img_repo)

        self.task_repo.get.return_value = self.task
        image_id = mock.sentinel.image_id
        image = mock.MagicMock(image_id=image_id, virtual_size=None)
        self.img_repo.get.return_value = image

        with mock.patch.object(processutils, 'execute') as exc_mock:
            exc_mock.return_value = ("", None)
            with mock.patch.object(os, 'remove') as rmtree_mock:
                rmtree_mock.return_value = None
                image_convert.revert(image, 'file:///tmp/test')

    def test_import_flow_with_convert_and_introspect(self):
        self.config(engine_mode='serial',
                    group='taskflow_executor')

        image = self.img_factory.new_image(image_id=UUID1,
                                           disk_format='raw',
                                           container_format='bare')

        img_factory = mock.MagicMock()

        executor = taskflow_executor.TaskExecutor(
            self.context,
            self.task_repo,
            self.img_repo,
            img_factory)

        self.task_repo.get.return_value = self.task

        def create_image(*args, **kwargs):
            kwargs['image_id'] = UUID1
            return self.img_factory.new_image(*args, **kwargs)

        self.img_repo.get.return_value = image
        img_factory.new_image.side_effect = create_image

        image_path = os.path.join(self.work_dir, image.image_id)

        def fake_execute(*args, **kwargs):
            if 'info' in args:
                # NOTE(flaper87): Make sure the file actually
                # exists. Extra check to verify previous tasks did
                # what they were supposed to do.
                assert os.path.exists(args[3].split("file://")[-1])
                return (json.dumps({
                    "virtual-size": 10737418240,
                    "filename": "/tmp/image.qcow2",
                    "cluster-size": 65536,
                    "format": "qcow2",
                    "actual-size": 373030912,
                    "format-specific": {
                        "type": "qcow2",
                        "data": {
                            "compat": "0.10"
                        }
                    },
                    "dirty-flag": False
                }), None)

            open("%s.converted" % image_path, 'a').close()
            return ("", None)

        with mock.patch.object(script_utils, 'get_image_data_iter') as dmock:
            dmock.return_value = six.BytesIO(b"TEST_IMAGE")

            with mock.patch.object(processutils, 'execute') as exc_mock:
                exc_mock.side_effect = fake_execute
                executor.begin_processing(self.task.task_id)

                # NOTE(flaper87): DeleteFromFS should've deleted this
                # file. Make sure it doesn't exist.
                self.assertFalse(os.path.exists(image_path))

                # NOTE(flaper87): Workdir should be empty after all
                # the tasks have been executed.
                self.assertEqual([], os.listdir(self.work_dir))
                self.assertEqual('qcow2', image.disk_format)
                self.assertEqual(10737418240, image.virtual_size)

                # NOTE(hemanthm): Asserting that the source format is passed
                # to qemu-utis to avoid inferring the image format when
                # converting. This shields us from an attack vector described
                # at https://bugs.launchpad.net/glance/+bug/1449062/comments/72
                #
                # A total of three calls will be made to 'execute': 'info',
                # 'convert' and 'info' towards introspection, conversion and
                # OVF packaging respectively. We care about the 'convert' call
                # here, hence we fetch the 2nd set of args from the args list.
                convert_call_args, _ = exc_mock.call_args_list[1]
                self.assertIn('-f', convert_call_args)

glance-16.0.0/glance/tests/unit/async/flows/test_import.py
# Copyright 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import mock
import os

import glance_store
from oslo_concurrency import processutils as putils
from oslo_config import cfg
import six
from six.moves import urllib
from taskflow import task
from taskflow.types import failure

import glance.async.flows.base_import as import_flow
from glance.async import taskflow_executor
from glance.async import utils as async_utils
from glance.common.scripts.image_import import main as image_import
from glance.common.scripts import utils as script_utils
from glance.common import utils
from glance import domain
from glance import gateway
import glance.tests.utils as test_utils


CONF = cfg.CONF

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'


class _ErrorTask(task.Task):

    def execute(self):
        raise RuntimeError()


class TestImportTask(test_utils.BaseTestCase):

    def setUp(self):
        super(TestImportTask, self).setUp()

        glance_store.register_opts(CONF)
        self.config(default_store='file',
                    stores=['file', 'http'],
                    filesystem_store_datadir=self.test_dir,
                    group="glance_store")
        glance_store.create_stores(CONF)

        self.work_dir = os.path.join(self.test_dir, 'work_dir')
        utils.safe_mkdirs(self.work_dir)
        self.config(work_dir=self.work_dir, group='task')

        self.context = mock.MagicMock()
        self.img_repo = mock.MagicMock()
        self.task_repo = mock.MagicMock()

        self.gateway = gateway.Gateway()
        self.task_factory = domain.TaskFactory()
        self.img_factory = self.gateway.get_image_factory(self.context)
        self.image = self.img_factory.new_image(image_id=UUID1,
                                                disk_format='qcow2',
                                                container_format='bare')

        task_input = {
            "import_from": "http://cloud.foo/image.qcow2",
            "import_from_format": "qcow2",
            "image_properties": {'disk_format': 'qcow2',
                                 'container_format': 'bare'}
        }
        task_ttl = CONF.task.task_time_to_live

        self.task_type = 'import'
        self.task = self.task_factory.new_task(self.task_type, TENANT1,
                                               task_time_to_live=task_ttl,
                                               task_input=task_input)

    def _assert_qemu_process_limits(self, exec_mock):
        # NOTE(hemanthm): Assert that process limits are being applied
        # on "qemu-img info" calls. See bug #1449062 for more details.
        kw_args = exec_mock.call_args[1]
        self.assertIn('prlimit', kw_args)
        self.assertEqual(async_utils.QEMU_IMG_PROC_LIMITS,
                         kw_args.get('prlimit'))

    def test_import_flow(self):
        self.config(engine_mode='serial',
                    group='taskflow_executor')

        img_factory = mock.MagicMock()

        executor = taskflow_executor.TaskExecutor(
            self.context,
            self.task_repo,
            self.img_repo,
            img_factory)

        self.task_repo.get.return_value = self.task

        def create_image(*args, **kwargs):
            kwargs['image_id'] = UUID1
            return self.img_factory.new_image(*args, **kwargs)

        self.img_repo.get.return_value = self.image
        img_factory.new_image.side_effect = create_image

        with mock.patch.object(script_utils, 'get_image_data_iter') as dmock:
            dmock.return_value = six.BytesIO(b"TEST_IMAGE")

            with mock.patch.object(putils, 'trycmd') as tmock:
                tmock.return_value = (json.dumps({
                    'format': 'qcow2',
                }), None)

                executor.begin_processing(self.task.task_id)
                image_path = os.path.join(self.test_dir, self.image.image_id)
                tmp_image_path = os.path.join(self.work_dir,
                                              "%s.tasks_import" % image_path)
                self.assertFalse(os.path.exists(tmp_image_path))
                self.assertTrue(os.path.exists(image_path))
                self.assertEqual(1, len(list(self.image.locations)))
                self.assertEqual("file://%s/%s" % (self.test_dir,
                                                   self.image.image_id),
                                 self.image.locations[0]['url'])

                self._assert_qemu_process_limits(tmock)

    def test_import_flow_missing_work_dir(self):
        self.config(engine_mode='serial', group='taskflow_executor')
        self.config(work_dir=None, group='task')

        img_factory = mock.MagicMock()

        executor = taskflow_executor.TaskExecutor(
            self.context,
            self.task_repo,
            self.img_repo,
            img_factory)

        self.task_repo.get.return_value = self.task

        def create_image(*args, **kwargs):
            kwargs['image_id'] = UUID1
            return self.img_factory.new_image(*args, **kwargs)

        self.img_repo.get.return_value = self.image
        img_factory.new_image.side_effect = create_image

        with mock.patch.object(script_utils, 'get_image_data_iter') as dmock:
            dmock.return_value = six.BytesIO(b"TEST_IMAGE")

            with mock.patch.object(import_flow._ImportToFS, 'execute') as emk:
                executor.begin_processing(self.task.task_id)
                self.assertFalse(emk.called)

                image_path = os.path.join(self.test_dir, self.image.image_id)
                tmp_image_path = os.path.join(self.work_dir,
                                              "%s.tasks_import" % image_path)
                self.assertFalse(os.path.exists(tmp_image_path))
                self.assertTrue(os.path.exists(image_path))

    def test_import_flow_revert_import_to_fs(self):
        self.config(engine_mode='serial', group='taskflow_executor')

        img_factory = mock.MagicMock()

        executor = taskflow_executor.TaskExecutor(
            self.context,
            self.task_repo,
            self.img_repo,
            img_factory)

        self.task_repo.get.return_value = self.task

        def create_image(*args, **kwargs):
            kwargs['image_id'] = UUID1
            return self.img_factory.new_image(*args, **kwargs)

        self.img_repo.get.return_value = self.image
        img_factory.new_image.side_effect = create_image

        with mock.patch.object(script_utils, 'get_image_data_iter') as dmock:
            dmock.side_effect = RuntimeError

            with mock.patch.object(import_flow._ImportToFS, 'revert') as rmock:
                self.assertRaises(RuntimeError,
                                  executor.begin_processing,
                                  self.task.task_id)
                self.assertTrue(rmock.called)
                self.assertIsInstance(rmock.call_args[1]['result'],
                                      failure.Failure)

                image_path = os.path.join(self.test_dir, self.image.image_id)
                tmp_image_path = os.path.join(self.work_dir,
                                              "%s.tasks_import" % image_path)
                self.assertFalse(os.path.exists(tmp_image_path))
                # Note(sabari): The image should not have been uploaded to
                # the store as the flow failed before ImportToStore Task.
                self.assertFalse(os.path.exists(image_path))

    def test_import_flow_backed_file_import_to_fs(self):
        self.config(engine_mode='serial', group='taskflow_executor')

        img_factory = mock.MagicMock()

        executor = taskflow_executor.TaskExecutor(
            self.context,
            self.task_repo,
            self.img_repo,
            img_factory)

        self.task_repo.get.return_value = self.task

        def create_image(*args, **kwargs):
            kwargs['image_id'] = UUID1
            return self.img_factory.new_image(*args, **kwargs)

        self.img_repo.get.return_value = self.image
        img_factory.new_image.side_effect = create_image

        with mock.patch.object(script_utils, 'get_image_data_iter') as dmock:
            dmock.return_value = six.BytesIO(b"TEST_IMAGE")

            with mock.patch.object(putils, 'trycmd') as tmock:
                tmock.return_value = (json.dumps({
                    'backing-filename': '/etc/password'
                }), None)

                with mock.patch.object(import_flow._ImportToFS,
                                       'revert') as rmock:
                    self.assertRaises(RuntimeError,
                                      executor.begin_processing,
                                      self.task.task_id)
                    self.assertTrue(rmock.called)
                    self.assertIsInstance(rmock.call_args[1]['result'],
                                          failure.Failure)

                    self._assert_qemu_process_limits(tmock)

                    image_path = os.path.join(self.test_dir,
                                              self.image.image_id)

                    fname = "%s.tasks_import" % image_path
                    tmp_image_path = os.path.join(self.work_dir, fname)

                    self.assertFalse(os.path.exists(tmp_image_path))
                    # Note(sabari): The image should not have been uploaded to
                    # the store as the flow failed before ImportToStore Task.
                    self.assertFalse(os.path.exists(image_path))

    def test_import_flow_revert(self):
        self.config(engine_mode='serial', group='taskflow_executor')

        img_factory = mock.MagicMock()

        executor = taskflow_executor.TaskExecutor(
            self.context,
            self.task_repo,
            self.img_repo,
            img_factory)

        self.task_repo.get.return_value = self.task

        def create_image(*args, **kwargs):
            kwargs['image_id'] = UUID1
            return self.img_factory.new_image(*args, **kwargs)

        self.img_repo.get.return_value = self.image
        img_factory.new_image.side_effect = create_image

        with mock.patch.object(script_utils, 'get_image_data_iter') as dmock:
            dmock.return_value = six.BytesIO(b"TEST_IMAGE")

            with mock.patch.object(putils, 'trycmd') as tmock:
                tmock.return_value = (json.dumps({
                    'format': 'qcow2',
                }), None)

                with mock.patch.object(import_flow,
                                       "_get_import_flows") as imock:
                    imock.return_value = (x for x in [_ErrorTask()])
                    self.assertRaises(RuntimeError,
                                      executor.begin_processing,
                                      self.task.task_id)

                    self._assert_qemu_process_limits(tmock)

                    image_path = os.path.join(self.test_dir,
                                              self.image.image_id)
                    tmp_image_path = os.path.join(self.work_dir,
                                                  ("%s.tasks_import" %
                                                   image_path))
                    self.assertFalse(os.path.exists(tmp_image_path))

                    # NOTE(flaper87): Eventually, we want this to be assertTrue
                    # The current issue is there's no way to tell taskflow to
                    # continue on failures. That is, revert the subflow but
                    # keep executing the parent flow. Under
                    # discussion/development.
                    self.assertFalse(os.path.exists(image_path))

    def test_import_flow_no_import_flows(self):
        self.config(engine_mode='serial', group='taskflow_executor')

        img_factory = mock.MagicMock()

        executor = taskflow_executor.TaskExecutor(
            self.context,
            self.task_repo,
            self.img_repo,
            img_factory)

        self.task_repo.get.return_value = self.task

        def create_image(*args, **kwargs):
            kwargs['image_id'] = UUID1
            return self.img_factory.new_image(*args, **kwargs)

        self.img_repo.get.return_value = self.image
        img_factory.new_image.side_effect = create_image

        with mock.patch.object(urllib.request, 'urlopen') as umock:
            content = b"TEST_IMAGE"
            umock.return_value = six.BytesIO(content)

            with mock.patch.object(import_flow, "_get_import_flows") as imock:
                imock.return_value = (x for x in [])
                executor.begin_processing(self.task.task_id)
                image_path = os.path.join(self.test_dir, self.image.image_id)
                tmp_image_path = os.path.join(self.work_dir,
                                              "%s.tasks_import" % image_path)
                self.assertFalse(os.path.exists(tmp_image_path))
                self.assertTrue(os.path.exists(image_path))
                self.assertEqual(1, umock.call_count)

                with open(image_path, 'rb') as ifile:
                    self.assertEqual(content, ifile.read())

    def test_create_image(self):
        image_create = import_flow._CreateImage(self.task.task_id,
                                                self.task_type,
                                                self.task_repo,
                                                self.img_repo,
                                                self.img_factory)

        self.task_repo.get.return_value = self.task

        with mock.patch.object(image_import, 'create_image') as ci_mock:
            ci_mock.return_value = mock.Mock()
            image_create.execute()

            ci_mock.assert_called_once_with(self.img_repo,
                                            self.img_factory,
                                            {'container_format': 'bare',
                                             'disk_format': 'qcow2'},
                                            self.task.task_id)

    def test_save_image(self):
        save_image = import_flow._SaveImage(self.task.task_id,
                                            self.task_type,
                                            self.img_repo)

        with mock.patch.object(self.img_repo, 'get') as get_mock:
            image_id = mock.sentinel.image_id
            image = mock.MagicMock(image_id=image_id, status='saving')
            get_mock.return_value = image

            with mock.patch.object(self.img_repo, 'save') as save_mock:
                save_image.execute(image.image_id)
                get_mock.assert_called_once_with(image_id)
                save_mock.assert_called_once_with(image)
                self.assertEqual('active', image.status)

    def test_import_to_fs(self):
        import_fs = import_flow._ImportToFS(self.task.task_id,
                                            self.task_type,
                                            self.task_repo,
                                            'http://example.com/image.qcow2')

        with mock.patch.object(script_utils, 'get_image_data_iter') as dmock:
            content = b"test"
            dmock.return_value = [content]

            with mock.patch.object(putils, 'trycmd') as tmock:
                tmock.return_value = (json.dumps({
                    'format': 'qcow2',
                }), None)

                image_id = UUID1
                path = import_fs.execute(image_id)
                reader, size = glance_store.get_from_backend(path)
                self.assertEqual(4, size)
                self.assertEqual(content, b"".join(reader))

                image_path = os.path.join(self.work_dir, image_id)
                tmp_image_path = os.path.join(self.work_dir, image_path)
                self.assertTrue(os.path.exists(tmp_image_path))

                self._assert_qemu_process_limits(tmock)

    def test_delete_from_fs(self):
        delete_fs = import_flow._DeleteFromFS(self.task.task_id,
                                              self.task_type)

        data = [b"test"]

        store = glance_store.get_store_from_scheme('file')
        path = glance_store.store_add_to_backend(mock.sentinel.image_id, data,
                                                 mock.sentinel.image_size,
                                                 store, context=None)[0]

        path_wo_scheme = path.split("file://")[1]
        self.assertTrue(os.path.exists(path_wo_scheme))
        delete_fs.execute(path)
        self.assertFalse(os.path.exists(path_wo_scheme))

    def test_complete_task(self):
        complete_task = import_flow._CompleteTask(self.task.task_id,
                                                  self.task_type,
                                                  self.task_repo)

        image_id = mock.sentinel.image_id
        image = mock.MagicMock(image_id=image_id)

        self.task_repo.get.return_value = self.task

        with mock.patch.object(self.task, 'succeed') as succeed:
            complete_task.execute(image.image_id)
            succeed.assert_called_once_with({'image_id': image_id})

glance-16.0.0/glance/tests/unit/async/test_taskflow_executor.py
# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

import glance_store
from oslo_config import cfg
from taskflow import engines

from glance.async import taskflow_executor
from glance.common.scripts.image_import import main as image_import
from glance import domain
import glance.tests.utils as test_utils


CONF = cfg.CONF
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'


class TestTaskExecutor(test_utils.BaseTestCase):

    def setUp(self):
        super(TestTaskExecutor, self).setUp()

        glance_store.register_opts(CONF)
        self.config(default_store='file',
                    stores=['file', 'http'],
                    filesystem_store_datadir=self.test_dir,
                    group="glance_store")
        glance_store.create_stores(CONF)

        self.config(engine_mode='serial', group='taskflow_executor')

        self.context = mock.Mock()
        self.task_repo = mock.Mock()
        self.image_repo = mock.Mock()
        self.image_factory = mock.Mock()

        task_input = {
            "import_from": "http://cloud.foo/image.qcow2",
            "import_from_format": "qcow2",
            "image_properties": {'disk_format': 'qcow2',
                                 'container_format': 'bare'}
        }
        task_ttl = CONF.task.task_time_to_live

        self.task_type = 'import'
        self.task_factory = domain.TaskFactory()
        self.task = self.task_factory.new_task(self.task_type, TENANT1,
                                               task_time_to_live=task_ttl,
                                               task_input=task_input)
        self.executor = taskflow_executor.TaskExecutor(
            self.context,
            self.task_repo,
            self.image_repo,
            self.image_factory)

    def test_begin_processing(self):
        with mock.patch.object(engines, 'load') as load_mock:
            engine = mock.Mock()
            load_mock.return_value = engine
self.task_repo.get.return_value = self.task self.executor.begin_processing(self.task.task_id) # assert the call self.assertEqual(1, load_mock.call_count) self.assertEqual(1, engine.run.call_count) def test_task_fail(self): with mock.patch.object(engines, 'load') as load_mock: engine = mock.Mock() load_mock.return_value = engine engine.run.side_effect = RuntimeError self.task_repo.get.return_value = self.task self.assertRaises(RuntimeError, self.executor.begin_processing, self.task.task_id) self.assertEqual('failure', self.task.status) self.task_repo.save.assert_called_with(self.task) def test_task_fail_upload(self): with mock.patch.object(image_import, 'set_image_data') as import_mock: import_mock.side_effect = IOError self.task_repo.get.return_value = self.task self.executor.begin_processing(self.task.task_id) self.assertEqual('failure', self.task.status) self.task_repo.save.assert_called_with(self.task) self.assertEqual(1, import_mock.call_count) glance-16.0.0/glance/tests/unit/test_quota.py0000666000175100017510000007163713245511421021205 0ustar zuulzuul00000000000000# Copyright 2013, Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import uuid

import mock
from mock import patch
from oslo_utils import encodeutils
from oslo_utils import units
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range

from glance.common import exception
from glance.common import store_utils
import glance.quota
from glance.tests.unit import utils as unit_test_utils
from glance.tests import utils as test_utils

UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'


class FakeContext(object):
    owner = 'someone'
    is_admin = False


class FakeImage(object):
    size = None
    image_id = 'someid'
    locations = [{'url': 'file:///not/a/path', 'metadata': {}}]
    tags = set([])

    def set_data(self, data, size=None):
        self.size = 0
        for d in data:
            self.size += len(d)

    def __init__(self, **kwargs):
        self.extra_properties = kwargs.get('extra_properties', {})


class TestImageQuota(test_utils.BaseTestCase):
    def setUp(self):
        super(TestImageQuota, self).setUp()

    def _get_image(self, location_count=1, image_size=10):
        context = FakeContext()
        db_api = unit_test_utils.FakeDB()
        store_api = unit_test_utils.FakeStoreAPI()
        store = unit_test_utils.FakeStoreUtils(store_api)
        base_image = FakeImage()
        base_image.image_id = 'xyz'
        base_image.size = image_size
        image = glance.quota.ImageProxy(base_image, context, db_api, store)
        locations = []
        for i in range(location_count):
            locations.append({'url': 'file:///g/there/it/is%d' % i,
                              'metadata': {}, 'status': 'active'})
        image_values = {'id': 'xyz', 'owner': context.owner,
                        'status': 'active', 'size': image_size,
                        'locations': locations}
        db_api.image_create(context, image_values)
        return image

    def test_quota_allowed(self):
        quota = 10
        self.config(user_storage_quota=str(quota))
        context = FakeContext()
        db_api = unit_test_utils.FakeDB()
        store_api = unit_test_utils.FakeStoreAPI()
        store = unit_test_utils.FakeStoreUtils(store_api)
        base_image = FakeImage()
        base_image.image_id = 'id'
        image = glance.quota.ImageProxy(base_image, context, db_api, store)
        data = '*' * quota
        base_image.set_data(data, size=None)
        image.set_data(data)
        self.assertEqual(quota, base_image.size)

    def _test_quota_allowed_unit(self, data_length, config_quota):
        self.config(user_storage_quota=config_quota)
        context = FakeContext()
        db_api = unit_test_utils.FakeDB()
        store_api = unit_test_utils.FakeStoreAPI()
        store = unit_test_utils.FakeStoreUtils(store_api)
        base_image = FakeImage()
        base_image.image_id = 'id'
        image = glance.quota.ImageProxy(base_image, context, db_api, store)
        data = '*' * data_length
        base_image.set_data(data, size=None)
        image.set_data(data)
        self.assertEqual(data_length, base_image.size)

    def test_quota_allowed_unit_b(self):
        self._test_quota_allowed_unit(10, '10B')

    def test_quota_allowed_unit_kb(self):
        self._test_quota_allowed_unit(10, '1KB')

    def test_quota_allowed_unit_mb(self):
        self._test_quota_allowed_unit(10, '1MB')

    def test_quota_allowed_unit_gb(self):
        self._test_quota_allowed_unit(10, '1GB')

    def test_quota_allowed_unit_tb(self):
        self._test_quota_allowed_unit(10, '1TB')

    def _quota_exceeded_size(self, quota, data, deleted=True, size=None):
        self.config(user_storage_quota=quota)
        context = FakeContext()
        db_api = unit_test_utils.FakeDB()
        store_api = unit_test_utils.FakeStoreAPI()
        store = unit_test_utils.FakeStoreUtils(store_api)
        base_image = FakeImage()
        base_image.image_id = 'id'
        image = glance.quota.ImageProxy(base_image, context, db_api, store)

        if deleted:
            with patch.object(store_utils, 'safe_delete_from_backend'):
                store_utils.safe_delete_from_backend(
                    context, image.image_id, base_image.locations[0])

        self.assertRaises(exception.StorageQuotaFull,
                          image.set_data,
                          data,
                          size=size)

    def test_quota_exceeded_no_size(self):
        quota = 10
        data = '*' * (quota + 1)
        # NOTE(jbresnah) When the image size is None it means that it is
        # not known. In this case the only time we will raise an
        # exception is when there is no room left at all, thus we know
        # it will not fit.
        # That's why 'get_remaining_quota' is mocked with return_value = 0.
        with patch.object(glance.api.common, 'get_remaining_quota',
                          return_value=0):
            self._quota_exceeded_size(str(quota), data)

    def test_quota_exceeded_with_right_size(self):
        quota = 10
        data = '*' * (quota + 1)
        self._quota_exceeded_size(str(quota), data, size=len(data),
                                  deleted=False)

    def test_quota_exceeded_with_right_size_b(self):
        quota = 10
        data = '*' * (quota + 1)
        self._quota_exceeded_size('10B', data, size=len(data),
                                  deleted=False)

    def test_quota_exceeded_with_right_size_kb(self):
        quota = units.Ki
        data = '*' * (quota + 1)
        self._quota_exceeded_size('1KB', data, size=len(data),
                                  deleted=False)

    def test_quota_exceeded_with_lie_size(self):
        quota = 10
        data = '*' * (quota + 1)
        self._quota_exceeded_size(str(quota), data, deleted=False,
                                  size=quota - 1)

    def test_append_location(self):
        new_location = {'url': 'file:///a/path', 'metadata': {},
                        'status': 'active'}
        image = self._get_image()
        pre_add_locations = image.locations[:]
        image.locations.append(new_location)
        pre_add_locations.append(new_location)
        self.assertEqual(image.locations, pre_add_locations)

    def test_insert_location(self):
        new_location = {'url': 'file:///a/path', 'metadata': {},
                        'status': 'active'}
        image = self._get_image()
        pre_add_locations = image.locations[:]
        image.locations.insert(0, new_location)
        pre_add_locations.insert(0, new_location)
        self.assertEqual(image.locations, pre_add_locations)

    def test_extend_location(self):
        new_location = {'url': 'file:///a/path', 'metadata': {},
                        'status': 'active'}
        image = self._get_image()
        pre_add_locations = image.locations[:]
        image.locations.extend([new_location])
        pre_add_locations.extend([new_location])
        self.assertEqual(image.locations, pre_add_locations)

    def test_iadd_location(self):
        new_location = {'url': 'file:///a/path', 'metadata': {},
                        'status': 'active'}
        image = self._get_image()
        pre_add_locations = image.locations[:]
        image.locations += [new_location]
        pre_add_locations += [new_location]
        self.assertEqual(image.locations, pre_add_locations)

    def test_set_location(self):
        new_location = {'url': 'file:///a/path', 'metadata': {},
                        'status': 'active'}
        image = self._get_image()
        image.locations = [new_location]
        self.assertEqual(image.locations, [new_location])

    def _make_image_with_quota(self, image_size=10, location_count=2):
        quota = image_size * location_count
        self.config(user_storage_quota=str(quota))
        return self._get_image(image_size=image_size,
                               location_count=location_count)

    def test_exceed_append_location(self):
        image = self._make_image_with_quota()
        self.assertRaises(exception.StorageQuotaFull,
                          image.locations.append,
                          {'url': 'file:///a/path', 'metadata': {},
                           'status': 'active'})

    def test_exceed_insert_location(self):
        image = self._make_image_with_quota()
        self.assertRaises(exception.StorageQuotaFull,
                          image.locations.insert,
                          0,
                          {'url': 'file:///a/path', 'metadata': {},
                           'status': 'active'})

    def test_exceed_extend_location(self):
        image = self._make_image_with_quota()
        self.assertRaises(exception.StorageQuotaFull,
                          image.locations.extend,
                          [{'url': 'file:///a/path', 'metadata': {},
                            'status': 'active'}])

    def test_set_location_under(self):
        image = self._make_image_with_quota(location_count=1)
        image.locations = [{'url': 'file:///a/path', 'metadata': {},
                            'status': 'active'}]

    def test_set_location_exceed(self):
        image = self._make_image_with_quota(location_count=1)
        try:
            image.locations = [{'url': 'file:///a/path', 'metadata': {},
                                'status': 'active'},
                               {'url': 'file:///a/path2', 'metadata': {},
                                'status': 'active'}]
            self.fail('Should have raised the quota exception')
        except exception.StorageQuotaFull:
            pass

    def test_iadd_location_exceed(self):
        image = self._make_image_with_quota(location_count=1)
        try:
            image.locations += [{'url': 'file:///a/path', 'metadata': {},
                                 'status': 'active'}]
            self.fail('Should have raised the quota exception')
        except exception.StorageQuotaFull:
            pass

    def test_append_location_for_queued_image(self):
        context = FakeContext()
        db_api = unit_test_utils.FakeDB()
        store_api = unit_test_utils.FakeStoreAPI()
        store = unit_test_utils.FakeStoreUtils(store_api)
        base_image = FakeImage()
        base_image.image_id = str(uuid.uuid4())
        image = glance.quota.ImageProxy(base_image, context, db_api, store)
        self.assertIsNone(image.size)

        self.stubs.Set(store_api, 'get_size_from_backend',
                       unit_test_utils.fake_get_size_from_backend)
        image.locations.append({'url': 'file:///fake.img.tar.gz',
                                'metadata': {}})
        self.assertIn({'url': 'file:///fake.img.tar.gz', 'metadata': {}},
                      image.locations)

    def test_insert_location_for_queued_image(self):
        context = FakeContext()
        db_api = unit_test_utils.FakeDB()
        store_api = unit_test_utils.FakeStoreAPI()
        store = unit_test_utils.FakeStoreUtils(store_api)
        base_image = FakeImage()
        base_image.image_id = str(uuid.uuid4())
        image = glance.quota.ImageProxy(base_image, context, db_api, store)
        self.assertIsNone(image.size)

        self.stubs.Set(store_api, 'get_size_from_backend',
                       unit_test_utils.fake_get_size_from_backend)
        image.locations.insert(0, {'url': 'file:///fake.img.tar.gz',
                                   'metadata': {}})
        self.assertIn({'url': 'file:///fake.img.tar.gz', 'metadata': {}},
                      image.locations)

    def test_set_location_for_queued_image(self):
        context = FakeContext()
        db_api = unit_test_utils.FakeDB()
        store_api = unit_test_utils.FakeStoreAPI()
        store = unit_test_utils.FakeStoreUtils(store_api)
        base_image = FakeImage()
        base_image.image_id = str(uuid.uuid4())
        image = glance.quota.ImageProxy(base_image, context, db_api, store)
        self.assertIsNone(image.size)

        self.stubs.Set(store_api, 'get_size_from_backend',
                       unit_test_utils.fake_get_size_from_backend)
        image.locations = [{'url': 'file:///fake.img.tar.gz',
                            'metadata': {}}]
        self.assertEqual([{'url': 'file:///fake.img.tar.gz',
                           'metadata': {}}],
                         image.locations)

    def test_iadd_location_for_queued_image(self):
        context = FakeContext()
        db_api = unit_test_utils.FakeDB()
        store_api = unit_test_utils.FakeStoreAPI()
        store = unit_test_utils.FakeStoreUtils(store_api)
        base_image = FakeImage()
        base_image.image_id = str(uuid.uuid4())
        image = glance.quota.ImageProxy(base_image, context, db_api, store)
        self.assertIsNone(image.size)

        self.stubs.Set(store_api, 'get_size_from_backend',
                       unit_test_utils.fake_get_size_from_backend)
        image.locations += [{'url': 'file:///fake.img.tar.gz',
                             'metadata': {}}]
        self.assertIn({'url': 'file:///fake.img.tar.gz', 'metadata': {}},
                      image.locations)


class TestImagePropertyQuotas(test_utils.BaseTestCase):
    def setUp(self):
        super(TestImagePropertyQuotas, self).setUp()
        self.base_image = FakeImage()
        self.image = glance.quota.ImageProxy(self.base_image,
                                             mock.Mock(),
                                             mock.Mock(),
                                             mock.Mock())

        self.image_repo_mock = mock.Mock()
        self.image_repo_mock.add.return_value = self.base_image
        self.image_repo_mock.save.return_value = self.base_image

        self.image_repo_proxy = glance.quota.ImageRepoProxy(
            self.image_repo_mock,
            mock.Mock(),
            mock.Mock(),
            mock.Mock())

    def test_save_image_with_image_property(self):
        self.config(image_property_quota=1)
        self.image.extra_properties = {'foo': 'bar'}
        self.image_repo_proxy.save(self.image)

        self.image_repo_mock.save.assert_called_once_with(self.base_image,
                                                          from_state=None)

    def test_save_image_too_many_image_properties(self):
        self.config(image_property_quota=1)
        self.image.extra_properties = {'foo': 'bar', 'foo2': 'bar2'}

        exc = self.assertRaises(exception.ImagePropertyLimitExceeded,
                                self.image_repo_proxy.save, self.image)
        self.assertIn("Attempted: 2, Maximum: 1",
                      encodeutils.exception_to_unicode(exc))

    def test_save_image_unlimited_image_properties(self):
        self.config(image_property_quota=-1)
        self.image.extra_properties = {'foo': 'bar'}
        self.image_repo_proxy.save(self.image)

        self.image_repo_mock.save.assert_called_once_with(self.base_image,
                                                          from_state=None)

    def test_add_image_with_image_property(self):
        self.config(image_property_quota=1)
        self.image.extra_properties = {'foo': 'bar'}
        self.image_repo_proxy.add(self.image)

        self.image_repo_mock.add.assert_called_once_with(self.base_image)

    def test_add_image_too_many_image_properties(self):
        self.config(image_property_quota=1)
        self.image.extra_properties = {'foo': 'bar', 'foo2': 'bar2'}

        exc = self.assertRaises(exception.ImagePropertyLimitExceeded,
                                self.image_repo_proxy.add, self.image)
        self.assertIn("Attempted: 2, Maximum: 1",
                      encodeutils.exception_to_unicode(exc))

    def test_add_image_unlimited_image_properties(self):
        self.config(image_property_quota=-1)
        self.image.extra_properties = {'foo': 'bar'}
        self.image_repo_proxy.add(self.image)

        self.image_repo_mock.add.assert_called_once_with(self.base_image)

    def _quota_exceed_setup(self):
        self.config(image_property_quota=2)
        self.base_image.extra_properties = {'foo': 'bar', 'spam': 'ham'}
        self.image = glance.quota.ImageProxy(self.base_image,
                                             mock.Mock(),
                                             mock.Mock(),
                                             mock.Mock())

    def test_modify_image_properties_when_quota_exceeded(self):
        self._quota_exceed_setup()
        self.config(image_property_quota=1)
        self.image.extra_properties = {'foo': 'frob', 'spam': 'eggs'}
        self.image_repo_proxy.save(self.image)

        self.image_repo_mock.save.assert_called_once_with(self.base_image,
                                                          from_state=None)
        self.assertEqual('frob', self.base_image.extra_properties['foo'])
        self.assertEqual('eggs', self.base_image.extra_properties['spam'])

    def test_delete_image_properties_when_quota_exceeded(self):
        self._quota_exceed_setup()
        self.config(image_property_quota=1)
        del self.image.extra_properties['foo']
        self.image_repo_proxy.save(self.image)

        self.image_repo_mock.save.assert_called_once_with(self.base_image,
                                                          from_state=None)
        self.assertNotIn('foo', self.base_image.extra_properties)
        self.assertEqual('ham', self.base_image.extra_properties['spam'])

    def test_invalid_quota_config_parameter(self):
        self.config(user_storage_quota='foo')
        location = {"url": "file:///fake.img.tar.gz", "metadata": {}}
        self.assertRaises(exception.InvalidOptionValue,
                          self.image.locations.append, location)

    def test_exceed_quota_during_patch_operation(self):
        self._quota_exceed_setup()
        self.image.extra_properties['frob'] = 'baz'
        self.image.extra_properties['lorem'] = 'ipsum'
        self.assertEqual('bar', self.base_image.extra_properties['foo'])
        self.assertEqual('ham', self.base_image.extra_properties['spam'])
        self.assertEqual('baz', self.base_image.extra_properties['frob'])
        self.assertEqual('ipsum', self.base_image.extra_properties['lorem'])

        del self.image.extra_properties['frob']
        del self.image.extra_properties['lorem']
        self.image_repo_proxy.save(self.image)
        call_args = mock.call(self.base_image, from_state=None)
        self.assertEqual(call_args, self.image_repo_mock.save.call_args)
        self.assertEqual('bar', self.base_image.extra_properties['foo'])
        self.assertEqual('ham', self.base_image.extra_properties['spam'])
        self.assertNotIn('frob', self.base_image.extra_properties)
        self.assertNotIn('lorem', self.base_image.extra_properties)

    def test_quota_exceeded_after_delete_image_properties(self):
        self.config(image_property_quota=3)
        self.base_image.extra_properties = {'foo': 'bar',
                                            'spam': 'ham',
                                            'frob': 'baz'}
        self.image = glance.quota.ImageProxy(self.base_image,
                                             mock.Mock(),
                                             mock.Mock(),
                                             mock.Mock())
        self.config(image_property_quota=1)
        del self.image.extra_properties['foo']
        self.image_repo_proxy.save(self.image)

        self.image_repo_mock.save.assert_called_once_with(self.base_image,
                                                          from_state=None)
        self.assertNotIn('foo', self.base_image.extra_properties)
        self.assertEqual('ham', self.base_image.extra_properties['spam'])
        self.assertEqual('baz', self.base_image.extra_properties['frob'])


class TestImageTagQuotas(test_utils.BaseTestCase):
    def setUp(self):
        super(TestImageTagQuotas, self).setUp()
        self.base_image = mock.Mock()
        self.base_image.tags = set([])
        self.base_image.extra_properties = {}
        self.image = glance.quota.ImageProxy(self.base_image,
                                             mock.Mock(),
                                             mock.Mock(),
                                             mock.Mock())

        self.image_repo_mock = mock.Mock()
        self.image_repo_proxy = glance.quota.ImageRepoProxy(
            self.image_repo_mock,
            mock.Mock(),
            mock.Mock(),
            mock.Mock())

    def test_replace_image_tag(self):
        self.config(image_tag_quota=1)
        self.image.tags = ['foo']
        self.assertEqual(1, len(self.image.tags))

    def test_replace_too_many_image_tags(self):
        self.config(image_tag_quota=0)

        exc = self.assertRaises(exception.ImageTagLimitExceeded,
                                setattr, self.image, 'tags', ['foo', 'bar'])
        self.assertIn('Attempted: 2, Maximum: 0',
                      encodeutils.exception_to_unicode(exc))
        self.assertEqual(0, len(self.image.tags))

    def test_replace_unlimited_image_tags(self):
        self.config(image_tag_quota=-1)
        self.image.tags = ['foo']
        self.assertEqual(1, len(self.image.tags))

    def test_add_image_tag(self):
        self.config(image_tag_quota=1)
        self.image.tags.add('foo')
        self.assertEqual(1, len(self.image.tags))

    def test_add_too_many_image_tags(self):
        self.config(image_tag_quota=1)
        self.image.tags.add('foo')

        exc = self.assertRaises(exception.ImageTagLimitExceeded,
                                self.image.tags.add, 'bar')
        self.assertIn('Attempted: 2, Maximum: 1',
                      encodeutils.exception_to_unicode(exc))

    def test_add_unlimited_image_tags(self):
        self.config(image_tag_quota=-1)
        self.image.tags.add('foo')
        self.assertEqual(1, len(self.image.tags))

    def test_remove_image_tag_while_over_quota(self):
        self.config(image_tag_quota=1)
        self.image.tags.add('foo')
        self.assertEqual(1, len(self.image.tags))
        self.config(image_tag_quota=0)
        self.image.tags.remove('foo')
        self.assertEqual(0, len(self.image.tags))


class TestQuotaImageTagsProxy(test_utils.BaseTestCase):
    def setUp(self):
        super(TestQuotaImageTagsProxy, self).setUp()

    def test_add(self):
        proxy = glance.quota.QuotaImageTagsProxy(set([]))
        proxy.add('foo')
        self.assertIn('foo', proxy)

    def test_add_too_many_tags(self):
        self.config(image_tag_quota=0)
        proxy = glance.quota.QuotaImageTagsProxy(set([]))

        exc = self.assertRaises(exception.ImageTagLimitExceeded,
                                proxy.add, 'bar')
        self.assertIn('Attempted: 1, Maximum: 0',
                      encodeutils.exception_to_unicode(exc))

    def test_equals(self):
        proxy = glance.quota.QuotaImageTagsProxy(set([]))
        self.assertEqual(set([]), proxy)

    def test_not_equals(self):
        proxy = glance.quota.QuotaImageTagsProxy(set([]))
        self.assertNotEqual('foo', proxy)

    def test_contains(self):
        proxy = glance.quota.QuotaImageTagsProxy(set(['foo']))
        self.assertIn('foo', proxy)

    def test_len(self):
        proxy = glance.quota.QuotaImageTagsProxy(set(['foo',
                                                      'bar',
                                                      'baz',
                                                      'niz']))
        self.assertEqual(4, len(proxy))

    def test_iter(self):
        items = set(['foo', 'bar', 'baz', 'niz'])
        proxy = glance.quota.QuotaImageTagsProxy(items.copy())
        self.assertEqual(4, len(items))
        for item in proxy:
            items.remove(item)
        self.assertEqual(0, len(items))


class TestImageMemberQuotas(test_utils.BaseTestCase):
    def setUp(self):
        super(TestImageMemberQuotas, self).setUp()
        db_api = unit_test_utils.FakeDB()
        store_api = unit_test_utils.FakeStoreAPI()
        store = unit_test_utils.FakeStoreUtils(store_api)
        context = FakeContext()
        self.image = mock.Mock()
        self.base_image_member_factory = mock.Mock()
        self.image_member_factory = glance.quota.ImageMemberFactoryProxy(
            self.base_image_member_factory, context,
            db_api, store)

    def test_new_image_member(self):
        self.config(image_member_quota=1)

        self.image_member_factory.new_image_member(self.image,
                                                   'fake_id')
        nim = self.base_image_member_factory.new_image_member
        nim.assert_called_once_with(self.image, 'fake_id')

    def test_new_image_member_unlimited_members(self):
        self.config(image_member_quota=-1)

        self.image_member_factory.new_image_member(self.image,
                                                   'fake_id')
        nim = self.base_image_member_factory.new_image_member
        nim.assert_called_once_with(self.image, 'fake_id')

    def test_new_image_member_too_many_members(self):
        self.config(image_member_quota=0)

        self.assertRaises(exception.ImageMemberLimitExceeded,
                          self.image_member_factory.new_image_member,
                          self.image, 'fake_id')


class TestImageLocationQuotas(test_utils.BaseTestCase):
    def setUp(self):
        super(TestImageLocationQuotas, self).setUp()
        self.base_image = mock.Mock()
        self.base_image.locations = []
        self.base_image.size = 1
        self.base_image.extra_properties = {}
        self.image = glance.quota.ImageProxy(self.base_image,
                                             mock.Mock(),
                                             mock.Mock(),
                                             mock.Mock())

        self.image_repo_mock = mock.Mock()
        self.image_repo_proxy = glance.quota.ImageRepoProxy(
            self.image_repo_mock,
            mock.Mock(),
            mock.Mock(),
            mock.Mock())

    def test_replace_image_location(self):
        self.config(image_location_quota=1)
        self.image.locations = [{"url": "file:///fake.img.tar.gz",
                                 "metadata": {}
                                 }]
        self.assertEqual(1, len(self.image.locations))

    def test_replace_too_many_image_locations(self):
        self.config(image_location_quota=1)
        self.image.locations = [{"url": "file:///fake.img.tar.gz",
                                 "metadata": {}}
                                ]
        locations = [
            {"url": "file:///fake1.img.tar.gz", "metadata": {}},
            {"url": "file:///fake2.img.tar.gz", "metadata": {}},
            {"url": "file:///fake3.img.tar.gz", "metadata": {}}
        ]

        exc = self.assertRaises(exception.ImageLocationLimitExceeded,
                                setattr, self.image, 'locations', locations)
        self.assertIn('Attempted: 3, Maximum: 1',
                      encodeutils.exception_to_unicode(exc))
        self.assertEqual(1, len(self.image.locations))

    def test_replace_unlimited_image_locations(self):
        self.config(image_location_quota=-1)
        self.image.locations = [{"url": "file:///fake.img.tar.gz",
                                 "metadata": {}}
                                ]
        self.assertEqual(1, len(self.image.locations))

    def test_add_image_location(self):
        self.config(image_location_quota=1)
        location = {"url": "file:///fake.img.tar.gz", "metadata": {}}
        self.image.locations.append(location)
        self.assertEqual(1, len(self.image.locations))

    def test_add_too_many_image_locations(self):
        self.config(image_location_quota=1)
        location1 = {"url": "file:///fake1.img.tar.gz", "metadata": {}}
        self.image.locations.append(location1)
        location2 = {"url": "file:///fake2.img.tar.gz", "metadata": {}}

        exc = self.assertRaises(exception.ImageLocationLimitExceeded,
                                self.image.locations.append, location2)
        self.assertIn('Attempted: 2, Maximum: 1',
                      encodeutils.exception_to_unicode(exc))

    def test_add_unlimited_image_locations(self):
        self.config(image_location_quota=-1)
        location1 = {"url": "file:///fake1.img.tar.gz", "metadata": {}}
        self.image.locations.append(location1)
        self.assertEqual(1, len(self.image.locations))

    def test_remove_image_location_while_over_quota(self):
        self.config(image_location_quota=1)
        location1 = {"url": "file:///fake1.img.tar.gz", "metadata": {}}
        self.image.locations.append(location1)
        self.assertEqual(1, len(self.image.locations))
        self.config(image_location_quota=0)
        self.image.locations.remove(location1)
        self.assertEqual(0, len(self.image.locations))
glance-16.0.0/glance/tests/unit/test_image_cache_client.py0000666000175100017510000001216513245511421023606 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import mock

from glance.common import exception
from glance.image_cache import client
from glance.tests import utils


class CacheClientTestCase(utils.BaseTestCase):
    def setUp(self):
        super(CacheClientTestCase, self).setUp()
        self.client = client.CacheClient('test_host')
        self.client.do_request = mock.Mock()

    def test_delete_cached_image(self):
        self.client.do_request.return_value = utils.FakeHTTPResponse()
        self.assertTrue(self.client.delete_cached_image('test_id'))
        self.client.do_request.assert_called_with("DELETE",
                                                  "/cached_images/test_id")

    def test_get_cached_images(self):
        expected_data = b'{"cached_images": "some_images"}'
        self.client.do_request.return_value = utils.FakeHTTPResponse(
            data=expected_data)
        self.assertEqual("some_images", self.client.get_cached_images())
        self.client.do_request.assert_called_with("GET", "/cached_images")

    def test_get_queued_images(self):
        expected_data = b'{"queued_images": "some_images"}'
        self.client.do_request.return_value = utils.FakeHTTPResponse(
            data=expected_data)
        self.assertEqual("some_images", self.client.get_queued_images())
        self.client.do_request.assert_called_with("GET", "/queued_images")

    def test_delete_all_cached_images(self):
        expected_data = b'{"num_deleted": 4}'
        self.client.do_request.return_value = utils.FakeHTTPResponse(
            data=expected_data)
        self.assertEqual(4, self.client.delete_all_cached_images())
        self.client.do_request.assert_called_with("DELETE", "/cached_images")

    def test_queue_image_for_caching(self):
        self.client.do_request.return_value = utils.FakeHTTPResponse()
        self.assertTrue(self.client.queue_image_for_caching('test_id'))
        self.client.do_request.assert_called_with("PUT",
                                                  "/queued_images/test_id")

    def test_delete_queued_image(self):
        self.client.do_request.return_value = utils.FakeHTTPResponse()
        self.assertTrue(self.client.delete_queued_image('test_id'))
        self.client.do_request.assert_called_with("DELETE",
                                                  "/queued_images/test_id")

    def test_delete_all_queued_images(self):
        expected_data = b'{"num_deleted": 4}'
        self.client.do_request.return_value = utils.FakeHTTPResponse(
            data=expected_data)
        self.assertEqual(4, self.client.delete_all_queued_images())
        self.client.do_request.assert_called_with("DELETE", "/queued_images")


class GetClientTestCase(utils.BaseTestCase):
    def setUp(self):
        super(GetClientTestCase, self).setUp()
        self.host = 'test_host'
        self.env = os.environ.copy()
        os.environ.clear()

    def tearDown(self):
        os.environ = self.env
        super(GetClientTestCase, self).tearDown()

    def test_get_client_host_only(self):
        expected_creds = {
            'username': None, 'password': None,
            'tenant': None, 'auth_url': None,
            'strategy': 'noauth', 'region': None
        }
        self.assertEqual(expected_creds, client.get_client(self.host).creds)

    def test_get_client_all_creds(self):
        expected_creds = {
            'username': 'name', 'password': 'pass',
            'tenant': 'ten', 'auth_url': 'url',
            'strategy': 'keystone', 'region': 'reg'
        }
        creds = client.get_client(
            self.host, username='name',
            password='pass', tenant='ten',
            auth_url='url', auth_strategy='strategy',
            region='reg'
        ).creds
        self.assertEqual(expected_creds, creds)

    def test_get_client_using_provided_host(self):
        cli = client.get_client(self.host)
        cli._do_request = mock.MagicMock()
        cli.configure_from_url = mock.MagicMock()
        cli.auth_plugin.management_url = mock.MagicMock()
        cli.do_request("GET", "/queued_images")
        self.assertFalse(cli.configure_from_url.called)
        self.assertFalse(client.get_client(self.host).configure_via_auth)

    def test_get_client_client_configuration_error(self):
        self.assertRaises(exception.ClientConfigurationError,
                          client.get_client, self.host, username='name',
                          password='pass', tenant='ten',
                          auth_strategy='keystone', region='reg')
glance-16.0.0/glance/tests/unit/v1/0000775000175100017510000000000013245511661016757 5ustar zuulzuul00000000000000
glance-16.0.0/glance/tests/unit/v1/test_registry_client.py0000666000175100017510000011224613245511421023600 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import datetime
import os
import uuid

from mock import patch
from six.moves import http_client as http
from six.moves import reload_module
import testtools

from glance.api.v1.images import Controller as acontroller
from glance.common import client as test_client
from glance.common import config
from glance.common import exception
from glance.common import timeutils
from glance import context
from glance.db.sqlalchemy import api as db_api
from glance.registry.api.v1.images import Controller as rcontroller
import glance.registry.client.v1.api as rapi
from glance.registry.client.v1.api import client as rclient
from glance.tests.unit import base
from glance.tests import utils as test_utils
import webob

_gen_uuid = lambda: str(uuid.uuid4())

UUID1 = _gen_uuid()
UUID2 = _gen_uuid()

# NOTE(bcwaldon): needed to init config_dir cli opt
config.parse_args(args=[])


class TestRegistryV1Client(base.IsolatedUnitTest, test_utils.RegistryAPIMixIn):
    """
    Test proper actions made for both valid and invalid requests
    against a Registry service
    """

    def setUp(self):
        """Establish a clean test environment"""
        super(TestRegistryV1Client, self).setUp()
        db_api.get_engine()
        self.context = context.RequestContext(is_admin=True)
        self.FIXTURES = [
            self.get_fixture(
                id=UUID1, name='fake image #1', is_public=False,
                disk_format='ami', container_format='ami', size=13,
                location="swift://user:passwd@acct/container/obj.tar.0",
                properties={'type': 'kernel'}),
            self.get_fixture(id=UUID2, name='fake image #2', properties={},
                             size=19, location="file:///tmp/glance-tests/2")]
        self.destroy_fixtures()
        self.create_fixtures()
        self.client = rclient.RegistryClient("0.0.0.0")

    def tearDown(self):
        """Clear the test environment"""
        super(TestRegistryV1Client, self).tearDown()
        self.destroy_fixtures()

    def test_get_image_index(self):
        """Test correct set of public image returned"""
        fixture = {
            'id': UUID2,
            'name': 'fake image #2'
        }
        images = self.client.get_images()
        self.assertEqualImages(images, (UUID2,), unjsonify=False)

        for k, v in fixture.items():
            self.assertEqual(v, images[0][k])

    def test_create_image_with_null_min_disk_min_ram(self):
        UUID3 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID3, min_disk=None,
                                         min_ram=None)
        db_api.image_create(self.context, extra_fixture)
        image = self.client.get_image(UUID3)
        self.assertEqual(0, image["min_ram"])
        self.assertEqual(0, image["min_disk"])

    def test_get_index_sort_name_asc(self):
        """
        Tests that the /images registry API returns list of
        public images sorted alphabetically by name in
        ascending order.
        """
        UUID3 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID3, name='asdf')

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID4, name='xyz')

        db_api.image_create(self.context, extra_fixture)

        images = self.client.get_images(sort_key='name', sort_dir='asc')

        self.assertEqualImages(images, (UUID3, UUID2, UUID4),
                               unjsonify=False)

    def test_get_index_sort_status_desc(self):
        """
        Tests that the /images registry API returns list of
        public images sorted alphabetically by status in
        descending order.
        """
        UUID3 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID3, name='asdf',
                                         status='queued')

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID4, name='xyz')

        db_api.image_create(self.context, extra_fixture)

        images = self.client.get_images(sort_key='status', sort_dir='desc')

        self.assertEqualImages(images, (UUID3, UUID4, UUID2),
                               unjsonify=False)

    def test_get_index_sort_disk_format_asc(self):
        """
        Tests that the /images registry API returns list of
        public images sorted alphabetically by disk_format in
        ascending order.
        """
        UUID3 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID3, name='asdf',
                                         disk_format='ami',
                                         container_format='ami')

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID4, name='xyz',
                                         disk_format='vdi')

        db_api.image_create(self.context, extra_fixture)

        images = self.client.get_images(sort_key='disk_format',
                                        sort_dir='asc')

        self.assertEqualImages(images, (UUID3, UUID4, UUID2),
                               unjsonify=False)

    def test_get_index_sort_container_format_desc(self):
        """
        Tests that the /images registry API returns list of
        public images sorted alphabetically by container_format in
        descending order.
        """
        UUID3 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID3, name='asdf',
                                         disk_format='ami',
                                         container_format='ami')

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID4, name='xyz',
                                         disk_format='iso',
                                         container_format='bare')

        db_api.image_create(self.context, extra_fixture)

        images = self.client.get_images(sort_key='container_format',
                                        sort_dir='desc')

        self.assertEqualImages(images, (UUID2, UUID4, UUID3),
                               unjsonify=False)

    def test_get_index_sort_size_asc(self):
        """
        Tests that the /images registry API returns list of
        public images sorted by size in ascending order.
        """
        UUID3 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID3, name='asdf',
                                         disk_format='ami',
                                         container_format='ami',
                                         size=100)

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID4, name='asdf',
                                         disk_format='iso',
                                         container_format='bare',
                                         size=2)

        db_api.image_create(self.context, extra_fixture)

        images = self.client.get_images(sort_key='size', sort_dir='asc')

        self.assertEqualImages(images, (UUID4, UUID2, UUID3),
                               unjsonify=False)

    def test_get_index_sort_created_at_asc(self):
        """
        Tests that the /images registry API returns list of
        public images sorted by created_at in ascending order.
        """
        now = timeutils.utcnow()
        time1 = now + datetime.timedelta(seconds=5)
        time2 = now

        UUID3 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID3, created_at=time1)

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID4, created_at=time2)

        db_api.image_create(self.context, extra_fixture)

        images = self.client.get_images(sort_key='created_at',
                                        sort_dir='asc')

        self.assertEqualImages(images, (UUID2, UUID4, UUID3),
                               unjsonify=False)

    def test_get_index_sort_updated_at_desc(self):
        """
        Tests that the /images registry API returns list of
        public images sorted by updated_at in descending order.
        """
        now = timeutils.utcnow()
        time1 = now + datetime.timedelta(seconds=5)
        time2 = now

        UUID3 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID3, created_at=None,
                                         updated_at=time1)

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID4, created_at=None,
                                         updated_at=time2)

        db_api.image_create(self.context, extra_fixture)

        images = self.client.get_images(sort_key='updated_at',
                                        sort_dir='desc')

        self.assertEqualImages(images, (UUID3, UUID4, UUID2),
                               unjsonify=False)

    def test_get_image_index_marker(self):
        """Test correct set of images returned with marker param."""
        UUID3 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID3, name='new name! #123',
                                         status='saving')

        db_api.image_create(self.context, extra_fixture)

        UUID4 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID4, name='new name! 
#125', status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.get_images(marker=UUID4) self.assertEqualImages(images, (UUID3, UUID2), unjsonify=False) def test_get_image_index_invalid_marker(self): """Test exception is raised when marker is invalid""" self.assertRaises(exception.Invalid, self.client.get_images, marker=_gen_uuid()) def test_get_image_index_forbidden_marker(self): """Test exception is raised when marker is forbidden""" UUID5 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID5, owner='0123', status='saving', is_public=False) db_api.image_create(self.context, extra_fixture) def non_admin_get_images(self, context, *args, **kwargs): """Convert to non-admin context""" context.is_admin = False rcontroller.__get_images(self, context, *args, **kwargs) rcontroller.__get_images = rcontroller._get_images self.stubs.Set(rcontroller, '_get_images', non_admin_get_images) self.assertRaises(exception.Invalid, self.client.get_images, marker=UUID5) def test_get_image_index_private_marker(self): """Test exception is not raised if private non-owned marker is used""" UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, owner='1234', status='saving', is_public=False) db_api.image_create(self.context, extra_fixture) try: self.client.get_images(marker=UUID4) except Exception as e: self.fail("Unexpected exception '%s'" % e) def test_get_image_index_limit(self): """Test correct number of images returned with limit param.""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.get_images(limit=2) self.assertEqual(2, len(images)) def test_get_image_index_marker_limit(self): """Test correct set of images returned with marker/limit params.""" UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='new name! 
#123', status='saving') db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='new name! #125', status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.get_images(marker=UUID3, limit=1) self.assertEqualImages(images, (UUID2,), unjsonify=False) def test_get_image_index_limit_None(self): """Test correct set of images returned with limit param == None.""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.get_images(limit=None) self.assertEqual(3, len(images)) def test_get_image_index_by_name(self): """ Test correct set of public, name-filtered image returned. This is just a sanity check, we test the details call more in-depth. """ extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #123') db_api.image_create(self.context, extra_fixture) images = self.client.get_images(filters={'name': 'new name! #123'}) self.assertEqual(1, len(images)) for image in images: self.assertEqual('new name! 
#123', image['name']) def test_get_image_details(self): """Tests that the detailed info about public images returned""" fixture = self.get_fixture(id=UUID2, name='fake image #2', properties={}, size=19, is_public=True) images = self.client.get_images_detailed() self.assertEqual(1, len(images)) for k, v in fixture.items(): self.assertEqual(v, images[0][k]) def test_get_image_details_marker_limit(self): """Test correct set of images returned with marker/limit params.""" UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, status='saving') db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.get_images_detailed(marker=UUID3, limit=1) self.assertEqualImages(images, (UUID2,), unjsonify=False) def test_get_image_details_invalid_marker(self): """Test exception is raised when marker is invalid""" self.assertRaises(exception.Invalid, self.client.get_images_detailed, marker=_gen_uuid()) def test_get_image_details_forbidden_marker(self): """Test exception is raised when marker is forbidden""" UUID5 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID5, is_public=False, owner='0123', status='saving') db_api.image_create(self.context, extra_fixture) def non_admin_get_images(self, context, *args, **kwargs): """Convert to non-admin context""" context.is_admin = False rcontroller.__get_images(self, context, *args, **kwargs) rcontroller.__get_images = rcontroller._get_images self.stubs.Set(rcontroller, '_get_images', non_admin_get_images) self.assertRaises(exception.Invalid, self.client.get_images_detailed, marker=UUID5) def test_get_image_details_private_marker(self): """Test exception is not raised if private non-owned marker is used""" UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, is_public=False, owner='1234', status='saving') db_api.image_create(self.context, extra_fixture) try: 
self.client.get_images_detailed(marker=UUID4) except Exception as e: self.fail("Unexpected exception '%s'" % e) def test_get_image_details_by_name(self): """Tests that a detailed call can be filtered by name""" extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #123') db_api.image_create(self.context, extra_fixture) filters = {'name': 'new name! #123'} images = self.client.get_images_detailed(filters=filters) self.assertEqual(1, len(images)) for image in images: self.assertEqual('new name! #123', image['name']) def test_get_image_details_by_status(self): """Tests that a detailed call can be filtered by status""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.get_images_detailed(filters={'status': 'saving'}) self.assertEqual(1, len(images)) for image in images: self.assertEqual('saving', image['status']) def test_get_image_details_by_container_format(self): """Tests that a detailed call can be filtered by container_format""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) filters = {'container_format': 'ovf'} images = self.client.get_images_detailed(filters=filters) self.assertEqual(2, len(images)) for image in images: self.assertEqual('ovf', image['container_format']) def test_get_image_details_by_disk_format(self): """Tests that a detailed call can be filtered by disk_format""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) filters = {'disk_format': 'vhd'} images = self.client.get_images_detailed(filters=filters) self.assertEqual(2, len(images)) for image in images: self.assertEqual('vhd', image['disk_format']) def test_get_image_details_with_maximum_size(self): """Tests that a detailed call can be filtered by size_max""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving', size=21) db_api.image_create(self.context, 
extra_fixture) images = self.client.get_images_detailed(filters={'size_max': 20}) self.assertEqual(1, len(images)) for image in images: self.assertLessEqual(image['size'], 20) def test_get_image_details_with_minimum_size(self): """Tests that a detailed call can be filtered by size_min""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.get_images_detailed(filters={'size_min': 20}) self.assertEqual(1, len(images)) for image in images: self.assertGreaterEqual(image['size'], 20) def test_get_image_details_with_changes_since(self): """Tests that a detailed call can be filtered by changes-since""" dt1 = timeutils.utcnow() - datetime.timedelta(1) iso1 = timeutils.isotime(dt1) dt2 = timeutils.utcnow() + datetime.timedelta(1) iso2 = timeutils.isotime(dt2) dt3 = timeutils.utcnow() + datetime.timedelta(2) dt4 = timeutils.utcnow() + datetime.timedelta(3) iso4 = timeutils.isotime(dt4) UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='fake image #3') db_api.image_create(self.context, extra_fixture) db_api.image_destroy(self.context, UUID3) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='fake image #4', created_at=dt3, updated_at=dt3) db_api.image_create(self.context, extra_fixture) # Check a standard list, 4 images in db (2 deleted) images = self.client.get_images_detailed(filters={}) self.assertEqualImages(images, (UUID4, UUID2), unjsonify=False) # Expect 3 images (1 deleted) filters = {'changes-since': iso1} images = self.client.get_images(filters=filters) self.assertEqualImages(images, (UUID4, UUID3, UUID2), unjsonify=False) # Expect 1 images (0 deleted) filters = {'changes-since': iso2} images = self.client.get_images_detailed(filters=filters) self.assertEqualImages(images, (UUID4,), unjsonify=False) # Expect 0 images (0 deleted) filters = {'changes-since': iso4} images = self.client.get_images(filters=filters) self.assertEqualImages(images, (), 
unjsonify=False) def test_get_image_details_with_size_min(self): """Tests that a detailed call can be filtered by size_min""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) images = self.client.get_images_detailed(filters={'size_min': 20}) self.assertEqual(1, len(images)) for image in images: self.assertGreaterEqual(image['size'], 20) def test_get_image_details_by_property(self): """Tests that a detailed call can be filtered by a property""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving', properties={'p a': 'v a'}) db_api.image_create(self.context, extra_fixture) filters = {'property-p a': 'v a'} images = self.client.get_images_detailed(filters=filters) self.assertEqual(1, len(images)) for image in images: self.assertEqual('v a', image['properties']['p a']) def test_get_image_is_public_v1(self): """Tests that a detailed call can be filtered by a property""" extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving', properties={'is_public': 'avalue'}) context = copy.copy(self.context) db_api.image_create(context, extra_fixture) filters = {'property-is_public': 'avalue'} images = self.client.get_images_detailed(filters=filters) self.assertEqual(1, len(images)) for image in images: self.assertEqual('avalue', image['properties']['is_public']) def test_get_image_details_sort_disk_format_asc(self): """ Tests that a detailed call returns list of public images sorted alphabetically by disk_format in ascending order. 
""" UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', disk_format='ami', container_format='ami') db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='xyz', disk_format='vdi') db_api.image_create(self.context, extra_fixture) images = self.client.get_images_detailed(sort_key='disk_format', sort_dir='asc') self.assertEqualImages(images, (UUID3, UUID4, UUID2), unjsonify=False) def test_get_image(self): """Tests that the detailed info about an image returned""" fixture = self.get_fixture(id=UUID1, name='fake image #1', disk_format='ami', container_format='ami', is_public=False, size=13, properties={'type': 'kernel'}) data = self.client.get_image(UUID1) for k, v in fixture.items(): el = data[k] self.assertEqual(v, data[k], "Failed v != data[k] where v = %(v)s and " "k = %(k)s and data[k] = %(el)s" % {'v': v, 'k': k, 'el': el}) def test_get_image_non_existing(self): """Tests that NotFound is raised when getting a non-existing image""" self.assertRaises(exception.NotFound, self.client.get_image, _gen_uuid()) def test_add_image_basic(self): """Tests that we can add image metadata and returns the new id""" fixture = self.get_fixture(is_public=True) new_image = self.client.add_image(fixture) # Test all other attributes set data = self.client.get_image(new_image['id']) for k, v in fixture.items(): self.assertEqual(v, data[k]) # Test status was updated properly self.assertIn('status', data.keys()) self.assertEqual('active', data['status']) def test_add_image_with_properties(self): """Tests that we can add image metadata with properties""" fixture = self.get_fixture(location="file:///tmp/glance-tests/2", properties={'distro': 'Ubuntu 10.04 LTS'}, is_public=True) new_image = self.client.add_image(fixture) del fixture['location'] for k, v in fixture.items(): self.assertEqual(v, new_image[k]) # Test status was updated properly self.assertIn('status', new_image.keys()) 
self.assertEqual('active', new_image['status']) def test_add_image_with_location_data(self): """Tests that we can add image metadata with properties""" location = "file:///tmp/glance-tests/2" loc_meta = {'key': 'value'} fixture = self.get_fixture(location_data=[{'url': location, 'metadata': loc_meta, 'status': 'active'}], properties={'distro': 'Ubuntu 10.04 LTS'}) new_image = self.client.add_image(fixture) self.assertEqual(location, new_image['location']) self.assertEqual(location, new_image['location_data'][0]['url']) self.assertEqual(loc_meta, new_image['location_data'][0]['metadata']) def test_add_image_with_location_data_with_encryption(self): """Tests that we can add image metadata with properties and enable encryption. """ self.client.metadata_encryption_key = '1234567890123456' location = "file:///tmp/glance-tests/%d" loc_meta = {'key': 'value'} fixture = {'name': 'fake public image', 'is_public': True, 'disk_format': 'vmdk', 'container_format': 'ovf', 'size': 19, 'location_data': [{'url': location % 1, 'metadata': loc_meta, 'status': 'active'}, {'url': location % 2, 'metadata': {}, 'status': 'active'}], 'properties': {'distro': 'Ubuntu 10.04 LTS'}} new_image = self.client.add_image(fixture) self.assertEqual(location % 1, new_image['location']) self.assertEqual(2, len(new_image['location_data'])) self.assertEqual(location % 1, new_image['location_data'][0]['url']) self.assertEqual(loc_meta, new_image['location_data'][0]['metadata']) self.assertEqual(location % 2, new_image['location_data'][1]['url']) self.assertEqual({}, new_image['location_data'][1]['metadata']) self.client.metadata_encryption_key = None def test_add_image_already_exists(self): """Tests proper exception is raised if image with ID already exists""" fixture = self.get_fixture(id=UUID2, location="file:///tmp/glance-tests/2") self.assertRaises(exception.Duplicate, self.client.add_image, fixture) def test_add_image_with_bad_status(self): """Tests proper exception is raised if a bad status is 
set""" fixture = self.get_fixture(status='bad status', location="file:///tmp/glance-tests/2") self.assertRaises(exception.Invalid, self.client.add_image, fixture) def test_update_image(self): """Tests that the /images PUT registry API updates the image""" fixture = {'name': 'fake public image #2', 'disk_format': 'vmdk'} self.assertTrue(self.client.update_image(UUID2, fixture)) # Test all other attributes set data = self.client.get_image(UUID2) for k, v in fixture.items(): self.assertEqual(v, data[k]) def test_update_image_public(self): """Tests that the /images PUT registry API updates the image""" fixture = {'name': 'fake public image #2', 'is_public': True, 'disk_format': 'vmdk'} self.assertTrue(self.client.update_image(UUID2, fixture)) # Test all other attributes set data = self.client.get_image(UUID2) for k, v in fixture.items(): self.assertEqual(v, data[k]) def test_update_image_private(self): """Tests that the /images PUT registry API updates the image""" fixture = {'name': 'fake public image #2', 'is_public': False, 'disk_format': 'vmdk'} self.assertTrue(self.client.update_image(UUID2, fixture)) # Test all other attributes set data = self.client.get_image(UUID2) for k, v in fixture.items(): self.assertEqual(v, data[k]) def test_update_image_not_existing(self): """Tests non existing image update doesn't work""" fixture = self.get_fixture(status='bad status') self.assertRaises(exception.NotFound, self.client.update_image, _gen_uuid(), fixture) def test_delete_image(self): """Tests that image metadata is deleted properly""" # Grab the original number of images orig_num_images = len(self.client.get_images()) # Delete image #2 image = self.FIXTURES[1] deleted_image = self.client.delete_image(image['id']) self.assertTrue(deleted_image) self.assertEqual(image['id'], deleted_image['id']) self.assertTrue(deleted_image['deleted']) self.assertTrue(deleted_image['deleted_at']) # Verify one less image new_num_images = len(self.client.get_images()) 
        self.assertEqual(orig_num_images - 1, new_num_images)

    def test_delete_image_not_existing(self):
        """Check that one cannot delete non-existing image."""
        self.assertRaises(exception.NotFound,
                          self.client.delete_image,
                          _gen_uuid())

    def test_get_image_members(self):
        """Test getting image members."""
        memb_list = self.client.get_image_members(UUID2)
        num_members = len(memb_list)
        self.assertEqual(0, num_members)

    def test_get_image_members_not_existing(self):
        """Test getting non-existent image members."""
        self.assertRaises(exception.NotFound,
                          self.client.get_image_members,
                          _gen_uuid())

    def test_get_member_images(self):
        """Test getting member images."""
        memb_list = self.client.get_member_images('pattieblack')
        num_members = len(memb_list)
        self.assertEqual(0, num_members)

    def test_add_replace_members(self):
        """Test replacing image members."""
        self.assertTrue(self.client.add_member(UUID2, 'pattieblack'))
        self.assertTrue(self.client.replace_members(UUID2,
                                                    dict(member_id='pattie'
                                                                   'black2')))

    def test_add_delete_member(self):
        """Tests deleting image members"""
        self.client.add_member(UUID2, 'pattieblack')
        self.assertTrue(self.client.delete_member(UUID2, 'pattieblack'))


class TestBaseClient(testtools.TestCase):
    """
    Test proper actions made for both valid and invalid requests
    against a Registry service
    """

    def test_connect_kwargs_default_values(self):
        actual = test_client.BaseClient('127.0.0.1').get_connect_kwargs()
        self.assertEqual({'timeout': None}, actual)

    def test_connect_kwargs(self):
        base_client = test_client.BaseClient(
            host='127.0.0.1', port=80, timeout=1, use_ssl=True)
        actual = base_client.get_connect_kwargs()
        expected = {'insecure': False,
                    'key_file': None,
                    'cert_file': None,
                    'timeout': 1}
        for k in expected.keys():
            self.assertEqual(expected[k], actual[k])


class TestRegistryV1ClientApi(base.IsolatedUnitTest):

    def setUp(self):
        """Establish a clean test environment."""
        super(TestRegistryV1ClientApi, self).setUp()
        self.context = context.RequestContext()
        reload_module(rapi)

    def test_get_registry_client(self):
        actual_client = rapi.get_registry_client(self.context)
        self.assertIsNone(actual_client.identity_headers)

    def test_get_registry_client_with_identity_headers(self):
        self.config(send_identity_headers=True)
        expected_identity_headers = {
            'X-User-Id': '',
            'X-Tenant-Id': '',
            'X-Roles': ','.join(self.context.roles),
            'X-Identity-Status': 'Confirmed',
            'X-Service-Catalog': 'null',
        }
        actual_client = rapi.get_registry_client(self.context)
        self.assertEqual(expected_identity_headers,
                         actual_client.identity_headers)

    def test_configure_registry_client_not_using_use_user_token(self):
        self.config(use_user_token=False)
        with patch.object(rapi,
                          'configure_registry_admin_creds') as mock_rapi:
            rapi.configure_registry_client()
        mock_rapi.assert_called_once_with()

    def _get_fake_config_creds(self, auth_url='auth_url',
                               strategy='keystone'):
        return {
            'user': 'user',
            'password': 'password',
            'username': 'user',
            'tenant': 'tenant',
            'auth_url': auth_url,
            'strategy': strategy,
            'region': 'region'
        }

    def test_configure_registry_admin_creds(self):
        expected = self._get_fake_config_creds(
            auth_url=None, strategy='configured_strategy')
        self.config(admin_user=expected['user'])
        self.config(admin_password=expected['password'])
        self.config(admin_tenant_name=expected['tenant'])
        self.config(auth_strategy=expected['strategy'])
        self.config(auth_region=expected['region'])
        self.stubs.Set(os, 'getenv', lambda x: None)

        self.assertIsNone(rapi._CLIENT_CREDS)
        rapi.configure_registry_admin_creds()
        self.assertEqual(expected, rapi._CLIENT_CREDS)

    def test_configure_registry_admin_creds_with_auth_url(self):
        expected = self._get_fake_config_creds()
        self.config(admin_user=expected['user'])
        self.config(admin_password=expected['password'])
        self.config(admin_tenant_name=expected['tenant'])
        self.config(auth_url=expected['auth_url'])
        self.config(auth_strategy='test_strategy')
        self.config(auth_region=expected['region'])

        self.assertIsNone(rapi._CLIENT_CREDS)
        rapi.configure_registry_admin_creds()
        self.assertEqual(expected, rapi._CLIENT_CREDS)


class FakeResponse(object):
    status = http.ACCEPTED

    def getheader(*args, **kwargs):
        return None


class TestRegistryV1ClientRequests(base.IsolatedUnitTest):

    def setUp(self):
        super(TestRegistryV1ClientRequests, self).setUp()

    def test_do_request_with_identity_headers(self):
        identity_headers = {'foo': 'bar'}
        self.client = rclient.RegistryClient(
            "0.0.0.0", identity_headers=identity_headers)

        with patch.object(test_client.BaseClient, 'do_request',
                          return_value=FakeResponse()) as mock_do_request:
            self.client.do_request("GET", "/images")
            mock_do_request.assert_called_once_with(
                "GET", "/images", headers=identity_headers)

    def test_do_request(self):
        self.client = rclient.RegistryClient("0.0.0.0")

        with patch.object(test_client.BaseClient, 'do_request',
                          return_value=FakeResponse()) as mock_do_request:
            self.client.do_request("GET", "/images")
            mock_do_request.assert_called_once_with("GET", "/images",
                                                    headers={})

    def test_registry_invalid_token_exception_handling(self):
        self.image_controller = acontroller()
        request = webob.Request.blank('/images')
        request.method = 'GET'
        request.context = context.RequestContext()

        with patch.object(rapi, 'get_images_detail') as mock_detail:
            mock_detail.side_effect = exception.NotAuthenticated()
            self.assertRaises(webob.exc.HTTPUnauthorized,
                              self.image_controller.detail, request)

glance-16.0.0/glance/tests/unit/v1/test_registry_api.py
# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime
import uuid

import mock
from oslo_serialization import jsonutils
import routes
import six
from six.moves import http_client as http
import webob

import glance.api.common
import glance.common.config
from glance.common import crypt
from glance.common import timeutils
from glance import context
from glance.db.sqlalchemy import api as db_api
from glance.db.sqlalchemy import models as db_models
from glance.registry.api import v1 as rserver
from glance.tests.unit import base
from glance.tests import utils as test_utils

_gen_uuid = lambda: str(uuid.uuid4())

UUID1 = _gen_uuid()
UUID2 = _gen_uuid()


class TestRegistryAPI(base.IsolatedUnitTest, test_utils.RegistryAPIMixIn):

    def setUp(self):
        """Establish a clean test environment"""
        super(TestRegistryAPI, self).setUp()
        self.mapper = routes.Mapper()
        self.api = test_utils.FakeAuthMiddleware(rserver.API(self.mapper),
                                                 is_admin=True)

        def _get_extra_fixture(id, name, **kwargs):
            return self.get_extra_fixture(
                id, name,
                locations=[{'url': "file:///%s/%s" % (self.test_dir, id),
                            'metadata': {}, 'status': 'active'}],
                **kwargs)

        self.FIXTURES = [
            _get_extra_fixture(UUID1, 'fake image #1', is_public=False,
                               disk_format='ami', container_format='ami',
                               min_disk=0, min_ram=0, owner=123, size=13,
                               properties={'type': 'kernel'}),
            _get_extra_fixture(UUID2, 'fake image #2', min_disk=5,
                               min_ram=256, size=19, properties={})]
        self.context = context.RequestContext(is_admin=True)
        db_api.get_engine()
        self.destroy_fixtures()
        self.create_fixtures()

    def tearDown(self):
        """Clear the test environment"""
        super(TestRegistryAPI, self).tearDown()
self.destroy_fixtures() def test_show(self): """ Tests that the /images/ registry API endpoint returns the expected image """ fixture = {'id': UUID2, 'name': 'fake image #2', 'size': 19, 'min_ram': 256, 'min_disk': 5, 'checksum': None} res = self.get_api_response_ext(http.OK, '/images/%s' % UUID2) res_dict = jsonutils.loads(res.body) image = res_dict['image'] for k, v in six.iteritems(fixture): self.assertEqual(v, image[k]) def test_show_unknown(self): """ Tests that the /images/ registry API endpoint returns a 404 for an unknown image id """ self.get_api_response_ext(http.NOT_FOUND, '/images/%s' % _gen_uuid()) def test_show_invalid(self): """ Tests that the /images/ registry API endpoint returns a 404 for an invalid (therefore unknown) image id """ self.get_api_response_ext(http.NOT_FOUND, '/images/%s' % _gen_uuid()) def test_show_deleted_image_as_admin(self): """ Tests that the /images/ registry API endpoint returns a 200 for deleted image to admin user. """ # Delete image #2 self.get_api_response_ext(http.OK, '/images/%s' % UUID2, method='DELETE') self.get_api_response_ext(http.OK, '/images/%s' % UUID2) def test_show_deleted_image_as_nonadmin(self): """ Tests that the /images/ registry API endpoint returns a 404 for deleted image to non-admin user. 
""" # Delete image #2 self.get_api_response_ext(http.OK, '/images/%s' % UUID2, method='DELETE') api = test_utils.FakeAuthMiddleware(rserver.API(self.mapper), is_admin=False) self.get_api_response_ext(http.NOT_FOUND, '/images/%s' % UUID2, api=api) def test_show_private_image_with_no_admin_user(self): UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, size=18, owner='test user', is_public=False) db_api.image_create(self.context, extra_fixture) test_rserv = rserver.API(self.mapper) api = test_utils.FakeAuthMiddleware(test_rserv, is_admin=False) self.get_api_response_ext(http.NOT_FOUND, '/images/%s' % UUID4, api=api) def test_get_root(self): """ Tests that the root registry API returns "index", which is a list of public images """ fixture = {'id': UUID2, 'size': 19, 'checksum': None} res = self.get_api_response_ext(http.OK, url='/') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) for k, v in six.iteritems(fixture): self.assertEqual(v, images[0][k]) def test_get_index(self): """ Tests that the /images registry API returns list of public images """ fixture = {'id': UUID2, 'size': 19, 'checksum': None} res = self.get_api_response_ext(http.OK) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) for k, v in six.iteritems(fixture): self.assertEqual(v, images[0][k]) def test_get_index_marker(self): """ Tests that the /images registry API returns list of public images that conforms to a marker query param """ time1 = timeutils.utcnow() + datetime.timedelta(seconds=5) time2 = timeutils.utcnow() + datetime.timedelta(seconds=4) time3 = timeutils.utcnow() UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, size=19, created_at=time1) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, created_at=time2) db_api.image_create(self.context, extra_fixture) UUID5 = _gen_uuid() extra_fixture = 
self.get_fixture(id=UUID5, created_at=time3) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images?marker=%s' % UUID4) self.assertEqualImages(res, (UUID5, UUID2)) def test_get_index_unknown_marker(self): """ Tests that the /images registry API returns a 400 when an unknown marker is provided """ self.get_api_response_ext(http.BAD_REQUEST, url='/images?marker=%s' % _gen_uuid()) def test_get_index_malformed_marker(self): """ Tests that the /images registry API returns a 400 when a malformed marker is provided """ res = self.get_api_response_ext(http.BAD_REQUEST, url='/images?marker=4') self.assertIn(b'marker', res.body) def test_get_index_forbidden_marker(self): """ Tests that the /images registry API returns a 400 when a forbidden marker is provided """ test_rserv = rserver.API(self.mapper) api = test_utils.FakeAuthMiddleware(test_rserv, is_admin=False) self.get_api_response_ext(http.BAD_REQUEST, url='/images?marker=%s' % UUID1, api=api) def test_get_index_limit(self): """ Tests that the /images registry API returns list of public images that conforms to a limit query param """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, size=19) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images?limit=1') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) # expect list to be sorted by created_at desc self.assertEqual(UUID4, images[0]['id']) def test_get_index_limit_negative(self): """ Tests that the /images registry API returns list of public images that conforms to a limit query param """ self.get_api_response_ext(http.BAD_REQUEST, url='/images?limit=-1') def test_get_index_limit_non_int(self): """ Tests that the /images registry API returns list of public images that conforms to a limit query param """ 
self.get_api_response_ext(http.BAD_REQUEST, url='/images?limit=a') def test_get_index_limit_marker(self): """ Tests that the /images registry API returns list of public images that conforms to limit and marker query params """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, size=19) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid()) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext( http.OK, url='/images?marker=%s&limit=1' % UUID3) self.assertEqualImages(res, (UUID2,)) def test_get_index_filter_on_user_defined_properties(self): """ Tests that /images registry API returns list of public images based a filter on user-defined properties. """ image1_id = _gen_uuid() properties = {'distro': 'ubuntu', 'arch': 'i386'} extra_fixture = self.get_fixture(id=image1_id, name='image-extra-1', properties=properties) db_api.image_create(self.context, extra_fixture) image2_id = _gen_uuid() properties = {'distro': 'ubuntu', 'arch': 'x86_64', 'foo': 'bar'} extra_fixture = self.get_fixture(id=image2_id, name='image-extra-2', properties=properties) db_api.image_create(self.context, extra_fixture) # Test index with filter containing one user-defined property. # Filter is 'property-distro=ubuntu'. # Verify both image1 and image2 are returned res = self.get_api_response_ext(http.OK, url='/images?' 'property-distro=ubuntu') images = jsonutils.loads(res.body)['images'] self.assertEqual(2, len(images)) self.assertEqual(image2_id, images[0]['id']) self.assertEqual(image1_id, images[1]['id']) # Test index with filter containing one user-defined property but # non-existent value. Filter is 'property-distro=fedora'. # Verify neither images are returned res = self.get_api_response_ext(http.OK, url='/images?' 'property-distro=fedora') images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing one user-defined property but # unique value. 
Filter is 'property-arch=i386'. # Verify only image1 is returned. res = self.get_api_response_ext(http.OK, url='/images?' 'property-arch=i386') images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image1_id, images[0]['id']) # Test index with filter containing one user-defined property but # unique value. Filter is 'property-arch=x86_64'. # Verify only image1 is returned. res = self.get_api_response_ext(http.OK, url='/images?' 'property-arch=x86_64') images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image2_id, images[0]['id']) # Test index with filter containing unique user-defined property. # Filter is 'property-foo=bar'. # Verify only image2 is returned. res = self.get_api_response_ext(http.OK, url='/images?property-foo=bar') images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image2_id, images[0]['id']) # Test index with filter containing unique user-defined property but # .value is non-existent. Filter is 'property-foo=baz'. # Verify neither images are returned. res = self.get_api_response_ext(http.OK, url='/images?property-foo=baz') images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing multiple user-defined properties # Filter is 'property-arch=x86_64&property-distro=ubuntu'. # Verify only image2 is returned. res = self.get_api_response_ext(http.OK, url='/images?' 'property-arch=x86_64&' 'property-distro=ubuntu') images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image2_id, images[0]['id']) # Test index with filter containing multiple user-defined properties # Filter is 'property-arch=i386&property-distro=ubuntu'. # Verify only image1 is returned. 
res = self.get_api_response_ext(http.OK, url='/images?property-arch=i386&' 'property-distro=ubuntu') images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image1_id, images[0]['id']) # Test index with filter containing multiple user-defined properties. # Filter is 'property-arch=random&property-distro=ubuntu'. # Verify neither images are returned. res = self.get_api_response_ext(http.OK, url='/images?' 'property-arch=random&' 'property-distro=ubuntu') images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing multiple user-defined properties. # Filter is 'property-arch=random&property-distro=random'. # Verify neither images are returned. res = self.get_api_response_ext(http.OK, url='/images?' 'property-arch=random&' 'property-distro=random') images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing multiple user-defined properties. # Filter is 'property-boo=far&property-poo=far'. # Verify neither images are returned. res = self.get_api_response_ext(http.OK, url='/images?property-boo=far&' 'property-poo=far') images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing multiple user-defined properties. # Filter is 'property-foo=bar&property-poo=far'. # Verify neither images are returned. res = self.get_api_response_ext(http.OK, url='/images?property-foo=bar&' 'property-poo=far') images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) def test_get_index_filter_name(self): """ Tests that the /images registry API returns list of public images that have a specific name. This is really a sanity check, filtering is tested more in-depth using /images/detail """ extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! 
#123', size=19) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #123') db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images?name=new name! #123') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(2, len(images)) for image in images: self.assertEqual('new name! #123', image['name']) def test_get_index_sort_default_created_at_desc(self): """ Tests that the /images registry API returns list of public images that conforms to a default sort key/dir """ time1 = timeutils.utcnow() + datetime.timedelta(seconds=5) time2 = timeutils.utcnow() + datetime.timedelta(seconds=4) time3 = timeutils.utcnow() UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, size=19, created_at=time1) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, created_at=time2) db_api.image_create(self.context, extra_fixture) UUID5 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID5, created_at=time3) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images') self.assertEqualImages(res, (UUID3, UUID4, UUID5, UUID2)) def test_get_index_bad_sort_key(self): """Ensure a 400 is returned when a bad sort_key is provided.""" self.get_api_response_ext(http.BAD_REQUEST, url='/images?sort_key=asdf') def test_get_index_bad_sort_dir(self): """Ensure a 400 is returned when a bad sort_dir is provided.""" self.get_api_response_ext(http.BAD_REQUEST, url='/images?sort_dir=asdf') def test_get_index_null_name(self): """Check 200 is returned when sort_key is null name Check 200 is returned when sort_key is name and name is null for specified marker """ UUID6 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID6, name=None) db_api.image_create(self.context, extra_fixture) self.get_api_response_ext( http.OK, url='/images?sort_key=name&marker=%s' % UUID6) def 
test_get_index_null_disk_format(self): """Check 200 is returned when sort_key is null disk_format Check 200 is returned when sort_key is disk_format and disk_format is null for specified marker """ UUID6 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID6, disk_format=None, size=19) db_api.image_create(self.context, extra_fixture) self.get_api_response_ext( http.OK, url='/images?sort_key=disk_format&marker=%s' % UUID6) def test_get_index_null_container_format(self): """Check 200 is returned when sort_key is null container_format Check 200 is returned when sort_key is container_format and container_format is null for specified marker """ UUID6 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID6, container_format=None) db_api.image_create(self.context, extra_fixture) self.get_api_response_ext( http.OK, url='/images?sort_key=container_format&marker=%s' % UUID6) def test_get_index_sort_name_asc(self): """ Tests that the /images registry API returns list of public images sorted alphabetically by name in ascending order. """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', size=19) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='xyz') db_api.image_create(self.context, extra_fixture) url = '/images?sort_key=name&sort_dir=asc' res = self.get_api_response_ext(http.OK, url=url) self.assertEqualImages(res, (UUID3, UUID2, UUID4)) def test_get_index_sort_status_desc(self): """ Tests that the /images registry API returns list of public images sorted alphabetically by status in descending order. 
""" UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, status='queued', size=19) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url=( '/images?sort_key=status&sort_dir=desc')) self.assertEqualImages(res, (UUID3, UUID4, UUID2)) def test_get_index_sort_disk_format_asc(self): """ Tests that the /images registry API returns list of public images sorted alphabetically by disk_format in ascending order. """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, disk_format='ami', container_format='ami', size=19) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, disk_format='vdi') db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url=( '/images?sort_key=disk_format&sort_dir=asc')) self.assertEqualImages(res, (UUID3, UUID4, UUID2)) def test_get_index_sort_container_format_desc(self): """ Tests that the /images registry API returns list of public images sorted alphabetically by container_format in descending order. """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, size=19, disk_format='ami', container_format='ami') db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, disk_format='iso', container_format='bare') db_api.image_create(self.context, extra_fixture) url = '/images?sort_key=container_format&sort_dir=desc' res = self.get_api_response_ext(http.OK, url=url) self.assertEqualImages(res, (UUID2, UUID4, UUID3)) def test_get_index_sort_size_asc(self): """ Tests that the /images registry API returns list of public images sorted by size in ascending order. 
""" UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, disk_format='ami', container_format='ami', size=100) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, disk_format='iso', container_format='bare', size=2) db_api.image_create(self.context, extra_fixture) url = '/images?sort_key=size&sort_dir=asc' res = self.get_api_response_ext(http.OK, url=url) self.assertEqualImages(res, (UUID4, UUID2, UUID3)) def test_get_index_sort_created_at_asc(self): """ Tests that the /images registry API returns list of public images sorted by created_at in ascending order. """ now = timeutils.utcnow() time1 = now + datetime.timedelta(seconds=5) time2 = now UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, created_at=time1, size=19) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, created_at=time2) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url=( '/images?sort_key=created_at&sort_dir=asc')) self.assertEqualImages(res, (UUID2, UUID4, UUID3)) def test_get_index_sort_updated_at_desc(self): """ Tests that the /images registry API returns list of public images sorted by updated_at in descending order. 
""" now = timeutils.utcnow() time1 = now + datetime.timedelta(seconds=5) time2 = now UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, size=19, created_at=None, updated_at=time1) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, created_at=None, updated_at=time2) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url=( '/images?sort_key=updated_at&sort_dir=desc')) self.assertEqualImages(res, (UUID3, UUID4, UUID2)) def test_get_details(self): """ Tests that the /images/detail registry API returns a mapping containing a list of detailed image information """ fixture = {'id': UUID2, 'name': 'fake image #2', 'is_public': True, 'size': 19, 'min_disk': 5, 'min_ram': 256, 'checksum': None, 'disk_format': 'vhd', 'container_format': 'ovf', 'status': 'active'} res = self.get_api_response_ext(http.OK, url='/images/detail') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) for k, v in six.iteritems(fixture): self.assertEqual(v, images[0][k]) def test_get_details_limit_marker(self): """ Tests that the /images/details registry API returns list of public images that conforms to limit and marker query params. 
This functionality is tested more thoroughly on /images, this is just a sanity check """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, size=20) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid()) db_api.image_create(self.context, extra_fixture) url = '/images/detail?marker=%s&limit=1' % UUID3 res = self.get_api_response_ext(http.OK, url=url) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) # expect list to be sorted by created_at desc self.assertEqual(UUID2, images[0]['id']) def test_get_details_invalid_marker(self): """ Tests that the /images/detail registry API returns a 400 when an invalid marker is provided """ url = '/images/detail?marker=%s' % _gen_uuid() self.get_api_response_ext(http.BAD_REQUEST, url=url) def test_get_details_malformed_marker(self): """ Tests that the /images/detail registry API returns a 400 when a malformed marker is provided """ res = self.get_api_response_ext(http.BAD_REQUEST, url='/images/detail?marker=4') self.assertIn(b'marker', res.body) def test_get_details_forbidden_marker(self): """ Tests that the /images/detail registry API returns a 400 when a forbidden marker is provided """ test_rserv = rserver.API(self.mapper) api = test_utils.FakeAuthMiddleware(test_rserv, is_admin=False) self.get_api_response_ext(http.BAD_REQUEST, api=api, url='/images/detail?marker=%s' % UUID1) def test_get_details_filter_name(self): """ Tests that the /images/detail registry API returns list of public images that have a specific name """ extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #123', size=20) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), name='new name! #123') db_api.image_create(self.context, extra_fixture) url = '/images/detail?name=new name! 
#123' res = self.get_api_response_ext(http.OK, url=url) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(2, len(images)) for image in images: self.assertEqual('new name! #123', image['name']) def test_get_details_filter_status(self): """ Tests that the /images/detail registry API returns list of public images that have a specific status """ extra_fixture = self.get_fixture(id=_gen_uuid(), status='saving') db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), size=19, status='active') db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images/detail?status=saving') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) for image in images: self.assertEqual('saving', image['status']) def test_get_details_filter_container_format(self): """ Tests that the /images/detail registry API returns list of public images that have a specific container_format """ extra_fixture = self.get_fixture(id=_gen_uuid(), disk_format='vdi', size=19) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), disk_format='ami', container_format='ami', size=19) db_api.image_create(self.context, extra_fixture) url = '/images/detail?container_format=ovf' res = self.get_api_response_ext(http.OK, url=url) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(2, len(images)) for image in images: self.assertEqual('ovf', image['container_format']) def test_get_details_filter_min_disk(self): """ Tests that the /images/detail registry API returns list of public images that have a specific min_disk """ extra_fixture = self.get_fixture(id=_gen_uuid(), min_disk=7, size=19) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), disk_format='ami', container_format='ami', size=19) db_api.image_create(self.context, extra_fixture) res = 
self.get_api_response_ext(http.OK, url='/images/detail?min_disk=7') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) for image in images: self.assertEqual(7, image['min_disk']) def test_get_details_filter_min_ram(self): """ Tests that the /images/detail registry API returns list of public images that have a specific min_ram """ extra_fixture = self.get_fixture(id=_gen_uuid(), min_ram=514, size=19) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), disk_format='ami', container_format='ami', size=19) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images/detail?min_ram=514') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) for image in images: self.assertEqual(514, image['min_ram']) def test_get_details_filter_disk_format(self): """ Tests that the /images/detail registry API returns list of public images that have a specific disk_format """ extra_fixture = self.get_fixture(id=_gen_uuid(), size=19) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), disk_format='ami', container_format='ami', size=19) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images/detail?disk_format=vhd') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(2, len(images)) for image in images: self.assertEqual('vhd', image['disk_format']) def test_get_details_filter_size_min(self): """ Tests that the /images/detail registry API returns list of public images that have a size greater than or equal to size_min """ extra_fixture = self.get_fixture(id=_gen_uuid(), size=18) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), disk_format='ami', container_format='ami') db_api.image_create(self.context, extra_fixture) res = 
self.get_api_response_ext(http.OK, url='/images/detail?size_min=19') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(2, len(images)) for image in images: self.assertGreaterEqual(image['size'], 19) def test_get_details_filter_size_max(self): """ Tests that the /images/detail registry API returns list of public images that have a size less than or equal to size_max """ extra_fixture = self.get_fixture(id=_gen_uuid(), size=18) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), disk_format='ami', container_format='ami') db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images/detail?size_max=19') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(2, len(images)) for image in images: self.assertLessEqual(image['size'], 19) def test_get_details_filter_size_min_max(self): """ Tests that the /images/detail registry API returns list of public images that have a size less than or equal to size_max and greater than or equal to size_min """ extra_fixture = self.get_fixture(id=_gen_uuid(), size=18) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), disk_format='ami', container_format='ami') db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), size=6) db_api.image_create(self.context, extra_fixture) url = '/images/detail?size_min=18&size_max=19' res = self.get_api_response_ext(http.OK, url=url) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(2, len(images)) for image in images: self.assertTrue(18 <= image['size'] <= 19) def test_get_details_filter_changes_since(self): """ Tests that the /images/detail registry API returns list of images that changed since the time defined by changes-since """ dt1 = timeutils.utcnow() - datetime.timedelta(1) iso1 = timeutils.isotime(dt1) date_only1 = 
dt1.strftime('%Y-%m-%d') date_only2 = dt1.strftime('%Y%m%d') date_only3 = dt1.strftime('%Y-%m%d') dt2 = timeutils.utcnow() + datetime.timedelta(1) iso2 = timeutils.isotime(dt2) image_ts = timeutils.utcnow() + datetime.timedelta(2) hour_before = image_ts.strftime('%Y-%m-%dT%H:%M:%S%%2B01:00') hour_after = image_ts.strftime('%Y-%m-%dT%H:%M:%S-01:00') dt4 = timeutils.utcnow() + datetime.timedelta(3) iso4 = timeutils.isotime(dt4) UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, size=18) db_api.image_create(self.context, extra_fixture) db_api.image_destroy(self.context, UUID3) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, disk_format='ami', container_format='ami', created_at=image_ts, updated_at=image_ts) db_api.image_create(self.context, extra_fixture) # Check a standard list, 4 images in db (2 deleted) res = self.get_api_response_ext(http.OK, url='/images/detail') self.assertEqualImages(res, (UUID4, UUID2)) # Expect 3 images (1 deleted) res = self.get_api_response_ext(http.OK, url=( '/images/detail?changes-since=%s' % iso1)) self.assertEqualImages(res, (UUID4, UUID3, UUID2)) # Expect 1 images (0 deleted) res = self.get_api_response_ext(http.OK, url=( '/images/detail?changes-since=%s' % iso2)) self.assertEqualImages(res, (UUID4,)) # Expect 1 images (0 deleted) res = self.get_api_response_ext(http.OK, url=( '/images/detail?changes-since=%s' % hour_before)) self.assertEqualImages(res, (UUID4,)) # Expect 0 images (0 deleted) res = self.get_api_response_ext(http.OK, url=( '/images/detail?changes-since=%s' % hour_after)) self.assertEqualImages(res, ()) # Expect 0 images (0 deleted) res = self.get_api_response_ext(http.OK, url=( '/images/detail?changes-since=%s' % iso4)) self.assertEqualImages(res, ()) for param in [date_only1, date_only2, date_only3]: # Expect 3 images (1 deleted) res = self.get_api_response_ext(http.OK, url=( '/images/detail?changes-since=%s' % param)) self.assertEqualImages(res, (UUID4, UUID3, UUID2)) # Bad request (empty 
changes-since param) self.get_api_response_ext(http.BAD_REQUEST, url='/images/detail?changes-since=') def test_get_details_filter_property(self): """ Tests that the /images/detail registry API returns list of public images that have a specific custom property """ extra_fixture = self.get_fixture(id=_gen_uuid(), size=19, properties={'prop_123': 'v a'}) db_api.image_create(self.context, extra_fixture) extra_fixture = self.get_fixture(id=_gen_uuid(), size=19, disk_format='ami', container_format='ami', properties={'prop_123': 'v b'}) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url=( '/images/detail?property-prop_123=v%20a')) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) for image in images: self.assertEqual('v a', image['properties']['prop_123']) def test_get_details_filter_public_none(self): """ Tests that the /images/detail registry API returns list of all images if is_public none is passed """ extra_fixture = self.get_fixture(id=_gen_uuid(), is_public=False, size=18) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images/detail?is_public=None') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(3, len(images)) def test_get_details_filter_public_false(self): """ Tests that the /images/detail registry API returns list of private images if is_public false is passed """ extra_fixture = self.get_fixture(id=_gen_uuid(), is_public=False, size=18) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images/detail?is_public=False') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(2, len(images)) for image in images: self.assertEqual(False, image['is_public']) def test_get_details_filter_public_true(self): """ Tests that the /images/detail registry API returns list of public images if is_public true is passed (same as default) 
""" extra_fixture = self.get_fixture(id=_gen_uuid(), is_public=False, size=18) db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images/detail?is_public=True') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) for image in images: self.assertTrue(image['is_public']) def test_get_details_filter_public_string_format(self): """ Tests that the /images/detail registry API returns 400 Bad error for filter is_public with wrong format """ extra_fixture = self.get_fixture(id=_gen_uuid(), is_public='true', size=18) db_api.image_create(self.context, extra_fixture) self.get_api_response_ext(http.BAD_REQUEST, url='/images/detail?is_public=public') def test_get_details_filter_deleted_false(self): """ Test that the /images/detail registry API return list of images with deleted filter = false """ extra_fixture = {'id': _gen_uuid(), 'status': 'active', 'disk_format': 'vhd', 'container_format': 'ovf', 'name': 'test deleted filter 1', 'size': 18, 'deleted': False, 'checksum': None} db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url='/images/detail?deleted=False') res_dict = jsonutils.loads(res.body) images = res_dict['images'] for image in images: self.assertFalse(image['deleted']) def test_get_filter_no_public_with_no_admin(self): """ Tests that the /images/detail registry API returns list of public images if is_public true is passed (same as default) """ UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, is_public=False, size=18) db_api.image_create(self.context, extra_fixture) test_rserv = rserver.API(self.mapper) api = test_utils.FakeAuthMiddleware(test_rserv, is_admin=False) res = self.get_api_response_ext(http.OK, api=api, url='/images/detail?is_public=False') res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) # Check that for non admin user only is_public = True images returns for 
image in images: self.assertTrue(image['is_public']) def test_get_filter_protected_with_None_value(self): """ Tests that the /images/detail registry API returns 400 error """ extra_fixture = self.get_fixture(id=_gen_uuid(), size=18, protected="False") db_api.image_create(self.context, extra_fixture) self.get_api_response_ext(http.BAD_REQUEST, url='/images/detail?protected=') def test_get_filter_protected_with_True_value(self): """ Tests that the /images/detail registry API returns 400 error """ extra_fixture = self.get_fixture(id=_gen_uuid(), size=18, protected="True") db_api.image_create(self.context, extra_fixture) self.get_api_response_ext(http.OK, url='/images/detail?protected=True') def test_get_details_sort_name_asc(self): """ Tests that the /images/details registry API returns list of public images sorted alphabetically by name in ascending order. """ UUID3 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID3, name='asdf', size=19) db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID4, name='xyz') db_api.image_create(self.context, extra_fixture) res = self.get_api_response_ext(http.OK, url=( '/images/detail?sort_key=name&sort_dir=asc')) self.assertEqualImages(res, (UUID3, UUID2, UUID4)) def test_create_image(self): """Tests that the /images POST registry API creates the image""" fixture = self.get_minimal_fixture(is_public=True) body = jsonutils.dump_as_bytes(dict(image=fixture)) res = self.get_api_response_ext(http.OK, body=body, method='POST', content_type='json') res_dict = jsonutils.loads(res.body) for k, v in six.iteritems(fixture): self.assertEqual(v, res_dict['image'][k]) # Test status was updated properly self.assertEqual('active', res_dict['image']['status']) def test_create_image_with_min_disk(self): """Tests that the /images POST registry API creates the image""" fixture = self.get_minimal_fixture(min_disk=5) body = jsonutils.dump_as_bytes(dict(image=fixture)) res = 
self.get_api_response_ext(http.OK, body=body, method='POST', content_type='json') res_dict = jsonutils.loads(res.body) self.assertEqual(5, res_dict['image']['min_disk']) def test_create_image_with_min_ram(self): """Tests that the /images POST registry API creates the image""" fixture = self.get_minimal_fixture(min_ram=256) body = jsonutils.dump_as_bytes(dict(image=fixture)) res = self.get_api_response_ext(http.OK, body=body, method='POST', content_type='json') res_dict = jsonutils.loads(res.body) self.assertEqual(256, res_dict['image']['min_ram']) def test_create_image_with_min_ram_default(self): """Tests that the /images POST registry API creates the image""" fixture = self.get_minimal_fixture() body = jsonutils.dump_as_bytes(dict(image=fixture)) res = self.get_api_response_ext(http.OK, body=body, method='POST', content_type='json') res_dict = jsonutils.loads(res.body) self.assertEqual(0, res_dict['image']['min_ram']) def test_create_image_with_min_disk_default(self): """Tests that the /images POST registry API creates the image""" fixture = self.get_minimal_fixture() body = jsonutils.dump_as_bytes(dict(image=fixture)) res = self.get_api_response_ext(http.OK, body=body, method='POST', content_type='json') res_dict = jsonutils.loads(res.body) self.assertEqual(0, res_dict['image']['min_disk']) def test_create_image_with_bad_status(self): """Tests proper exception is raised if a bad status is set""" fixture = self.get_minimal_fixture(id=_gen_uuid(), status='bad status') body = jsonutils.dump_as_bytes(dict(image=fixture)) res = self.get_api_response_ext(http.BAD_REQUEST, body=body, method='POST', content_type='json') self.assertIn(b'Invalid image status', res.body) def test_create_image_with_bad_id(self): """Tests proper exception is raised if a bad disk_format is set""" fixture = self.get_minimal_fixture(id='asdf') body = jsonutils.dump_as_bytes(dict(image=fixture)) self.get_api_response_ext(http.BAD_REQUEST, content_type='json', method='POST', body=body) def 
test_create_image_with_image_id_in_log(self): """Tests correct image id in log message when creating image""" fixture = self.get_minimal_fixture( id='0564c64c-3545-4e34-abfb-9d18e5f2f2f9') self.log_image_id = False def fake_log_info(msg, image_data): if ('0564c64c-3545-4e34-abfb-9d18e5f2f2f9' == image_data['id'] and 'Successfully created image' in msg): self.log_image_id = True self.stubs.Set(rserver.images.LOG, 'info', fake_log_info) body = jsonutils.dump_as_bytes(dict(image=fixture)) self.get_api_response_ext(http.OK, content_type='json', method='POST', body=body) self.assertTrue(self.log_image_id) def test_update_image(self): """Tests that the /images PUT registry API updates the image""" fixture = {'name': 'fake public image #2', 'min_disk': 5, 'min_ram': 256, 'disk_format': 'raw'} body = jsonutils.dump_as_bytes(dict(image=fixture)) res = self.get_api_response_ext(http.OK, url='/images/%s' % UUID2, body=body, method='PUT', content_type='json') res_dict = jsonutils.loads(res.body) self.assertNotEqual(res_dict['image']['created_at'], res_dict['image']['updated_at']) for k, v in six.iteritems(fixture): self.assertEqual(v, res_dict['image'][k]) @mock.patch.object(rserver.images.LOG, 'debug') def test_update_image_not_log_sensitive_info(self, log_debug): """ Tests that there is no any sensitive info of image location was logged in glance during the image update operation. 
""" def fake_log_debug(fmt_str, image_meta): self.assertNotIn("'locations'", fmt_str % image_meta) fixture = {'name': 'fake public image #2', 'min_disk': 5, 'min_ram': 256, 'disk_format': 'raw', 'location': 'fake://image'} body = jsonutils.dump_as_bytes(dict(image=fixture)) log_debug.side_effect = fake_log_debug res = self.get_api_response_ext(http.OK, url='/images/%s' % UUID2, body=body, method='PUT', content_type='json') res_dict = jsonutils.loads(res.body) self.assertNotEqual(res_dict['image']['created_at'], res_dict['image']['updated_at']) for k, v in six.iteritems(fixture): self.assertEqual(v, res_dict['image'][k]) def test_update_image_not_existing(self): """ Tests proper exception is raised if attempt to update non-existing image """ fixture = {'status': 'killed'} body = jsonutils.dump_as_bytes(dict(image=fixture)) self.get_api_response_ext(http.NOT_FOUND, url='/images/%s' % _gen_uuid(), method='PUT', body=body, content_type='json') def test_update_image_with_bad_status(self): """Tests that exception raised trying to set a bad status""" fixture = {'status': 'invalid'} body = jsonutils.dump_as_bytes(dict(image=fixture)) res = self.get_api_response_ext(http.BAD_REQUEST, method='PUT', body=body, url='/images/%s' % UUID2, content_type='json') self.assertIn(b'Invalid image status', res.body) def test_update_private_image_no_admin(self): """ Tests proper exception is raised if attempt to update private image with non admin user, that not belongs to it """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, is_public=False, protected=True, owner='test user') db_api.image_create(self.context, extra_fixture) test_rserv = rserver.API(self.mapper) api = test_utils.FakeAuthMiddleware(test_rserv, is_admin=False) body = jsonutils.dump_as_bytes(dict(image=extra_fixture)) self.get_api_response_ext(http.NOT_FOUND, body=body, api=api, url='/images/%s' % UUID8, method='PUT', content_type='json') def test_delete_image(self): """Tests that the /images DELETE 
registry API deletes the image""" # Grab the original number of images res = self.get_api_response_ext(http.OK) res_dict = jsonutils.loads(res.body) orig_num_images = len(res_dict['images']) # Delete image #2 self.get_api_response_ext(http.OK, url='/images/%s' % UUID2, method='DELETE') # Verify one less image res = self.get_api_response_ext(http.OK) res_dict = jsonutils.loads(res.body) new_num_images = len(res_dict['images']) self.assertEqual(orig_num_images - 1, new_num_images) def test_delete_image_response(self): """Tests that the registry API delete returns the image metadata""" image = self.FIXTURES[0] res = self.get_api_response_ext(http.OK, url='/images/%s' % image['id'], method='DELETE') deleted_image = jsonutils.loads(res.body)['image'] self.assertEqual(image['id'], deleted_image['id']) self.assertTrue(deleted_image['deleted']) self.assertTrue(deleted_image['deleted_at']) def test_delete_image_not_existing(self): """ Tests proper exception is raised if attempt to delete non-existing image """ self.get_api_response_ext(http.NOT_FOUND, url='/images/%s' % _gen_uuid(), method='DELETE') def test_delete_public_image_no_admin(self): """ Tests proper exception is raised if attempt to delete public image with non admin user """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, protected=True, owner='test user') db_api.image_create(self.context, extra_fixture) test_rserv = rserver.API(self.mapper) api = test_utils.FakeAuthMiddleware(test_rserv, is_admin=False) self.get_api_response_ext(http.FORBIDDEN, url='/images/%s' % UUID8, method='DELETE', api=api) def test_delete_private_image_no_admin(self): """ Tests proper exception is raised if attempt to delete private image with non admin user, that not belongs to it """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, is_public=False, size=19, protected=True, owner='test user') db_api.image_create(self.context, extra_fixture) test_rserv = rserver.API(self.mapper) api = 
test_utils.FakeAuthMiddleware(test_rserv, is_admin=False)
        self.get_api_response_ext(http.NOT_FOUND,
                                  url='/images/%s' % UUID8,
                                  method='DELETE', api=api)

    def test_get_image_members(self):
        """
        Tests members listing for existing images
        """
        res = self.get_api_response_ext(http.OK,
                                        url='/images/%s/members' % UUID2,
                                        method='GET')
        memb_list = jsonutils.loads(res.body)
        num_members = len(memb_list['members'])
        self.assertEqual(0, num_members)

    def test_get_image_members_not_existing(self):
        """
        Tests proper exception is raised if attempt to get members of
        non-existing image
        """
        self.get_api_response_ext(http.NOT_FOUND, method='GET',
                                  url='/images/%s/members' % _gen_uuid())

    def test_get_image_members_forbidden(self):
        """
        Tests proper exception is raised if attempt to get members of
        a private image by a non-admin user that does not own it
        """
        UUID8 = _gen_uuid()
        extra_fixture = self.get_fixture(id=UUID8, is_public=False, size=19,
                                         protected=True, owner='test user')
        db_api.image_create(self.context, extra_fixture)
        test_rserv = rserver.API(self.mapper)
        api = test_utils.FakeAuthMiddleware(test_rserv, is_admin=False)
        self.get_api_response_ext(http.NOT_FOUND,
                                  url='/images/%s/members' % UUID8,
                                  method='GET', api=api)

    def test_get_member_images(self):
        """
        Tests image listing for members
        """
        res = self.get_api_response_ext(http.OK,
                                        url='/shared-images/pattieblack',
                                        method='GET')
        memb_list = jsonutils.loads(res.body)
        num_members = len(memb_list['shared_images'])
        self.assertEqual(0, num_members)

    def test_replace_members(self):
        """
        Tests replacing image members raises right exception
        """
        self.api = test_utils.FakeAuthMiddleware(rserver.API(self.mapper),
                                                 is_admin=False)
        fixture = dict(member_id='pattieblack')
        body = jsonutils.dump_as_bytes(dict(image_memberships=fixture))
        self.get_api_response_ext(http.UNAUTHORIZED, method='PUT', body=body,
                                  url='/images/%s/members' % UUID2,
                                  content_type='json')

    def test_update_all_image_members_non_existing_image_id(self):
        """
        Test update image members raises right exception
        """
        # Update all image
members fixture = dict(member_id='test1') req = webob.Request.blank('/images/%s/members' % _gen_uuid()) req.method = 'PUT' self.context.tenant = 'test2' req.content_type = 'application/json' req.body = jsonutils.dump_as_bytes(dict(image_memberships=fixture)) res = req.get_response(self.api) self.assertEqual(http.NOT_FOUND, res.status_int) def test_update_all_image_members_invalid_membership_association(self): """ Test update image members raises right exception """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, protected=False, owner='test user') db_api.image_create(self.context, extra_fixture) # Add several members to image req = webob.Request.blank('/images/%s/members/test1' % UUID8) req.method = 'PUT' res = req.get_response(self.api) # Get all image members: res = self.get_api_response_ext(http.OK, url='/images/%s/members' % UUID8, method='GET') memb_list = jsonutils.loads(res.body) num_members = len(memb_list['members']) self.assertEqual(1, num_members) fixture = dict(member_id='test1') body = jsonutils.dump_as_bytes(dict(image_memberships=fixture)) self.get_api_response_ext(http.BAD_REQUEST, url='/images/%s/members' % UUID8, method='PUT', body=body, content_type='json') def test_update_all_image_members_non_shared_image_forbidden(self): """ Test update image members raises right exception """ test_rserv = rserver.API(self.mapper) api = test_utils.FakeAuthMiddleware(test_rserv, is_admin=False) UUID9 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID9, size=19, protected=False) db_api.image_create(self.context, extra_fixture) fixture = dict(member_id='test1') req = webob.Request.blank('/images/%s/members' % UUID9) req.headers['X-Auth-Token'] = 'test1:test1:' req.method = 'PUT' req.content_type = 'application/json' req.body = jsonutils.dump_as_bytes(dict(image_memberships=fixture)) res = req.get_response(api) self.assertEqual(http.FORBIDDEN, res.status_int) def test_update_all_image_members(self): """ Test update non existing image 
members """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, protected=False, owner='test user') db_api.image_create(self.context, extra_fixture) # Add several members to image req = webob.Request.blank('/images/%s/members/test1' % UUID8) req.method = 'PUT' req.get_response(self.api) fixture = [dict(member_id='test2', can_share=True)] body = jsonutils.dump_as_bytes(dict(memberships=fixture)) self.get_api_response_ext(http.NO_CONTENT, url='/images/%s/members' % UUID8, method='PUT', body=body, content_type='json') def test_update_all_image_members_bad_request(self): """ Test that right exception is raises in case if wrong memberships association is supplied """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, protected=False, owner='test user') db_api.image_create(self.context, extra_fixture) # Add several members to image req = webob.Request.blank('/images/%s/members/test1' % UUID8) req.method = 'PUT' req.get_response(self.api) fixture = dict(member_id='test3') body = jsonutils.dump_as_bytes(dict(memberships=fixture)) self.get_api_response_ext(http.BAD_REQUEST, url='/images/%s/members' % UUID8, method='PUT', body=body, content_type='json') def test_update_all_image_existing_members(self): """ Test update existing image members """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, protected=False, owner='test user') db_api.image_create(self.context, extra_fixture) # Add several members to image req = webob.Request.blank('/images/%s/members/test1' % UUID8) req.method = 'PUT' req.get_response(self.api) fixture = [dict(member_id='test1', can_share=False)] body = jsonutils.dump_as_bytes(dict(memberships=fixture)) self.get_api_response_ext(http.NO_CONTENT, url='/images/%s/members' % UUID8, method='PUT', body=body, content_type='json') def test_update_all_image_existing_deleted_members(self): """ Test update existing image members """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, 
protected=False, owner='test user') db_api.image_create(self.context, extra_fixture) # Add a new member to an image req = webob.Request.blank('/images/%s/members/test1' % UUID8) req.method = 'PUT' req.get_response(self.api) # Delete the existing member self.get_api_response_ext(http.NO_CONTENT, method='DELETE', url='/images/%s/members/test1' % UUID8) # Re-add the deleted member by replacing membership list fixture = [dict(member_id='test1', can_share=False)] body = jsonutils.dump_as_bytes(dict(memberships=fixture)) self.get_api_response_ext(http.NO_CONTENT, url='/images/%s/members' % UUID8, method='PUT', body=body, content_type='json') memb_list = db_api.image_member_find(self.context, image_id=UUID8) self.assertEqual(1, len(memb_list)) def test_add_member(self): """ Tests adding image members raises right exception """ self.api = test_utils.FakeAuthMiddleware(rserver.API(self.mapper), is_admin=False) self.get_api_response_ext(http.UNAUTHORIZED, method='PUT', url=('/images/%s/members/pattieblack' % UUID2)) def test_add_member_to_image_positive(self): """ Test check that member can be successfully added """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, protected=False, owner='test user') db_api.image_create(self.context, extra_fixture) fixture = dict(can_share=True) test_uri = '/images/%s/members/test_add_member_positive' body = jsonutils.dump_as_bytes(dict(member=fixture)) self.get_api_response_ext(http.NO_CONTENT, url=test_uri % UUID8, method='PUT', body=body, content_type='json') def test_add_member_to_non_exist_image(self): """ Test check that member can't be added for non exist image """ fixture = dict(can_share=True) test_uri = '/images/%s/members/test_add_member_positive' body = jsonutils.dump_as_bytes(dict(member=fixture)) self.get_api_response_ext(http.NOT_FOUND, url=test_uri % _gen_uuid(), method='PUT', body=body, content_type='json') def test_add_image_member_non_shared_image_forbidden(self): """ Test update image members raises 
right exception """ test_rserver_api = rserver.API(self.mapper) api = test_utils.FakeAuthMiddleware( test_rserver_api, is_admin=False) UUID9 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID9, size=19, protected=False) db_api.image_create(self.context, extra_fixture) fixture = dict(can_share=True) test_uri = '/images/%s/members/test_add_member_to_non_share_image' req = webob.Request.blank(test_uri % UUID9) req.headers['X-Auth-Token'] = 'test1:test1:' req.method = 'PUT' req.content_type = 'application/json' req.body = jsonutils.dump_as_bytes(dict(member=fixture)) res = req.get_response(api) self.assertEqual(http.FORBIDDEN, res.status_int) def test_add_member_to_image_bad_request(self): """ Test check right status code is returned """ UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, protected=False, owner='test user') db_api.image_create(self.context, extra_fixture) fixture = [dict(can_share=True)] test_uri = '/images/%s/members/test_add_member_bad_request' body = jsonutils.dump_as_bytes(dict(member=fixture)) self.get_api_response_ext(http.BAD_REQUEST, url=test_uri % UUID8, method='PUT', body=body, content_type='json') def test_delete_member(self): """ Tests deleting image members raises right exception """ self.api = test_utils.FakeAuthMiddleware(rserver.API(self.mapper), is_admin=False) self.get_api_response_ext(http.UNAUTHORIZED, method='DELETE', url=('/images/%s/members/pattieblack' % UUID2)) def test_delete_member_invalid(self): """ Tests deleting a invalid/non existing member raises right exception """ self.api = test_utils.FakeAuthMiddleware(rserver.API(self.mapper), is_admin=True) res = self.get_api_response_ext( http.NOT_FOUND, method='DELETE', url=('/images/%s/members/pattieblack' % UUID2)) self.assertIn(b'Membership could not be found', res.body) def test_delete_member_from_non_exist_image(self): """ Tests deleting image members raises right exception """ test_rserver_api = rserver.API(self.mapper) self.api = 
test_utils.FakeAuthMiddleware( test_rserver_api, is_admin=True) test_uri = '/images/%s/members/pattieblack' self.get_api_response_ext(http.NOT_FOUND, method='DELETE', url=test_uri % _gen_uuid()) def test_delete_image_member_non_shared_image_forbidden(self): """ Test delete image members raises right exception """ test_rserver_api = rserver.API(self.mapper) api = test_utils.FakeAuthMiddleware( test_rserver_api, is_admin=False) UUID9 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID9, size=19, protected=False) db_api.image_create(self.context, extra_fixture) test_uri = '/images/%s/members/test_add_member_to_non_share_image' req = webob.Request.blank(test_uri % UUID9) req.headers['X-Auth-Token'] = 'test1:test1:' req.method = 'DELETE' req.content_type = 'application/json' res = req.get_response(api) self.assertEqual(http.FORBIDDEN, res.status_int) def test_add_member_delete_create(self): """ Test check that the same member can be successfully added after delete it, and the same record will be reused for the same membership. 
""" # add a member UUID8 = _gen_uuid() extra_fixture = self.get_fixture(id=UUID8, size=19, protected=False, owner='test user') db_api.image_create(self.context, extra_fixture) fixture = dict(can_share=True) test_uri = '/images/%s/members/test_add_member_delete_create' body = jsonutils.dump_as_bytes(dict(member=fixture)) self.get_api_response_ext(http.NO_CONTENT, url=test_uri % UUID8, method='PUT', body=body, content_type='json') memb_list = db_api.image_member_find(self.context, image_id=UUID8) self.assertEqual(1, len(memb_list)) memb_list2 = db_api.image_member_find(self.context, image_id=UUID8, include_deleted=True) self.assertEqual(1, len(memb_list2)) # delete the member self.get_api_response_ext(http.NO_CONTENT, method='DELETE', url=test_uri % UUID8) memb_list = db_api.image_member_find(self.context, image_id=UUID8) self.assertEqual(0, len(memb_list)) memb_list2 = db_api.image_member_find(self.context, image_id=UUID8, include_deleted=True) self.assertEqual(1, len(memb_list2)) # create it again self.get_api_response_ext(http.NO_CONTENT, url=test_uri % UUID8, method='PUT', body=body, content_type='json') memb_list = db_api.image_member_find(self.context, image_id=UUID8) self.assertEqual(1, len(memb_list)) memb_list2 = db_api.image_member_find(self.context, image_id=UUID8, include_deleted=True) self.assertEqual(1, len(memb_list2)) def test_get_on_image_member(self): """ Test GET on image members raises 404 and produces correct Allow headers """ self.api = test_utils.FakeAuthMiddleware(rserver.API(self.mapper), is_admin=False) uri = '/images/%s/members/123' % UUID1 req = webob.Request.blank(uri) req.method = 'GET' res = req.get_response(self.api) self.assertEqual(http.METHOD_NOT_ALLOWED, res.status_int) self.assertIn(('Allow', 'PUT, DELETE'), res.headerlist) def test_get_images_bad_urls(self): """Check that routes collections are not on (LP bug 1185828)""" self.get_api_response_ext(http.NOT_FOUND, url='/images/detail.xxx') self.get_api_response_ext(http.NOT_FOUND, 
url='/images.xxx') self.get_api_response_ext(http.NOT_FOUND, url='/images/new') self.get_api_response_ext(http.OK, url='/images/%s/members' % UUID1) self.get_api_response_ext(http.NOT_FOUND, url='/images/%s/members.xxx' % UUID1) class TestRegistryAPILocations(base.IsolatedUnitTest, test_utils.RegistryAPIMixIn): def setUp(self): """Establish a clean test environment""" super(TestRegistryAPILocations, self).setUp() self.mapper = routes.Mapper() self.api = test_utils.FakeAuthMiddleware(rserver.API(self.mapper), is_admin=True) def _get_extra_fixture(id, name, **kwargs): return self.get_extra_fixture( id, name, locations=[{'url': "file:///%s/%s" % (self.test_dir, id), 'metadata': {}, 'status': 'active'}], **kwargs) self.FIXTURES = [ _get_extra_fixture(UUID1, 'fake image #1', is_public=False, disk_format='ami', container_format='ami', min_disk=0, min_ram=0, owner=123, size=13, properties={'type': 'kernel'}), _get_extra_fixture(UUID2, 'fake image #2', min_disk=5, min_ram=256, size=19, properties={})] self.context = context.RequestContext(is_admin=True) db_api.get_engine() self.destroy_fixtures() self.create_fixtures() def tearDown(self): """Clear the test environment""" super(TestRegistryAPILocations, self).tearDown() self.destroy_fixtures() def test_show_from_locations(self): req = webob.Request.blank('/images/%s' % UUID1) res = req.get_response(self.api) self.assertEqual(http.OK, res.status_int) res_dict = jsonutils.loads(res.body) image = res_dict['image'] self.assertIn('id', image['location_data'][0]) image['location_data'][0].pop('id') self.assertEqual(self.FIXTURES[0]['locations'][0], image['location_data'][0]) self.assertEqual(self.FIXTURES[0]['locations'][0]['url'], image['location_data'][0]['url']) self.assertEqual(self.FIXTURES[0]['locations'][0]['metadata'], image['location_data'][0]['metadata']) def test_show_from_location_data(self): req = webob.Request.blank('/images/%s' % UUID2) res = req.get_response(self.api) self.assertEqual(http.OK, res.status_int) 
res_dict = jsonutils.loads(res.body) image = res_dict['image'] self.assertIn('id', image['location_data'][0]) image['location_data'][0].pop('id') self.assertEqual(self.FIXTURES[1]['locations'][0], image['location_data'][0]) self.assertEqual(self.FIXTURES[1]['locations'][0]['url'], image['location_data'][0]['url']) self.assertEqual(self.FIXTURES[1]['locations'][0]['metadata'], image['location_data'][0]['metadata']) def test_create_from_location_data_with_encryption(self): encryption_key = '1234567890123456' location_url1 = "file:///%s/%s" % (self.test_dir, _gen_uuid()) location_url2 = "file:///%s/%s" % (self.test_dir, _gen_uuid()) encrypted_location_url1 = crypt.urlsafe_encrypt(encryption_key, location_url1, 64) encrypted_location_url2 = crypt.urlsafe_encrypt(encryption_key, location_url2, 64) fixture = {'name': 'fake image #3', 'status': 'active', 'disk_format': 'vhd', 'container_format': 'ovf', 'is_public': True, 'checksum': None, 'min_disk': 5, 'min_ram': 256, 'size': 19, 'location': encrypted_location_url1, 'location_data': [{'url': encrypted_location_url1, 'metadata': {'key': 'value'}, 'status': 'active'}, {'url': encrypted_location_url2, 'metadata': {'key': 'value'}, 'status': 'active'}]} self.config(metadata_encryption_key=encryption_key) req = webob.Request.blank('/images') req.method = 'POST' req.content_type = 'application/json' req.body = jsonutils.dump_as_bytes(dict(image=fixture)) res = req.get_response(self.api) self.assertEqual(http.OK, res.status_int) res_dict = jsonutils.loads(res.body) image = res_dict['image'] # NOTE(zhiyan) _normalize_image_location_for_db() function will # not re-encrypted the url within location. 
self.assertEqual(fixture['location'], image['location']) self.assertEqual(2, len(image['location_data'])) self.assertEqual(fixture['location_data'][0]['url'], image['location_data'][0]['url']) self.assertEqual(fixture['location_data'][0]['metadata'], image['location_data'][0]['metadata']) self.assertEqual(fixture['location_data'][1]['url'], image['location_data'][1]['url']) self.assertEqual(fixture['location_data'][1]['metadata'], image['location_data'][1]['metadata']) image_entry = db_api.image_get(self.context, image['id']) self.assertEqual(encrypted_location_url1, image_entry['locations'][0]['url']) self.assertEqual(encrypted_location_url2, image_entry['locations'][1]['url']) decrypted_location_url1 = crypt.urlsafe_decrypt( encryption_key, image_entry['locations'][0]['url']) decrypted_location_url2 = crypt.urlsafe_decrypt( encryption_key, image_entry['locations'][1]['url']) self.assertEqual(location_url1, decrypted_location_url1) self.assertEqual(location_url2, decrypted_location_url2) class TestSharability(test_utils.BaseTestCase): def setUp(self): super(TestSharability, self).setUp() self.setup_db() self.controller = glance.registry.api.v1.members.Controller() def setup_db(self): db_api.get_engine() db_models.unregister_models(db_api.get_engine()) db_models.register_models(db_api.get_engine()) def test_is_image_sharable_as_admin(self): TENANT1 = str(uuid.uuid4()) TENANT2 = str(uuid.uuid4()) ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1, auth_token='user:%s:user' % TENANT1, owner_is_tenant=True) ctxt2 = context.RequestContext(is_admin=True, user=TENANT2, auth_token='user:%s:admin' % TENANT2, owner_is_tenant=False) UUIDX = str(uuid.uuid4()) # We need private image and context.owner should not match image # owner image = db_api.image_create(ctxt1, {'id': UUIDX, 'status': 'queued', 'is_public': False, 'owner': TENANT1}) result = self.controller.is_image_sharable(ctxt2, image) self.assertTrue(result) def test_is_image_sharable_owner_can_share(self): 
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1,
                                       owner_is_tenant=True)
        UUIDX = str(uuid.uuid4())
        # We need private image and context.owner should not match image
        # owner
        image = db_api.image_create(ctxt1, {'id': UUIDX,
                                            'status': 'queued',
                                            'is_public': False,
                                            'owner': TENANT1})
        result = self.controller.is_image_sharable(ctxt1, image)
        self.assertTrue(result)

    def test_is_image_sharable_non_owner_cannot_share(self):
        TENANT1 = str(uuid.uuid4())
        TENANT2 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1,
                                       owner_is_tenant=True)
        ctxt2 = context.RequestContext(is_admin=False, user=TENANT2,
                                       auth_token='user:%s:user' % TENANT2,
                                       owner_is_tenant=False)
        UUIDX = str(uuid.uuid4())
        # We need private image and context.owner should not match image
        # owner
        image = db_api.image_create(ctxt1, {'id': UUIDX,
                                            'status': 'queued',
                                            'is_public': False,
                                            'owner': TENANT1})
        result = self.controller.is_image_sharable(ctxt2, image)
        self.assertFalse(result)

    def test_is_image_sharable_non_owner_can_share_as_image_member(self):
        TENANT1 = str(uuid.uuid4())
        TENANT2 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1,
                                       owner_is_tenant=True)
        ctxt2 = context.RequestContext(is_admin=False, user=TENANT2,
                                       auth_token='user:%s:user' % TENANT2,
                                       owner_is_tenant=False)
        UUIDX = str(uuid.uuid4())
        # We need private image and context.owner should not match image
        # owner
        image = db_api.image_create(ctxt1, {'id': UUIDX,
                                            'status': 'queued',
                                            'is_public': False,
                                            'owner': TENANT1})
        membership = {'can_share': True,
                      'member': TENANT2,
                      'image_id': UUIDX}
        db_api.image_member_create(ctxt1, membership)
        result = self.controller.is_image_sharable(ctxt2, image)
        self.assertTrue(result)

    def test_is_image_sharable_non_owner_as_image_member_without_sharing(self):
        TENANT1 = str(uuid.uuid4())
        TENANT2 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1,
                                       owner_is_tenant=True)
        ctxt2 = context.RequestContext(is_admin=False, user=TENANT2,
                                       auth_token='user:%s:user' % TENANT2,
                                       owner_is_tenant=False)
        UUIDX = str(uuid.uuid4())
        # We need private image and context.owner should not match image
        # owner
        image = db_api.image_create(ctxt1, {'id': UUIDX,
                                            'status': 'queued',
                                            'is_public': False,
                                            'owner': TENANT1})
        membership = {'can_share': False,
                      'member': TENANT2,
                      'image_id': UUIDX}
        db_api.image_member_create(ctxt1, membership)
        result = self.controller.is_image_sharable(ctxt2, image)
        self.assertFalse(result)

    def test_is_image_sharable_owner_is_none(self):
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1,
                                       owner_is_tenant=True)
        ctxt2 = context.RequestContext(is_admin=False, tenant=None,
                                       auth_token='user:%s:user' % TENANT1,
                                       owner_is_tenant=True)
        UUIDX = str(uuid.uuid4())
        # We need private image and context.owner should not match image
        # owner
        image = db_api.image_create(ctxt1, {'id': UUIDX,
                                            'status': 'queued',
                                            'is_public': False,
                                            'owner': TENANT1})
        result = self.controller.is_image_sharable(ctxt2, image)
        self.assertFalse(result)

glance-16.0.0/glance/tests/unit/v1/test_upload_utils.py

# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from contextlib import contextmanager

import glance_store
import mock
from mock import patch
import webob.exc

from glance.api.v1 import upload_utils
from glance.common import exception
from glance.common import store_utils
from glance.common import utils
import glance.registry.client.v1.api as registry
from glance.tests.unit import base
import glance.tests.unit.utils as unit_test_utils


class TestUploadUtils(base.StoreClearingUnitTest):
    def setUp(self):
        super(TestUploadUtils, self).setUp()
        self.config(debug=True)

    def test_initiate_delete(self):
        req = unit_test_utils.get_fake_request()
        location = {"url": "file://foo/bar",
                    "metadata": {},
                    "status": "active"}
        id = unit_test_utils.UUID1
        with patch.object(store_utils,
                          "safe_delete_from_backend") as mock_store_utils:
            upload_utils.initiate_deletion(req, location, id)
            mock_store_utils.assert_called_once_with(req.context,
                                                     id,
                                                     location)

    def test_initiate_delete_with_delayed_delete(self):
        self.config(delayed_delete=True)
        req = unit_test_utils.get_fake_request()
        location = {"url": "file://foo/bar",
                    "metadata": {},
                    "status": "active"}
        id = unit_test_utils.UUID1
        with patch.object(store_utils,
                          "schedule_delayed_delete_from_backend",
                          return_value=True) as mock_store_utils:
            upload_utils.initiate_deletion(req, location, id)
            mock_store_utils.assert_called_once_with(req.context,
                                                     id,
                                                     location)

    def test_safe_kill(self):
        req = unit_test_utils.get_fake_request()
        id = unit_test_utils.UUID1
        with patch.object(registry,
                          "update_image_metadata") as mock_registry:
            upload_utils.safe_kill(req, id, 'saving')
            mock_registry.assert_called_once_with(req.context, id,
                                                  {'status': 'killed'},
                                                  from_state='saving')

    def test_safe_kill_with_error(self):
        req = unit_test_utils.get_fake_request()
        id = unit_test_utils.UUID1
        with patch.object(registry, "update_image_metadata",
                          side_effect=Exception()) as mock_registry:
            upload_utils.safe_kill(req, id, 'saving')
            mock_registry.assert_called_once_with(req.context, id,
                                                  {'status': 'killed'},
                                                  from_state='saving')
@contextmanager def _get_store_and_notifier(self, image_size=10, ext_update_data=None, ret_checksum="checksum", exc_class=None): location = "file://foo/bar" checksum = "checksum" size = 10 update_data = {'checksum': checksum} if ext_update_data is not None: update_data.update(ext_update_data) image_meta = {'id': unit_test_utils.UUID1, 'size': image_size} image_data = "blah" store = mock.MagicMock() notifier = mock.MagicMock() if exc_class is not None: store.add.side_effect = exc_class else: store.add.return_value = (location, size, ret_checksum, {}) yield (location, checksum, image_meta, image_data, store, notifier, update_data) def test_upload_data_to_store(self): # 'user_storage_quota' is not set def store_add(image_id, data, size, **kwargs): # Check if 'data' is instance of 'CooperativeReader' when # 'user_storage_quota' is disabled. self.assertIsInstance(data, utils.CooperativeReader) return location, 10, "checksum", {} req = unit_test_utils.get_fake_request() with self._get_store_and_notifier( ext_update_data={'size': 10}, exc_class=store_add) as (location, checksum, image_meta, image_data, store, notifier, update_data): ret = image_meta.update(update_data) with patch.object(registry, 'update_image_metadata', return_value=ret) as mock_update_image_metadata: actual_meta, location_data = upload_utils.upload_data_to_store( req, image_meta, image_data, store, notifier) self.assertEqual(location, location_data['url']) self.assertEqual(image_meta.update(update_data), actual_meta) mock_update_image_metadata.assert_called_once_with( req.context, image_meta['id'], update_data, from_state='saving') def test_upload_data_to_store_user_storage_quota_enabled(self): # Enable user_storage_quota self.config(user_storage_quota='100B') def store_add(image_id, data, size, **kwargs): # Check if 'data' is instance of 'LimitingReader' when # 'user_storage_quota' is enabled. 
self.assertIsInstance(data, utils.LimitingReader) return location, 10, "checksum", {} req = unit_test_utils.get_fake_request() with self._get_store_and_notifier( ext_update_data={'size': 10}, exc_class=store_add) as (location, checksum, image_meta, image_data, store, notifier, update_data): ret = image_meta.update(update_data) # mock 'check_quota' mock_check_quota = patch('glance.api.common.check_quota', return_value=100) mock_check_quota.start() self.addCleanup(mock_check_quota.stop) with patch.object(registry, 'update_image_metadata', return_value=ret) as mock_update_image_metadata: actual_meta, location_data = upload_utils.upload_data_to_store( req, image_meta, image_data, store, notifier) self.assertEqual(location, location_data['url']) self.assertEqual(image_meta.update(update_data), actual_meta) mock_update_image_metadata.assert_called_once_with( req.context, image_meta['id'], update_data, from_state='saving') # 'check_quota' is called two times check_quota_call_count = ( mock_check_quota.target.check_quota.call_count) self.assertEqual(2, check_quota_call_count) def test_upload_data_to_store_mismatch_size(self): req = unit_test_utils.get_fake_request() with self._get_store_and_notifier( image_size=11) as (location, checksum, image_meta, image_data, store, notifier, update_data): ret = image_meta.update(update_data) with patch.object(registry, 'update_image_metadata', return_value=ret) as mock_update_image_metadata: self.assertRaises(webob.exc.HTTPBadRequest, upload_utils.upload_data_to_store, req, image_meta, image_data, store, notifier) mock_update_image_metadata.assert_called_with( req.context, image_meta['id'], {'status': 'killed'}, from_state='saving') def test_upload_data_to_store_mismatch_checksum(self): req = unit_test_utils.get_fake_request() with self._get_store_and_notifier( ret_checksum='fake') as (location, checksum, image_meta, image_data, store, notifier, update_data): ret = image_meta.update(update_data) with patch.object(registry, 
                              "update_image_metadata",
                              return_value=ret) as mock_update_image_metadata:
                self.assertRaises(webob.exc.HTTPBadRequest,
                                  upload_utils.upload_data_to_store,
                                  req, image_meta, image_data, store,
                                  notifier)
                mock_update_image_metadata.assert_called_with(
                    req.context, image_meta['id'], {'status': 'killed'},
                    from_state='saving')

    def _test_upload_data_to_store_exception(self, exc_class, expected_class):
        req = unit_test_utils.get_fake_request()
        with self._get_store_and_notifier(
                exc_class=exc_class) as (location, checksum, image_meta,
                                         image_data, store, notifier,
                                         update_data):
            with patch.object(upload_utils, 'safe_kill') as mock_safe_kill:
                self.assertRaises(expected_class,
                                  upload_utils.upload_data_to_store,
                                  req, image_meta, image_data, store,
                                  notifier)
                mock_safe_kill.assert_called_once_with(
                    req, image_meta['id'], 'saving')

    def _test_upload_data_to_store_exception_with_notify(self,
                                                         exc_class,
                                                         expected_class,
                                                         image_killed=True):
        req = unit_test_utils.get_fake_request()
        with self._get_store_and_notifier(
                exc_class=exc_class) as (location, checksum, image_meta,
                                         image_data, store, notifier,
                                         update_data):
            with patch.object(upload_utils, 'safe_kill') as mock_safe_kill:
                self.assertRaises(expected_class,
                                  upload_utils.upload_data_to_store,
                                  req, image_meta, image_data, store,
                                  notifier)
                if image_killed:
                    mock_safe_kill.assert_called_with(req, image_meta['id'],
                                                      'saving')

    def test_upload_data_to_store_raises_store_disabled(self):
        """Test StoreDisabled exception is raised while uploading data"""
        self._test_upload_data_to_store_exception_with_notify(
            glance_store.StoreAddDisabled,
            webob.exc.HTTPGone,
            image_killed=True)

    def test_upload_data_to_store_duplicate(self):
        """See note in glance.api.v1.upload_utils on why we don't want
        image to be deleted in this case.
        """
        self._test_upload_data_to_store_exception_with_notify(
            exception.Duplicate,
            webob.exc.HTTPConflict,
            image_killed=False)

    def test_upload_data_to_store_forbidden(self):
        self._test_upload_data_to_store_exception_with_notify(
            exception.Forbidden,
            webob.exc.HTTPForbidden)

    def test_upload_data_to_store_storage_full(self):
        self._test_upload_data_to_store_exception_with_notify(
            glance_store.StorageFull,
            webob.exc.HTTPRequestEntityTooLarge)

    def test_upload_data_to_store_storage_write_denied(self):
        self._test_upload_data_to_store_exception_with_notify(
            glance_store.StorageWriteDenied,
            webob.exc.HTTPServiceUnavailable)

    def test_upload_data_to_store_size_limit_exceeded(self):
        self._test_upload_data_to_store_exception_with_notify(
            exception.ImageSizeLimitExceeded,
            webob.exc.HTTPRequestEntityTooLarge)

    def test_upload_data_to_store_http_error(self):
        self._test_upload_data_to_store_exception_with_notify(
            webob.exc.HTTPError,
            webob.exc.HTTPError)

    def test_upload_data_to_store_client_disconnect(self):
        self._test_upload_data_to_store_exception(
            ValueError,
            webob.exc.HTTPBadRequest)

    def test_upload_data_to_store_client_disconnect_ioerror(self):
        self._test_upload_data_to_store_exception(
            IOError,
            webob.exc.HTTPBadRequest)

    def test_upload_data_to_store_exception(self):
        self._test_upload_data_to_store_exception_with_notify(
            Exception,
            webob.exc.HTTPInternalServerError)

    def test_upload_data_to_store_not_found_after_upload(self):
        req = unit_test_utils.get_fake_request()
        with self._get_store_and_notifier(
                ext_update_data={'size': 10}) as (location, checksum,
                                                  image_meta, image_data,
                                                  store, notifier,
                                                  update_data):
            exc = exception.ImageNotFound
            with patch.object(registry, 'update_image_metadata',
                              side_effect=exc) as mock_update_image_metadata:
                with patch.object(upload_utils,
                                  "initiate_deletion") as mock_initiate_del:
                    with patch.object(upload_utils,
                                      "safe_kill") as mock_safe_kill:
                        self.assertRaises(webob.exc.HTTPPreconditionFailed,
                                          upload_utils.upload_data_to_store,
                                          req, image_meta,
                                          image_data, store, notifier)
                mock_update_image_metadata.assert_called_once_with(
                    req.context, image_meta['id'], update_data,
                    from_state='saving')
                mock_initiate_del.assert_called_once_with(
                    req, {'url': location, 'status': 'active',
                          'metadata': {}}, image_meta['id'])
                mock_safe_kill.assert_called_once_with(
                    req, image_meta['id'], 'saving')

    @mock.patch.object(registry, 'update_image_metadata',
                       side_effect=exception.NotAuthenticated)
    @mock.patch.object(upload_utils, 'initiate_deletion')
    def test_activate_image_with_expired_token(
            self, mocked_delete, mocked_update):
        """Test token expiration during image upload.

        If a user's token expires before the image is uploaded, and the
        auth error is caught from the registry while changing the image
        status from 'saving' to 'active', then all image data must be
        deleted.
        """
        context = mock.Mock()
        req = mock.Mock()
        req.context = context
        with self._get_store_and_notifier() as (location, checksum,
                                                image_meta, image_data,
                                                store, notifier,
                                                update_data):
            self.assertRaises(webob.exc.HTTPUnauthorized,
                              upload_utils.upload_data_to_store,
                              req, image_meta, image_data, store, notifier)
            self.assertEqual(2, mocked_update.call_count)
            mocked_delete.assert_called_once_with(
                req,
                {'url': 'file://foo/bar', 'status': 'active',
                 'metadata': {}},
                'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d')

glance-16.0.0/glance/tests/unit/v1/__init__.py (empty)
glance-16.0.0/glance/tests/unit/v1/test_api.py:

# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy
import datetime
import hashlib
import os
import signal
import uuid

import glance_store as store
import mock
from oslo_config import cfg
from oslo_serialization import jsonutils
import routes
import six
from six.moves import http_client
import webob

import glance.api
import glance.api.common
from glance.api.v1 import router
from glance.api.v1 import upload_utils
import glance.common.config
from glance.common import exception
from glance.common import timeutils
import glance.context
from glance.db.sqlalchemy import api as db_api
from glance.db.sqlalchemy import models as db_models
import glance.registry.client.v1.api as registry
from glance.tests.unit import base
import glance.tests.unit.utils as unit_test_utils
from glance.tests import utils as test_utils

CONF = cfg.CONF

_gen_uuid = lambda: str(uuid.uuid4())

UUID1 = _gen_uuid()
UUID2 = _gen_uuid()
UUID3 = _gen_uuid()


class TestGlanceAPI(base.IsolatedUnitTest):
    def setUp(self):
        """Establish a clean test environment"""
        super(TestGlanceAPI, self).setUp()
        self.mapper = routes.Mapper()
        self.api = test_utils.FakeAuthMiddleware(router.API(self.mapper))
        self.FIXTURES = [
            {'id': UUID1,
             'name': 'fake image #1',
             'status': 'active',
             'disk_format': 'ami',
             'container_format': 'ami',
             'is_public': False,
             'created_at': timeutils.utcnow(),
             'updated_at': timeutils.utcnow(),
             'deleted_at': None,
             'deleted': False,
             'checksum': None,
             'size': 13,
             'locations': [{'url': "file:///%s/%s" % (self.test_dir, UUID1),
                            'metadata': {}, 'status': 'active'}],
             'properties': {'type': 'kernel'}},
            {'id': UUID2,
             'name': 'fake image #2',
             'status':
                 'active',
             'disk_format': 'vhd',
             'container_format': 'ovf',
             'is_public': True,
             'created_at': timeutils.utcnow(),
             'updated_at': timeutils.utcnow(),
             'deleted_at': None,
             'deleted': False,
             'checksum': 'abc123',
             'size': 19,
             'locations': [{'url': "file:///%s/%s" % (self.test_dir, UUID2),
                            'metadata': {}, 'status': 'active'}],
             'properties': {}},
            {'id': UUID3,
             'name': 'fake image #3',
             'status': 'deactivated',
             'disk_format': 'ami',
             'container_format': 'ami',
             'is_public': False,
             'created_at': timeutils.utcnow(),
             'updated_at': timeutils.utcnow(),
             'deleted_at': None,
             'deleted': False,
             'checksum': '13',
             'size': 13,
             'locations': [{'url': "file:///%s/%s" % (self.test_dir, UUID1),
                            'metadata': {}, 'status': 'active'}],
             'properties': {}}]
        self.context = glance.context.RequestContext(is_admin=True)
        db_api.get_engine()
        self.destroy_fixtures()
        self.addCleanup(self.destroy_fixtures)
        self.create_fixtures()

        # Used to store/track image status changes for post-analysis
        self.image_status = []

        self.http_server_pid = None
        self.addCleanup(self._cleanup_server)
        ret = test_utils.start_http_server("foo_image_id", b"foo_image")
        self.http_server_pid, self.http_port = ret

    def _cleanup_server(self):
        if self.http_server_pid is not None:
            os.kill(self.http_server_pid, signal.SIGKILL)

    def create_fixtures(self):
        for fixture in self.FIXTURES:
            db_api.image_create(self.context, fixture)
            # We write a fake image file to the filesystem
            with open("%s/%s" % (self.test_dir, fixture['id']),
                      'wb') as image:
                image.write(b"chunk00000remainder")
                image.flush()

    def destroy_fixtures(self):
        # Easiest to just drop the models and re-create them...
        db_models.unregister_models(db_api.get_engine())
        db_models.register_models(db_api.get_engine())

    def _do_test_defaulted_format(self, format_key, format_value):
        fixture_headers = {'x-image-meta-name': 'defaulted',
                           'x-image-meta-location': 'http://localhost:0/image',
                           format_key: format_value}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        http = store.get_store_from_scheme('http')

        with mock.patch.object(http, 'get_size') as mocked_size:
            mocked_size.return_value = 0
            res = req.get_response(self.api)
            self.assertEqual(http_client.CREATED, res.status_int)
            res_body = jsonutils.loads(res.body)['image']
            self.assertEqual(format_value, res_body['disk_format'])
            self.assertEqual(format_value, res_body['container_format'])

    def _http_loc_url(self, path):
        return 'http://127.0.0.1:%d%s' % (self.http_port, path)

    def test_defaulted_amazon_format(self):
        for key in ('x-image-meta-disk-format',
                    'x-image-meta-container-format'):
            for value in ('aki', 'ari', 'ami'):
                self._do_test_defaulted_format(key, value)

    def test_bad_time_create_minus_int(self):
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-created_at': '-42',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_bad_time_create_string(self):
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-created_at': 'foo',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_bad_time_create_low_year(self):
        # 'strftime' only allows values after 1900 in glance v1
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-created_at': '1100',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_bad_time_create_string_in_date(self):
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-created_at': '2012-01-01hey12:32:12',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_bad_min_disk_size_create(self):
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-min-disk': '-42',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid value', res.body)

    def test_updating_imageid_after_creation(self):
        # Test incorrect/illegal id update
        req = webob.Request.blank("/images/%s" % UUID1)
        req.method = 'PUT'
        req.headers['x-image-meta-id'] = '000000-000-0000-0000-000'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

        # Test using id of another image
        req = webob.Request.blank("/images/%s" % UUID1)
        req.method = 'PUT'
        req.headers['x-image-meta-id'] = UUID2
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_bad_min_disk_size_update(self):
        fixture_headers = {'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])
        image_id = res_body['id']
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['x-image-meta-min-disk'] = '-42'
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid value', res.body)

    def test_invalid_min_disk_size_update(self):
        fixture_headers = {'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])
        image_id = res_body['id']
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['x-image-meta-min-disk'] = str(2 ** 31 + 1)
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_bad_min_ram_size_create(self):
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-min-ram': '-42',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid value', res.body)

    def test_bad_min_ram_size_update(self):
        fixture_headers = {'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])
        image_id = res_body['id']
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['x-image-meta-min-ram'] = '-42'
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid value', res.body)

    def test_invalid_min_ram_size_update(self):
        fixture_headers = {'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])
        image_id = res_body['id']
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['x-image-meta-min-ram'] = str(2 ** 31 + 1)
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_bad_disk_format(self):
        fixture_headers = {
            'x-image-meta-store': 'bad',
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://localhost:0/image.tar.gz',
            'x-image-meta-disk-format': 'invalid',
            'x-image-meta-container-format': 'ami',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid disk format', res.body)

    def test_configured_disk_format_good(self):
        self.config(disk_formats=['foo'], group="image_format")
        fixture_headers = {
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://localhost:0/image.tar.gz',
            'x-image-meta-disk-format': 'foo',
            'x-image-meta-container-format': 'bare',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        http = store.get_store_from_scheme('http')

        with mock.patch.object(http, 'get_size') as mocked_size:
            mocked_size.return_value = 0
            res = req.get_response(self.api)
            self.assertEqual(http_client.CREATED, res.status_int)

    def test_configured_disk_format_bad(self):
        self.config(disk_formats=['foo'], group="image_format")
        fixture_headers = {
            'x-image-meta-store': 'bad',
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://localhost:0/image.tar.gz',
            'x-image-meta-disk-format': 'bar',
            'x-image-meta-container-format': 'bare',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid disk format', res.body)

    def test_configured_container_format_good(self):
        self.config(container_formats=['foo'], group="image_format")
        fixture_headers = {
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://localhost:0/image.tar.gz',
            'x-image-meta-disk-format': 'raw',
            'x-image-meta-container-format': 'foo',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        http = store.get_store_from_scheme('http')

        with mock.patch.object(http, 'get_size') as mocked_size:
            mocked_size.return_value = 0
            res = req.get_response(self.api)
            self.assertEqual(http_client.CREATED, res.status_int)

    def test_configured_container_format_bad(self):
        self.config(container_formats=['foo'], group="image_format")
        fixture_headers = {
            'x-image-meta-store': 'bad',
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://localhost:0/image.tar.gz',
            'x-image-meta-disk-format': 'raw',
            'x-image-meta-container-format': 'bar',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid container format', res.body)

    def test_container_and_disk_amazon_format_differs(self):
        fixture_headers = {
            'x-image-meta-store': 'bad',
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://localhost:0/image.tar.gz',
            'x-image-meta-disk-format': 'aki',
            'x-image-meta-container-format': 'ami'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        expected = (b"Invalid mix of disk and container formats. "
                    b"When setting a disk or container format to one of "
                    b"'aki', 'ari', or 'ami', "
                    b"the container and disk formats must match.")
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(expected, res.body)

    def test_create_with_location_no_container_format(self):
        fixture_headers = {
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://localhost:0/image.tar.gz',
            'x-image-meta-disk-format': 'vhd',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        http = store.get_store_from_scheme('http')

        with mock.patch.object(http, 'get_size') as mocked_size:
            mocked_size.return_value = 0
            res = req.get_response(self.api)
            self.assertEqual(http_client.BAD_REQUEST, res.status_int)
            self.assertIn(b'Container format is not specified', res.body)

    def test_create_with_location_no_disk_format(self):
        fixture_headers = {
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://localhost:0/image.tar.gz',
            'x-image-meta-container-format': 'bare',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        http = store.get_store_from_scheme('http')

        with mock.patch.object(http, 'get_size') as mocked_size:
            mocked_size.return_value = 0
            res = req.get_response(self.api)
            self.assertEqual(http_client.BAD_REQUEST, res.status_int)
            self.assertIn(b'Disk format is not specified', res.body)

    def test_create_with_empty_location(self):
        fixture_headers = {
            'x-image-meta-location': '',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_create_with_empty_copy_from(self):
        fixture_headers = {
            'x-glance-api-copy-from': '',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_create_delayed_image_with_no_disk_and_container_formats(self):
        fixture_headers = {
            'x-image-meta-name': 'delayed',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        http = store.get_store_from_scheme('http')

        with mock.patch.object(http, 'get_size') as mocked_size:
            mocked_size.return_value = 0
            res = req.get_response(self.api)
            self.assertEqual(http_client.CREATED, res.status_int)

    def test_create_with_bad_store_name(self):
        fixture_headers = {
            'x-image-meta-store': 'bad',
            'x-image-meta-name': 'bogus',
            'x-image-meta-disk-format': 'qcow2',
            'x-image-meta-container-format': 'bare',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Required store bad is invalid', res.body)

    @mock.patch.object(glance.api.v1.images.Controller, '_external_source')
    @mock.patch.object(store, 'get_store_from_location')
    def test_create_with_location_get_store_or_400_raises_exception(
            self, mock_get_store_from_location, mock_external_source):
        location = 'bad+scheme://localhost:0/image.qcow2'
        scheme = 'bad+scheme'
        fixture_headers = {
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': location,
            'x-image-meta-disk-format': 'qcow2',
            'x-image-meta-container-format': 'bare',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        mock_external_source.return_value = location
        mock_get_store_from_location.return_value = scheme

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertEqual(1, mock_external_source.call_count)
        self.assertEqual(1, mock_get_store_from_location.call_count)
        self.assertIn('Store for scheme %s not found' % scheme,
                      res.body.decode('utf-8'))

    def test_create_with_location_unknown_scheme(self):
        fixture_headers = {
            'x-image-meta-store': 'bad',
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'bad+scheme://localhost:0/image.qcow2',
            'x-image-meta-disk-format': 'qcow2',
            'x-image-meta-container-format': 'bare',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'External sources are not supported', res.body)

    def test_create_with_location_bad_store_uri(self):
        fixture_headers = {
            'x-image-meta-store': 'file',
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://',
            'x-image-meta-disk-format': 'qcow2',
            'x-image-meta-container-format': 'bare',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid location', res.body)

    def test_create_image_with_too_many_properties(self):
        self.config(image_property_quota=1)
        another_request = unit_test_utils.get_fake_request(
            path='/images', method='POST')
        headers = {'x-auth-token': 'user:tenant:joe_soap',
                   'x-image-meta-property-x_all_permitted': '1',
                   'x-image-meta-property-x_all_permitted_foo': '2'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE,
                         output.status_int)

    def test_bad_container_format(self):
        fixture_headers = {
            'x-image-meta-store': 'bad',
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': 'http://localhost:0/image.tar.gz',
            'x-image-meta-disk-format': 'vhd',
            'x-image-meta-container-format': 'invalid',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid container format', res.body)

    def test_bad_image_size(self):
        fixture_headers = {
            'x-image-meta-store': 'bad',
            'x-image-meta-name': 'bogus',
            'x-image-meta-location': self._http_loc_url('/image.tar.gz'),
            'x-image-meta-disk-format': 'vhd',
            'x-image-meta-container-format': 'bare',
        }

        def exec_bad_size_test(bad_size, expected_substr):
            fixture_headers['x-image-meta-size'] = bad_size
            req = webob.Request.blank("/images",
                                      method='POST',
                                      headers=fixture_headers)
            res = req.get_response(self.api)
            self.assertEqual(http_client.BAD_REQUEST, res.status_int)
            self.assertIn(expected_substr, res.body)

        expected = b"Cannot convert image size 'invalid' to an integer."
        exec_bad_size_test('invalid', expected)
        expected = b"Cannot be a negative value."
        exec_bad_size_test(-10, expected)

    def test_bad_image_name(self):
        fixture_headers = {
            'x-image-meta-store': 'bad',
            'x-image-meta-name': 'X' * 256,
            'x-image-meta-location': self._http_loc_url('/image.tar.gz'),
            'x-image-meta-disk-format': 'vhd',
            'x-image-meta-container-format': 'bare',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_image_no_location_no_image_as_body(self):
        """Tests creates a queued image for no body and no loc header"""
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3',
                           'x-image-created_at': '2015-11-20',
                           'x-image-updated_at': '2015-12-01 12:10:01',
                           'x-image-deleted_at': '2000'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])
        image_id = res_body['id']

        # Test that we are able to edit the Location field
        # per LP Bug #911599
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['x-image-meta-location'] = 'http://localhost:0/images/123'

        http = store.get_store_from_scheme('http')

        with mock.patch.object(http, 'get_size') as mocked_size:
            mocked_size.return_value = 0
            res = req.get_response(self.api)
            self.assertEqual(http_client.OK, res.status_int)
            res_body = jsonutils.loads(res.body)['image']
            # Once the location is set, the image should be activated
            # see LP Bug #939484
            self.assertEqual('active', res_body['status'])
            self.assertNotIn('location', res_body)  # location never shown

    def test_add_image_no_location_no_content_type(self):
        """Tests creates a queued image for no body and no loc header"""
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        req.body = b"chunk00000remainder"
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_image_size_header_too_big(self):
        """Tests raises BadRequest for supplied image size that is too big"""
        fixture_headers = {'x-image-meta-size': CONF.image_size_cap + 1,
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_image_size_chunked_data_too_big(self):
        self.config(image_size_cap=512)
        fixture_headers = {
            'x-image-meta-name': 'fake image #3',
            'x-image-meta-container_format': 'ami',
            'x-image-meta-disk_format': 'ami',
            'transfer-encoding': 'chunked',
            'content-type': 'application/octet-stream',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'

        req.body_file = six.StringIO('X' * (CONF.image_size_cap + 1))
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE,
                         res.status_int)

    def test_add_image_size_data_too_big(self):
        self.config(image_size_cap=512)
        fixture_headers = {
            'x-image-meta-name': 'fake image #3',
            'x-image-meta-container_format': 'ami',
            'x-image-meta-disk_format': 'ami',
            'content-type': 'application/octet-stream',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'

        req.body = b'X' * (CONF.image_size_cap + 1)
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_image_size_header_exceed_quota(self):
        quota = 500
        self.config(user_storage_quota=str(quota))
        fixture_headers = {'x-image-meta-size': quota + 1,
                           'x-image-meta-name': 'fake image #3',
                           'x-image-meta-container_format': 'bare',
                           'x-image-meta-disk_format': 'qcow2',
                           'content-type': 'application/octet-stream',
                           }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        req.body = b'X' * (quota + 1)
        res = req.get_response(self.api)
        self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE,
                         res.status_int)

    def test_add_image_size_data_exceed_quota(self):
        quota = 500
        self.config(user_storage_quota=str(quota))
        fixture_headers = {
            'x-image-meta-name': 'fake image #3',
            'x-image-meta-container_format': 'bare',
            'x-image-meta-disk_format': 'qcow2',
            'content-type': 'application/octet-stream',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'

        req.body = b'X' * (quota + 1)
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE,
                         res.status_int)

    def test_add_image_size_data_exceed_quota_readd(self):
        quota = 500
        self.config(user_storage_quota=str(quota))
        fixture_headers = {
            'x-image-meta-name': 'fake image #3',
            'x-image-meta-container_format': 'bare',
            'x-image-meta-disk_format': 'qcow2',
            'content-type': 'application/octet-stream',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'
        req.body = b'X' * (quota + 1)
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE,
                         res.status_int)

        used_size = sum([f['size'] for f in self.FIXTURES])

        req = webob.Request.blank("/images")
        req.method = 'POST'
        req.body = b'X' * (quota - used_size)
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

    def _add_check_no_url_info(self):
        fixture_headers = {'x-image-meta-disk-format': 'ami',
'x-image-meta-container-format': 'ami', 'x-image-meta-size': '0', 'x-image-meta-name': 'empty image'} req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) res_body = jsonutils.loads(res.body)['image'] self.assertNotIn('locations', res_body) self.assertNotIn('direct_url', res_body) image_id = res_body['id'] # HEAD empty image req = webob.Request.blank("/images/%s" % image_id) req.method = 'HEAD' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) self.assertNotIn('x-image-meta-locations', res.headers) self.assertNotIn('x-image-meta-direct_url', res.headers) def test_add_check_no_url_info_ml(self): self.config(show_multiple_locations=True) self._add_check_no_url_info() def test_add_check_no_url_info_direct_url(self): self.config(show_image_direct_url=True) self._add_check_no_url_info() def test_add_check_no_url_info_both_on(self): self.config(show_image_direct_url=True) self.config(show_multiple_locations=True) self._add_check_no_url_info() def test_add_check_no_url_info_both_off(self): self._add_check_no_url_info() def test_add_image_zero_size(self): """Tests creating an active image with explicitly zero size""" fixture_headers = {'x-image-meta-disk-format': 'ami', 'x-image-meta-container-format': 'ami', 'x-image-meta-size': '0', 'x-image-meta-name': 'empty image'} req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) self.assertEqual(http_client.CREATED, res.status_int) res_body = jsonutils.loads(res.body)['image'] self.assertEqual('active', res_body['status']) image_id = res_body['id'] # GET empty image req = webob.Request.blank("/images/%s" % image_id) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) self.assertEqual(0, len(res.body)) def _do_test_add_image_attribute_mismatch(self, attributes): 
        fixture_headers = {
            'x-image-meta-name': 'fake image #3',
        }
        fixture_headers.update(attributes)

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        req.body = b"XXXX"

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_image_checksum_mismatch(self):
        attributes = {
            'x-image-meta-checksum': 'asdf',
        }
        self._do_test_add_image_attribute_mismatch(attributes)

    def test_add_image_size_mismatch(self):
        attributes = {
            'x-image-meta-size': str(len("XXXX") + 1),
        }
        self._do_test_add_image_attribute_mismatch(attributes)

    def test_add_image_checksum_and_size_mismatch(self):
        attributes = {
            'x-image-meta-checksum': 'asdf',
            'x-image-meta-size': str(len("XXXX") + 1),
        }
        self._do_test_add_image_attribute_mismatch(attributes)

    def test_add_image_bad_store(self):
        """Tests raises BadRequest for invalid store header"""
        fixture_headers = {'x-image-meta-store': 'bad',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        req.body = b"chunk00000remainder"
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_image_basic_file_store(self):
        """Tests to add a basic image in the file store"""
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        req.body = b"chunk00000remainder"
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        # Test that the Location: header is set to the URI to
        # edit the newly-created image, as required by APP.
        # See LP Bug #719825
        self.assertIn('location', res.headers,
                      "'location' not in response headers.\n"
                      "res.headerlist = %r" % res.headerlist)

        res_body = jsonutils.loads(res.body)['image']
        self.assertIn('/images/%s' % res_body['id'], res.headers['location'])
        self.assertEqual('active', res_body['status'])
        image_id = res_body['id']

        # Test that we are NOT able to edit the Location field
        # per LP Bug #911599
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        url = self._http_loc_url('/images/123')
        req.headers['x-image-meta-location'] = url
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_image_unauthorized(self):
        rules = {"add_image": '!'}
        self.set_policy_rules(rules)
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        req.body = b"chunk00000remainder"
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_add_publicize_image_unauthorized(self):
        rules = {"add_image": '@', "modify_image": '@',
                 "publicize_image": '!'}
        self.set_policy_rules(rules)
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-is-public': 'true',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        req.body = b"chunk00000remainder"
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_add_publicize_image_authorized(self):
        rules = {"add_image": '@', "modify_image": '@',
                 "publicize_image": '@', "upload_image": '@'}
        self.set_policy_rules(rules)
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-is-public': 'true',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        req.body = b"chunk00000remainder"
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

    def test_add_copy_from_image_unauthorized(self):
        rules = {"add_image": '@', "copy_from": '!'}
        self.set_policy_rules(rules)
        url = self._http_loc_url('/i.ovf')
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-glance-api-copy-from': url,
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        req.body = b"chunk00000remainder"
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_add_copy_from_upload_image_unauthorized(self):
        rules = {"add_image": '@', "copy_from": '@', "upload_image": '!'}
        self.set_policy_rules(rules)
        url = self._http_loc_url('/i.ovf')
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-glance-api-copy-from': url,
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_add_copy_from_image_authorized_upload_image_authorized(self):
        rules = {"add_image": '@', "copy_from": '@', "upload_image": '@'}
        self.set_policy_rules(rules)
        url = self._http_loc_url('/i.ovf')
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-glance-api-copy-from': url,
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'

        http = store.get_store_from_scheme('http')

        with mock.patch.object(http, 'get_size') as mock_size:
            mock_size.return_value = 0
            res = req.get_response(self.api)
            self.assertEqual(http_client.CREATED, res.status_int)

    def test_upload_image_http_nonexistent_location_url(self):
        # Ensure HTTP 404 response returned when try to upload
        # image from non-existent http location URL.
        rules = {"add_image": '@', "copy_from": '@', "upload_image": '@'}
        self.set_policy_rules(rules)
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-glance-api-copy-from':
                               self._http_loc_url('/non_existing_image_path'),
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F'}
        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        req.headers['Content-Type'] = 'application/octet-stream'
        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)

    def test_add_copy_from_with_nonempty_body(self):
        """Tests creates an image from copy-from and nonempty body"""
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-glance-api-copy-from': 'http://0.0.0.0:1/c.ovf',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F'}

        req = webob.Request.blank("/images")
        req.headers['Content-Type'] = 'application/octet-stream'
        req.method = 'POST'
        req.body = b"chunk00000remainder"
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_location_with_nonempty_body(self):
        """Tests creates an image from location and nonempty body"""
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-location': 'http://0.0.0.0:1/c.tgz',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F'}

        req = webob.Request.blank("/images")
        req.headers['Content-Type'] = 'application/octet-stream'
        req.method = 'POST'
        req.body = b"chunk00000remainder"
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_location_with_conflict_image_size(self):
        """Tests creates an image from location and conflict image size"""
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-location': 'http://a/b/c.tar.gz',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F',
                           'x-image-meta-size': '1'}

        req = webob.Request.blank("/images")
        req.headers['Content-Type'] = 'application/octet-stream'
        req.method = 'POST'

        http = store.get_store_from_scheme('http')

        with mock.patch.object(http, 'get_size') as size:
            size.return_value = 2

            for k, v in six.iteritems(fixture_headers):
                req.headers[k] = v

            res = req.get_response(self.api)
            self.assertEqual(http_client.CONFLICT, res.status_int)

    def test_add_location_with_invalid_location_on_conflict_image_size(self):
        """Tests creates an image from location and conflict image size"""
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-location': 'http://0.0.0.0:1/c.tgz',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F',
                           'x-image-meta-size': '1'}

        req = webob.Request.blank("/images")
        req.headers['Content-Type'] = 'application/octet-stream'
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_location_with_invalid_location_on_restricted_sources(self):
        """Tests creates an image from location and restricted sources"""
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-location': 'file:///etc/passwd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F'}

        req = webob.Request.blank("/images")
        req.headers['Content-Type'] = 'application/octet-stream'
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-location': 'swift+config://xxx',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F'}

        req = webob.Request.blank("/images")
        req.headers['Content-Type'] = 'application/octet-stream'
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_create_image_with_nonexistent_location_url(self):
        # Ensure HTTP 404 response returned when try to create
        # image with non-existent http location URL.
        fixture_headers = {
            'x-image-meta-name': 'bogus',
            'x-image-meta-location':
                self._http_loc_url('/non_existing_image_path'),
            'x-image-meta-disk-format': 'qcow2',
            'x-image-meta-container-format': 'bare',
        }
        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)

    def test_add_copy_from_with_location(self):
        """Tests creates an image from copy-from and location"""
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-glance-api-copy-from': 'http://0.0.0.0:1/c.ovf',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F',
                           'x-image-meta-location': 'http://0.0.0.0:1/c.tgz'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_copy_from_with_restricted_sources(self):
        """Tests creates an image from copy-from with restricted sources"""
        header_template = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #F'}

        schemas = ["file:///etc/passwd",
                   "swift+config:///xxx",
                   "filesystem:///etc/passwd"]

        for schema in schemas:
            req = webob.Request.blank("/images")
            req.method = 'POST'
            for k, v in six.iteritems(header_template):
                req.headers[k] = v
            req.headers['x-glance-api-copy-from'] = schema
            res = req.get_response(self.api)
            self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_add_copy_from_upload_image_unauthorized_with_body(self):
        rules = {"upload_image": '!', "modify_image": '@',
                 "add_image": '@'}
        self.set_policy_rules(rules)
        self.config(image_size_cap=512)
        fixture_headers = {
            'x-image-meta-name': 'fake image #3',
            'x-image-meta-container_format': 'ami',
            'x-image-meta-disk_format': 'ami',
            'transfer-encoding': 'chunked',
            'content-type': 'application/octet-stream',
        }

        req = webob.Request.blank("/images")
        req.method = 'POST'

        req.body_file = six.StringIO('X' * (CONF.image_size_cap))
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_update_data_upload_bad_store_uri(self):
        fixture_headers = {'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])
        image_id = res_body['id']
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['Content-Type'] = 'application/octet-stream'
        req.headers['x-image-disk-format'] = 'vhd'
        req.headers['x-image-container-format'] = 'ovf'
        req.headers['x-image-meta-location'] = 'http://'
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)
        self.assertIn(b'Invalid location', res.body)

    def test_update_data_upload_image_unauthorized(self):
        rules = {"upload_image": '!', "modify_image": '@',
                 "add_image": '@'}
        self.set_policy_rules(rules)
        """Tests creates a queued image for no body and no loc header"""
        self.config(image_size_cap=512)
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])
        image_id = res_body['id']
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['Content-Type'] = 'application/octet-stream'
        req.headers['transfer-encoding'] = 'chunked'
        req.headers['x-image-disk-format'] = 'vhd'
        req.headers['x-image-container-format'] = 'ovf'
        req.body_file = six.StringIO('X' * (CONF.image_size_cap))
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_update_copy_from_upload_image_unauthorized(self):
        rules = {"upload_image": '!', "modify_image": '@',
                 "add_image": '@', "copy_from": '@'}
        self.set_policy_rules(rules)
        fixture_headers = {'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)
        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])
        image_id = res_body['id']

        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['Content-Type'] = 'application/octet-stream'
        req.headers['x-glance-api-copy-from'] = self._http_loc_url('/i.ovf')
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_update_copy_from_unauthorized(self):
        rules = {"upload_image": '@', "modify_image": '@',
                 "add_image": '@', "copy_from": '!'}
        self.set_policy_rules(rules)
        fixture_headers = {'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)
        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])
        image_id = res_body['id']

        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['Content-Type'] = 'application/octet-stream'
        req.headers['x-glance-api-copy-from'] = self._http_loc_url('/i.ovf')
        res = req.get_response(self.api)
        req.body = b"chunk00000remainder"
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_put_image_content_missing_disk_format(self):
        """Tests delayed activation of image with missing disk format"""
        self._do_test_put_image_content_missing_format('disk_format')

    def test_put_image_content_missing_container_type(self):
        """Tests delayed activation of image with missing container format"""
        self._do_test_put_image_content_missing_format('container_format')

    def test_download_deactivated_images(self):
        """Tests exception raised trying to download a deactivated image"""
        req = webob.Request.blank("/images/%s" % UUID3)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_update_deleted_image(self):
        """Tests that exception raised trying to update a deleted image"""
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        fixture = {'name': 'test_del_img'}
        req = webob.Request.blank('/images/%s' % UUID2)
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(dict(image=fixture))

        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)
        self.assertIn(b'Forbidden to update deleted image', res.body)

    def test_delete_deleted_image(self):
        """Tests that exception raised trying to delete a deleted image"""
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        # Verify the status is 'deleted'
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertEqual("deleted", res.headers['x-image-meta-status'])

        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)
        msg = "Image %s not found." % UUID2
        self.assertIn(msg, res.body.decode())

        # Verify the status is still 'deleted'
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertEqual("deleted", res.headers['x-image-meta-status'])

    def test_image_status_when_delete_fails(self):
        """
        Tests that the image status set to active if deletion of image fails.
        """
        fs = store.get_store_from_scheme('file')

        with mock.patch.object(fs, 'delete') as mock_fsstore_delete:
            mock_fsstore_delete.side_effect = exception.Forbidden()

            # trigger the v1 delete api
            req = webob.Request.blank("/images/%s" % UUID2)
            req.method = 'DELETE'
            res = req.get_response(self.api)
            self.assertEqual(http_client.FORBIDDEN, res.status_int)
            self.assertIn(b'Forbidden to delete image', res.body)

            # check image metadata is still there with active state
            req = webob.Request.blank("/images/%s" % UUID2)
            req.method = 'HEAD'
            res = req.get_response(self.api)
            self.assertEqual(http_client.OK, res.status_int)
            self.assertEqual("active", res.headers['x-image-meta-status'])

    def test_delete_pending_delete_image(self):
        """
        Tests that correct response returned when deleting
        a pending_delete image
        """
        # First deletion
        self.config(delayed_delete=True)
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        # Verify the status is 'pending_delete'
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertEqual("pending_delete", res.headers['x-image-meta-status'])

        # Second deletion
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)
        self.assertIn(b'Forbidden to delete a pending_delete image', res.body)

        # Verify the status is still 'pending_delete'
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertEqual("pending_delete", res.headers['x-image-meta-status'])

    def test_upload_to_image_status_saving(self):
        """Test image upload conflict.

        If an image is uploaded before an existing upload to the same image
        completes, the original upload should succeed and the conflicting
        one should fail and any data be deleted.
        """
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'some-foo-image'}

        # create an image but don't upload yet.
        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)
        res_body = jsonutils.loads(res.body)['image']

        image_id = res_body['id']
        self.assertIn('/images/%s' % image_id, res.headers['location'])

        # verify the status is 'queued'
        self.assertEqual('queued', res_body['status'])

        orig_get_image_metadata = registry.get_image_metadata
        orig_image_get = db_api._image_get
        orig_image_update = db_api._image_update
        orig_initiate_deletion = upload_utils.initiate_deletion

        # this will be used to track what is called and their order.
        call_sequence = []

        # use this to determine if we are within a db session i.e. atomic
        # operation, that is setting our active state.
        # We want first status check to be 'queued' so we get past the
        # first guard.
        test_status = {
            'activate_session_started': False,
            'queued_guard_passed': False
        }

        state_changes = []

        def mock_image_update(context, values, image_id, purge_props=False,
                              from_state=None):

            status = values.get('status')
            if status:
                state_changes.append(status)

            if status == 'active':
                # We only expect this state to be entered once.
                if test_status['activate_session_started']:
                    raise Exception("target session already started")

                test_status['activate_session_started'] = True
                call_sequence.append('update_active')
            else:
                call_sequence.append('update')

            return orig_image_update(context, values, image_id,
                                     purge_props=purge_props,
                                     from_state=from_state)

        def mock_image_get(*args, **kwargs):
            """Force status to 'saving' if not within activate db session.

            If we are in the activate db session we return 'active' which
            we then expect to cause exception.Conflict to be raised since
            this indicates that another upload has succeeded.
            """
            image = orig_image_get(*args, **kwargs)
            if test_status['activate_session_started']:
                call_sequence.append('image_get_active')
                setattr(image, 'status', 'active')
            else:
                setattr(image, 'status', 'saving')

            return image

        def mock_get_image_metadata(*args, **kwargs):
            """Force image status sequence."""
            call_sequence.append('get_image_meta')
            meta = orig_get_image_metadata(*args, **kwargs)
            if not test_status['queued_guard_passed']:
                meta['status'] = 'queued'
                test_status['queued_guard_passed'] = True

            return meta

        def mock_initiate_deletion(*args, **kwargs):
            call_sequence.append('init_del')
            orig_initiate_deletion(*args, **kwargs)

        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['Content-Type'] = 'application/octet-stream'
        req.body = b"chunk00000remainder"

        with mock.patch.object(
                upload_utils, 'initiate_deletion') as mock_init_del:
            mock_init_del.side_effect = mock_initiate_deletion
            with mock.patch.object(
                    registry, 'get_image_metadata') as mock_get_meta:
                mock_get_meta.side_effect = mock_get_image_metadata
                with mock.patch.object(db_api, '_image_get') as mock_db_get:
                    mock_db_get.side_effect = mock_image_get
                    with mock.patch.object(
                            db_api, '_image_update') as mock_db_update:
                        mock_db_update.side_effect = mock_image_update

                        # Expect a 409 Conflict.
                        res = req.get_response(self.api)
                        self.assertEqual(http_client.CONFLICT, res.status_int)

                        # Check expected call sequence
                        self.assertEqual(['get_image_meta', 'get_image_meta',
                                          'update', 'update_active',
                                          'image_get_active', 'init_del'],
                                         call_sequence)

                        self.assertTrue(mock_get_meta.called)
                        self.assertTrue(mock_db_get.called)
                        self.assertTrue(mock_db_update.called)

                        # Ensure cleanup occurred.
                        self.assertEqual(1, mock_init_del.call_count)

                        self.assertEqual(['saving', 'active'], state_changes)

    def test_register_and_upload(self):
        """
        Test that the process of registering an image with
        some metadata, then uploading an image file with some
        more metadata doesn't mark the original metadata deleted
        :see LP Bug#901534
        """
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3',
                           'x-image-meta-property-key1': 'value1'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)
        res_body = jsonutils.loads(res.body)['image']

        self.assertIn('id', res_body)

        image_id = res_body['id']
        self.assertIn('/images/%s' % image_id, res.headers['location'])

        # Verify the status is queued
        self.assertIn('status', res_body)
        self.assertEqual('queued', res_body['status'])

        # Check properties are not deleted
        self.assertIn('properties', res_body)
        self.assertIn('key1', res_body['properties'])
        self.assertEqual('value1', res_body['properties']['key1'])

        # Now upload the image file along with some more
        # metadata and verify original metadata properties
        # are not marked deleted
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'PUT'
        req.headers['Content-Type'] = 'application/octet-stream'
        req.headers['x-image-meta-property-key2'] = 'value2'
        req.body = b"chunk00000remainder"

        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        # Verify the status is 'active'
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertIn('x-image-meta-property-key1', res.headers,
                      "Did not find required property in headers. "
                      "Got headers: %r" % res.headers)
        self.assertEqual("active", res.headers['x-image-meta-status'])

    def test_upload_image_raises_store_disabled(self):
        """Test that uploading an image file returns HTTP 410 response"""
        # create image
        fs = store.get_store_from_scheme('file')
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3',
                           'x-image-meta-property-key1': 'value1'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v

        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)
        res_body = jsonutils.loads(res.body)['image']

        self.assertIn('id', res_body)

        image_id = res_body['id']
        self.assertIn('/images/%s' % image_id, res.headers['location'])

        # Verify the status is queued
        self.assertIn('status', res_body)
        self.assertEqual('queued', res_body['status'])

        # Now upload the image file
        with mock.patch.object(fs, 'add') as mock_fsstore_add:
            mock_fsstore_add.side_effect = store.StoreAddDisabled
            req = webob.Request.blank("/images/%s" % image_id)
            req.method = 'PUT'
            req.headers['Content-Type'] = 'application/octet-stream'
            req.body = b"chunk00000remainder"
            res = req.get_response(self.api)
            self.assertEqual(http_client.GONE, res.status_int)
            self._verify_image_status(image_id, 'killed')

    def _get_image_status(self, image_id):
        req = webob.Request.blank("/images/%s" % image_id)
        req.method = 'HEAD'
        return req.get_response(self.api)

    def _verify_image_status(self, image_id, status, check_deleted=False,
                             use_cached=False):
        if not use_cached:
            res = self._get_image_status(image_id)
        else:
            res = self.image_status.pop(0)
self.assertEqual(http_client.OK, res.status_int) self.assertEqual(status, res.headers['x-image-meta-status']) self.assertEqual(str(check_deleted), res.headers['x-image-meta-deleted']) def _upload_safe_kill_common(self, mocks): fixture_headers = {'x-image-meta-store': 'file', 'x-image-meta-disk-format': 'vhd', 'x-image-meta-container-format': 'ovf', 'x-image-meta-name': 'fake image #3', 'x-image-meta-property-key1': 'value1'} req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) self.assertEqual(http_client.CREATED, res.status_int) res_body = jsonutils.loads(res.body)['image'] self.assertIn('id', res_body) self.image_id = res_body['id'] self.assertIn('/images/%s' % self.image_id, res.headers['location']) # Verify the status is 'queued' self.assertEqual('queued', res_body['status']) for m in mocks: m['mock'].side_effect = m['side_effect'] # Now upload the image file along with some more metadata and # verify original metadata properties are not marked deleted req = webob.Request.blank("/images/%s" % self.image_id) req.method = 'PUT' req.headers['Content-Type'] = 'application/octet-stream' req.headers['x-image-meta-property-key2'] = 'value2' req.body = b"chunk00000remainder" res = req.get_response(self.api) # We expect 500 since an exception occurred during upload. self.assertEqual(http_client.INTERNAL_SERVER_ERROR, res.status_int) @mock.patch('glance_store.store_add_to_backend') def test_upload_safe_kill(self, mock_store_add_to_backend): def mock_store_add_to_backend_w_exception(*args, **kwargs): """Trigger mid-upload failure by raising an exception.""" self.image_status.append(self._get_image_status(self.image_id)) # Raise an exception to emulate failed upload. 
raise Exception("== UNIT TEST UPLOAD EXCEPTION ==") mocks = [{'mock': mock_store_add_to_backend, 'side_effect': mock_store_add_to_backend_w_exception}] self._upload_safe_kill_common(mocks) # Check we went from 'saving' -> 'killed' self._verify_image_status(self.image_id, 'saving', use_cached=True) self._verify_image_status(self.image_id, 'killed') self.assertEqual(1, mock_store_add_to_backend.call_count) @mock.patch('glance_store.store_add_to_backend') def test_upload_safe_kill_deleted(self, mock_store_add_to_backend): test_router_api = router.API(self.mapper) self.api = test_utils.FakeAuthMiddleware(test_router_api, is_admin=True) def mock_store_add_to_backend_w_exception(*args, **kwargs): """We now delete the image, assert status is 'deleted' then raise an exception to emulate a failed upload. This will be caught by upload_data_to_store() which will then try to set status to 'killed' which will be ignored since the image has been deleted. """ # expect 'saving' self.image_status.append(self._get_image_status(self.image_id)) req = webob.Request.blank("/images/%s" % self.image_id) req.method = 'DELETE' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) # expect 'deleted' self.image_status.append(self._get_image_status(self.image_id)) # Raise an exception to make the upload fail. 
raise Exception("== UNIT TEST UPLOAD EXCEPTION ==") mocks = [{'mock': mock_store_add_to_backend, 'side_effect': mock_store_add_to_backend_w_exception}] self._upload_safe_kill_common(mocks) # Check we went from 'saving' -> 'deleted' -> 'deleted' self._verify_image_status(self.image_id, 'saving', check_deleted=False, use_cached=True) self._verify_image_status(self.image_id, 'deleted', check_deleted=True, use_cached=True) self._verify_image_status(self.image_id, 'deleted', check_deleted=True) self.assertEqual(1, mock_store_add_to_backend.call_count) def _check_delete_during_image_upload(self, is_admin=False): fixture_headers = {'x-image-meta-store': 'file', 'x-image-meta-disk-format': 'vhd', 'x-image-meta-container-format': 'ovf', 'x-image-meta-name': 'fake image #3', 'x-image-meta-property-key1': 'value1'} req = unit_test_utils.get_fake_request(path="/images", is_admin=is_admin) for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) self.assertEqual(http_client.CREATED, res.status_int) res_body = jsonutils.loads(res.body)['image'] self.assertIn('id', res_body) image_id = res_body['id'] self.assertIn('/images/%s' % image_id, res.headers['location']) # Verify the status is 'queued' self.assertEqual('queued', res_body['status']) called = {'initiate_deletion': False} def mock_initiate_deletion(*args, **kwargs): called['initiate_deletion'] = True self.stubs.Set(glance.api.v1.upload_utils, 'initiate_deletion', mock_initiate_deletion) orig_update_image_metadata = registry.update_image_metadata data = b"somedata" def mock_update_image_metadata(*args, **kwargs): if args[2].get('size') == len(data): path = "/images/%s" % image_id req = unit_test_utils.get_fake_request(path=path, method='DELETE', is_admin=is_admin) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) self.stubs.Set(registry, 'update_image_metadata', orig_update_image_metadata) return orig_update_image_metadata(*args, **kwargs) 
self.stubs.Set(registry, 'update_image_metadata', mock_update_image_metadata) req = unit_test_utils.get_fake_request(path="/images/%s" % image_id, method='PUT') req.headers['Content-Type'] = 'application/octet-stream' req.body = data res = req.get_response(self.api) self.assertEqual(http_client.PRECONDITION_FAILED, res.status_int) self.assertFalse(res.location) self.assertTrue(called['initiate_deletion']) req = unit_test_utils.get_fake_request(path="/images/%s" % image_id, method='HEAD', is_admin=True) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) self.assertEqual('True', res.headers['x-image-meta-deleted']) self.assertEqual('deleted', res.headers['x-image-meta-status']) def test_delete_during_image_upload_by_normal_user(self): self._check_delete_during_image_upload(is_admin=False) def test_delete_during_image_upload_by_admin(self): self._check_delete_during_image_upload(is_admin=True) def test_disable_purge_props(self): """ Test the special x-glance-registry-purge-props header controls the purge property behaviour of the registry. 
:see LP Bug#901534 """ fixture_headers = {'x-image-meta-store': 'file', 'x-image-meta-disk-format': 'vhd', 'x-image-meta-container-format': 'ovf', 'x-image-meta-name': 'fake image #3', 'x-image-meta-property-key1': 'value1'} req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v req.headers['Content-Type'] = 'application/octet-stream' req.body = b"chunk00000remainder" res = req.get_response(self.api) self.assertEqual(http_client.CREATED, res.status_int) res_body = jsonutils.loads(res.body)['image'] self.assertIn('id', res_body) image_id = res_body['id'] self.assertIn('/images/%s' % image_id, res.headers['location']) # Verify the status is active self.assertIn('status', res_body) self.assertEqual('active', res_body['status']) # Check properties are not deleted self.assertIn('properties', res_body) self.assertIn('key1', res_body['properties']) self.assertEqual('value1', res_body['properties']['key1']) # Now update the image, setting new properties without # passing the x-glance-registry-purge-props header and # verify that original properties are marked deleted. req = webob.Request.blank("/images/%s" % image_id) req.method = 'PUT' req.headers['x-image-meta-property-key2'] = 'value2' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) # Verify the original property is no longer in headers req = webob.Request.blank("/images/%s" % image_id) req.method = 'HEAD' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) self.assertIn('x-image-meta-property-key2', res.headers, "Did not find required property in headers. " "Got headers: %r" % res.headers) self.assertNotIn('x-image-meta-property-key1', res.headers, "Found property in headers that was not expected. 
" "Got headers: %r" % res.headers) # Now update the image, setting new properties and # passing the x-glance-registry-purge-props header with # a value of "false" and verify that second property # still appears in headers. req = webob.Request.blank("/images/%s" % image_id) req.method = 'PUT' req.headers['x-image-meta-property-key3'] = 'value3' req.headers['x-glance-registry-purge-props'] = 'false' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) # Verify the second and third property in headers req = webob.Request.blank("/images/%s" % image_id) req.method = 'HEAD' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) self.assertIn('x-image-meta-property-key2', res.headers, "Did not find required property in headers. " "Got headers: %r" % res.headers) self.assertIn('x-image-meta-property-key3', res.headers, "Did not find required property in headers. " "Got headers: %r" % res.headers) def test_publicize_image_unauthorized(self): """Create a non-public image then fail to make public""" rules = {"add_image": '@', "publicize_image": '!'} self.set_policy_rules(rules) fixture_headers = {'x-image-meta-store': 'file', 'x-image-meta-disk-format': 'vhd', 'x-image-meta-is-public': 'false', 'x-image-meta-container-format': 'ovf', 'x-image-meta-name': 'fake image #3'} req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) self.assertEqual(http_client.CREATED, res.status_int) res_body = jsonutils.loads(res.body)['image'] req = webob.Request.blank("/images/%s" % res_body['id']) req.method = 'PUT' req.headers['x-image-meta-is-public'] = 'true' res = req.get_response(self.api) self.assertEqual(http_client.FORBIDDEN, res.status_int) def test_update_image_size_header_too_big(self): """Tests raises BadRequest for supplied image size that is too big""" fixture_headers = {'x-image-meta-size': CONF.image_size_cap + 1} req = 
webob.Request.blank("/images/%s" % UUID2) req.method = 'PUT' for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) self.assertEqual(http_client.BAD_REQUEST, res.status_int) def test_update_image_size_data_too_big(self): self.config(image_size_cap=512) fixture_headers = {'content-type': 'application/octet-stream'} req = webob.Request.blank("/images/%s" % UUID2) req.method = 'PUT' req.body = b'X' * (CONF.image_size_cap + 1) for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) self.assertEqual(http_client.BAD_REQUEST, res.status_int) def test_update_image_size_chunked_data_too_big(self): self.config(image_size_cap=512) # Create new image that has no data req = webob.Request.blank("/images") req.method = 'POST' req.headers['x-image-meta-name'] = 'something' req.headers['x-image-meta-container_format'] = 'ami' req.headers['x-image-meta-disk_format'] = 'ami' res = req.get_response(self.api) image_id = jsonutils.loads(res.body)['image']['id'] fixture_headers = { 'content-type': 'application/octet-stream', 'transfer-encoding': 'chunked', } req = webob.Request.blank("/images/%s" % image_id) req.method = 'PUT' req.body_file = six.StringIO('X' * (CONF.image_size_cap + 1)) for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE, res.status_int) def test_update_non_existing_image(self): self.config(image_size_cap=100) req = webob.Request.blank("images/%s" % _gen_uuid()) req.method = 'PUT' req.body = b'test' req.headers['x-image-meta-name'] = 'test' req.headers['x-image-meta-container_format'] = 'ami' req.headers['x-image-meta-disk_format'] = 'ami' req.headers['x-image-meta-is_public'] = 'False' res = req.get_response(self.api) self.assertEqual(http_client.NOT_FOUND, res.status_int) def test_update_public_image(self): fixture_headers = {'x-image-meta-store': 'file', 'x-image-meta-disk-format': 'vhd', 
'x-image-meta-is-public': 'true', 'x-image-meta-container-format': 'ovf', 'x-image-meta-name': 'fake image #3'} req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) self.assertEqual(http_client.CREATED, res.status_int) res_body = jsonutils.loads(res.body)['image'] req = webob.Request.blank("/images/%s" % res_body['id']) req.method = 'PUT' req.headers['x-image-meta-name'] = 'updated public image' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) @mock.patch.object(registry, 'update_image_metadata') def test_update_without_public_attribute(self, mock_update_image_metadata): req = webob.Request.blank("/images/%s" % UUID1) req.context = self.context image_meta = {'properties': {}} image_controller = glance.api.v1.images.Controller() with mock.patch.object( image_controller, 'update_store_acls' ) as mock_update_store_acls: mock_update_store_acls.return_value = None mock_update_image_metadata.return_value = {} image_controller.update( req, UUID1, image_meta, None) self.assertEqual(0, mock_update_store_acls.call_count) def test_add_image_wrong_content_type(self): fixture_headers = { 'x-image-meta-name': 'fake image #3', 'x-image-meta-container_format': 'ami', 'x-image-meta-disk_format': 'ami', 'transfer-encoding': 'chunked', 'content-type': 'application/octet-st', } req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v res = req.get_response(self.api) self.assertEqual(http_client.BAD_REQUEST, res.status_int) def test_get_index_sort_name_asc(self): """ Tests that the /images API returns list of public images sorted alphabetically by name in ascending order. 
""" UUID3 = _gen_uuid() extra_fixture = {'id': UUID3, 'status': 'active', 'is_public': True, 'disk_format': 'vhd', 'container_format': 'ovf', 'name': 'asdf', 'size': 19, 'checksum': None} db_api.image_create(self.context, extra_fixture) UUID4 = _gen_uuid() extra_fixture = {'id': UUID4, 'status': 'active', 'is_public': True, 'disk_format': 'vhd', 'container_format': 'ovf', 'name': 'xyz', 'size': 20, 'checksum': None} db_api.image_create(self.context, extra_fixture) req = webob.Request.blank('/images?sort_key=name&sort_dir=asc') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(3, len(images)) self.assertEqual(UUID3, images[0]['id']) self.assertEqual(UUID2, images[1]['id']) self.assertEqual(UUID4, images[2]['id']) def test_get_details_filter_changes_since(self): """ Tests that the /images/detail API returns list of images that changed since the time defined by changes-since """ dt1 = timeutils.utcnow() - datetime.timedelta(1) iso1 = timeutils.isotime(dt1) date_only1 = dt1.strftime('%Y-%m-%d') date_only2 = dt1.strftime('%Y%m%d') date_only3 = dt1.strftime('%Y-%m%d') dt2 = timeutils.utcnow() + datetime.timedelta(1) iso2 = timeutils.isotime(dt2) image_ts = timeutils.utcnow() + datetime.timedelta(2) hour_before = image_ts.strftime('%Y-%m-%dT%H:%M:%S%%2B01:00') hour_after = image_ts.strftime('%Y-%m-%dT%H:%M:%S-01:00') dt4 = timeutils.utcnow() + datetime.timedelta(3) iso4 = timeutils.isotime(dt4) UUID3 = _gen_uuid() extra_fixture = {'id': UUID3, 'status': 'active', 'is_public': True, 'disk_format': 'vhd', 'container_format': 'ovf', 'name': 'fake image #3', 'size': 18, 'checksum': None} db_api.image_create(self.context, extra_fixture) db_api.image_destroy(self.context, UUID3) UUID4 = _gen_uuid() extra_fixture = {'id': UUID4, 'status': 'active', 'is_public': True, 'disk_format': 'ami', 'container_format': 'ami', 'name': 'fake image #4', 'size': 20, 'checksum': 
None, 'created_at': image_ts, 'updated_at': image_ts} db_api.image_create(self.context, extra_fixture) # Check a standard list, 4 images in db (2 deleted) req = webob.Request.blank('/images/detail') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(2, len(images)) self.assertEqual(UUID4, images[0]['id']) self.assertEqual(UUID2, images[1]['id']) # Expect 3 images (1 deleted) req = webob.Request.blank('/images/detail?changes-since=%s' % iso1) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(3, len(images)) self.assertEqual(UUID4, images[0]['id']) self.assertEqual(UUID3, images[1]['id']) # deleted self.assertEqual(UUID2, images[2]['id']) # Expect 1 image (0 deleted) req = webob.Request.blank('/images/detail?changes-since=%s' % iso2) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) self.assertEqual(UUID4, images[0]['id']) # Expect 1 image (0 deleted) req = webob.Request.blank('/images/detail?changes-since=%s' % hour_before) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(1, len(images)) self.assertEqual(UUID4, images[0]['id']) # Expect 0 images (0 deleted) req = webob.Request.blank('/images/detail?changes-since=%s' % hour_after) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(0, len(images)) # Expect 0 images (0 deleted) req = webob.Request.blank('/images/detail?changes-since=%s' % iso4) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_dict = 
jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(0, len(images)) for param in [date_only1, date_only2, date_only3]: # Expect 3 images (1 deleted) req = webob.Request.blank('/images/detail?changes-since=%s' % param) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_dict = jsonutils.loads(res.body) images = res_dict['images'] self.assertEqual(3, len(images)) self.assertEqual(UUID4, images[0]['id']) self.assertEqual(UUID3, images[1]['id']) # deleted self.assertEqual(UUID2, images[2]['id']) # Bad request (empty changes-since param) req = webob.Request.blank('/images/detail?changes-since=') res = req.get_response(self.api) self.assertEqual(http_client.BAD_REQUEST, res.status_int) def test_get_images_bad_urls(self): """Check that routes collections are not on (LP bug 1185828)""" req = webob.Request.blank('/images/detail.xxx') res = req.get_response(self.api) self.assertEqual(http_client.NOT_FOUND, res.status_int) req = webob.Request.blank('/images.xxx') res = req.get_response(self.api) self.assertEqual(http_client.NOT_FOUND, res.status_int) req = webob.Request.blank('/images/new') res = req.get_response(self.api) self.assertEqual(http_client.NOT_FOUND, res.status_int) req = webob.Request.blank("/images/%s/members" % UUID1) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) req = webob.Request.blank("/images/%s/members.xxx" % UUID1) res = req.get_response(self.api) self.assertEqual(http_client.NOT_FOUND, res.status_int) def test_get_index_filter_on_user_defined_properties(self): """Check that image filtering works on user-defined properties""" image1_id = _gen_uuid() properties = {'distro': 'ubuntu', 'arch': 'i386'} extra_fixture = {'id': image1_id, 'status': 'active', 'is_public': True, 'disk_format': 'vhd', 'container_format': 'ovf', 'name': 'image-extra-1', 'size': 18, 'properties': properties, 'checksum': None} db_api.image_create(self.context, extra_fixture) image2_id = 
_gen_uuid() properties = {'distro': 'ubuntu', 'arch': 'x86_64', 'foo': 'bar'} extra_fixture = {'id': image2_id, 'status': 'active', 'is_public': True, 'disk_format': 'ami', 'container_format': 'ami', 'name': 'image-extra-2', 'size': 20, 'properties': properties, 'checksum': None} db_api.image_create(self.context, extra_fixture) # Test index with filter containing one user-defined property. # Filter is 'property-distro=ubuntu'. # Verify both image1 and image2 are returned req = webob.Request.blank('/images?property-distro=ubuntu') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(2, len(images)) self.assertEqual(image2_id, images[0]['id']) self.assertEqual(image1_id, images[1]['id']) # Test index with filter containing one user-defined property but # non-existent value. Filter is 'property-distro=fedora'. # Verify neither image is returned req = webob.Request.blank('/images?property-distro=fedora') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing one user-defined property but # unique value. Filter is 'property-arch=i386'. # Verify only image1 is returned. req = webob.Request.blank('/images?property-arch=i386') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image1_id, images[0]['id']) # Test index with filter containing one user-defined property but # unique value. Filter is 'property-arch=x86_64'. # Verify only image2 is returned. 
req = webob.Request.blank('/images?property-arch=x86_64') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image2_id, images[0]['id']) # Test index with filter containing unique user-defined property. # Filter is 'property-foo=bar'. # Verify only image2 is returned. req = webob.Request.blank('/images?property-foo=bar') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image2_id, images[0]['id']) # Test index with filter containing unique user-defined property but # value is non-existent. Filter is 'property-foo=baz'. # Verify neither image is returned. req = webob.Request.blank('/images?property-foo=baz') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing multiple user-defined properties # Filter is 'property-arch=x86_64&property-distro=ubuntu'. # Verify only image2 is returned. req = webob.Request.blank('/images?property-arch=x86_64&' 'property-distro=ubuntu') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image2_id, images[0]['id']) # Test index with filter containing multiple user-defined properties # Filter is 'property-arch=i386&property-distro=ubuntu'. # Verify only image1 is returned. req = webob.Request.blank('/images?property-arch=i386&' 'property-distro=ubuntu') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(1, len(images)) self.assertEqual(image1_id, images[0]['id']) # Test index with filter containing multiple user-defined properties. 
# Filter is 'property-arch=random&property-distro=ubuntu'. # Verify neither image is returned. req = webob.Request.blank('/images?property-arch=random&' 'property-distro=ubuntu') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing multiple user-defined properties. # Filter is 'property-arch=random&property-distro=random'. # Verify neither image is returned. req = webob.Request.blank('/images?property-arch=random&' 'property-distro=random') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing multiple user-defined properties. # Filter is 'property-boo=far&property-poo=far'. # Verify neither image is returned. req = webob.Request.blank('/images?property-boo=far&' 'property-poo=far') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) # Test index with filter containing multiple user-defined properties. # Filter is 'property-foo=bar&property-poo=far'. # Verify neither image is returned. 
req = webob.Request.blank('/images?property-foo=bar&' 'property-poo=far') res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) images = jsonutils.loads(res.body)['images'] self.assertEqual(0, len(images)) def test_get_images_detailed_unauthorized(self): rules = {"get_images": '!'} self.set_policy_rules(rules) req = webob.Request.blank('/images/detail') res = req.get_response(self.api) self.assertEqual(http_client.FORBIDDEN, res.status_int) def test_get_images_unauthorized(self): rules = {"get_images": '!'} self.set_policy_rules(rules) req = webob.Request.blank('/images') res = req.get_response(self.api) self.assertEqual(http_client.FORBIDDEN, res.status_int) def test_store_location_not_revealed(self): """ Test that the internal store location is NOT revealed through the API server """ # Check index and details... for url in ('/images', '/images/detail'): req = webob.Request.blank(url) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_dict = jsonutils.loads(res.body) images = res_dict['images'] num_locations = sum([1 for record in images if 'location' in record.keys()]) self.assertEqual(0, num_locations, images) # Check GET req = webob.Request.blank("/images/%s" % UUID2) res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) self.assertNotIn('X-Image-Meta-Location', res.headers) # Check HEAD req = webob.Request.blank("/images/%s" % UUID2) req.method = 'HEAD' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) self.assertNotIn('X-Image-Meta-Location', res.headers) # Check PUT req = webob.Request.blank("/images/%s" % UUID2) req.body = res.body req.method = 'PUT' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) res_body = jsonutils.loads(res.body) self.assertNotIn('location', res_body['image']) # Check POST req = webob.Request.blank("/images") headers = {'x-image-meta-location': 'http://localhost', 
'x-image-meta-disk-format': 'vhd', 'x-image-meta-container-format': 'ovf', 'x-image-meta-name': 'fake image #3'} for k, v in six.iteritems(headers): req.headers[k] = v req.method = 'POST' http = store.get_store_from_scheme('http') with mock.patch.object(http, 'get_size') as size: size.return_value = 0 res = req.get_response(self.api) self.assertEqual(http_client.CREATED, res.status_int) res_body = jsonutils.loads(res.body) self.assertNotIn('location', res_body['image']) def test_image_is_checksummed(self): """Test that the image contents are checksummed properly""" fixture_headers = {'x-image-meta-store': 'file', 'x-image-meta-disk-format': 'vhd', 'x-image-meta-container-format': 'ovf', 'x-image-meta-name': 'fake image #3'} image_contents = b"chunk00000remainder" image_checksum = hashlib.md5(image_contents).hexdigest() req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v req.headers['Content-Type'] = 'application/octet-stream' req.body = image_contents res = req.get_response(self.api) self.assertEqual(http_client.CREATED, res.status_int) res_body = jsonutils.loads(res.body)['image'] self.assertEqual(image_checksum, res_body['checksum'], "Mismatched checksum. 
Expected %s, got %s" % (image_checksum, res_body['checksum'])) def test_etag_equals_checksum_header(self): """Test that the ETag header matches the x-image-meta-checksum""" fixture_headers = {'x-image-meta-store': 'file', 'x-image-meta-disk-format': 'vhd', 'x-image-meta-container-format': 'ovf', 'x-image-meta-name': 'fake image #3'} image_contents = b"chunk00000remainder" image_checksum = hashlib.md5(image_contents).hexdigest() req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v req.headers['Content-Type'] = 'application/octet-stream' req.body = image_contents res = req.get_response(self.api) self.assertEqual(http_client.CREATED, res.status_int) image = jsonutils.loads(res.body)['image'] # HEAD the image and check the ETag equals the checksum header... expected_headers = {'x-image-meta-checksum': image_checksum, 'etag': image_checksum} req = webob.Request.blank("/images/%s" % image['id']) req.method = 'HEAD' res = req.get_response(self.api) self.assertEqual(http_client.OK, res.status_int) for key in expected_headers.keys(): self.assertIn(key, res.headers, "required header '%s' missing from " "returned headers" % key) for key, value in six.iteritems(expected_headers): self.assertEqual(value, res.headers[key]) def test_bad_checksum_prevents_image_creation(self): """Test that the image contents are checksummed properly""" image_contents = b"chunk00000remainder" bad_checksum = hashlib.md5(b"invalid").hexdigest() fixture_headers = {'x-image-meta-store': 'file', 'x-image-meta-disk-format': 'vhd', 'x-image-meta-container-format': 'ovf', 'x-image-meta-name': 'fake image #3', 'x-image-meta-checksum': bad_checksum, 'x-image-meta-is-public': 'true'} req = webob.Request.blank("/images") req.method = 'POST' for k, v in six.iteritems(fixture_headers): req.headers[k] = v req.headers['Content-Type'] = 'application/octet-stream' req.body = image_contents res = req.get_response(self.api) 
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

        # Test that only one image was returned (that already exists)
        req = webob.Request.blank("/images")
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        images = jsonutils.loads(res.body)['images']
        self.assertEqual(1, len(images))

    def test_image_meta(self):
        """Test for HEAD /images/"""
        expected_headers = {'x-image-meta-id': UUID2,
                            'x-image-meta-name': 'fake image #2'}
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertFalse(res.location)

        for key, value in six.iteritems(expected_headers):
            self.assertEqual(value, res.headers[key])

    def test_image_meta_unauthorized(self):
        rules = {"get_image": '!'}
        self.set_policy_rules(rules)
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_show_image_basic(self):
        req = webob.Request.blank("/images/%s" % UUID2)
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertFalse(res.location)
        self.assertEqual('application/octet-stream', res.content_type)
        self.assertEqual(b'chunk00000remainder', res.body)

    def test_show_non_exists_image(self):
        req = webob.Request.blank("/images/%s" % _gen_uuid())
        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)

    def test_show_image_unauthorized(self):
        rules = {"get_image": '!'}
        self.set_policy_rules(rules)
        req = webob.Request.blank("/images/%s" % UUID2)
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_show_image_unauthorized_download(self):
        rules = {"download_image": '!'}
        self.set_policy_rules(rules)
        req = webob.Request.blank("/images/%s" % UUID2)
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_show_image_restricted_download_for_core_property(self):
        rules = {
            "restricted": "not ('1024M':%(min_ram)s and role:_member_)",
            "download_image": "role:admin or rule:restricted"
        }
        self.set_policy_rules(rules)
        req = webob.Request.blank("/images/%s" % UUID2)
        req.headers['X-Auth-Token'] = 'user:tenant:_member_'
        req.headers['min_ram'] = '1024M'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_show_image_restricted_download_for_custom_property(self):
        rules = {
            "restricted": "not ('test_1234'==%(x_test_key)s and role:_member_)",
            "download_image": "role:admin or rule:restricted"
        }
        self.set_policy_rules(rules)
        req = webob.Request.blank("/images/%s" % UUID2)
        req.headers['X-Auth-Token'] = 'user:tenant:_member_'
        req.headers['x_test_key'] = 'test_1234'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_download_service_unavailable(self):
        """Test image download returns HTTPServiceUnavailable."""
        image_fixture = self.FIXTURES[1]
        image_fixture.update({'location': 'http://0.0.0.0:1/file.tar.gz'})

        request = webob.Request.blank("/images/%s" % UUID2)
        request.context = self.context
        image_controller = glance.api.v1.images.Controller()
        with mock.patch.object(image_controller,
                               'get_active_image_meta_or_error'
                               ) as mocked_get_image:
            mocked_get_image.return_value = image_fixture
            self.assertRaises(webob.exc.HTTPServiceUnavailable,
                              image_controller.show,
                              request, mocked_get_image)

    @mock.patch('glance_store._drivers.filesystem.Store.get')
    def test_show_image_store_get_not_support(self, m_get):
        m_get.side_effect = store.StoreGetNotSupported()
        req = webob.Request.blank("/images/%s" % UUID2)
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    @mock.patch('glance_store._drivers.filesystem.Store.get')
    def test_show_image_store_random_get_not_support(self, m_get):
        m_get.side_effect = store.StoreRandomGetNotSupported(chunk_size=0,
                                                             offset=0)
        req = webob.Request.blank("/images/%s" % UUID2)
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_delete_image(self):
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertFalse(res.location)
        self.assertEqual(b'', res.body)

        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int, res.body)

        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertEqual('True', res.headers['x-image-meta-deleted'])
        self.assertEqual('deleted', res.headers['x-image-meta-status'])

    def test_delete_non_exists_image(self):
        req = webob.Request.blank("/images/%s" % _gen_uuid())
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)

    def test_delete_not_allowed(self):
        # Verify we can get the image data
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'GET'
        req.headers['X-Auth-Token'] = 'user:tenant:'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertEqual(19, len(res.body))

        # Verify we cannot delete the image
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

        # Verify the image data is still there
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertEqual(19, len(res.body))

    def test_delete_queued_image(self):
        """Delete an image in a queued state

        Bug #747799 demonstrated that trying to DELETE an image that had
        had its save process killed manually results in failure because
        the location attribute is None.

        Bug #1048851 demonstrated that the status was not properly being
        updated to 'deleted' from 'queued'.
        """
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])

        # Now try to delete the image...
        req = webob.Request.blank("/images/%s" % res_body['id'])
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        req = webob.Request.blank('/images/%s' % res_body['id'])
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertEqual('True', res.headers['x-image-meta-deleted'])
        self.assertEqual('deleted', res.headers['x-image-meta-status'])

    def test_delete_queued_image_delayed_delete(self):
        """Delete an image in a queued state when delayed_delete is on

        Bug #1048851 demonstrated that the status was not properly being
        updated to 'deleted' from 'queued'.
        """
        self.config(delayed_delete=True)
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-name': 'fake image #3'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])

        # Now try to delete the image...
        req = webob.Request.blank("/images/%s" % res_body['id'])
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        req = webob.Request.blank('/images/%s' % res_body['id'])
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        self.assertEqual('True', res.headers['x-image-meta-deleted'])
        self.assertEqual('deleted', res.headers['x-image-meta-status'])

    def test_delete_protected_image(self):
        fixture_headers = {'x-image-meta-store': 'file',
                           'x-image-meta-name': 'fake image #3',
                           'x-image-meta-disk-format': 'vhd',
                           'x-image-meta-container-format': 'ovf',
                           'x-image-meta-protected': 'True'}

        req = webob.Request.blank("/images")
        req.method = 'POST'
        for k, v in six.iteritems(fixture_headers):
            req.headers[k] = v
        res = req.get_response(self.api)
        self.assertEqual(http_client.CREATED, res.status_int)

        res_body = jsonutils.loads(res.body)['image']
        self.assertEqual('queued', res_body['status'])

        # Now try to delete the image...
        req = webob.Request.blank("/images/%s" % res_body['id'])
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_delete_image_unauthorized(self):
        rules = {"delete_image": '!'}
        self.set_policy_rules(rules)
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, res.status_int)

    def test_head_details(self):
        req = webob.Request.blank('/images/detail')
        req.method = 'HEAD'
        res = req.get_response(self.api)
        self.assertEqual(http_client.METHOD_NOT_ALLOWED, res.status_int)
        self.assertEqual('GET', res.headers.get('Allow'))
        self.assertEqual(('GET',), res.allow)

        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

    def test_get_details_invalid_marker(self):
        """
        Tests that the /images/detail API returns a 400
        when an invalid marker is provided
        """
        req = webob.Request.blank('/images/detail?marker=%s' % _gen_uuid())
        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_get_image_members(self):
        """
        Tests members listing for existing images
        """
        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        memb_list = jsonutils.loads(res.body)
        num_members = len(memb_list['members'])
        self.assertEqual(0, num_members)

    def test_get_image_members_allowed_by_policy(self):
        rules = {"get_members": '@'}
        self.set_policy_rules(rules)

        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        memb_list = jsonutils.loads(res.body)
        num_members = len(memb_list['members'])
        self.assertEqual(0, num_members)

    def test_get_image_members_forbidden_by_policy(self):
        rules = {"get_members": '!'}
        self.set_policy_rules(rules)

        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPForbidden.code, res.status_int)

    def test_get_image_members_not_existing(self):
        """
        Tests proper exception is raised if attempt to get members of
        non-existing image
        """
        req = webob.Request.blank('/images/%s/members' % _gen_uuid())
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)

    def test_add_member_positive(self):
        """
        Tests adding image members
        """
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=True)
        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'PUT'
        res = req.get_response(self.api)
        self.assertEqual(http_client.NO_CONTENT, res.status_int)

    def test_get_member_images(self):
        """
        Tests image listing for members
        """
        req = webob.Request.blank('/shared-images/pattieblack')
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        memb_list = jsonutils.loads(res.body)
        num_members = len(memb_list['shared_images'])
        self.assertEqual(0, num_members)

    def test_replace_members(self):
        """
        Tests replacing image members raises right exception
        """
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=False)
        fixture = dict(member_id='pattieblack')

        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(dict(image_memberships=fixture))

        res = req.get_response(self.api)
        self.assertEqual(http_client.UNAUTHORIZED, res.status_int)

    def test_active_image_immutable_props_for_user(self):
        """
        Tests user cannot update immutable props of active image
        """
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=False)
        fixture_header_list = [{'x-image-meta-checksum': '1234'},
                               {'x-image-meta-size': '12345'}]
        for fixture_header in fixture_header_list:
            req = webob.Request.blank('/images/%s' % UUID2)
            req.method = 'PUT'
            for k, v in six.iteritems(fixture_header):
                req = webob.Request.blank('/images/%s' % UUID2)
                req.method = 'HEAD'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)
                orig_value = res.headers[k]

                req = webob.Request.blank('/images/%s' % UUID2)
                req.headers[k] = v
                req.method = 'PUT'
                res = req.get_response(self.api)
                self.assertEqual(http_client.FORBIDDEN, res.status_int)
                prop = k[len('x-image-meta-'):]
                body = res.body.decode('utf-8')
                self.assertNotEqual(-1, body.find(
                    "Forbidden to modify '%s' of active image" % prop))

                req = webob.Request.blank('/images/%s' % UUID2)
                req.method = 'HEAD'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)
                self.assertEqual(orig_value, res.headers[k])

    def test_deactivated_image_immutable_props_for_user(self):
        """
        Tests user cannot update immutable props of deactivated image
        """
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=False)
        fixture_header_list = [{'x-image-meta-checksum': '1234'},
                               {'x-image-meta-size': '12345'}]
        for fixture_header in fixture_header_list:
            req = webob.Request.blank('/images/%s' % UUID3)
            req.method = 'PUT'
            for k, v in six.iteritems(fixture_header):
                req = webob.Request.blank('/images/%s' % UUID3)
                req.method = 'HEAD'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)
                orig_value = res.headers[k]

                req = webob.Request.blank('/images/%s' % UUID3)
                req.headers[k] = v
                req.method = 'PUT'
                res = req.get_response(self.api)
                self.assertEqual(http_client.FORBIDDEN, res.status_int)
                prop = k[len('x-image-meta-'):]
                body = res.body.decode('utf-8')
                self.assertNotEqual(-1, body.find(
                    "Forbidden to modify '%s' of deactivated image" % prop))

                req = webob.Request.blank('/images/%s' % UUID3)
                req.method = 'HEAD'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)
                self.assertEqual(orig_value, res.headers[k])

    def test_props_of_active_image_mutable_for_admin(self):
        """
        Tests admin can update 'immutable' props of active image
        """
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=True)
        fixture_header_list = [{'x-image-meta-checksum': '1234'},
                               {'x-image-meta-size': '12345'}]
        for fixture_header in fixture_header_list:
            req = webob.Request.blank('/images/%s' % UUID2)
            req.method = 'PUT'
            for k, v in six.iteritems(fixture_header):
                req = webob.Request.blank('/images/%s' % UUID2)
                req.method = 'HEAD'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)

                req = webob.Request.blank('/images/%s' % UUID2)
                req.headers[k] = v
                req.method = 'PUT'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)

                req = webob.Request.blank('/images/%s' % UUID2)
                req.method = 'HEAD'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)
                self.assertEqual(v, res.headers[k])

    def test_props_of_deactivated_image_mutable_for_admin(self):
        """
        Tests admin can update 'immutable' props of deactivated image
        """
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=True)
        fixture_header_list = [{'x-image-meta-checksum': '1234'},
                               {'x-image-meta-size': '12345'}]
        for fixture_header in fixture_header_list:
            req = webob.Request.blank('/images/%s' % UUID3)
            req.method = 'PUT'
            for k, v in six.iteritems(fixture_header):
                req = webob.Request.blank('/images/%s' % UUID3)
                req.method = 'HEAD'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)

                req = webob.Request.blank('/images/%s' % UUID3)
                req.headers[k] = v
                req.method = 'PUT'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)

                req = webob.Request.blank('/images/%s' % UUID3)
                req.method = 'HEAD'
                res = req.get_response(self.api)
                self.assertEqual(http_client.OK, res.status_int)
                self.assertEqual(v, res.headers[k])

    def test_replace_members_non_existing_image(self):
        """
        Tests replacing image members raises right exception
        """
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=True)
        fixture = dict(member_id='pattieblack')
        req = webob.Request.blank('/images/%s/members' % _gen_uuid())
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(dict(image_memberships=fixture))

        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)

    def test_replace_members_bad_request(self):
        """
        Tests replacing image members raises bad request if body is wrong
        """
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=True)
        fixture = dict(member_id='pattieblack')

        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(dict(image_memberships=fixture))

        res = req.get_response(self.api)
        self.assertEqual(http_client.BAD_REQUEST, res.status_int)

    def test_replace_members_positive(self):
        """
        Tests replacing image members
        """
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router, is_admin=True)

        fixture = [dict(member_id='pattieblack', can_share=False)]
        # Replace
        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(dict(memberships=fixture))
        res = req.get_response(self.api)
        self.assertEqual(http_client.NO_CONTENT, res.status_int)

    def test_replace_members_forbidden_by_policy(self):
        rules = {"modify_member": '!'}
        self.set_policy_rules(rules)
        self.api = test_utils.FakeAuthMiddleware(router.API(self.mapper),
                                                 is_admin=True)
        fixture = [{'member_id': 'pattieblack', 'can_share': 'false'}]

        req = webob.Request.blank('/images/%s/members' % UUID1)
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(dict(memberships=fixture))

        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPForbidden.code, res.status_int)

    def test_replace_members_allowed_by_policy(self):
        rules = {"modify_member": '@'}
        self.set_policy_rules(rules)
        self.api = test_utils.FakeAuthMiddleware(router.API(self.mapper),
                                                 is_admin=True)
        fixture = [{'member_id': 'pattieblack', 'can_share': 'false'}]

        req = webob.Request.blank('/images/%s/members' % UUID1)
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(dict(memberships=fixture))

        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPNoContent.code, res.status_int)

    def test_add_member_unauthorized(self):
        """
        Tests adding image members raises right exception
        """
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router, is_admin=False)
        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'PUT'

        res = req.get_response(self.api)
        self.assertEqual(http_client.UNAUTHORIZED, res.status_int)

    def test_add_member_non_existing_image(self):
        """
        Tests adding image members raises right exception
        """
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router, is_admin=True)
        test_uri = '/images/%s/members/pattieblack'
        req = webob.Request.blank(test_uri % _gen_uuid())
        req.method = 'PUT'

        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)

    def test_add_member_with_body(self):
        """
        Tests adding image members
        """
        fixture = dict(can_share=True)
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router, is_admin=True)
        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'PUT'
        req.body = jsonutils.dump_as_bytes(dict(member=fixture))
        res = req.get_response(self.api)
        self.assertEqual(http_client.NO_CONTENT, res.status_int)

    def test_add_member_overlimit(self):
        self.config(image_member_quota=0)
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=True)
        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'PUT'

        res = req.get_response(self.api)
        self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE, res.status_int)

    def test_add_member_unlimited(self):
        self.config(image_member_quota=-1)
        test_router_api = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router_api, is_admin=True)
        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'PUT'

        res = req.get_response(self.api)
        self.assertEqual(http_client.NO_CONTENT, res.status_int)

    def test_add_member_forbidden_by_policy(self):
        rules = {"modify_member": '!'}
        self.set_policy_rules(rules)
        self.api = test_utils.FakeAuthMiddleware(router.API(self.mapper),
                                                 is_admin=True)
        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID1)
        req.method = 'PUT'

        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPForbidden.code, res.status_int)

    def test_add_member_allowed_by_policy(self):
        rules = {"modify_member": '@'}
        self.set_policy_rules(rules)
        self.api = test_utils.FakeAuthMiddleware(router.API(self.mapper),
                                                 is_admin=True)
        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID1)
        req.method = 'PUT'

        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPNoContent.code, res.status_int)

    def test_get_members_of_deleted_image_raises_404(self):
        """
        Tests members listing for deleted image raises 404.
        """
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPNotFound.code, res.status_int)
        self.assertIn('Image with identifier %s has been deleted.' % UUID2,
                      res.body.decode())

    def test_delete_member_of_deleted_image_raises_404(self):
        """
        Tests deleting members of deleted image raises 404.
        """
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(test_router, is_admin=True)
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPNotFound.code, res.status_int)
        self.assertIn('Image with identifier %s has been deleted.' % UUID2,
                      res.body.decode())

    def test_update_members_of_deleted_image_raises_404(self):
        """
        Tests update members of deleted image raises 404.
        """
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(test_router, is_admin=True)

        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'PUT'
        res = req.get_response(self.api)
        self.assertEqual(http_client.NO_CONTENT, res.status_int)

        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        fixture = [{'member_id': 'pattieblack', 'can_share': 'false'}]
        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(dict(memberships=fixture))
        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPNotFound.code, res.status_int)
        body = res.body.decode('utf-8')
        self.assertIn(
            'Image with identifier %s has been deleted.' % UUID2, body)

    def test_replace_members_of_image(self):
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(test_router, is_admin=True)

        fixture = [{'member_id': 'pattieblack', 'can_share': 'false'}]
        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'PUT'
        req.body = jsonutils.dump_as_bytes(dict(memberships=fixture))
        res = req.get_response(self.api)
        self.assertEqual(http_client.NO_CONTENT, res.status_int)

        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        memb_list = jsonutils.loads(res.body)
        self.assertEqual(1, len(memb_list))

    def test_replace_members_of_image_overlimit(self):
        # Set image_member_quota to 1
        self.config(image_member_quota=1)
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(test_router, is_admin=True)

        # PUT an original member entry
        fixture = [{'member_id': 'baz', 'can_share': False}]
        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'PUT'
        req.body = jsonutils.dump_as_bytes(dict(memberships=fixture))
        res = req.get_response(self.api)
        self.assertEqual(http_client.NO_CONTENT, res.status_int)

        # GET original image member list
        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        original_members = jsonutils.loads(res.body)['members']
        self.assertEqual(1, len(original_members))

        # PUT 2 image members to replace existing (overlimit)
        fixture = [{'member_id': 'foo1', 'can_share': False},
                   {'member_id': 'foo2', 'can_share': False}]
        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'PUT'
        req.body = jsonutils.dump_as_bytes(dict(memberships=fixture))
        res = req.get_response(self.api)
        self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE, res.status_int)

        # GET member list
        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        # Assert the member list was not changed
        memb_list = jsonutils.loads(res.body)['members']
        self.assertEqual(original_members, memb_list)

    def test_replace_members_of_image_unlimited(self):
        self.config(image_member_quota=-1)
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(test_router, is_admin=True)

        fixture = [{'member_id': 'foo1', 'can_share': False},
                   {'member_id': 'foo2', 'can_share': False}]
        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'PUT'
        req.body = jsonutils.dump_as_bytes(dict(memberships=fixture))
        res = req.get_response(self.api)
        self.assertEqual(http_client.NO_CONTENT, res.status_int)

        req = webob.Request.blank('/images/%s/members' % UUID2)
        req.method = 'GET'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)
        memb_list = jsonutils.loads(res.body)['members']
        self.assertEqual(fixture, memb_list)

    def test_create_member_to_deleted_image_raises_404(self):
        """
        Tests adding members to deleted image raises 404.
        """
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(test_router, is_admin=True)
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(http_client.OK, res.status_int)

        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'PUT'

        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPNotFound.code, res.status_int)
        self.assertIn('Image with identifier %s has been deleted.' % UUID2,
                      res.body.decode())

    def test_delete_member(self):
        """
        Tests deleting image members raises right exception
        """
        test_router = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_router, is_admin=False)
        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'DELETE'

        res = req.get_response(self.api)
        self.assertEqual(http_client.UNAUTHORIZED, res.status_int)

    def test_delete_member_on_non_existing_image(self):
        """
        Tests deleting image members raises right exception
        """
        test_router = router.API(self.mapper)
        api = test_utils.FakeAuthMiddleware(test_router, is_admin=True)
        test_uri = '/images/%s/members/pattieblack'
        req = webob.Request.blank(test_uri % _gen_uuid())
        req.method = 'DELETE'

        res = req.get_response(api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)

    def test_delete_non_exist_member(self):
        """
        Test deleting image members raises right exception
        """
        test_router = router.API(self.mapper)
        api = test_utils.FakeAuthMiddleware(
            test_router, is_admin=True)
        req = webob.Request.blank('/images/%s/members/test_user' % UUID2)
        req.method = 'DELETE'
        res = req.get_response(api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)

    def test_delete_image_member(self):
        test_rserver = router.API(self.mapper)
        self.api = test_utils.FakeAuthMiddleware(
            test_rserver, is_admin=True)

        # Add member to image:
        fixture = dict(can_share=True)
        test_uri = '/images/%s/members/test_add_member_positive'
        req = webob.Request.blank(test_uri % UUID2)
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(dict(member=fixture))
        res = req.get_response(self.api)
        self.assertEqual(http_client.NO_CONTENT, res.status_int)

        # Delete member
        test_uri = '/images/%s/members/test_add_member_positive'
        req = webob.Request.blank(test_uri % UUID2)
        req.headers['X-Auth-Token'] = 'test1:test1:'
        req.method = 'DELETE'
        req.content_type = 'application/json'
        res = req.get_response(self.api)
        self.assertEqual(http_client.NOT_FOUND, res.status_int)
        self.assertIn(b'Forbidden', res.body)

    def test_delete_member_allowed_by_policy(self):
        rules = {"delete_member": '@', "modify_member": '@'}
        self.set_policy_rules(rules)
        self.api = test_utils.FakeAuthMiddleware(router.API(self.mapper),
                                                 is_admin=True)

        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'PUT'
        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPNoContent.code, res.status_int)

        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPNoContent.code, res.status_int)

    def test_delete_member_forbidden_by_policy(self):
        rules = {"delete_member": '!', "modify_member": '@'}
        self.set_policy_rules(rules)
        self.api = test_utils.FakeAuthMiddleware(router.API(self.mapper),
                                                 is_admin=True)

        req = webob.Request.blank('/images/%s/members/pattieblack' % UUID2)
        req.method = 'PUT'
        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPNoContent.code, res.status_int)

        req.method = 'DELETE'
        res = req.get_response(self.api)
        self.assertEqual(webob.exc.HTTPForbidden.code, res.status_int)


class TestImageSerializer(base.IsolatedUnitTest):
    def setUp(self):
        """Establish a clean test environment"""
        super(TestImageSerializer, self).setUp()
        self.receiving_user = 'fake_user'
        self.receiving_tenant = 2
        self.context = glance.context.RequestContext(
            is_admin=True,
            user=self.receiving_user,
            tenant=self.receiving_tenant)
        self.serializer = glance.api.v1.images.ImageSerializer()

        def image_iter():
            for x in [b'chunk', b'678911234', b'56789']:
                yield x

        self.FIXTURE = {
            'image_iterator': image_iter(),
            'image_meta': {
                'id': UUID2,
                'name': 'fake image #2',
                'status': 'active',
                'disk_format': 'vhd',
                'container_format': 'ovf',
                'is_public': True,
                'created_at': timeutils.utcnow(),
                'updated_at': timeutils.utcnow(),
                'deleted_at': None,
                'deleted': False,
                'checksum': '06ff575a2856444fbe93100157ed74ab92eb7eff',
                'size': 19,
                'owner': _gen_uuid(),
                'location': "file:///tmp/glance-tests/2",
                'properties': {},
            }
        }

    def test_meta(self):
        exp_headers = {'x-image-meta-id': UUID2,
                       'x-image-meta-location': 'file:///tmp/glance-tests/2',
                       'ETag': self.FIXTURE['image_meta']['checksum'],
                       'x-image-meta-name': 'fake image #2'}
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'HEAD'
        req.remote_addr = "1.2.3.4"
        req.context = self.context
        response = webob.Response(request=req)

        self.serializer.meta(response, self.FIXTURE)

        for key, value in six.iteritems(exp_headers):
            self.assertEqual(value, response.headers[key])

    def test_meta_utf8(self):
        # We get unicode strings from JSON, and therefore all strings in the
        # metadata will actually be unicode when handled internally. But we
        # want to output utf-8.
        FIXTURE = {
            'image_meta': {
                'id': six.text_type(UUID2),
                'name': u'fake image #2 with utf-8 éàè',
                'status': u'active',
                'disk_format': u'vhd',
                'container_format': u'ovf',
                'is_public': True,
                'created_at': timeutils.utcnow(),
                'updated_at': timeutils.utcnow(),
                'deleted_at': None,
                'deleted': False,
                'checksum': u'06ff575a2856444fbe93100157ed74ab92eb7eff',
                'size': 19,
                'owner': six.text_type(_gen_uuid()),
                'location': u"file:///tmp/glance-tests/2",
                'properties': {
                    u'prop_éé': u'ça marche',
                    u'prop_çé': u'çé',
                }
            }
        }
        exp_headers = {'x-image-meta-id': UUID2,
                       'x-image-meta-location': 'file:///tmp/glance-tests/2',
                       'ETag': '06ff575a2856444fbe93100157ed74ab92eb7eff',
                       'x-image-meta-size': '19',  # str, not int
                       'x-image-meta-name': 'fake image #2 with utf-8 éàè',
                       'x-image-meta-property-prop_éé': 'ça marche',
                       'x-image-meta-property-prop_çé': 'çé'}
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'HEAD'
        req.remote_addr = "1.2.3.4"
        req.context = self.context
        response = webob.Response(request=req)

        self.serializer.meta(response, FIXTURE)

        if six.PY2:
            self.assertNotEqual(type(FIXTURE['image_meta']['name']),
                                type(response.headers['x-image-meta-name']))
        if six.PY3:
            self.assertEqual(FIXTURE['image_meta']['name'],
                             response.headers['x-image-meta-name'])
        else:
            self.assertEqual(
                FIXTURE['image_meta']['name'],
                response.headers['x-image-meta-name'].decode('utf-8'))
        for key, value in six.iteritems(exp_headers):
            self.assertEqual(value, response.headers[key])

        if six.PY2:
            FIXTURE['image_meta']['properties'][u'prop_bad'] = 'çé'
            self.assertRaises(UnicodeDecodeError,
                              self.serializer.meta, response, FIXTURE)

    def test_show(self):
        exp_headers = {'x-image-meta-id': UUID2,
                       'x-image-meta-location': 'file:///tmp/glance-tests/2',
                       'ETag': self.FIXTURE['image_meta']['checksum'],
                       'x-image-meta-name': 'fake image #2'}
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'GET'
        req.context = self.context
        response = webob.Response(request=req)

        self.serializer.show(response, self.FIXTURE)

        for key, value in six.iteritems(exp_headers):
            self.assertEqual(value, response.headers[key])

        self.assertEqual(b'chunk67891123456789', response.body)

    def test_show_notify(self):
        """Make sure an eventlet posthook for notify_image_sent is added."""
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'GET'
        req.context = self.context
        response = webob.Response(request=req)
        response.request.environ['eventlet.posthooks'] = []

        self.serializer.show(response, self.FIXTURE)

        # just make sure the app_iter is called
        for chunk in response.app_iter:
            pass

        self.assertNotEqual([],
                            response.request.environ['eventlet.posthooks'])

    def test_image_send_notification(self):
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'GET'
        req.remote_addr = '1.2.3.4'
        req.context = self.context

        image_meta = self.FIXTURE['image_meta']
        called = {"notified": False}
        expected_payload = {
            'bytes_sent': 19,
            'image_id': UUID2,
            'owner_id': image_meta['owner'],
            'receiver_tenant_id': self.receiving_tenant,
            'receiver_user_id': self.receiving_user,
            'destination_ip': '1.2.3.4',
        }

        def fake_info(_event_type, _payload):
            self.assertEqual(expected_payload, _payload)
            called['notified'] = True

        self.stubs.Set(self.serializer.notifier, 'info', fake_info)

        glance.api.common.image_send_notification(19, 19, image_meta, req,
                                                  self.serializer.notifier)
        self.assertTrue(called['notified'])

    def test_image_send_notification_error(self):
        """Ensure image.send notification is sent on error."""
        req = webob.Request.blank("/images/%s" % UUID2)
        req.method = 'GET'
        req.remote_addr = '1.2.3.4'
        req.context = self.context

        image_meta = self.FIXTURE['image_meta']
        called = {"notified": False}
        expected_payload = {
            'bytes_sent': 17,
            'image_id': UUID2,
            'owner_id': image_meta['owner'],
            'receiver_tenant_id': self.receiving_tenant,
            'receiver_user_id': self.receiving_user,
            'destination_ip': '1.2.3.4',
        }

        def fake_error(_event_type, _payload):
            self.assertEqual(expected_payload, _payload)
            called['notified'] = True

        self.stubs.Set(self.serializer.notifier, 'error', fake_error)

        # expected and actually sent bytes differ
        glance.api.common.image_send_notification(17, 19, image_meta, req,
                                                  self.serializer.notifier)
        self.assertTrue(called['notified'])

    def test_redact_location(self):
        """Ensure location redaction does not change original metadata"""
        image_meta = {'size': 3, 'id': '123',
                      'location': 'http://localhost'}
        redacted_image_meta = {'size': 3, 'id': '123'}
        copy_image_meta = copy.deepcopy(image_meta)
        tmp_image_meta = glance.api.v1.images.redact_loc(image_meta)

        self.assertEqual(image_meta, copy_image_meta)
        self.assertEqual(redacted_image_meta, tmp_image_meta)

    def test_noop_redact_location(self):
        """Check no-op location redaction does not change original metadata"""
        image_meta = {'size': 3, 'id': '123'}
        redacted_image_meta = {'size': 3, 'id': '123'}
        copy_image_meta = copy.deepcopy(image_meta)
        tmp_image_meta = glance.api.v1.images.redact_loc(image_meta)

        self.assertEqual(image_meta, copy_image_meta)
        self.assertEqual(redacted_image_meta, tmp_image_meta)
        self.assertEqual(redacted_image_meta, image_meta)


class TestFilterValidator(base.IsolatedUnitTest):
    def test_filter_validator(self):
        self.assertFalse(glance.api.v1.filters.validate('size_max', -1))
        self.assertTrue(glance.api.v1.filters.validate('size_max', 1))
        self.assertTrue(glance.api.v1.filters.validate('protected', 'True'))
        self.assertTrue(glance.api.v1.filters.validate('protected', 'FALSE'))
        self.assertFalse(glance.api.v1.filters.validate('protected', '-1'))


class TestAPIProtectedProps(base.IsolatedUnitTest):
    def setUp(self):
        """Establish a clean test environment"""
        super(TestAPIProtectedProps, self).setUp()
        self.mapper = routes.Mapper()
        # turn on property protections
        self.set_property_protections()
        self.api = test_utils.FakeAuthMiddleware(router.API(self.mapper))
        db_api.get_engine()
        db_models.unregister_models(db_api.get_engine())
        db_models.register_models(db_api.get_engine())

    def tearDown(self):
        """Clear the test environment"""
        super(TestAPIProtectedProps, self).tearDown()
        self.destroy_fixtures()

    def destroy_fixtures(self):
        # Easiest to just drop the models and re-create them...
        db_models.unregister_models(db_api.get_engine())
        db_models.register_models(db_api.get_engine())

    def _create_admin_image(self, props=None):
        if props is None:
            props = {}
        request = unit_test_utils.get_fake_request(path='/images')
        headers = {'x-image-meta-disk-format': 'ami',
                   'x-image-meta-container-format': 'ami',
                   'x-image-meta-name': 'foo',
                   'x-image-meta-size': '0',
                   'x-auth-token': 'user:tenant:admin'}
        headers.update(props)
        for k, v in six.iteritems(headers):
            request.headers[k] = v
        created_image = request.get_response(self.api)
        res_body = jsonutils.loads(created_image.body)['image']
        image_id = res_body['id']
        return image_id

    def test_prop_protection_with_create_and_permitted_role(self):
        """
        As admin role, create an image and verify permitted role 'member' can
        create a protected property
        """
        image_id = self._create_admin_image()
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'x-image-meta-property-x_owner_foo': 'bar'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('bar', res_body['properties']['x_owner_foo'])

    def test_prop_protection_with_permitted_policy_config(self):
        """
        As admin role, create an image and verify permitted role 'admin' can
        create a protected property
        """
        self.set_property_protections(use_policies=True)
        image_id = self._create_admin_image()
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:admin',
                   'x-image-meta-property-spl_create_prop_policy': 'bar'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('bar',
                         res_body['properties']['spl_create_prop_policy'])

    def test_prop_protection_with_create_and_unpermitted_role(self):
        """
        As admin role, create an image and verify unpermitted role
        'fake_member' can *not* create a protected property
        """
        image_id = self._create_admin_image()
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:fake_member',
                   'x-image-meta-property-x_owner_foo': 'bar'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        another_request.get_response(self.api)
        output = another_request.get_response(self.api)
        self.assertEqual(webob.exc.HTTPForbidden.code, output.status_int)
        self.assertIn("Property '%s' is protected" % "x_owner_foo",
                      output.body.decode())

    def test_prop_protection_with_show_and_permitted_role(self):
        """
        As admin role, create an image with a protected property, and verify
        permitted role 'member' can read that protected property via HEAD
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            method='HEAD', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:member'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        res2 = another_request.get_response(self.api)
        self.assertEqual('bar',
                         res2.headers['x-image-meta-property-x_owner_foo'])

    def test_prop_protection_with_show_and_unpermitted_role(self):
        """
        As admin role, create an image with a protected property, and verify
        unpermitted role 'fake_role' can *not* read that protected property
        via HEAD
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            method='HEAD', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:fake_role'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        self.assertEqual(b'', output.body)
        self.assertNotIn('x-image-meta-property-x_owner_foo', output.headers)

    def test_prop_protection_with_get_and_permitted_role(self):
        """
        As admin role, create an image with a protected property, and verify
        permitted role 'member' can read that protected property via GET
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            method='GET', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:member'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        res2 = another_request.get_response(self.api)
        self.assertEqual('bar',
                         res2.headers['x-image-meta-property-x_owner_foo'])

    def test_prop_protection_with_get_and_unpermitted_role(self):
        """
        As admin role, create an image with a protected property, and verify
        unpermitted role 'fake_role' can *not* read that protected property
        via GET
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            method='GET', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:fake_role'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        self.assertEqual(b'', output.body)
        self.assertNotIn('x-image-meta-property-x_owner_foo', output.headers)

    def test_prop_protection_with_detail_and_permitted_role(self):
        """
        As admin role, create an image with a protected property, and verify
        permitted role 'member' can read that protected property via
        /images/detail
        """
        self._create_admin_image({'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            method='GET', path='/images/detail')
        headers = {'x-auth-token': 'user:tenant:member'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        res_body = jsonutils.loads(output.body)['images'][0]
        self.assertEqual('bar', res_body['properties']['x_owner_foo'])

    def test_prop_protection_with_detail_and_permitted_policy(self):
        """
        As admin role, create an image with a protected property, and verify
        permitted role 'member' can read that protected property via
        /images/detail
        """
        self.set_property_protections(use_policies=True)
        self._create_admin_image({'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            method='GET', path='/images/detail')
        headers = {'x-auth-token': 'user:tenant:member'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        res_body = jsonutils.loads(output.body)['images'][0]
        self.assertEqual('bar', res_body['properties']['x_owner_foo'])

    def test_prop_protection_with_detail_and_unpermitted_role(self):
        """
        As admin role, create an image with a protected property, and verify
        unpermitted role 'fake_role' can *not* read that protected property
        via /images/detail
        """
        self._create_admin_image({'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            method='GET', path='/images/detail')
        headers = {'x-auth-token': 'user:tenant:fake_role'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        res_body = jsonutils.loads(output.body)['images'][0]
        self.assertNotIn('x-image-meta-property-x_owner_foo',
                         res_body['properties'])

    def test_prop_protection_with_detail_and_unpermitted_policy(self):
        """
        As admin role, create an image with a protected property, and verify
        unpermitted role 'fake_role' can *not* read that protected property
        via /images/detail
        """
        self.set_property_protections(use_policies=True)
        self._create_admin_image({'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            method='GET', path='/images/detail')
        headers = {'x-auth-token': 'user:tenant:fake_role'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        res_body = jsonutils.loads(output.body)['images'][0]
        self.assertNotIn('x-image-meta-property-x_owner_foo',
                         res_body['properties'])

    def test_prop_protection_with_update_and_permitted_role(self):
        """
        As admin role, create an image with protected property, and verify
        permitted role 'member' can update that protected property
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'x-image-meta-property-x_owner_foo': 'baz'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('baz', res_body['properties']['x_owner_foo'])

    def test_prop_protection_with_update_and_permitted_policy(self):
        """
        As admin role, create an image with protected property, and verify
        permitted role 'admin' can update that protected property
        """
        self.set_property_protections(use_policies=True)
        image_id = self._create_admin_image(
            {'x-image-meta-property-spl_default_policy': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:admin',
                   'x-image-meta-property-spl_default_policy': 'baz'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('baz', res_body['properties']['spl_default_policy'])

    def test_prop_protection_with_update_and_unpermitted_role(self):
        """
        As admin role, create an image with protected property, and verify
        unpermitted role 'fake_role' can *not* update that protected property
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:fake_role',
                   'x-image-meta-property-x_owner_foo': 'baz'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(webob.exc.HTTPForbidden.code, output.status_int)
        self.assertIn("Property '%s' is protected" % "x_owner_foo",
                      output.body.decode())

    def test_prop_protection_with_update_and_unpermitted_policy(self):
        """
        As admin role, create an image with protected property, and verify
        unpermitted role 'fake_role' can *not* update that protected property
        """
        self.set_property_protections(use_policies=True)
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:fake_role',
                   'x-image-meta-property-x_owner_foo': 'baz'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(webob.exc.HTTPForbidden.code, output.status_int)
        self.assertIn("Property '%s' is protected" % "x_owner_foo",
                      output.body.decode())

    def test_prop_protection_update_without_read(self):
        """
        Test protected property cannot be updated without read permission
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-spl_update_only_prop': 'foo'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:spl_role',
                   'x-image-meta-property-spl_update_only_prop': 'bar'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(webob.exc.HTTPForbidden.code, output.status_int)
        self.assertIn("Property '%s' is protected" % "spl_update_only_prop",
                      output.body.decode())

    def test_prop_protection_update_noop(self):
        """
        Test protected property update is allowed as long as the user has
        read access and the value is unchanged
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-spl_read_prop': 'foo'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:spl_role',
                   'x-image-meta-property-spl_read_prop': 'foo'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('foo', res_body['properties']['spl_read_prop'])
        self.assertEqual(http_client.OK, output.status_int)

    def test_prop_protection_with_delete_and_permitted_role(self):
        """
        As admin role, create an image with protected property, and verify
        permitted role 'member' can delete that protected property
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual({}, res_body['properties'])

    def test_prop_protection_with_delete_and_permitted_policy(self):
        """
        As admin role, create an image with protected property, and verify
        permitted role 'member' can delete that protected property
        """
        self.set_property_protections(use_policies=True)
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual({}, res_body['properties'])

    def test_prop_protection_with_delete_and_unpermitted_read(self):
        """
        Test protected property cannot be deleted without read permission
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:fake_role',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        self.assertNotIn('x-image-meta-property-x_owner_foo', output.headers)

        another_request = unit_test_utils.get_fake_request(
            method='HEAD', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:admin'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        self.assertEqual(b'', output.body)
        self.assertEqual('bar',
                         output.headers['x-image-meta-property-x_owner_foo'])

    def test_prop_protection_with_delete_and_unpermitted_delete(self):
        """
        Test protected property cannot be deleted without delete permission
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-spl_update_prop': 'foo'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:spl_role',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, output.status_int)
        self.assertIn("Property '%s' is protected" % "spl_update_prop",
                      output.body.decode())

        another_request = unit_test_utils.get_fake_request(
            method='HEAD', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:admin'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        self.assertEqual(b'', output.body)
        self.assertEqual(
            'foo', output.headers['x-image-meta-property-spl_update_prop'])

    def test_read_protected_props_leak_with_update(self):
        """
        Verify when updating props that ones we don't have read permission
        for are not disclosed
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-spl_update_prop': '0',
             'x-image-meta-property-foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:spl_role',
                   'x-image-meta-property-spl_update_prop': '1',
                   'X-Glance-Registry-Purge-Props': 'False'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('1', res_body['properties']['spl_update_prop'])
        self.assertNotIn('foo', res_body['properties'])

    def test_update_protected_props_mix_no_read(self):
        """
        Create an image with two props - one only readable by admin, and one
        readable/updatable by member. Verify member can successfully update
        their property while the admin owned one is ignored transparently
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-admin_foo': 'bar',
             'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'x-image-meta-property-x_owner_foo': 'baz'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('baz', res_body['properties']['x_owner_foo'])
        self.assertNotIn('admin_foo', res_body['properties'])

    def test_update_protected_props_mix_read(self):
        """
        Create an image with two props - one readable/updatable by admin, but
        also readable by spl_role. The other is readable/updatable by
        spl_role.
        Verify spl_role can successfully update their property but not the
        admin owned one
        """
        custom_props = {
            'x-image-meta-property-spl_read_only_prop': '1',
            'x-image-meta-property-spl_update_prop': '2'
        }
        image_id = self._create_admin_image(custom_props)
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')

        # verify spl_role can update its prop
        headers = {'x-auth-token': 'user:tenant:spl_role',
                   'x-image-meta-property-spl_read_only_prop': '1',
                   'x-image-meta-property-spl_update_prop': '1'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual(http_client.OK, output.status_int)
        self.assertEqual('1', res_body['properties']['spl_read_only_prop'])
        self.assertEqual('1', res_body['properties']['spl_update_prop'])

        # verify spl_role can not update admin controlled prop
        headers = {'x-auth-token': 'user:tenant:spl_role',
                   'x-image-meta-property-spl_read_only_prop': '2',
                   'x-image-meta-property-spl_update_prop': '1'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, output.status_int)

    def test_delete_protected_props_mix_no_read(self):
        """
        Create an image with two props - one only readable by admin, and one
        readable/deletable by member.
        Verify member can successfully delete their property while the admin
        owned one is ignored transparently
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-admin_foo': 'bar',
             'x-image-meta-property-x_owner_foo': 'bar'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertNotIn('x_owner_foo', res_body['properties'])
        self.assertNotIn('admin_foo', res_body['properties'])

    def test_delete_protected_props_mix_read(self):
        """
        Create an image with two props - one readable/deletable by admin, but
        also readable by spl_role. The other is readable/deletable by
        spl_role. Verify spl_role is forbidden to purge_props in this
        scenario without retaining the readable prop.
        """
        custom_props = {
            'x-image-meta-property-spl_read_only_prop': '1',
            'x-image-meta-property-spl_delete_prop': '2'
        }
        image_id = self._create_admin_image(custom_props)
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:spl_role',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, output.status_int)

    def test_create_protected_prop_check_case_insensitive(self):
        """
        Verify that role check is case-insensitive i.e.
        the property marked with role Member is creatable by the member role
        """
        image_id = self._create_admin_image()
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'x-image-meta-property-x_case_insensitive': '1'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('1', res_body['properties']['x_case_insensitive'])

    def test_read_protected_prop_check_case_insensitive(self):
        """
        Verify that role check is case-insensitive i.e. the property marked
        with role Member is readable by the member role
        """
        custom_props = {
            'x-image-meta-property-x_case_insensitive': '1'
        }
        image_id = self._create_admin_image(custom_props)
        another_request = unit_test_utils.get_fake_request(
            method='HEAD', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:member'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        self.assertEqual(b'', output.body)
        self.assertEqual(
            '1', output.headers['x-image-meta-property-x_case_insensitive'])

    def test_update_protected_props_check_case_insensitive(self):
        """
        Verify that role check is case-insensitive i.e.
        the property marked with role Member is updatable by the member role
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_case_insensitive': '1'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'x-image-meta-property-x_case_insensitive': '2'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('2', res_body['properties']['x_case_insensitive'])

    def test_delete_protected_props_check_case_insensitive(self):
        """
        Verify that role check is case-insensitive i.e. the property marked
        with role Member is deletable by the member role
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_case_insensitive': '1'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual({}, res_body['properties'])

    def test_create_non_protected_prop(self):
        """
        Verify property marked with special char '@' is creatable by an
        unknown role
        """
        image_id = self._create_admin_image()
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:joe_soap',
                   'x-image-meta-property-x_all_permitted': '1'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('1', res_body['properties']['x_all_permitted'])

    def test_read_non_protected_prop(self):
        """
        Verify property marked with special char '@' is readable by an
        unknown role
        """
        custom_props = {
            'x-image-meta-property-x_all_permitted': '1'
        }
        image_id = self._create_admin_image(custom_props)
        another_request = unit_test_utils.get_fake_request(
            method='HEAD', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:joe_soap'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        self.assertEqual(b'', output.body)
        self.assertEqual(
            '1', output.headers['x-image-meta-property-x_all_permitted'])

    def test_update_non_protected_prop(self):
        """
        Verify property marked with special char '@' is updatable by an
        unknown role
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_all_permitted': '1'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:joe_soap',
                   'x-image-meta-property-x_all_permitted': '2'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual('2', res_body['properties']['x_all_permitted'])

    def test_delete_non_protected_prop(self):
        """
        Verify property marked with special char '@' is deletable by an
        unknown role
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_all_permitted': '1'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:joe_soap',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        res_body = jsonutils.loads(output.body)['image']
        self.assertEqual({}, res_body['properties'])

    def test_create_locked_down_protected_prop(self):
        """
        Verify a property protected by special char '!'
        is creatable by no one
        """
        image_id = self._create_admin_image()
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'x-image-meta-property-x_none_permitted': '1'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, output.status_int)
        # also check admin can not create
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:admin',
                   'x-image-meta-property-x_none_permitted_admin': '1'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, output.status_int)

    def test_read_locked_down_protected_prop(self):
        """
        Verify a property protected by special char '!' is readable by no one
        """
        custom_props = {
            'x-image-meta-property-x_none_read': '1'
        }
        image_id = self._create_admin_image(custom_props)
        another_request = unit_test_utils.get_fake_request(
            method='HEAD', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:member'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        self.assertNotIn('x_none_read', output.headers)
        # also check admin can not read
        another_request = unit_test_utils.get_fake_request(
            method='HEAD', path='/images/%s' % image_id)
        headers = {'x-auth-token': 'user:tenant:admin'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.OK, output.status_int)
        self.assertNotIn('x_none_read', output.headers)

    def test_update_locked_down_protected_prop(self):
        """
        Verify a property protected by special char '!'
        is updatable by no one
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_none_update': '1'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'x-image-meta-property-x_none_update': '2'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, output.status_int)
        # also check admin can't update property
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:admin',
                   'x-image-meta-property-x_none_update': '2'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, output.status_int)

    def test_delete_locked_down_protected_prop(self):
        """
        Verify a property protected by special char '!' is deletable by no one
        """
        image_id = self._create_admin_image(
            {'x-image-meta-property-x_none_delete': '1'})
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:member',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, output.status_int)
        # also check admin can't delete
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:admin',
                   'X-Glance-Registry-Purge-Props': 'True'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.FORBIDDEN, output.status_int)


class TestAPIPropertyQuotas(base.IsolatedUnitTest):
    def setUp(self):
        """Establish a clean test environment"""
        super(TestAPIPropertyQuotas, self).setUp()
        self.mapper = routes.Mapper()
        self.api = test_utils.FakeAuthMiddleware(router.API(self.mapper))
        db_api.get_engine()
        db_models.unregister_models(db_api.get_engine())
        db_models.register_models(db_api.get_engine())

    def _create_admin_image(self, props=None):
        if props is None:
            props = {}
        request = unit_test_utils.get_fake_request(path='/images')
        headers = {'x-image-meta-disk-format': 'ami',
                   'x-image-meta-container-format': 'ami',
                   'x-image-meta-name': 'foo',
                   'x-image-meta-size': '0',
                   'x-auth-token': 'user:tenant:admin'}
        headers.update(props)
        for k, v in six.iteritems(headers):
            request.headers[k] = v
        created_image = request.get_response(self.api)
        res_body = jsonutils.loads(created_image.body)['image']
        image_id = res_body['id']
        return image_id

    def test_update_image_with_too_many_properties(self):
        """
        Ensure that updating image properties enforces the quota.
        """
        self.config(image_property_quota=1)
        image_id = self._create_admin_image()
        another_request = unit_test_utils.get_fake_request(
            path='/images/%s' % image_id, method='PUT')
        headers = {'x-auth-token': 'user:tenant:joe_soap',
                   'x-image-meta-property-x_all_permitted': '1',
                   'x-image-meta-property-x_all_permitted_foo': '2'}
        for k, v in six.iteritems(headers):
            another_request.headers[k] = v
        output = another_request.get_response(self.api)
        self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE,
                         output.status_int)
        self.assertIn("Attempted: 2, Maximum: 1", output.text)

    def test_update_image_with_too_many_properties_without_purge_props(self):
        """
        Ensure that updating image properties counts existing image
        properties when enforcing property quota.
""" self.config(image_property_quota=1) request = unit_test_utils.get_fake_request(path='/images') headers = {'x-image-meta-disk-format': 'ami', 'x-image-meta-container-format': 'ami', 'x-image-meta-name': 'foo', 'x-image-meta-size': '0', 'x-image-meta-property-x_all_permitted_create': '1', 'x-auth-token': 'user:tenant:admin'} for k, v in six.iteritems(headers): request.headers[k] = v created_image = request.get_response(self.api) res_body = jsonutils.loads(created_image.body)['image'] image_id = res_body['id'] another_request = unit_test_utils.get_fake_request( path='/images/%s' % image_id, method='PUT') headers = {'x-auth-token': 'user:tenant:joe_soap', 'x-glance-registry-purge-props': 'False', 'x-image-meta-property-x_all_permitted': '1'} for k, v in six.iteritems(headers): another_request.headers[k] = v output = another_request.get_response(self.api) self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE, output.status_int) self.assertIn("Attempted: 2, Maximum: 1", output.text) def test_update_properties_without_purge_props_overwrite_value(self): """ Ensure that updating image properties does not count against image property quota. 
""" self.config(image_property_quota=2) request = unit_test_utils.get_fake_request(path='/images') headers = {'x-image-meta-disk-format': 'ami', 'x-image-meta-container-format': 'ami', 'x-image-meta-name': 'foo', 'x-image-meta-size': '0', 'x-image-meta-property-x_all_permitted_create': '1', 'x-auth-token': 'user:tenant:admin'} for k, v in six.iteritems(headers): request.headers[k] = v created_image = request.get_response(self.api) res_body = jsonutils.loads(created_image.body)['image'] image_id = res_body['id'] another_request = unit_test_utils.get_fake_request( path='/images/%s' % image_id, method='PUT') headers = {'x-auth-token': 'user:tenant:joe_soap', 'x-glance-registry-purge-props': 'False', 'x-image-meta-property-x_all_permitted_create': '3', 'x-image-meta-property-x_all_permitted': '1'} for k, v in six.iteritems(headers): another_request.headers[k] = v output = another_request.get_response(self.api) self.assertEqual(http_client.OK, output.status_int) res_body = jsonutils.loads(output.body)['image'] self.assertEqual('1', res_body['properties']['x_all_permitted']) self.assertEqual('3', res_body['properties']['x_all_permitted_create']) glance-16.0.0/glance/tests/unit/test_image_cache.py0000666000175100017510000004753613245511421022262 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from __future__ import absolute_import from contextlib import contextmanager import datetime import hashlib import os import time import fixtures from oslo_utils import units from oslotest import moxstubout import six # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range from glance.common import exception from glance import image_cache # NOTE(bcwaldon): This is imported to load the registry config options import glance.registry # noqa from glance.tests import utils as test_utils from glance.tests.utils import skip_if_disabled from glance.tests.utils import xattr_writes_supported FIXTURE_LENGTH = 1024 FIXTURE_DATA = b'*' * FIXTURE_LENGTH class ImageCacheTestCase(object): def _setup_fixture_file(self): FIXTURE_FILE = six.BytesIO(FIXTURE_DATA) self.assertFalse(self.cache.is_cached(1)) self.assertTrue(self.cache.cache_image_file(1, FIXTURE_FILE)) self.assertTrue(self.cache.is_cached(1)) @skip_if_disabled def test_is_cached(self): """Verify is_cached(1) returns 0, then add something to the cache and verify is_cached(1) returns 1. """ self._setup_fixture_file() @skip_if_disabled def test_read(self): """Verify is_cached(1) returns 0, then add something to the cache and verify after a subsequent read from the cache that is_cached(1) returns 1. """ self._setup_fixture_file() buff = six.BytesIO() with self.cache.open_for_read(1) as cache_file: for chunk in cache_file: buff.write(chunk) self.assertEqual(FIXTURE_DATA, buff.getvalue()) @skip_if_disabled def test_open_for_read(self): """Test convenience wrapper for opening a cache file via its image identifier. """ self._setup_fixture_file() buff = six.BytesIO() with self.cache.open_for_read(1) as cache_file: for chunk in cache_file: buff.write(chunk) self.assertEqual(FIXTURE_DATA, buff.getvalue()) @skip_if_disabled def test_get_image_size(self): """Test convenience wrapper for querying cache file size via its image identifier. 
""" self._setup_fixture_file() size = self.cache.get_image_size(1) self.assertEqual(FIXTURE_LENGTH, size) @skip_if_disabled def test_delete(self): """Test delete method that removes an image from the cache.""" self._setup_fixture_file() self.cache.delete_cached_image(1) self.assertFalse(self.cache.is_cached(1)) @skip_if_disabled def test_delete_all(self): """Test delete method that removes an image from the cache.""" for image_id in (1, 2): self.assertFalse(self.cache.is_cached(image_id)) for image_id in (1, 2): FIXTURE_FILE = six.BytesIO(FIXTURE_DATA) self.assertTrue(self.cache.cache_image_file(image_id, FIXTURE_FILE)) for image_id in (1, 2): self.assertTrue(self.cache.is_cached(image_id)) self.cache.delete_all_cached_images() for image_id in (1, 2): self.assertFalse(self.cache.is_cached(image_id)) @skip_if_disabled def test_clean_stalled(self): """Test the clean method removes expected images.""" incomplete_file_path = os.path.join(self.cache_dir, 'incomplete', '1') incomplete_file = open(incomplete_file_path, 'wb') incomplete_file.write(FIXTURE_DATA) incomplete_file.close() self.assertTrue(os.path.exists(incomplete_file_path)) self.cache.clean(stall_time=0) self.assertFalse(os.path.exists(incomplete_file_path)) @skip_if_disabled def test_clean_stalled_nonzero_stall_time(self): """ Test the clean method removes the stalled images as expected """ incomplete_file_path_1 = os.path.join(self.cache_dir, 'incomplete', '1') incomplete_file_path_2 = os.path.join(self.cache_dir, 'incomplete', '2') for f in (incomplete_file_path_1, incomplete_file_path_2): incomplete_file = open(f, 'wb') incomplete_file.write(FIXTURE_DATA) incomplete_file.close() mtime = os.path.getmtime(incomplete_file_path_1) pastday = (datetime.datetime.fromtimestamp(mtime) - datetime.timedelta(days=1)) atime = int(time.mktime(pastday.timetuple())) mtime = atime os.utime(incomplete_file_path_1, (atime, mtime)) self.assertTrue(os.path.exists(incomplete_file_path_1)) 
self.assertTrue(os.path.exists(incomplete_file_path_2)) self.cache.clean(stall_time=3600) self.assertFalse(os.path.exists(incomplete_file_path_1)) self.assertTrue(os.path.exists(incomplete_file_path_2)) @skip_if_disabled def test_prune(self): """ Test that pruning the cache works as expected... """ self.assertEqual(0, self.cache.get_cache_size()) # Add a bunch of images to the cache. The max cache size for the cache # is set to 5KB and each image is 1K. We use 11 images in this test. # The first 10 are added to and retrieved from cache in the same order. # Then, the 11th image is added to cache but not retrieved before we # prune. We should see only 5 images left after pruning, and the # images that are least recently accessed should be the ones pruned... for x in range(10): FIXTURE_FILE = six.BytesIO(FIXTURE_DATA) self.assertTrue(self.cache.cache_image_file(x, FIXTURE_FILE)) self.assertEqual(10 * units.Ki, self.cache.get_cache_size()) # OK, hit the images that are now cached... for x in range(10): buff = six.BytesIO() with self.cache.open_for_read(x) as cache_file: for chunk in cache_file: buff.write(chunk) # Add a new image to cache. # This is specifically to test the bug: 1438564 FIXTURE_FILE = six.BytesIO(FIXTURE_DATA) self.assertTrue(self.cache.cache_image_file(99, FIXTURE_FILE)) self.cache.prune() self.assertEqual(5 * units.Ki, self.cache.get_cache_size()) # Ensure images 0, 1, 2, 3, 4 & 5 are not cached anymore for x in range(0, 6): self.assertFalse(self.cache.is_cached(x), "Image %s was cached!" % x) # Ensure images 6, 7, 8 and 9 are still cached for x in range(6, 10): self.assertTrue(self.cache.is_cached(x), "Image %s was not cached!" 
% x) # Ensure the newly added image, 99, is still cached self.assertTrue(self.cache.is_cached(99), "Image 99 was not cached!") @skip_if_disabled def test_prune_to_zero(self): """Test that an image_cache_max_size of 0 doesn't kill the pruner This is a test specifically for LP #1039854 """ self.assertEqual(0, self.cache.get_cache_size()) FIXTURE_FILE = six.BytesIO(FIXTURE_DATA) self.assertTrue(self.cache.cache_image_file('xxx', FIXTURE_FILE)) self.assertEqual(1024, self.cache.get_cache_size()) # OK, hit the image that is now cached... buff = six.BytesIO() with self.cache.open_for_read('xxx') as cache_file: for chunk in cache_file: buff.write(chunk) self.config(image_cache_max_size=0) self.cache.prune() self.assertEqual(0, self.cache.get_cache_size()) self.assertFalse(self.cache.is_cached('xxx')) @skip_if_disabled def test_queue(self): """ Test that queueing works properly """ self.assertFalse(self.cache.is_cached(1)) self.assertFalse(self.cache.is_queued(1)) FIXTURE_FILE = six.BytesIO(FIXTURE_DATA) self.assertTrue(self.cache.queue_image(1)) self.assertTrue(self.cache.is_queued(1)) self.assertFalse(self.cache.is_cached(1)) # Should not return True if the image is already # queued for caching... 
        self.assertFalse(self.cache.queue_image(1))
        self.assertFalse(self.cache.is_cached(1))

        # Test that we return False if we try to queue
        # an image that has already been cached
        self.assertTrue(self.cache.cache_image_file(1, FIXTURE_FILE))
        self.assertFalse(self.cache.is_queued(1))
        self.assertTrue(self.cache.is_cached(1))
        self.assertFalse(self.cache.queue_image(1))

        self.cache.delete_cached_image(1)

        for x in range(3):
            self.assertTrue(self.cache.queue_image(x))

        self.assertEqual(['0', '1', '2'], self.cache.get_queued_images())

    def test_open_for_write_good(self):
        """
        Test to see if open_for_write works in normal case
        """
        # test a good case
        image_id = '1'
        self.assertFalse(self.cache.is_cached(image_id))
        with self.cache.driver.open_for_write(image_id) as cache_file:
            cache_file.write(b'a')
        self.assertTrue(self.cache.is_cached(image_id),
                        "Image %s was NOT cached!" % image_id)

        # make sure it has tidied up
        incomplete_file_path = os.path.join(self.cache_dir,
                                            'incomplete', image_id)
        invalid_file_path = os.path.join(self.cache_dir, 'invalid', image_id)
        self.assertFalse(os.path.exists(incomplete_file_path))
        self.assertFalse(os.path.exists(invalid_file_path))

    def test_open_for_write_with_exception(self):
        """
        Test to see if open_for_write works in a failure case for each driver
        This case is where an exception is raised while the file is being
        written. The image is partially filled in cache and filling won't
        resume, so verify the image is moved to the invalid/ directory.
        """
        # test a case where an exception is raised while the file is open
        image_id = '1'
        self.assertFalse(self.cache.is_cached(image_id))
        try:
            with self.cache.driver.open_for_write(image_id):
                raise IOError
        except Exception as e:
            self.assertIsInstance(e, IOError)
        self.assertFalse(self.cache.is_cached(image_id),
                         "Image %s was cached!"
% image_id) # make sure it has tidied up incomplete_file_path = os.path.join(self.cache_dir, 'incomplete', image_id) invalid_file_path = os.path.join(self.cache_dir, 'invalid', image_id) self.assertFalse(os.path.exists(incomplete_file_path)) self.assertTrue(os.path.exists(invalid_file_path)) def test_caching_iterator(self): """ Test to see if the caching iterator interacts properly with the driver When the iterator completes going through the data the driver should have closed the image and placed it correctly """ # test a case where an exception NOT raised while the file is open, # and a consuming iterator completes def consume(image_id): data = [b'a', b'b', b'c', b'd', b'e', b'f'] checksum = None caching_iter = self.cache.get_caching_iter(image_id, checksum, iter(data)) self.assertEqual(data, list(caching_iter)) image_id = '1' self.assertFalse(self.cache.is_cached(image_id)) consume(image_id) self.assertTrue(self.cache.is_cached(image_id), "Image %s was NOT cached!" % image_id) # make sure it has tidied up incomplete_file_path = os.path.join(self.cache_dir, 'incomplete', image_id) invalid_file_path = os.path.join(self.cache_dir, 'invalid', image_id) self.assertFalse(os.path.exists(incomplete_file_path)) self.assertFalse(os.path.exists(invalid_file_path)) def test_caching_iterator_handles_backend_failure(self): """ Test that when the backend fails, caching_iter does not continue trying to consume data, and rolls back the cache. 
""" def faulty_backend(): data = [b'a', b'b', b'c', b'Fail', b'd', b'e', b'f'] for d in data: if d == b'Fail': raise exception.GlanceException('Backend failure') yield d def consume(image_id): caching_iter = self.cache.get_caching_iter(image_id, None, faulty_backend()) # exercise the caching_iter list(caching_iter) image_id = '1' self.assertRaises(exception.GlanceException, consume, image_id) # make sure bad image was not cached self.assertFalse(self.cache.is_cached(image_id)) def test_caching_iterator_falloffend(self): """ Test to see if the caching iterator interacts properly with the driver in a case where the iterator is only partially consumed. In this case the image is only partially filled in cache and filling wont resume. When the iterator goes out of scope the driver should have closed the image and moved it from incomplete/ to invalid/ """ # test a case where a consuming iterator just stops. def falloffend(image_id): data = [b'a', b'b', b'c', b'd', b'e', b'f'] checksum = None caching_iter = self.cache.get_caching_iter(image_id, checksum, iter(data)) self.assertEqual(b'a', next(caching_iter)) image_id = '1' self.assertFalse(self.cache.is_cached(image_id)) falloffend(image_id) self.assertFalse(self.cache.is_cached(image_id), "Image %s was cached!" 
% image_id) # make sure it has tidied up incomplete_file_path = os.path.join(self.cache_dir, 'incomplete', image_id) invalid_file_path = os.path.join(self.cache_dir, 'invalid', image_id) self.assertFalse(os.path.exists(incomplete_file_path)) self.assertTrue(os.path.exists(invalid_file_path)) def test_gate_caching_iter_good_checksum(self): image = b"12345678990abcdefghijklmnop" image_id = 123 md5 = hashlib.md5() md5.update(image) checksum = md5.hexdigest() cache = image_cache.ImageCache() img_iter = cache.get_caching_iter(image_id, checksum, [image]) for chunk in img_iter: pass # checksum is valid, fake image should be cached: self.assertTrue(cache.is_cached(image_id)) def test_gate_caching_iter_bad_checksum(self): image = b"12345678990abcdefghijklmnop" image_id = 123 checksum = "foobar" # bad. cache = image_cache.ImageCache() img_iter = cache.get_caching_iter(image_id, checksum, [image]) def reader(): for chunk in img_iter: pass self.assertRaises(exception.GlanceException, reader) # checksum is invalid, caching will fail: self.assertFalse(cache.is_cached(image_id)) class TestImageCacheXattr(test_utils.BaseTestCase, ImageCacheTestCase): """Tests image caching when xattr is used in cache""" def setUp(self): """ Test to see if the pre-requisites for the image cache are working (python-xattr installed and xattr support on the filesystem) """ super(TestImageCacheXattr, self).setUp() if getattr(self, 'disable', False): return self.cache_dir = self.useFixture(fixtures.TempDir()).path if not getattr(self, 'inited', False): try: import xattr # noqa except ImportError: self.inited = True self.disabled = True self.disabled_message = ("python-xattr not installed.") return self.inited = True self.disabled = False self.config(image_cache_dir=self.cache_dir, image_cache_driver='xattr', image_cache_max_size=5 * units.Ki) self.cache = image_cache.ImageCache() if not xattr_writes_supported(self.cache_dir): self.inited = True self.disabled = True self.disabled_message = ("filesystem 
does not support xattr") return class TestImageCacheSqlite(test_utils.BaseTestCase, ImageCacheTestCase): """Tests image caching when SQLite is used in cache""" def setUp(self): """ Test to see if the pre-requisites for the image cache are working (python-sqlite3 installed) """ super(TestImageCacheSqlite, self).setUp() if getattr(self, 'disable', False): return if not getattr(self, 'inited', False): try: import sqlite3 # noqa except ImportError: self.inited = True self.disabled = True self.disabled_message = ("python-sqlite3 not installed.") return self.inited = True self.disabled = False self.cache_dir = self.useFixture(fixtures.TempDir()).path self.config(image_cache_dir=self.cache_dir, image_cache_driver='sqlite', image_cache_max_size=5 * units.Ki) self.cache = image_cache.ImageCache() class TestImageCacheNoDep(test_utils.BaseTestCase): def setUp(self): super(TestImageCacheNoDep, self).setUp() self.driver = None def init_driver(self2): self2.driver = self.driver mox_fixture = self.useFixture(moxstubout.MoxStubout()) self.stubs = mox_fixture.stubs self.stubs.Set(image_cache.ImageCache, 'init_driver', init_driver) def test_get_caching_iter_when_write_fails(self): class FailingFile(object): def write(self, data): if data == "Fail": raise IOError class FailingFileDriver(object): def is_cacheable(self, *args, **kwargs): return True @contextmanager def open_for_write(self, *args, **kwargs): yield FailingFile() self.driver = FailingFileDriver() cache = image_cache.ImageCache() data = [b'a', b'b', b'c', b'Fail', b'd', b'e', b'f'] caching_iter = cache.get_caching_iter('dummy_id', None, iter(data)) self.assertEqual(data, list(caching_iter)) def test_get_caching_iter_when_open_fails(self): class OpenFailingDriver(object): def is_cacheable(self, *args, **kwargs): return True @contextmanager def open_for_write(self, *args, **kwargs): raise IOError self.driver = OpenFailingDriver() cache = image_cache.ImageCache() data = [b'a', b'b', b'c', b'd', b'e', b'f'] caching_iter = 
            cache.get_caching_iter('dummy_id', None, iter(data))
        self.assertEqual(data, list(caching_iter))

glance-16.0.0/glance/tests/unit/test_glance_replicator.py

# Copyright 2012 Michael Still and Canonical Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from __future__ import absolute_import

import copy
import os
import sys
import uuid

import fixtures
import mock
from oslo_serialization import jsonutils
import six
from six import moves
from six.moves import http_client as http
import webob

from glance.cmd import replicator as glance_replicator
from glance.common import exception
from glance.tests.unit import utils as unit_test_utils
from glance.tests import utils as test_utils


IMG_RESPONSE_ACTIVE = {
    'content-length': '0',
    'property-image_state': 'available',
    'min_ram': '0',
    'disk_format': 'aki',
    'updated_at': '2012-06-25T02:10:36',
    'date': 'Thu, 28 Jun 2012 07:20:05 GMT',
    'owner': '8aef75b5c0074a59aa99188fdb4b9e90',
    'id': '6d55dd55-053a-4765-b7bc-b30df0ea3861',
    'size': '4660272',
    'property-image_location': 'ubuntu-bucket/oneiric-server-cloudimg-amd64-'
                               'vmlinuz-generic.manifest.xml',
    'property-architecture': 'x86_64',
    'etag': 'f46cfe7fb3acaff49a3567031b9b53bb',
    'location': 'http://127.0.0.1:9292/v1/images/'
                '6d55dd55-053a-4765-b7bc-b30df0ea3861',
    'container_format': 'aki',
    'status': 'active',
    'deleted': 'False',
    'min_disk': '0',
    'is_public': 'False',
    'name':
'ubuntu-bucket/oneiric-server-cloudimg-amd64-vmlinuz-generic', 'checksum': 'f46cfe7fb3acaff49a3567031b9b53bb', 'created_at': '2012-06-25T02:10:32', 'protected': 'False', 'content-type': 'text/html; charset=UTF-8' } IMG_RESPONSE_QUEUED = copy.copy(IMG_RESPONSE_ACTIVE) IMG_RESPONSE_QUEUED['status'] = 'queued' IMG_RESPONSE_QUEUED['id'] = '49b2c782-ee10-4692-84f8-3942e9432c4b' IMG_RESPONSE_QUEUED['location'] = ('http://127.0.0.1:9292/v1/images/' + IMG_RESPONSE_QUEUED['id']) class FakeHTTPConnection(object): def __init__(self): self.count = 0 self.reqs = {} self.last_req = None self.host = 'localhost' self.port = 9292 def prime_request(self, method, url, in_body, in_headers, out_code, out_body, out_headers): if not url.startswith('/'): url = '/' + url url = unit_test_utils.sort_url_by_qs_keys(url) hkeys = sorted(in_headers.keys()) hashable = (method, url, in_body, ' '.join(hkeys)) flat_headers = [] for key in out_headers: flat_headers.append((key, out_headers[key])) self.reqs[hashable] = (out_code, out_body, flat_headers) def request(self, method, url, body, headers): self.count += 1 url = unit_test_utils.sort_url_by_qs_keys(url) hkeys = sorted(headers.keys()) hashable = (method, url, body, ' '.join(hkeys)) if hashable not in self.reqs: options = [] for h in self.reqs: options.append(repr(h)) raise Exception('No such primed request: %s "%s"\n' '%s\n\n' 'Available:\n' '%s' % (method, url, hashable, '\n\n'.join(options))) self.last_req = hashable def getresponse(self): class FakeResponse(object): def __init__(self, args): (code, body, headers) = args self.body = six.StringIO(body) self.headers = headers self.status = code def read(self, count=1000000): return self.body.read(count) def getheaders(self): return self.headers return FakeResponse(self.reqs[self.last_req]) class ImageServiceTestCase(test_utils.BaseTestCase): def test_rest_errors(self): c = glance_replicator.ImageService(FakeHTTPConnection(), 'noauth') for code, exc in [(http.BAD_REQUEST, 
webob.exc.HTTPBadRequest), (http.UNAUTHORIZED, webob.exc.HTTPUnauthorized), (http.FORBIDDEN, webob.exc.HTTPForbidden), (http.CONFLICT, webob.exc.HTTPConflict), (http.INTERNAL_SERVER_ERROR, webob.exc.HTTPInternalServerError)]: c.conn.prime_request('GET', ('v1/images/' '5dcddce0-cba5-4f18-9cf4-9853c7b207a6'), '', {'x-auth-token': 'noauth'}, code, '', {}) self.assertRaises(exc, c.get_image, '5dcddce0-cba5-4f18-9cf4-9853c7b207a6') def test_rest_get_images(self): c = glance_replicator.ImageService(FakeHTTPConnection(), 'noauth') # Two images, one of which is queued resp = {'images': [IMG_RESPONSE_ACTIVE, IMG_RESPONSE_QUEUED]} c.conn.prime_request('GET', 'v1/images/detail?is_public=None', '', {'x-auth-token': 'noauth'}, http.OK, jsonutils.dumps(resp), {}) c.conn.prime_request('GET', ('v1/images/detail?marker=%s&is_public=None' % IMG_RESPONSE_QUEUED['id']), '', {'x-auth-token': 'noauth'}, http.OK, jsonutils.dumps({'images': []}), {}) imgs = list(c.get_images()) self.assertEqual(2, len(imgs)) self.assertEqual(2, c.conn.count) def test_rest_get_image(self): c = glance_replicator.ImageService(FakeHTTPConnection(), 'noauth') image_contents = 'THISISTHEIMAGEBODY' c.conn.prime_request('GET', 'v1/images/%s' % IMG_RESPONSE_ACTIVE['id'], '', {'x-auth-token': 'noauth'}, http.OK, image_contents, IMG_RESPONSE_ACTIVE) body = c.get_image(IMG_RESPONSE_ACTIVE['id']) self.assertEqual(image_contents, body.read()) def test_rest_header_list_to_dict(self): i = [('x-image-meta-banana', 42), ('gerkin', 12), ('x-image-meta-property-frog', 11), ('x-image-meta-property-duck', 12)] o = glance_replicator.ImageService._header_list_to_dict(i) self.assertIn('banana', o) self.assertIn('gerkin', o) self.assertIn('properties', o) self.assertIn('frog', o['properties']) self.assertIn('duck', o['properties']) self.assertNotIn('x-image-meta-banana', o) def test_rest_get_image_meta(self): c = glance_replicator.ImageService(FakeHTTPConnection(), 'noauth') c.conn.prime_request('HEAD', 'v1/images/%s' % 
IMG_RESPONSE_ACTIVE['id'], '', {'x-auth-token': 'noauth'}, http.OK, '', IMG_RESPONSE_ACTIVE) header = c.get_image_meta(IMG_RESPONSE_ACTIVE['id']) self.assertIn('id', header) def test_rest_dict_to_headers(self): i = {'banana': 42, 'gerkin': 12, 'properties': {'frog': 1, 'kernel_id': None} } o = glance_replicator.ImageService._dict_to_headers(i) self.assertIn('x-image-meta-banana', o) self.assertIn('x-image-meta-gerkin', o) self.assertIn('x-image-meta-property-frog', o) self.assertIn('x-image-meta-property-kernel_id', o) self.assertEqual(o['x-image-meta-property-kernel_id'], '') self.assertNotIn('properties', o) def test_rest_add_image(self): c = glance_replicator.ImageService(FakeHTTPConnection(), 'noauth') image_body = 'THISISANIMAGEBODYFORSURE!' image_meta_with_proto = { 'x-auth-token': 'noauth', 'Content-Type': 'application/octet-stream', 'Content-Length': len(image_body) } for key in IMG_RESPONSE_ACTIVE: image_meta_with_proto[ 'x-image-meta-%s' % key] = IMG_RESPONSE_ACTIVE[key] c.conn.prime_request('POST', 'v1/images', image_body, image_meta_with_proto, http.OK, '', IMG_RESPONSE_ACTIVE) headers, body = c.add_image(IMG_RESPONSE_ACTIVE, image_body) self.assertEqual(IMG_RESPONSE_ACTIVE, headers) self.assertEqual(1, c.conn.count) def test_rest_add_image_meta(self): c = glance_replicator.ImageService(FakeHTTPConnection(), 'noauth') image_meta = {'id': '5dcddce0-cba5-4f18-9cf4-9853c7b207a6'} image_meta_headers = glance_replicator.ImageService._dict_to_headers( image_meta) image_meta_headers['x-auth-token'] = 'noauth' image_meta_headers['Content-Type'] = 'application/octet-stream' c.conn.prime_request('PUT', 'v1/images/%s' % image_meta['id'], '', image_meta_headers, http.OK, '', '') headers, body = c.add_image_meta(image_meta) class FakeHttpResponse(object): def __init__(self, headers, data): self.headers = headers self.data = six.BytesIO(data) def getheaders(self): return self.headers def read(self, amt=None): return self.data.read(amt) FAKEIMAGES = [{'status': 
'active', 'size': 100, 'dontrepl': 'banana', 'id': '5dcddce0-cba5-4f18-9cf4-9853c7b207a6', 'name': 'x1'}, {'status': 'deleted', 'size': 200, 'dontrepl': 'banana', 'id': 'f4da1d2a-40e8-4710-b3aa-0222a4cc887b', 'name': 'x2'}, {'status': 'active', 'size': 300, 'dontrepl': 'banana', 'id': '37ff82db-afca-48c7-ae0b-ddc7cf83e3db', 'name': 'x3'}] FAKEIMAGES_LIVEMASTER = [{'status': 'active', 'size': 100, 'dontrepl': 'banana', 'name': 'x1', 'id': '5dcddce0-cba5-4f18-9cf4-9853c7b207a6'}, {'status': 'deleted', 'size': 200, 'dontrepl': 'banana', 'name': 'x2', 'id': 'f4da1d2a-40e8-4710-b3aa-0222a4cc887b'}, {'status': 'deleted', 'size': 300, 'dontrepl': 'banana', 'name': 'x3', 'id': '37ff82db-afca-48c7-ae0b-ddc7cf83e3db'}, {'status': 'active', 'size': 100, 'dontrepl': 'banana', 'name': 'x4', 'id': '15648dd7-8dd0-401c-bd51-550e1ba9a088'}] class FakeImageService(object): def __init__(self, http_conn, authtoken): self.authtoken = authtoken def get_images(self): if self.authtoken == 'livesourcetoken': return FAKEIMAGES_LIVEMASTER return FAKEIMAGES def get_image(self, id): return FakeHttpResponse({}, b'data') def get_image_meta(self, id): for img in FAKEIMAGES: if img['id'] == id: return img return {} def add_image_meta(self, meta): return {'status': http.OK}, None def add_image(self, meta, data): return {'status': http.OK}, None def get_image_service(): return FakeImageService def check_no_args(command, args): options = moves.UserDict() no_args_error = False orig_img_service = glance_replicator.get_image_service try: glance_replicator.get_image_service = get_image_service command(options, args) except TypeError as e: if str(e) == "Too few arguments.": no_args_error = True finally: glance_replicator.get_image_service = orig_img_service return no_args_error def check_bad_args(command, args): options = moves.UserDict() bad_args_error = False orig_img_service = glance_replicator.get_image_service try: glance_replicator.get_image_service = get_image_service command(options, args) except 
ValueError: bad_args_error = True finally: glance_replicator.get_image_service = orig_img_service return bad_args_error class ReplicationCommandsTestCase(test_utils.BaseTestCase): @mock.patch.object(glance_replicator, 'lookup_command') def test_help(self, mock_lookup_command): option = mock.Mock() mock_lookup_command.return_value = "fake_return" glance_replicator.print_help(option, []) glance_replicator.print_help(option, ['dump']) glance_replicator.print_help(option, ['fake_command']) self.assertEqual(2, mock_lookup_command.call_count) def test_replication_size(self): options = moves.UserDict() options.targettoken = 'targettoken' args = ['localhost:9292'] stdout = sys.stdout orig_img_service = glance_replicator.get_image_service sys.stdout = six.StringIO() try: glance_replicator.get_image_service = get_image_service glance_replicator.replication_size(options, args) sys.stdout.seek(0) output = sys.stdout.read() finally: sys.stdout = stdout glance_replicator.get_image_service = orig_img_service output = output.rstrip() self.assertEqual( 'Total size is 400 bytes (400.0 B) across 2 images', output ) def test_replication_size_with_no_args(self): args = [] command = glance_replicator.replication_size self.assertTrue(check_no_args(command, args)) def test_replication_size_with_args_is_None(self): args = None command = glance_replicator.replication_size self.assertTrue(check_no_args(command, args)) def test_replication_size_with_bad_args(self): args = ['aaa'] command = glance_replicator.replication_size self.assertTrue(check_bad_args(command, args)) def test_human_readable_size(self): _human_readable_size = glance_replicator._human_readable_size self.assertEqual('0.0 B', _human_readable_size(0)) self.assertEqual('1.0 B', _human_readable_size(1)) self.assertEqual('512.0 B', _human_readable_size(512)) self.assertEqual('1.0 KiB', _human_readable_size(1024)) self.assertEqual('2.0 KiB', _human_readable_size(2048)) self.assertEqual('8.0 KiB', _human_readable_size(8192)) 
        self.assertEqual('64.0 KiB', _human_readable_size(65536))
        self.assertEqual('93.3 KiB', _human_readable_size(95536))
        self.assertEqual('117.7 MiB', _human_readable_size(123456789))
        self.assertEqual('36.3 GiB', _human_readable_size(39022543360))

    def test_replication_dump(self):
        tempdir = self.useFixture(fixtures.TempDir()).path

        options = moves.UserDict()
        options.chunksize = 4096
        options.sourcetoken = 'sourcetoken'
        options.metaonly = False
        args = ['localhost:9292', tempdir]

        orig_img_service = glance_replicator.get_image_service
        self.addCleanup(setattr, glance_replicator,
                        'get_image_service', orig_img_service)
        glance_replicator.get_image_service = get_image_service
        glance_replicator.replication_dump(options, args)

        for active in ['5dcddce0-cba5-4f18-9cf4-9853c7b207a6',
                       '37ff82db-afca-48c7-ae0b-ddc7cf83e3db']:
            imgfile = os.path.join(tempdir, active)
            self.assertTrue(os.path.exists(imgfile))
            self.assertTrue(os.path.exists('%s.img' % imgfile))

            with open(imgfile) as f:
                d = jsonutils.loads(f.read())
                self.assertIn('status', d)
                self.assertIn('id', d)
                self.assertIn('size', d)

        for inactive in ['f4da1d2a-40e8-4710-b3aa-0222a4cc887b']:
            imgfile = os.path.join(tempdir, inactive)
            self.assertTrue(os.path.exists(imgfile))
            self.assertFalse(os.path.exists('%s.img' % imgfile))

            with open(imgfile) as f:
                d = jsonutils.loads(f.read())
                self.assertIn('status', d)
                self.assertIn('id', d)
                self.assertIn('size', d)

    def test_replication_dump_with_no_args(self):
        args = []
        command = glance_replicator.replication_dump
        self.assertTrue(check_no_args(command, args))

    def test_replication_dump_with_bad_args(self):
        args = ['aaa', 'bbb']
        command = glance_replicator.replication_dump
        self.assertTrue(check_bad_args(command, args))

    def test_replication_load(self):
        tempdir = self.useFixture(fixtures.TempDir()).path

        def write_image(img, data):
            imgfile = os.path.join(tempdir, img['id'])
            with open(imgfile, 'w') as f:
                f.write(jsonutils.dumps(img))

            if data:
                with open('%s.img' % imgfile, 'w') as f:
                    f.write(data)

        for img in FAKEIMAGES:
            cimg = copy.copy(img)
            # We need at least one image where the stashed metadata on disk
            # is newer than what the fake has
            if cimg['id'] == '5dcddce0-cba5-4f18-9cf4-9853c7b207a6':
                cimg['extra'] = 'thisissomeextra'

            # This is an image where the metadata change should be ignored
            if cimg['id'] == 'f4da1d2a-40e8-4710-b3aa-0222a4cc887b':
                cimg['dontrepl'] = 'thisisyetmoreextra'

            write_image(cimg, 'kjdhfkjshdfkjhsdkfd')

        # And an image which isn't on the destination at all
        new_id = str(uuid.uuid4())
        cimg['id'] = new_id
        write_image(cimg, 'dskjfhskjhfkfdhksjdhf')

        # And an image which isn't on the destination, but lacks image
        # data
        new_id_missing_data = str(uuid.uuid4())
        cimg['id'] = new_id_missing_data
        write_image(cimg, None)

        # A file which should be ignored
        badfile = os.path.join(tempdir, 'kjdfhf')
        with open(badfile, 'w') as f:
            f.write(jsonutils.dumps([1, 2, 3, 4, 5]))

        # Finally, we're ready to test
        options = moves.UserDict()
        options.dontreplicate = 'dontrepl dontreplabsent'
        options.targettoken = 'targettoken'
        args = ['localhost:9292', tempdir]

        orig_img_service = glance_replicator.get_image_service
        try:
            glance_replicator.get_image_service = get_image_service
            updated = glance_replicator.replication_load(options, args)
        finally:
            glance_replicator.get_image_service = orig_img_service

        self.assertIn('5dcddce0-cba5-4f18-9cf4-9853c7b207a6', updated)
        self.assertNotIn('f4da1d2a-40e8-4710-b3aa-0222a4cc887b', updated)
        self.assertIn(new_id, updated)
        self.assertNotIn(new_id_missing_data, updated)

    def test_replication_load_with_no_args(self):
        args = []
        command = glance_replicator.replication_load
        self.assertTrue(check_no_args(command, args))

    def test_replication_load_with_bad_args(self):
        args = ['aaa', 'bbb']
        command = glance_replicator.replication_load
        self.assertTrue(check_bad_args(command, args))

    def test_replication_livecopy(self):
        options = moves.UserDict()
        options.chunksize = 4096
        options.dontreplicate = 'dontrepl dontreplabsent'
        options.sourcetoken = 'livesourcetoken'
        options.targettoken = 'livetargettoken'
        options.metaonly = False
        args = ['localhost:9292', 'localhost:9393']

        orig_img_service = glance_replicator.get_image_service
        try:
            glance_replicator.get_image_service = get_image_service
            updated = glance_replicator.replication_livecopy(options, args)
        finally:
            glance_replicator.get_image_service = orig_img_service

        self.assertEqual(2, len(updated))

    def test_replication_livecopy_with_no_args(self):
        args = []
        command = glance_replicator.replication_livecopy
        self.assertTrue(check_no_args(command, args))

    def test_replication_livecopy_with_bad_args(self):
        args = ['aaa', 'bbb']
        command = glance_replicator.replication_livecopy
        self.assertTrue(check_bad_args(command, args))

    def test_replication_compare(self):
        options = moves.UserDict()
        options.chunksize = 4096
        options.dontreplicate = 'dontrepl dontreplabsent'
        options.sourcetoken = 'livesourcetoken'
        options.targettoken = 'livetargettoken'
        options.metaonly = False
        args = ['localhost:9292', 'localhost:9393']

        orig_img_service = glance_replicator.get_image_service
        try:
            glance_replicator.get_image_service = get_image_service
            differences = glance_replicator.replication_compare(options, args)
        finally:
            glance_replicator.get_image_service = orig_img_service

        self.assertIn('15648dd7-8dd0-401c-bd51-550e1ba9a088', differences)
        self.assertEqual(differences['15648dd7-8dd0-401c-bd51-550e1ba9a088'],
                         'missing')
        self.assertIn('37ff82db-afca-48c7-ae0b-ddc7cf83e3db', differences)
        self.assertEqual(differences['37ff82db-afca-48c7-ae0b-ddc7cf83e3db'],
                         'diff')

    def test_replication_compare_with_no_args(self):
        args = []
        command = glance_replicator.replication_compare
        self.assertTrue(check_no_args(command, args))

    def test_replication_compare_with_bad_args(self):
        args = ['aaa', 'bbb']
        command = glance_replicator.replication_compare
        self.assertTrue(check_bad_args(command, args))


class ReplicationUtilitiesTestCase(test_utils.BaseTestCase):

    def test_check_upload_response_headers(self):
        glance_replicator._check_upload_response_headers(
            {'status': 'active'}, None)

        d = {'image': {'status': 'active'}}
        glance_replicator._check_upload_response_headers(
            {}, jsonutils.dumps(d))

        self.assertRaises(
            exception.UploadException,
            glance_replicator._check_upload_response_headers, {}, None)

    def test_image_present(self):
        client = FakeImageService(None, 'noauth')
        self.assertTrue(glance_replicator._image_present(
            client, '5dcddce0-cba5-4f18-9cf4-9853c7b207a6'))
        self.assertFalse(glance_replicator._image_present(
            client, uuid.uuid4()))

    def test_dict_diff(self):
        a = {'a': 1, 'b': 2, 'c': 3}
        b = {'a': 1, 'b': 2}
        c = {'a': 1, 'b': 1, 'c': 3}
        d = {'a': 1, 'b': 2, 'c': 3, 'd': 4}

        # Only things that the first dict has which the second dict doesn't
        # matter here.
        self.assertFalse(glance_replicator._dict_diff(a, a))
        self.assertTrue(glance_replicator._dict_diff(a, b))
        self.assertTrue(glance_replicator._dict_diff(a, c))
        self.assertFalse(glance_replicator._dict_diff(a, d))

glance-16.0.0/glance/tests/stubs.py

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Stubouts, mocks and fixtures for the test suite"""

import os

try:
    import sendfile
    SENDFILE_SUPPORTED = True
except ImportError:
    SENDFILE_SUPPORTED = False

import routes
import webob

from glance.api.middleware import context
from glance.api.v1 import router
import glance.common.client
from glance.registry.api import v1 as rserver
from glance.tests import utils


DEBUG = False


class FakeRegistryConnection(object):

    def __init__(self, registry=None):
        self.registry = registry or rserver

    def __call__(self, *args, **kwargs):
        # NOTE(flaper87): This method takes
        # __init__'s place in the chain.
        return self

    def connect(self):
        return True

    def close(self):
        return True

    def request(self, method, url, body=None, headers=None):
        self.req = webob.Request.blank("/" + url.lstrip("/"))
        self.req.method = method
        if headers:
            self.req.headers = headers
        if body:
            self.req.body = body

    def getresponse(self):
        mapper = routes.Mapper()
        server = self.registry.API(mapper)
        # NOTE(markwash): we need to pass through context auth information if
        # we have it.
        if 'X-Auth-Token' in self.req.headers:
            api = utils.FakeAuthMiddleware(server)
        else:
            api = context.UnauthenticatedContextMiddleware(server)
        webob_res = self.req.get_response(api)

        return utils.FakeHTTPResponse(status=webob_res.status_int,
                                      headers=webob_res.headers,
                                      data=webob_res.body)


def stub_out_registry_and_store_server(stubs, base_dir, **kwargs):
    """Mocks calls to 127.0.0.1 on 9191 and 9292 for testing.

    Done so that a real Glance server does not need to be up and running
    """

    class FakeSocket(object):

        def __init__(self, *args, **kwargs):
            pass

        def fileno(self):
            return 42

    class FakeSendFile(object):

        def __init__(self, req):
            self.req = req

        def sendfile(self, o, i, offset, nbytes):
            os.lseek(i, offset, os.SEEK_SET)
            prev_len = len(self.req.body)
            self.req.body += os.read(i, nbytes)
            return len(self.req.body) - prev_len

    class FakeGlanceConnection(object):

        def __init__(self, *args, **kwargs):
            self.sock = FakeSocket()
            self.stub_force_sendfile = kwargs.get('stub_force_sendfile',
                                                  SENDFILE_SUPPORTED)

        def connect(self):
            return True

        def close(self):
            return True

        def _clean_url(self, url):
            # TODO(bcwaldon): Fix the hack that strips off v1
            return url.replace('/v1', '', 1) if url.startswith('/v1') else url

        def putrequest(self, method, url):
            self.req = webob.Request.blank(self._clean_url(url))
            if self.stub_force_sendfile:
                fake_sendfile = FakeSendFile(self.req)
                stubs.Set(sendfile, 'sendfile', fake_sendfile.sendfile)
            self.req.method = method

        def putheader(self, key, value):
            self.req.headers[key] = value

        def endheaders(self):
            hl = [i.lower() for i in self.req.headers.keys()]
            assert not ('content-length' in hl and
                        'transfer-encoding' in hl), (
                'Content-Length and Transfer-Encoding are mutually exclusive')

        def send(self, data):
            # send() is called during chunked-transfer encoding, and
            # data is of the form %x\r\n%s\r\n. Strip off the %x and
            # only write the actual data in tests.
            self.req.body += data.split("\r\n")[1]

        def request(self, method, url, body=None, headers=None):
            self.req = webob.Request.blank(self._clean_url(url))
            self.req.method = method
            if headers:
                self.req.headers = headers
            if body:
                self.req.body = body

        def getresponse(self):
            mapper = routes.Mapper()
            api = context.UnauthenticatedContextMiddleware(router.API(mapper))
            res = self.req.get_response(api)

            # httplib.Response has a read() method...fake it out
            def fake_reader():
                return res.body

            setattr(res, 'read', fake_reader)
            return res

    def fake_get_connection_type(client):
        """Returns the proper connection type."""
        DEFAULT_REGISTRY_PORT = 9191
        DEFAULT_API_PORT = 9292

        if (client.port == DEFAULT_API_PORT and
                client.host == '0.0.0.0'):
            return FakeGlanceConnection
        elif (client.port == DEFAULT_REGISTRY_PORT and
                client.host == '0.0.0.0'):
            rserver = kwargs.get("registry")
            return FakeRegistryConnection(registry=rserver)

    def fake_image_iter(self):
        for i in self.source.app_iter:
            yield i

    def fake_sendable(self, body):
        force = getattr(self, 'stub_force_sendfile', None)
        if force is None:
            return self._stub_orig_sendable(body)
        else:
            if force:
                assert glance.common.client.SENDFILE_SUPPORTED
            return force

    stubs.Set(glance.common.client.BaseClient, 'get_connection_type',
              fake_get_connection_type)
    setattr(glance.common.client.BaseClient, '_stub_orig_sendable',
            glance.common.client.BaseClient._sendable)
    stubs.Set(glance.common.client.BaseClient, '_sendable', fake_sendable)


def stub_out_registry_server(stubs, **kwargs):
    """Mocks calls to 127.0.0.1 on 9191 for testing.

    Done so that a real Glance Registry server does not need to be up and
    running.
    """
    def fake_get_connection_type(client):
        """Returns the proper connection type."""
        DEFAULT_REGISTRY_PORT = 9191

        if (client.port == DEFAULT_REGISTRY_PORT and
                client.host == '0.0.0.0'):
            rserver = kwargs.pop("registry", None)
            return FakeRegistryConnection(registry=rserver)

    def fake_image_iter(self):
        for i in self.response.app_iter:
            yield i

    stubs.Set(glance.common.client.BaseClient, 'get_connection_type',
              fake_get_connection_type)

glance-16.0.0/glance/tests/functional/db/migrations/test_pike_contract01.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db.sqlalchemy import test_base
from oslo_db.sqlalchemy import utils as db_utils
import sqlalchemy

from glance.tests.functional.db import test_migrations


class TestPikeContract01Mixin(test_migrations.AlembicMigrationsMixin):

    artifacts_table_names = [
        'artifact_blob_locations',
        'artifact_properties',
        'artifact_blobs',
        'artifact_dependencies',
        'artifact_tags',
        'artifacts'
    ]

    def _get_revisions(self, config):
        return test_migrations.AlembicMigrationsMixin._get_revisions(
            self, config, head='pike_contract01')

    def _pre_upgrade_pike_contract01(self, engine):
        # verify presence of the artifacts tables
        for table_name in self.artifacts_table_names:
            table = db_utils.get_table(engine, table_name)
            self.assertIsNotNone(table)

    def _check_pike_contract01(self, engine, data):
        # verify absence of the artifacts tables
        for table_name in self.artifacts_table_names:
            self.assertRaises(sqlalchemy.exc.NoSuchTableError,
                              db_utils.get_table, engine, table_name)


class TestPikeContract01MySQL(TestPikeContract01Mixin,
                              test_base.MySQLOpportunisticTestCase):
    pass

glance-16.0.0/glance/tests/functional/db/migrations/test_mitaka01.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db.sqlalchemy import test_base
import sqlalchemy

from glance.tests.functional.db import test_migrations


def get_indexes(table, engine):
    inspector = sqlalchemy.inspect(engine)
    return [idx['name'] for idx in inspector.get_indexes(table)]


class TestMitaka01Mixin(test_migrations.AlembicMigrationsMixin):

    def _pre_upgrade_mitaka01(self, engine):
        indexes = get_indexes('images', engine)
        self.assertNotIn('created_at_image_idx', indexes)
        self.assertNotIn('updated_at_image_idx', indexes)

    def _check_mitaka01(self, engine, data):
        indexes = get_indexes('images', engine)
        self.assertIn('created_at_image_idx', indexes)
        self.assertIn('updated_at_image_idx', indexes)


class TestMitaka01MySQL(TestMitaka01Mixin,
                        test_base.MySQLOpportunisticTestCase):
    pass


class TestMitaka01PostgresSQL(TestMitaka01Mixin,
                              test_base.PostgreSQLOpportunisticTestCase):
    pass


class TestMitaka01Sqlite(TestMitaka01Mixin,
                         test_base.DbTestCase):
    pass

glance-16.0.0/glance/tests/functional/db/migrations/test_ocata_migrate01.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime

from oslo_db.sqlalchemy import test_base
from oslo_db.sqlalchemy import utils as db_utils

from glance.db.sqlalchemy.alembic_migrations import data_migrations
from glance.tests.functional.db import test_migrations


class TestOcataMigrate01Mixin(test_migrations.AlembicMigrationsMixin):

    def _get_revisions(self, config):
        return test_migrations.AlembicMigrationsMixin._get_revisions(
            self, config, head='ocata_expand01')

    def _pre_upgrade_ocata_expand01(self, engine):
        images = db_utils.get_table(engine, 'images')
        image_members = db_utils.get_table(engine, 'image_members')
        now = datetime.datetime.now()

        # inserting a public image record
        public_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           is_public=True,
                           min_disk=0,
                           min_ram=0,
                           id='public_id')
        images.insert().values(public_temp).execute()

        # inserting a non-public image record for 'shared' visibility test
        shared_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           is_public=False,
                           min_disk=0,
                           min_ram=0,
                           id='shared_id')
        images.insert().values(shared_temp).execute()

        # inserting a non-public image records for 'private' visibility test
        private_temp = dict(deleted=False,
                            created_at=now,
                            status='active',
                            is_public=False,
                            min_disk=0,
                            min_ram=0,
                            id='private_id_1')
        images.insert().values(private_temp).execute()

        private_temp = dict(deleted=False,
                            created_at=now,
                            status='active',
                            is_public=False,
                            min_disk=0,
                            min_ram=0,
                            id='private_id_2')
        images.insert().values(private_temp).execute()

        # adding an active as well as a deleted image member for checking
        # 'shared' visibility
        temp = dict(deleted=False,
                    created_at=now,
                    image_id='shared_id',
                    member='fake_member_452',
                    can_share=True,
                    id=45)
        image_members.insert().values(temp).execute()

        temp = dict(deleted=True,
                    created_at=now,
                    image_id='shared_id',
                    member='fake_member_453',
                    can_share=True,
                    id=453)
        image_members.insert().values(temp).execute()

        # adding an image member, but marking it deleted,
        # for testing 'private' visibility
        temp = dict(deleted=True,
                    created_at=now,
                    image_id='private_id_2',
                    member='fake_member_451',
                    can_share=True,
                    id=451)
        image_members.insert().values(temp).execute()

        # adding an active image member for the 'public' image,
        # to test it remains public regardless.
        temp = dict(deleted=False,
                    created_at=now,
                    image_id='public_id',
                    member='fake_member_450',
                    can_share=True,
                    id=450)
        image_members.insert().values(temp).execute()

    def _check_ocata_expand01(self, engine, data):
        images = db_utils.get_table(engine, 'images')

        # check that visibility is null for existing images
        rows = (images.select()
                .order_by(images.c.id)
                .execute()
                .fetchall())
        self.assertEqual(4, len(rows))
        for row in rows:
            self.assertIsNone(row['visibility'])

        # run data migrations
        data_migrations.migrate(engine)

        # check that visibility is set appropriately for all images
        rows = (images.select()
                .order_by(images.c.id)
                .execute()
                .fetchall())
        self.assertEqual(4, len(rows))

        # private_id_1 has private visibility
        self.assertEqual('private_id_1', rows[0]['id'])
        # TODO(rosmaita): bug #1745003
        # self.assertEqual('private', rows[0]['visibility'])

        # private_id_2 has private visibility
        self.assertEqual('private_id_2', rows[1]['id'])
        # TODO(rosmaita): bug #1745003
        # self.assertEqual('private', rows[1]['visibility'])

        # public_id has public visibility
        self.assertEqual('public_id', rows[2]['id'])
        # TODO(rosmaita): bug #1745003
        # self.assertEqual('public', rows[2]['visibility'])

        # shared_id has shared visibility
        self.assertEqual('shared_id', rows[3]['id'])
        # TODO(rosmaita): bug #1745003
        # self.assertEqual('shared', rows[3]['visibility'])


class TestOcataMigrate01MySQL(TestOcataMigrate01Mixin,
                              test_base.MySQLOpportunisticTestCase):
    pass


class TestOcataMigrate01_EmptyDBMixin(test_migrations.AlembicMigrationsMixin):
    """This mixin is used to create an initial glance database and upgrade it
    up to the ocata_expand01 revision.
    """
    def _get_revisions(self, config):
        return test_migrations.AlembicMigrationsMixin._get_revisions(
            self, config, head='ocata_expand01')

    def _pre_upgrade_ocata_expand01(self, engine):
        # New/empty database
        pass

    def _check_ocata_expand01(self, engine, data):
        images = db_utils.get_table(engine, 'images')

        # check that there are no rows in the images table
        rows = (images.select()
                .order_by(images.c.id)
                .execute()
                .fetchall())
        self.assertEqual(0, len(rows))

        # run data migrations
        data_migrations.migrate(engine)


class TestOcataMigrate01_EmptyDBMySQL(TestOcataMigrate01_EmptyDBMixin,
                                      test_base.MySQLOpportunisticTestCase):
    """This test runs the Ocata data migrations on an empty database."""
    pass

glance-16.0.0/glance/tests/functional/db/migrations/test_ocata_expand01.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime

from oslo_db.sqlalchemy import test_base
from oslo_db.sqlalchemy import utils as db_utils

from glance.tests.functional.db import test_migrations


class TestOcataExpand01Mixin(test_migrations.AlembicMigrationsMixin):

    def _get_revisions(self, config):
        return test_migrations.AlembicMigrationsMixin._get_revisions(
            self, config, head='ocata_expand01')

    def _pre_upgrade_ocata_expand01(self, engine):
        images = db_utils.get_table(engine, 'images')
        now = datetime.datetime.now()
        self.assertIn('is_public', images.c)
        self.assertNotIn('visibility', images.c)
        self.assertFalse(images.c.is_public.nullable)

        # inserting a public image record
        public_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           is_public=True,
                           min_disk=0,
                           min_ram=0,
                           id='public_id_before_expand')
        images.insert().values(public_temp).execute()

        # inserting a private image record
        shared_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           is_public=False,
                           min_disk=0,
                           min_ram=0,
                           id='private_id_before_expand')
        images.insert().values(shared_temp).execute()

    def _check_ocata_expand01(self, engine, data):
        # check that after migration, 'visibility' column is introduced
        images = db_utils.get_table(engine, 'images')
        self.assertIn('visibility', images.c)
        self.assertIn('is_public', images.c)
        self.assertTrue(images.c.is_public.nullable)
        self.assertTrue(images.c.visibility.nullable)

        # tests visibility set to None for existing images
        rows = (images.select()
                .where(images.c.id.like('%_before_expand'))
                .order_by(images.c.id)
                .execute()
                .fetchall())

        self.assertEqual(2, len(rows))

        # private image first
        self.assertEqual(0, rows[0]['is_public'])
        self.assertEqual('private_id_before_expand', rows[0]['id'])
        self.assertIsNone(rows[0]['visibility'])

        # then public image
        self.assertEqual(1, rows[1]['is_public'])
        self.assertEqual('public_id_before_expand', rows[1]['id'])
        self.assertIsNone(rows[1]['visibility'])

        self._test_trigger_old_to_new(images)
        self._test_trigger_new_to_old(images)

    def _test_trigger_new_to_old(self, images):
        now = datetime.datetime.now()
        # inserting a public image record after expand
        public_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           visibility='public',
                           min_disk=0,
                           min_ram=0,
                           id='public_id_new_to_old')
        images.insert().values(public_temp).execute()

        # inserting a private image record after expand
        shared_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           visibility='private',
                           min_disk=0,
                           min_ram=0,
                           id='private_id_new_to_old')
        images.insert().values(shared_temp).execute()

        # inserting a shared image record after expand
        shared_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           visibility='shared',
                           min_disk=0,
                           min_ram=0,
                           id='shared_id_new_to_old')
        images.insert().values(shared_temp).execute()

        # test visibility is set appropriately by the trigger for new images
        rows = (images.select()
                .where(images.c.id.like('%_new_to_old'))
                .order_by(images.c.id)
                .execute()
                .fetchall())
        self.assertEqual(3, len(rows))

        # private image first
        self.assertEqual(0, rows[0]['is_public'])
        self.assertEqual('private_id_new_to_old', rows[0]['id'])
        self.assertEqual('private', rows[0]['visibility'])

        # then public image
        self.assertEqual(1, rows[1]['is_public'])
        self.assertEqual('public_id_new_to_old', rows[1]['id'])
        self.assertEqual('public', rows[1]['visibility'])

        # then shared image
        self.assertEqual(0, rows[2]['is_public'])
        self.assertEqual('shared_id_new_to_old', rows[2]['id'])
        self.assertEqual('shared', rows[2]['visibility'])

    def _test_trigger_old_to_new(self, images):
        now = datetime.datetime.now()
        # inserting a public image record after expand
        public_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           is_public=True,
                           min_disk=0,
                           min_ram=0,
                           id='public_id_old_to_new')
        images.insert().values(public_temp).execute()

        # inserting a private image record after expand
        shared_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           is_public=False,
                           min_disk=0,
                           min_ram=0,
                           id='private_id_old_to_new')
        images.insert().values(shared_temp).execute()

        # tests visibility is set appropriately by the trigger for new images
        rows = (images.select()
                .where(images.c.id.like('%_old_to_new'))
                .order_by(images.c.id)
                .execute()
                .fetchall())
        self.assertEqual(2, len(rows))

        # private image first
        self.assertEqual(0, rows[0]['is_public'])
        self.assertEqual('private_id_old_to_new', rows[0]['id'])
        self.assertEqual('shared', rows[0]['visibility'])

        # then public image
        self.assertEqual(1, rows[1]['is_public'])
        self.assertEqual('public_id_old_to_new', rows[1]['id'])
        self.assertEqual('public', rows[1]['visibility'])


class TestOcataExpand01MySQL(TestOcataExpand01Mixin,
                             test_base.MySQLOpportunisticTestCase):
    pass

glance-16.0.0/glance/tests/functional/db/migrations/test_ocata_contract01.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime

from oslo_db.sqlalchemy import test_base
from oslo_db.sqlalchemy import utils as db_utils

from glance.db.sqlalchemy.alembic_migrations import data_migrations
from glance.tests.functional.db import test_migrations


class TestOcataContract01Mixin(test_migrations.AlembicMigrationsMixin):

    def _get_revisions(self, config):
        return test_migrations.AlembicMigrationsMixin._get_revisions(
            self, config, head='ocata_contract01')

    def _pre_upgrade_ocata_contract01(self, engine):
        images = db_utils.get_table(engine, 'images')
        now = datetime.datetime.now()
        self.assertIn('is_public', images.c)
        self.assertIn('visibility', images.c)
        self.assertTrue(images.c.is_public.nullable)
        self.assertTrue(images.c.visibility.nullable)

        # inserting a public image record
        public_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           is_public=True,
                           min_disk=0,
                           min_ram=0,
                           id='public_id_before_expand')
        images.insert().values(public_temp).execute()

        # inserting a private image record
        shared_temp = dict(deleted=False,
                           created_at=now,
                           status='active',
                           is_public=False,
                           min_disk=0,
                           min_ram=0,
                           id='private_id_before_expand')
        images.insert().values(shared_temp).execute()

        data_migrations.migrate(engine=engine, release='ocata')

    def _check_ocata_contract01(self, engine, data):
        # check that after contract 'is_public' column is dropped
        images = db_utils.get_table(engine, 'images')
        self.assertNotIn('is_public', images.c)
        self.assertIn('visibility', images.c)


class TestOcataContract01MySQL(TestOcataContract01Mixin,
                               test_base.MySQLOpportunisticTestCase):
    pass

glance-16.0.0/glance/tests/functional/db/migrations/test_pike_expand01.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_db.sqlalchemy import test_base
from oslo_db.sqlalchemy import utils as db_utils

from glance.tests.functional.db import test_migrations


class TestPikeExpand01Mixin(test_migrations.AlembicMigrationsMixin):

    artifacts_table_names = [
        'artifact_blob_locations',
        'artifact_properties',
        'artifact_blobs',
        'artifact_dependencies',
        'artifact_tags',
        'artifacts'
    ]

    def _get_revisions(self, config):
        return test_migrations.AlembicMigrationsMixin._get_revisions(
            self, config, head='pike_expand01')

    def _pre_upgrade_pike_expand01(self, engine):
        # verify presence of the artifacts tables
        for table_name in self.artifacts_table_names:
            table = db_utils.get_table(engine, table_name)
            self.assertIsNotNone(table)

    def _check_pike_expand01(self, engine, data):
        # should be no changes, so re-run pre-upgrade check
        self._pre_upgrade_pike_expand01(engine)


class TestPikeExpand01MySQL(TestPikeExpand01Mixin,
                            test_base.MySQLOpportunisticTestCase):
    pass

glance-16.0.0/glance/tests/functional/db/migrations/__init__.py

glance-16.0.0/glance/tests/functional/db/migrations/test_pike_migrate01.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_db.sqlalchemy import test_base

import glance.tests.functional.db.migrations.test_pike_expand01 as tpe01


# no TestPikeMigrate01Mixin class needed, can use TestPikeExpand01Mixin instead


class TestPikeMigrate01MySQL(tpe01.TestPikeExpand01Mixin,
                             test_base.MySQLOpportunisticTestCase):
    pass

glance-16.0.0/glance/tests/functional/db/migrations/test_mitaka02.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime

from oslo_db.sqlalchemy import test_base
from oslo_db.sqlalchemy import utils as db_utils

from glance.tests.functional.db import test_migrations


class TestMitaka02Mixin(test_migrations.AlembicMigrationsMixin):

    def _pre_upgrade_mitaka02(self, engine):
        metadef_resource_types = db_utils.get_table(engine,
                                                    'metadef_resource_types')
        now = datetime.datetime.now()
        db_rec1 = dict(id='9580',
                       name='OS::Nova::Instance',
                       protected=False,
                       created_at=now,
                       updated_at=now,)
        db_rec2 = dict(id='9581',
                       name='OS::Nova::Blah',
                       protected=False,
                       created_at=now,
                       updated_at=now,)
        db_values = (db_rec1, db_rec2)
        metadef_resource_types.insert().values(db_values).execute()

    def _check_mitaka02(self, engine, data):
        metadef_resource_types = db_utils.get_table(engine,
                                                    'metadef_resource_types')
        result = (metadef_resource_types.select()
                  .where(metadef_resource_types.c.name ==
                         'OS::Nova::Instance')
                  .execute().fetchall())
        self.assertEqual(0, len(result))

        result = (metadef_resource_types.select()
                  .where(metadef_resource_types.c.name ==
                         'OS::Nova::Server')
                  .execute().fetchall())
        self.assertEqual(1, len(result))


class TestMitaka02MySQL(TestMitaka02Mixin,
                        test_base.MySQLOpportunisticTestCase):
    pass


class TestMitaka02PostgresSQL(TestMitaka02Mixin,
                              test_base.PostgreSQLOpportunisticTestCase):
    pass


class TestMitaka02Sqlite(TestMitaka02Mixin,
                         test_base.DbTestCase):
    pass

glance-16.0.0/glance/tests/functional/db/test_simple.py

# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance.api import CONF
import glance.db.simple.api
import glance.tests.functional.db as db_tests
from glance.tests.functional.db import base


def get_db(config, workers=1):
    CONF.set_override('data_api', 'glance.db.simple.api')
    CONF.set_override('workers', workers)
    db_api = glance.db.get_api()
    return db_api


def reset_db(db_api):
    db_api.reset()


class TestSimpleDriver(base.TestDriver,
                       base.DriverTests,
                       base.FunctionalInitWrapper):

    def setUp(self):
        db_tests.load(get_db, reset_db)
        super(TestSimpleDriver, self).setUp()
        self.addCleanup(db_tests.reset)


class TestSimpleQuota(base.DriverQuotaTests, base.FunctionalInitWrapper):

    def setUp(self):
        db_tests.load(get_db, reset_db)
        super(TestSimpleQuota, self).setUp()
        self.addCleanup(db_tests.reset)


class TestSimpleVisibility(base.TestVisibility,
                           base.VisibilityTests,
                           base.FunctionalInitWrapper):

    def setUp(self):
        db_tests.load(get_db, reset_db)
        super(TestSimpleVisibility, self).setUp()
        self.addCleanup(db_tests.reset)


class TestSimpleMembershipVisibility(base.TestMembershipVisibility,
                                     base.MembershipVisibilityTests,
                                     base.FunctionalInitWrapper):

    def setUp(self):
        db_tests.load(get_db, reset_db)
        super(TestSimpleMembershipVisibility, self).setUp()
        self.addCleanup(db_tests.reset)


class TestSimpleTask(base.TaskTests, base.FunctionalInitWrapper):

    def setUp(self):
        db_tests.load(get_db, reset_db)
        super(TestSimpleTask, self).setUp()
        self.addCleanup(db_tests.reset)


class TestTooManyWorkers(base.TaskTests):

    def setUp(self):
        def get_db_too_many_workers(config):
            self.assertRaises(SystemExit, get_db, config, 2)
            return get_db(config)
        db_tests.load(get_db_too_many_workers, reset_db)
        super(TestTooManyWorkers, self).setUp()
        self.addCleanup(db_tests.reset)


glance-16.0.0/glance/tests/functional/db/test_rpc_endpoint.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_serialization import jsonutils
import requests
from six.moves import http_client as http

from glance.tests import functional


class TestRegistryURLVisibility(functional.FunctionalTest):

    def setUp(self):
        super(TestRegistryURLVisibility, self).setUp()
        self.cleanup()
        self.registry_server.deployment_flavor = ''
        self.req_body = jsonutils.dumps([{"command": "image_get_all"}])

    def _url(self, path):
        return 'http://127.0.0.1:%d%s' % (self.registry_port, path)

    def _headers(self, custom_headers=None):
        base_headers = {
        }
        base_headers.update(custom_headers or {})
        return base_headers

    def test_v2_not_enabled(self):
        self.registry_server.enable_v2_registry = False
        self.start_servers(**self.__dict__.copy())
        path = self._url('/rpc')
        response = requests.post(path,
                                 headers=self._headers(),
                                 data=self.req_body)
        self.assertEqual(http.NOT_FOUND, response.status_code)
        self.stop_servers()

    def test_v2_enabled(self):
        self.registry_server.enable_v2_registry = True
        self.start_servers(**self.__dict__.copy())
        path = self._url('/rpc')
        response = requests.post(path,
                                 headers=self._headers(),
                                 data=self.req_body)
        self.assertEqual(http.OK, response.status_code)
        self.stop_servers()


glance-16.0.0/glance/tests/functional/db/base.py

# Copyright 2010-2012 OpenStack Foundation
# Copyright 2012 Justin Santa Barbara
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy
import datetime
import uuid

import mock
from oslo_db import exception as db_exception
from oslo_db.sqlalchemy import utils as sqlalchemyutils
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
from six.moves import reduce
from sqlalchemy.dialects import sqlite

from glance.common import exception
from glance.common import timeutils
from glance import context
from glance.db.sqlalchemy import api as db_api
from glance.tests import functional
import glance.tests.functional.db as db_tests
from glance.tests import utils as test_utils


# The default sort order of results is whatever sort key is specified,
# plus created_at and id for ties. When we're not specifying a sort_key,
# we get the default (created_at).
# Some tests below expect the fixtures to be
# returned in array-order, so if the created_at timestamps are the same,
# these tests rely on the UUID* values being in order
UUID1, UUID2, UUID3 = sorted([str(uuid.uuid4()) for x in range(3)])


def build_image_fixture(**kwargs):
    default_datetime = timeutils.utcnow()
    image = {
        'id': str(uuid.uuid4()),
        'name': 'fake image #2',
        'status': 'active',
        'disk_format': 'vhd',
        'container_format': 'ovf',
        'is_public': True,
        'created_at': default_datetime,
        'updated_at': default_datetime,
        'deleted_at': None,
        'deleted': False,
        'checksum': None,
        'min_disk': 5,
        'min_ram': 256,
        'size': 19,
        'locations': [{'url': "file:///tmp/glance-tests/2",
                       'metadata': {}, 'status': 'active'}],
        'properties': {},
    }
    if 'visibility' in kwargs:
        image.pop('is_public')
    image.update(kwargs)
    return image


def build_task_fixture(**kwargs):
    default_datetime = timeutils.utcnow()
    task = {
        'id': str(uuid.uuid4()),
        'type': 'import',
        'status': 'pending',
        'input': {'ping': 'pong'},
        'owner': str(uuid.uuid4()),
        'message': None,
        'expires_at': None,
        'created_at': default_datetime,
        'updated_at': default_datetime,
    }
    task.update(kwargs)
    return task


class FunctionalInitWrapper(functional.FunctionalTest):

    def setUp(self):
        super(FunctionalInitWrapper, self).setUp()
        self.config(policy_file=self.policy_file, group='oslo_policy')


class TestDriver(test_utils.BaseTestCase):

    def setUp(self):
        super(TestDriver, self).setUp()
        context_cls = context.RequestContext
        self.adm_context = context_cls(is_admin=True,
                                       auth_token='user:user:admin')
        self.context = context_cls(is_admin=False,
                                   auth_token='user:user:user')
        self.db_api = db_tests.get_db(self.config)
        db_tests.reset_db(self.db_api)
        self.fixtures = self.build_image_fixtures()
        self.create_images(self.fixtures)

    def build_image_fixtures(self):
        dt1 = timeutils.utcnow()
        dt2 = dt1 + datetime.timedelta(microseconds=5)
        fixtures = [
            {
                'id': UUID1,
                'created_at': dt1,
                'updated_at': dt1,
                'properties': {'foo': 'bar', 'far': 'boo'},
                'protected': True,
                'size':
                13,
            },
            {
                'id': UUID2,
                'created_at': dt1,
                'updated_at': dt2,
                'size': 17,
            },
            {
                'id': UUID3,
                'created_at': dt2,
                'updated_at': dt2,
            },
        ]
        return [build_image_fixture(**fixture) for fixture in fixtures]

    def create_images(self, images):
        for fixture in images:
            self.db_api.image_create(self.adm_context, fixture)


class DriverTests(object):

    def test_image_create_requires_status(self):
        fixture = {'name': 'mark', 'size': 12}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_create, self.context, fixture)
        fixture = {'name': 'mark', 'size': 12, 'status': 'queued'}
        self.db_api.image_create(self.context, fixture)

    @mock.patch.object(timeutils, 'utcnow')
    def test_image_create_defaults(self, mock_utcnow):
        mock_utcnow.return_value = datetime.datetime.utcnow()
        create_time = timeutils.utcnow()
        values = {'status': 'queued',
                  'created_at': create_time,
                  'updated_at': create_time}
        image = self.db_api.image_create(self.context, values)

        self.assertIsNone(image['name'])
        self.assertIsNone(image['container_format'])
        self.assertEqual(0, image['min_ram'])
        self.assertEqual(0, image['min_disk'])
        self.assertIsNone(image['owner'])
        self.assertEqual('shared', image['visibility'])
        self.assertIsNone(image['size'])
        self.assertIsNone(image['checksum'])
        self.assertIsNone(image['disk_format'])
        self.assertEqual([], image['locations'])
        self.assertFalse(image['protected'])
        self.assertFalse(image['deleted'])
        self.assertIsNone(image['deleted_at'])
        self.assertEqual([], image['properties'])
        self.assertEqual(create_time, image['created_at'])
        self.assertEqual(create_time, image['updated_at'])

        # Image IDs aren't predictable, but they should be populated
        self.assertTrue(uuid.UUID(image['id']))

        # NOTE(bcwaldon): the tags attribute should not be returned as a part
        # of a core image entity
        self.assertNotIn('tags', image)

    def test_image_create_duplicate_id(self):
        self.assertRaises(exception.Duplicate,
                          self.db_api.image_create,
                          self.context, {'id': UUID1, 'status': 'queued'})

    def test_image_create_with_locations(self):
        locations = [{'url': 'a', 'metadata': {}, 'status': 'active'},
                     {'url': 'b', 'metadata': {}, 'status': 'active'}]
        fixture = {'status': 'queued',
                   'locations': locations}
        image = self.db_api.image_create(self.context, fixture)
        actual = [{'url': l['url'], 'metadata': l['metadata'],
                   'status': l['status']}
                  for l in image['locations']]
        self.assertEqual(locations, actual)

    def test_image_create_without_locations(self):
        locations = []
        fixture = {'status': 'queued',
                   'locations': locations}
        self.db_api.image_create(self.context, fixture)

    def test_image_create_with_location_data(self):
        location_data = [{'url': 'a', 'metadata': {'key': 'value'},
                          'status': 'active'},
                         {'url': 'b', 'metadata': {},
                          'status': 'active'}]
        fixture = {'status': 'queued', 'locations': location_data}
        image = self.db_api.image_create(self.context, fixture)
        actual = [{'url': l['url'], 'metadata': l['metadata'],
                   'status': l['status']}
                  for l in image['locations']]
        self.assertEqual(location_data, actual)

    def test_image_create_properties(self):
        fixture = {'status': 'queued', 'properties': {'ping': 'pong'}}
        image = self.db_api.image_create(self.context, fixture)
        expected = [{'name': 'ping', 'value': 'pong'}]
        actual = [{'name': p['name'], 'value': p['value']}
                  for p in image['properties']]
        self.assertEqual(expected, actual)

    def test_image_create_unknown_attributes(self):
        fixture = {'ping': 'pong'}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_create, self.context, fixture)

    def test_image_create_bad_name(self):
        bad_name = u'A name with forbidden symbol \U0001f62a'
        fixture = {'name': bad_name, 'size': 12, 'status': 'queued'}
        self.assertRaises(exception.Invalid, self.db_api.image_create,
                          self.context, fixture)

    def test_image_create_bad_checksum(self):
        # checksum should be no longer than 32 characters
        bad_checksum = "42" * 42
        fixture = {'checksum': bad_checksum}
        self.assertRaises(exception.Invalid, self.db_api.image_create,
                          self.context, fixture)

        # if checksum is not longer than 32 characters but non-ascii ->
        # still raise 400
        fixture = {'checksum': u'\u042f' * 32}
        self.assertRaises(exception.Invalid, self.db_api.image_create,
                          self.context, fixture)

    def test_image_create_bad_int_params(self):
        int_too_long = 2 ** 31 + 42
        for param in ['min_disk', 'min_ram']:
            fixture = {param: int_too_long}
            self.assertRaises(exception.Invalid, self.db_api.image_create,
                              self.context, fixture)

    def test_image_create_bad_property(self):
        # bad value
        fixture = {'status': 'queued',
                   'properties': {'bad': u'Bad \U0001f62a'}}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_create, self.context, fixture)

        # bad property names are also not allowed
        fixture = {'status': 'queued', 'properties': {u'Bad \U0001f62a': 'ok'}}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_create, self.context, fixture)

    def test_image_create_bad_location(self):
        location_data = [{'url': 'a', 'metadata': {'key': 'value'},
                          'status': 'active'},
                         {'url': u'Bad \U0001f60a', 'metadata': {},
                          'status': 'active'}]
        fixture = {'status': 'queued', 'locations': location_data}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_create, self.context, fixture)

    def test_image_update_core_attribute(self):
        fixture = {'status': 'queued'}
        image = self.db_api.image_update(self.adm_context, UUID3, fixture)
        self.assertEqual('queued', image['status'])
        self.assertNotEqual(image['created_at'], image['updated_at'])

    def test_image_update_with_locations(self):
        locations = [{'url': 'a', 'metadata': {}, 'status': 'active'},
                     {'url': 'b', 'metadata': {}, 'status': 'active'}]
        fixture = {'locations': locations}
        image = self.db_api.image_update(self.adm_context, UUID3, fixture)
        self.assertEqual(2, len(image['locations']))
        self.assertIn('id', image['locations'][0])
        self.assertIn('id', image['locations'][1])
        image['locations'][0].pop('id')
        image['locations'][1].pop('id')
        self.assertEqual(locations, image['locations'])

    def test_image_update_with_location_data(self):
        location_data = [{'url': 'a', 'metadata': {'key': 'value'},
                          'status': 'active'},
                         {'url': 'b', 'metadata': {}, 'status': 'active'}]
        fixture = {'locations': location_data}
        image = self.db_api.image_update(self.adm_context, UUID3, fixture)
        self.assertEqual(2, len(image['locations']))
        self.assertIn('id', image['locations'][0])
        self.assertIn('id', image['locations'][1])
        image['locations'][0].pop('id')
        image['locations'][1].pop('id')
        self.assertEqual(location_data, image['locations'])

    def test_image_update(self):
        fixture = {'status': 'queued', 'properties': {'ping': 'pong'}}
        image = self.db_api.image_update(self.adm_context, UUID3, fixture)
        expected = [{'name': 'ping', 'value': 'pong'}]
        actual = [{'name': p['name'], 'value': p['value']}
                  for p in image['properties']]
        self.assertEqual(expected, actual)
        self.assertEqual('queued', image['status'])
        self.assertNotEqual(image['created_at'], image['updated_at'])

    def test_image_update_properties(self):
        fixture = {'properties': {'ping': 'pong'}}
        image = self.db_api.image_update(self.adm_context, UUID1, fixture)
        expected = {'ping': 'pong', 'foo': 'bar', 'far': 'boo'}
        actual = {p['name']: p['value'] for p in image['properties']}
        self.assertEqual(expected, actual)
        self.assertNotEqual(image['created_at'], image['updated_at'])

    def test_image_update_purge_properties(self):
        fixture = {'properties': {'ping': 'pong'}}
        image = self.db_api.image_update(self.adm_context, UUID1,
                                         fixture, purge_props=True)
        properties = {p['name']: p for p in image['properties']}

        # New properties are set
        self.assertIn('ping', properties)
        self.assertEqual('pong', properties['ping']['value'])
        self.assertFalse(properties['ping']['deleted'])

        # Original properties still show up, but with deleted=True
        # TODO(markwash): db api should not return deleted properties
        self.assertIn('foo', properties)
        self.assertEqual('bar', properties['foo']['value'])
        self.assertTrue(properties['foo']['deleted'])

    def test_image_update_bad_name(self):
        fixture = {'name': u'A new name with forbidden symbol \U0001f62a'}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_update, self.adm_context,
                          UUID1, fixture)

    def test_image_update_bad_property(self):
        # bad value
        fixture = {'status': 'queued',
                   'properties': {'bad': u'Bad \U0001f62a'}}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_update, self.adm_context,
                          UUID1, fixture)

        # bad property names are also not allowed
        fixture = {'status': 'queued', 'properties': {u'Bad \U0001f62a': 'ok'}}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_update, self.adm_context,
                          UUID1, fixture)

    def test_image_update_bad_location(self):
        location_data = [{'url': 'a', 'metadata': {'key': 'value'},
                          'status': 'active'},
                         {'url': u'Bad \U0001f60a', 'metadata': {},
                          'status': 'active'}]
        fixture = {'status': 'queued', 'locations': location_data}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_update, self.adm_context,
                          UUID1, fixture)

    def test_update_locations_direct(self):
        """
        For some reasons update_locations can be called directly
        (not via image_update), so better check that everything is ok if passed
        4 byte unicode characters
        """
        # update locations correctly first to retrieve existing location id
        location_data = [{'url': 'a', 'metadata': {'key': 'value'},
                          'status': 'active'}]
        fixture = {'locations': location_data}
        image = self.db_api.image_update(self.adm_context, UUID1, fixture)
        self.assertEqual(1, len(image['locations']))
        self.assertIn('id', image['locations'][0])
        loc_id = image['locations'][0].pop('id')
        bad_location = {'url': u'Bad \U0001f60a', 'metadata': {},
                        'status': 'active', 'id': loc_id}
        self.assertRaises(exception.Invalid,
                          self.db_api.image_location_update,
                          self.adm_context, UUID1, bad_location)

    def test_image_property_delete(self):
        fixture = {'name': 'ping', 'value': 'pong', 'image_id': UUID1}
        prop = self.db_api.image_property_create(self.context, fixture)
        prop = self.db_api.image_property_delete(self.context,
                                                 prop['name'], UUID1)
        self.assertIsNotNone(prop['deleted_at'])
        self.assertTrue(prop['deleted'])

    def test_image_get(self):
        image =
        self.db_api.image_get(self.context, UUID1)
        self.assertEqual(self.fixtures[0]['id'], image['id'])

    def test_image_get_disallow_deleted(self):
        self.db_api.image_destroy(self.adm_context, UUID1)
        self.assertRaises(exception.NotFound, self.db_api.image_get,
                          self.context, UUID1)

    def test_image_get_allow_deleted(self):
        self.db_api.image_destroy(self.adm_context, UUID1)
        image = self.db_api.image_get(self.adm_context, UUID1)
        self.assertEqual(self.fixtures[0]['id'], image['id'])
        self.assertTrue(image['deleted'])

    def test_image_get_force_allow_deleted(self):
        self.db_api.image_destroy(self.adm_context, UUID1)
        image = self.db_api.image_get(self.context, UUID1,
                                      force_show_deleted=True)
        self.assertEqual(self.fixtures[0]['id'], image['id'])

    def test_image_get_not_owned(self):
        TENANT1 = str(uuid.uuid4())
        TENANT2 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        ctxt2 = context.RequestContext(is_admin=False, tenant=TENANT2,
                                       auth_token='user:%s:user' % TENANT2)
        image = self.db_api.image_create(
            ctxt1, {'status': 'queued', 'owner': TENANT1})
        self.assertRaises(exception.Forbidden,
                          self.db_api.image_get, ctxt2, image['id'])

    def test_image_get_not_found(self):
        UUID = str(uuid.uuid4())
        self.assertRaises(exception.NotFound,
                          self.db_api.image_get, self.context, UUID)

    def test_image_get_all(self):
        images = self.db_api.image_get_all(self.context)
        self.assertEqual(3, len(images))

    def test_image_get_all_with_filter(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={
                                               'id': self.fixtures[0]['id'],
                                           })
        self.assertEqual(1, len(images))
        self.assertEqual(self.fixtures[0]['id'], images[0]['id'])

    def test_image_get_all_with_filter_user_defined_property(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'foo': 'bar'})
        self.assertEqual(1, len(images))
        self.assertEqual(self.fixtures[0]['id'], images[0]['id'])

    def test_image_get_all_with_filter_nonexistent_userdef_property(self):
        images =
        self.db_api.image_get_all(self.context, filters={'faz': 'boo'})
        self.assertEqual(0, len(images))

    def test_image_get_all_with_filter_userdef_prop_nonexistent_value(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'foo': 'baz'})
        self.assertEqual(0, len(images))

    def test_image_get_all_with_filter_multiple_user_defined_properties(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'foo': 'bar',
                                                    'far': 'boo'})
        self.assertEqual(1, len(images))
        self.assertEqual(images[0]['id'], self.fixtures[0]['id'])

    def test_image_get_all_with_filter_nonexistent_user_defined_property(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'foo': 'bar',
                                                    'faz': 'boo'})
        self.assertEqual(0, len(images))

    def test_image_get_all_with_filter_user_deleted_property(self):
        fixture = {'name': 'poo', 'value': 'bear', 'image_id': UUID1}
        prop = self.db_api.image_property_create(self.context, fixture)
        images = self.db_api.image_get_all(self.context,
                                           filters={
                                               'properties': {'poo': 'bear'},
                                           })
        self.assertEqual(1, len(images))
        self.db_api.image_property_delete(self.context,
                                          prop['name'], images[0]['id'])
        images = self.db_api.image_get_all(self.context,
                                           filters={
                                               'properties': {'poo': 'bear'},
                                           })
        self.assertEqual(0, len(images))

    def test_image_get_all_with_filter_undefined_property(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'poo': 'bear'})
        self.assertEqual(0, len(images))

    def test_image_get_all_with_filter_protected(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'protected': True})
        self.assertEqual(1, len(images))
        images = self.db_api.image_get_all(self.context,
                                           filters={'protected': False})
        self.assertEqual(2, len(images))

    def test_image_get_all_with_filter_comparative_created_at(self):
        anchor = timeutils.isotime(self.fixtures[0]['created_at'])
        time_expr = 'lt:' + anchor
        images = self.db_api.image_get_all(self.context,
                                           filters={'created_at': time_expr})
        self.assertEqual(0, len(images))

    def
test_image_get_all_with_filter_comparative_updated_at(self):
        anchor = timeutils.isotime(self.fixtures[0]['updated_at'])
        time_expr = 'lt:' + anchor
        images = self.db_api.image_get_all(self.context,
                                           filters={'updated_at': time_expr})
        self.assertEqual(0, len(images))

    def test_filter_image_by_invalid_operator(self):
        self.assertRaises(exception.InvalidFilterOperatorValue,
                          self.db_api.image_get_all,
                          self.context, filters={'status': 'lala:active'})

    def test_image_get_all_with_filter_in_status(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'status': 'in:active'})
        self.assertEqual(3, len(images))

    def test_image_get_all_with_filter_in_name(self):
        data = 'in:%s' % self.fixtures[0]['name']
        images = self.db_api.image_get_all(self.context,
                                           filters={'name': data})
        self.assertEqual(3, len(images))

    def test_image_get_all_with_filter_in_container_format(self):
        images = self.db_api.image_get_all(
            self.context, filters={'container_format': 'in:ami,bare,ovf'})
        self.assertEqual(3, len(images))

    def test_image_get_all_with_filter_in_disk_format(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'disk_format': 'in:vhd'})
        self.assertEqual(3, len(images))

    def test_image_get_all_with_filter_in_id(self):
        data = 'in:%s,%s' % (UUID1, UUID2)
        images = self.db_api.image_get_all(self.context,
                                           filters={'id': data})
        self.assertEqual(2, len(images))

    def test_image_get_all_with_quotes(self):
        fixture = {'name': 'fake\\\"name'}
        self.db_api.image_update(self.adm_context, UUID3, fixture)
        fixture = {'name': 'fake,name'}
        self.db_api.image_update(self.adm_context, UUID2, fixture)
        fixture = {'name': 'fakename'}
        self.db_api.image_update(self.adm_context, UUID1, fixture)
        data = 'in:\"fake\\\"name\",fakename,\"fake,name\"'
        images = self.db_api.image_get_all(self.context,
                                           filters={'name': data})
        self.assertEqual(3, len(images))

    def test_image_get_all_with_invalid_quotes(self):
        invalid_expr = ['in:\"name', 'in:\"name\"name',
                        'in:name\"dd\"', 'in:na\"me',
                        'in:\"name\"\"name\"']
        for expr
 in invalid_expr:
            self.assertRaises(exception.InvalidParameterValue,
                              self.db_api.image_get_all,
                              self.context,
                              filters={'name': expr})

    def test_image_get_all_size_min_max(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={
                                               'size_min': 10,
                                               'size_max': 15,
                                           })
        self.assertEqual(1, len(images))
        self.assertEqual(self.fixtures[0]['id'], images[0]['id'])

    def test_image_get_all_size_min(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'size_min': 15})
        self.assertEqual(2, len(images))
        self.assertEqual(self.fixtures[2]['id'], images[0]['id'])
        self.assertEqual(self.fixtures[1]['id'], images[1]['id'])

    def test_image_get_all_size_range(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'size_max': 15,
                                                    'size_min': 20})
        self.assertEqual(0, len(images))

    def test_image_get_all_size_max(self):
        images = self.db_api.image_get_all(self.context,
                                           filters={'size_max': 15})
        self.assertEqual(1, len(images))
        self.assertEqual(self.fixtures[0]['id'], images[0]['id'])

    def test_image_get_all_with_filter_min_range_bad_value(self):
        self.assertRaises(exception.InvalidFilterRangeValue,
                          self.db_api.image_get_all,
                          self.context, filters={'size_min': 'blah'})

    def test_image_get_all_with_filter_max_range_bad_value(self):
        self.assertRaises(exception.InvalidFilterRangeValue,
                          self.db_api.image_get_all,
                          self.context, filters={'size_max': 'blah'})

    def test_image_get_all_marker(self):
        images = self.db_api.image_get_all(self.context, marker=UUID3)
        self.assertEqual(2, len(images))

    def test_image_get_all_marker_with_size(self):
        # Use sort_key=size to test BigInteger
        images = self.db_api.image_get_all(self.context,
                                           sort_key=['size'],
                                           marker=UUID3)
        self.assertEqual(2, len(images))
        self.assertEqual(17, images[0]['size'])
        self.assertEqual(13, images[1]['size'])

    def test_image_get_all_marker_deleted(self):
        """Cannot specify a deleted image as a marker."""
        self.db_api.image_destroy(self.adm_context, UUID1)
        filters = {'deleted': False}
        self.assertRaises(exception.NotFound,
                          self.db_api.image_get_all, self.context,
                          marker=UUID1, filters=filters)

    def test_image_get_all_marker_deleted_showing_deleted_as_admin(self):
        """Specify a deleted image as a marker if showing deleted images."""
        self.db_api.image_destroy(self.adm_context, UUID3)
        images = self.db_api.image_get_all(self.adm_context, marker=UUID3)
        # NOTE(bcwaldon): an admin should see all images (deleted or not)
        self.assertEqual(2, len(images))

    def test_image_get_all_marker_deleted_showing_deleted(self):
        """Specify a deleted image as a marker if showing deleted images.

        A non-admin user has to explicitly ask for deleted
        images, and should only see deleted images in the results
        """
        self.db_api.image_destroy(self.adm_context, UUID3)
        self.db_api.image_destroy(self.adm_context, UUID1)
        filters = {'deleted': True}
        images = self.db_api.image_get_all(self.context, marker=UUID3,
                                           filters=filters)
        self.assertEqual(1, len(images))

    def test_image_get_all_marker_null_name_desc(self):
        """Check an image with name null is handled

        Check an image with name null is handled
        marker is specified and order is descending
        """
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        UUIDX = str(uuid.uuid4())
        self.db_api.image_create(ctxt1, {'id': UUIDX,
                                         'status': 'queued',
                                         'name': None,
                                         'owner': TENANT1})

        images = self.db_api.image_get_all(ctxt1, marker=UUIDX,
                                           sort_key=['name'],
                                           sort_dir=['desc'])
        image_ids = [image['id'] for image in images]
        expected = []
        self.assertEqual(sorted(expected), sorted(image_ids))

    def test_image_get_all_marker_null_disk_format_desc(self):
        """Check an image with disk_format null is handled

        Check an image with disk_format null is handled when
        marker is specified and order is descending
        """
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        UUIDX = str(uuid.uuid4())
        self.db_api.image_create(ctxt1, {'id': UUIDX,
                                         'status': 'queued',
                                         'disk_format': None,
                                         'owner': TENANT1})

        images = self.db_api.image_get_all(ctxt1, marker=UUIDX,
                                           sort_key=['disk_format'],
                                           sort_dir=['desc'])
        image_ids = [image['id'] for image in images]
        expected = []
        self.assertEqual(sorted(expected), sorted(image_ids))

    def test_image_get_all_marker_null_container_format_desc(self):
        """Check an image with container_format null is handled

        Check an image with container_format null is handled when
        marker is specified and order is descending
        """
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        UUIDX = str(uuid.uuid4())
        self.db_api.image_create(ctxt1, {'id': UUIDX,
                                         'status': 'queued',
                                         'container_format': None,
                                         'owner': TENANT1})

        images = self.db_api.image_get_all(ctxt1, marker=UUIDX,
                                           sort_key=['container_format'],
                                           sort_dir=['desc'])
        image_ids = [image['id'] for image in images]
        expected = []
        self.assertEqual(sorted(expected), sorted(image_ids))

    def test_image_get_all_marker_null_name_asc(self):
        """Check an image with name null is handled

        Check an image with name null is handled when
        marker is specified and order is ascending
        """
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        UUIDX = str(uuid.uuid4())
        self.db_api.image_create(ctxt1, {'id': UUIDX,
                                         'status': 'queued',
                                         'name': None,
                                         'owner': TENANT1})

        images = self.db_api.image_get_all(ctxt1, marker=UUIDX,
                                           sort_key=['name'],
                                           sort_dir=['asc'])
        image_ids = [image['id'] for image in images]
        expected = [UUID3, UUID2, UUID1]
        self.assertEqual(sorted(expected), sorted(image_ids))

    def test_image_get_all_marker_null_disk_format_asc(self):
        """Check an image with disk_format null is handled

        Check an image with disk_format null is handled when
        marker is specified and order is ascending
        """
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        UUIDX = str(uuid.uuid4())
        self.db_api.image_create(ctxt1, {'id': UUIDX,
                                         'status': 'queued',
                                         'disk_format': None,
                                         'owner': TENANT1})

        images = self.db_api.image_get_all(ctxt1, marker=UUIDX,
                                           sort_key=['disk_format'],
                                           sort_dir=['asc'])
        image_ids = [image['id'] for image in images]
        expected = [UUID3, UUID2, UUID1]
        self.assertEqual(sorted(expected), sorted(image_ids))

    def test_image_get_all_marker_null_container_format_asc(self):
        """Check an image with container_format null is handled

        Check an image with container_format null is handled when
        marker is specified and order is ascending
        """
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        UUIDX = str(uuid.uuid4())
        self.db_api.image_create(ctxt1, {'id': UUIDX,
                                         'status': 'queued',
                                         'container_format': None,
                                         'owner': TENANT1})

        images = self.db_api.image_get_all(ctxt1, marker=UUIDX,
                                           sort_key=['container_format'],
                                           sort_dir=['asc'])
        image_ids = [image['id'] for image in images]
        expected = [UUID3, UUID2, UUID1]
        self.assertEqual(sorted(expected), sorted(image_ids))

    def test_image_get_all_limit(self):
        images = self.db_api.image_get_all(self.context, limit=2)
        self.assertEqual(2, len(images))

        # A limit of None should not equate to zero
        images = self.db_api.image_get_all(self.context, limit=None)
        self.assertEqual(3, len(images))

        # A limit of zero should actually mean zero
        images = self.db_api.image_get_all(self.context, limit=0)
        self.assertEqual(0, len(images))

    def test_image_get_all_owned(self):
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        UUIDX = str(uuid.uuid4())
        image_meta_data = {'id': UUIDX, 'status': 'queued', 'owner': TENANT1}
        self.db_api.image_create(ctxt1, image_meta_data)

        TENANT2 = str(uuid.uuid4())
        ctxt2 = context.RequestContext(is_admin=False, tenant=TENANT2,
                                       auth_token='user:%s:user' % TENANT2)
        UUIDY = str(uuid.uuid4())
        image_meta_data = {'id': UUIDY, 'status': 'queued', 'owner': TENANT2}
        self.db_api.image_create(ctxt2, image_meta_data)

        images = self.db_api.image_get_all(ctxt1)
        image_ids = [image['id'] for image in images]
        expected = [UUIDX, UUID3, UUID2, UUID1]
        self.assertEqual(sorted(expected), sorted(image_ids))

    def test_image_get_all_owned_checksum(self):
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        UUIDX = str(uuid.uuid4())
        CHECKSUM1 = '91264c3edf5972c9f1cb309543d38a5c'
        image_meta_data = {
            'id': UUIDX,
            'status': 'queued',
            'checksum': CHECKSUM1,
            'owner': TENANT1
        }
        self.db_api.image_create(ctxt1, image_meta_data)
        image_member_data = {
            'image_id': UUIDX,
            'member': TENANT1,
            'can_share': False,
            "status": "accepted",
        }
        self.db_api.image_member_create(ctxt1, image_member_data)

        TENANT2 = str(uuid.uuid4())
        ctxt2 = context.RequestContext(is_admin=False, tenant=TENANT2,
                                       auth_token='user:%s:user' % TENANT2)
        UUIDY = str(uuid.uuid4())
        CHECKSUM2 = '92264c3edf5972c9f1cb309543d38a5c'
        image_meta_data = {
            'id': UUIDY,
            'status': 'queued',
            'checksum': CHECKSUM2,
            'owner': TENANT2
        }
        self.db_api.image_create(ctxt2, image_meta_data)
        image_member_data = {
            'image_id': UUIDY,
            'member': TENANT2,
            'can_share': False,
            "status": "accepted",
        }
        self.db_api.image_member_create(ctxt2, image_member_data)

        filters = {'visibility': 'shared', 'checksum': CHECKSUM2}
        images = self.db_api.image_get_all(ctxt2, filters)
        self.assertEqual(1, len(images))
        self.assertEqual(UUIDY, images[0]['id'])

    def test_image_get_all_with_filter_tags(self):
        self.db_api.image_tag_create(self.context, UUID1, 'x86')
        self.db_api.image_tag_create(self.context, UUID1, '64bit')
        self.db_api.image_tag_create(self.context, UUID2, 'power')
        self.db_api.image_tag_create(self.context, UUID2, '64bit')
        images = self.db_api.image_get_all(self.context,
                                           filters={'tags': ['64bit']})
        self.assertEqual(2, len(images))
        image_ids = [image['id'] for image in images]
        expected = [UUID1, UUID2]
        self.assertEqual(sorted(expected), sorted(image_ids))

    def
test_image_get_all_with_filter_multi_tags(self): self.db_api.image_tag_create(self.context, UUID1, 'x86') self.db_api.image_tag_create(self.context, UUID1, '64bit') self.db_api.image_tag_create(self.context, UUID2, 'power') self.db_api.image_tag_create(self.context, UUID2, '64bit') images = self.db_api.image_get_all(self.context, filters={'tags': ['64bit', 'power'] }) self.assertEqual(1, len(images)) self.assertEqual(UUID2, images[0]['id']) def test_image_get_all_with_filter_tags_and_nonexistent(self): self.db_api.image_tag_create(self.context, UUID1, 'x86') images = self.db_api.image_get_all(self.context, filters={'tags': ['x86', 'fake'] }) self.assertEqual(0, len(images)) def test_image_get_all_with_filter_deleted_tags(self): tag = self.db_api.image_tag_create(self.context, UUID1, 'AIX') images = self.db_api.image_get_all(self.context, filters={ 'tags': [tag], }) self.assertEqual(1, len(images)) self.db_api.image_tag_delete(self.context, UUID1, tag) images = self.db_api.image_get_all(self.context, filters={ 'tags': [tag], }) self.assertEqual(0, len(images)) def test_image_get_all_with_filter_undefined_tags(self): images = self.db_api.image_get_all(self.context, filters={'tags': ['fake']}) self.assertEqual(0, len(images)) def test_image_paginate(self): """Paginate through a list of images using limit and marker""" now = timeutils.utcnow() extra_uuids = [(str(uuid.uuid4()), now + datetime.timedelta(seconds=i * 5)) for i in range(2)] extra_images = [build_image_fixture(id=_id, created_at=_dt, updated_at=_dt) for _id, _dt in extra_uuids] self.create_images(extra_images) # Reverse uuids to match default sort of created_at extra_uuids.reverse() page = self.db_api.image_get_all(self.context, limit=2) self.assertEqual([i[0] for i in extra_uuids], [i['id'] for i in page]) last = page[-1]['id'] page = self.db_api.image_get_all(self.context, limit=2, marker=last) self.assertEqual([UUID3, UUID2], [i['id'] for i in page]) page = self.db_api.image_get_all(self.context, 
limit=2, marker=UUID2) self.assertEqual([UUID1], [i['id'] for i in page]) def test_image_get_all_invalid_sort_key(self): self.assertRaises(exception.InvalidSortKey, self.db_api.image_get_all, self.context, sort_key=['blah']) def test_image_get_all_limit_marker(self): images = self.db_api.image_get_all(self.context, limit=2) self.assertEqual(2, len(images)) def test_image_get_all_with_tag_returning(self): expected_tags = {UUID1: ['foo'], UUID2: ['bar'], UUID3: ['baz']} self.db_api.image_tag_create(self.context, UUID1, expected_tags[UUID1][0]) self.db_api.image_tag_create(self.context, UUID2, expected_tags[UUID2][0]) self.db_api.image_tag_create(self.context, UUID3, expected_tags[UUID3][0]) images = self.db_api.image_get_all(self.context, return_tag=True) self.assertEqual(3, len(images)) for image in images: self.assertIn('tags', image) self.assertEqual(expected_tags[image['id']], image['tags']) self.db_api.image_tag_delete(self.context, UUID1, expected_tags[UUID1][0]) expected_tags[UUID1] = [] images = self.db_api.image_get_all(self.context, return_tag=True) self.assertEqual(3, len(images)) for image in images: self.assertIn('tags', image) self.assertEqual(expected_tags[image['id']], image['tags']) def test_image_destroy(self): location_data = [{'url': 'a', 'metadata': {'key': 'value'}, 'status': 'active'}, {'url': 'b', 'metadata': {}, 'status': 'active'}] fixture = {'status': 'queued', 'locations': location_data} image = self.db_api.image_create(self.context, fixture) IMG_ID = image['id'] fixture = {'name': 'ping', 'value': 'pong', 'image_id': IMG_ID} prop = self.db_api.image_property_create(self.context, fixture) TENANT2 = str(uuid.uuid4()) fixture = {'image_id': IMG_ID, 'member': TENANT2, 'can_share': False} member = self.db_api.image_member_create(self.context, fixture) self.db_api.image_tag_create(self.context, IMG_ID, 'snarf') self.assertEqual(2, len(image['locations'])) self.assertIn('id', image['locations'][0]) self.assertIn('id', image['locations'][1]) 
        image['locations'][0].pop('id')
        image['locations'][1].pop('id')
        self.assertEqual(location_data, image['locations'])
        self.assertEqual(('ping', 'pong', IMG_ID, False),
                         (prop['name'], prop['value'],
                          prop['image_id'], prop['deleted']))
        self.assertEqual((TENANT2, IMG_ID, False),
                         (member['member'], member['image_id'],
                          member['can_share']))
        self.assertEqual(['snarf'],
                         self.db_api.image_tag_get_all(self.context, IMG_ID))

        image = self.db_api.image_destroy(self.adm_context, IMG_ID)
        self.assertTrue(image['deleted'])
        self.assertTrue(image['deleted_at'])
        self.assertRaises(exception.NotFound,
                          self.db_api.image_get, self.context, IMG_ID)

        self.assertEqual([], image['locations'])

        prop = image['properties'][0]
        self.assertEqual(('ping', IMG_ID, True),
                         (prop['name'], prop['image_id'], prop['deleted']))

        self.context.auth_token = 'user:%s:user' % TENANT2
        members = self.db_api.image_member_find(self.context, IMG_ID)
        self.assertEqual([], members)

        tags = self.db_api.image_tag_get_all(self.context, IMG_ID)
        self.assertEqual([], tags)

    def test_image_destroy_with_delete_all(self):
        """Check the image child elements' _image_delete_all methods.

        Check that the image_delete_all methods delete only the child
        elements of the image being deleted.
        """
        TENANT2 = str(uuid.uuid4())
        location_data = [{'url': 'a', 'metadata': {'key': 'value'},
                          'status': 'active'},
                         {'url': 'b', 'metadata': {},
                          'status': 'active'}]

        def _create_image_with_child_entries():
            fixture = {'status': 'queued', 'locations': location_data}
            image_id = self.db_api.image_create(self.context, fixture)['id']

            fixture = {'name': 'ping', 'value': 'pong', 'image_id': image_id}
            self.db_api.image_property_create(self.context, fixture)

            fixture = {'image_id': image_id, 'member': TENANT2,
                       'can_share': False}
            self.db_api.image_member_create(self.context, fixture)

            self.db_api.image_tag_create(self.context, image_id, 'snarf')
            return image_id

        ACTIVE_IMG_ID = _create_image_with_child_entries()
        DEL_IMG_ID = _create_image_with_child_entries()

        deleted_image = self.db_api.image_destroy(self.adm_context,
                                                  DEL_IMG_ID)
        self.assertTrue(deleted_image['deleted'])
        self.assertTrue(deleted_image['deleted_at'])
        self.assertRaises(exception.NotFound,
                          self.db_api.image_get, self.context, DEL_IMG_ID)

        active_image = self.db_api.image_get(self.context, ACTIVE_IMG_ID)
        self.assertFalse(active_image['deleted'])
        self.assertFalse(active_image['deleted_at'])

        self.assertEqual(2, len(active_image['locations']))
        self.assertIn('id', active_image['locations'][0])
        self.assertIn('id', active_image['locations'][1])
        active_image['locations'][0].pop('id')
        active_image['locations'][1].pop('id')
        self.assertEqual(location_data, active_image['locations'])

        self.assertEqual(1, len(active_image['properties']))
        prop = active_image['properties'][0]
        self.assertEqual(('ping', 'pong', ACTIVE_IMG_ID),
                         (prop['name'], prop['value'], prop['image_id']))
        self.assertEqual((False, None),
                         (prop['deleted'], prop['deleted_at']))

        self.context.auth_token = 'user:%s:user' % TENANT2
        members = self.db_api.image_member_find(self.context, ACTIVE_IMG_ID)
        self.assertEqual(1, len(members))
        member = members[0]
        self.assertEqual((TENANT2, ACTIVE_IMG_ID, False),
                         (member['member'], member['image_id'],
                          member['can_share']))

        tags = self.db_api.image_tag_get_all(self.context, ACTIVE_IMG_ID)
        self.assertEqual(['snarf'], tags)

    def test_image_get_multiple_members(self):
        TENANT1 = str(uuid.uuid4())
        TENANT2 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1,
                                       owner_is_tenant=True)
        ctxt2 = context.RequestContext(is_admin=False, user=TENANT2,
                                       auth_token='user:%s:user' % TENANT2,
                                       owner_is_tenant=False)
        UUIDX = str(uuid.uuid4())
        # We need a shared image and context.owner should not match image
        # owner
        self.db_api.image_create(ctxt1, {'id': UUIDX,
                                         'status': 'queued',
                                         'is_public': False,
                                         'owner': TENANT1})
        values = {'image_id': UUIDX, 'member': TENANT2, 'can_share': False}
        self.db_api.image_member_create(ctxt1, values)

        image = self.db_api.image_get(ctxt2, UUIDX)
        self.assertEqual(UUIDX, image['id'])

        # by default get_all displays only images with status 'accepted'
        images = self.db_api.image_get_all(ctxt2)
        self.assertEqual(3, len(images))

        # filter by rejected
        images = self.db_api.image_get_all(ctxt2, member_status='rejected')
        self.assertEqual(3, len(images))

        # filter by visibility
        images = self.db_api.image_get_all(ctxt2,
                                           filters={'visibility': 'shared'})
        self.assertEqual(0, len(images))

        # filter by visibility
        images = self.db_api.image_get_all(ctxt2, member_status='pending',
                                           filters={'visibility': 'shared'})
        self.assertEqual(1, len(images))

        # filter by visibility
        images = self.db_api.image_get_all(ctxt2, member_status='all',
                                           filters={'visibility': 'shared'})
        self.assertEqual(1, len(images))

        # filter by status pending
        images = self.db_api.image_get_all(ctxt2, member_status='pending')
        self.assertEqual(4, len(images))

        # filter by status all
        images = self.db_api.image_get_all(ctxt2, member_status='all')
        self.assertEqual(4, len(images))

    def test_is_image_visible(self):
        TENANT1 = str(uuid.uuid4())
        TENANT2 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False, tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1,
                                       owner_is_tenant=True)
        ctxt2 = context.RequestContext(is_admin=False, user=TENANT2,
                                       auth_token='user:%s:user' % TENANT2,
                                       owner_is_tenant=False)
        UUIDX = str(uuid.uuid4())
        # We need a shared image and context.owner should not match image
        # owner
        image = self.db_api.image_create(ctxt1, {'id': UUIDX,
                                                 'status': 'queued',
                                                 'is_public': False,
                                                 'owner': TENANT1})

        values = {'image_id': UUIDX, 'member': TENANT2, 'can_share': False}
        self.db_api.image_member_create(ctxt1, values)

        result = self.db_api.is_image_visible(ctxt2, image)
        self.assertTrue(result)

        # image should not be visible for a deleted member
        members = self.db_api.image_member_find(ctxt1, image_id=UUIDX)
        self.db_api.image_member_delete(ctxt1, members[0]['id'])

        result = self.db_api.is_image_visible(ctxt2, image)
        self.assertFalse(result)

    def test_is_community_image_visible(self):
        TENANT1 = str(uuid.uuid4())
        TENANT2 = str(uuid.uuid4())
        owners_ctxt = context.RequestContext(
            is_admin=False, tenant=TENANT1,
            auth_token='user:%s:user' % TENANT1,
            owner_is_tenant=True)
        viewing_ctxt = context.RequestContext(
            is_admin=False, user=TENANT2,
            auth_token='user:%s:user' % TENANT2,
            owner_is_tenant=False)
        UUIDX = str(uuid.uuid4())
        # We need a community image and context.owner should not match image
        # owner
        image = self.db_api.image_create(owners_ctxt,
                                         {'id': UUIDX,
                                          'status': 'queued',
                                          'visibility': 'community',
                                          'owner': TENANT1})

        # image should be visible in every context
        result = self.db_api.is_image_visible(owners_ctxt, image)
        self.assertTrue(result)

        result = self.db_api.is_image_visible(viewing_ctxt, image)
        self.assertTrue(result)

    def test_image_tag_create(self):
        tag = self.db_api.image_tag_create(self.context, UUID1, 'snap')
        self.assertEqual('snap', tag)

    def test_image_tag_create_bad_value(self):
        self.assertRaises(exception.Invalid,
                          self.db_api.image_tag_create,
                          self.context, UUID1, u'Bad \U0001f62a')

    def test_image_tag_set_all(self):
        tags = self.db_api.image_tag_get_all(self.context, UUID1)
        self.assertEqual([], tags)
        self.db_api.image_tag_set_all(self.context, UUID1, ['ping', 'pong'])

        tags = self.db_api.image_tag_get_all(self.context, UUID1)
        # NOTE(bcwaldon): tag ordering should match exactly what was provided
        self.assertEqual(['ping', 'pong'], tags)

    def test_image_tag_get_all(self):
        self.db_api.image_tag_create(self.context, UUID1, 'snap')
        self.db_api.image_tag_create(self.context, UUID1, 'snarf')
        self.db_api.image_tag_create(self.context, UUID2, 'snarf')

        # Check the tags for the first image
        tags = self.db_api.image_tag_get_all(self.context, UUID1)
        expected = ['snap', 'snarf']
        self.assertEqual(expected, tags)

        # Check the tags for the second image
        tags = self.db_api.image_tag_get_all(self.context, UUID2)
        expected = ['snarf']
        self.assertEqual(expected, tags)

    def test_image_tag_get_all_no_tags(self):
        actual = self.db_api.image_tag_get_all(self.context, UUID1)
        self.assertEqual([], actual)

    def test_image_tag_get_all_non_existent_image(self):
        bad_image_id = str(uuid.uuid4())
        actual = self.db_api.image_tag_get_all(self.context, bad_image_id)
        self.assertEqual([], actual)

    def test_image_tag_delete(self):
        self.db_api.image_tag_create(self.context, UUID1, 'snap')
        self.db_api.image_tag_delete(self.context, UUID1, 'snap')
        self.assertRaises(exception.NotFound,
                          self.db_api.image_tag_delete,
                          self.context, UUID1, 'snap')

    @mock.patch.object(timeutils, 'utcnow')
    def test_image_member_create(self, mock_utcnow):
        mock_utcnow.return_value = datetime.datetime.utcnow()
        memberships = self.db_api.image_member_find(self.context)
        self.assertEqual([], memberships)

        TENANT1 = str(uuid.uuid4())
        # NOTE(flaper87): Update auth token, otherwise
        # non visible members won't be returned.
        self.context.auth_token = 'user:%s:user' % TENANT1
        self.db_api.image_member_create(self.context,
                                        {'member': TENANT1,
                                         'image_id': UUID1})

        memberships = self.db_api.image_member_find(self.context)
        self.assertEqual(1, len(memberships))
        actual = memberships[0]
        self.assertIsNotNone(actual['created_at'])
        self.assertIsNotNone(actual['updated_at'])
        actual.pop('id')
        actual.pop('created_at')
        actual.pop('updated_at')
        expected = {
            'member': TENANT1,
            'image_id': UUID1,
            'can_share': False,
            'status': 'pending',
            'deleted': False,
        }
        self.assertEqual(expected, actual)

    def test_image_member_update(self):
        TENANT1 = str(uuid.uuid4())
        # NOTE(flaper87): Update auth token, otherwise
        # non visible members won't be returned.
        self.context.auth_token = 'user:%s:user' % TENANT1
        member = self.db_api.image_member_create(self.context,
                                                 {'member': TENANT1,
                                                  'image_id': UUID1})
        member_id = member.pop('id')
        member.pop('created_at')
        member.pop('updated_at')
        expected = {'member': TENANT1, 'image_id': UUID1,
                    'status': 'pending', 'can_share': False,
                    'deleted': False}
        self.assertEqual(expected, member)

        member = self.db_api.image_member_update(self.context,
                                                 member_id,
                                                 {'can_share': True})
        self.assertNotEqual(member['created_at'], member['updated_at'])
        member.pop('id')
        member.pop('created_at')
        member.pop('updated_at')
        expected = {'member': TENANT1, 'image_id': UUID1,
                    'status': 'pending', 'can_share': True,
                    'deleted': False}
        self.assertEqual(expected, member)

        members = self.db_api.image_member_find(self.context,
                                                member=TENANT1,
                                                image_id=UUID1)
        member = members[0]
        member.pop('id')
        member.pop('created_at')
        member.pop('updated_at')
        self.assertEqual(expected, member)

    def test_image_member_update_status(self):
        TENANT1 = str(uuid.uuid4())
        # NOTE(flaper87): Update auth token, otherwise
        # non visible members won't be returned.
        self.context.auth_token = 'user:%s:user' % TENANT1
        member = self.db_api.image_member_create(self.context,
                                                 {'member': TENANT1,
                                                  'image_id': UUID1})
        member_id = member.pop('id')
        member.pop('created_at')
        member.pop('updated_at')
        expected = {'member': TENANT1, 'image_id': UUID1,
                    'status': 'pending', 'can_share': False,
                    'deleted': False}
        self.assertEqual(expected, member)

        member = self.db_api.image_member_update(self.context,
                                                 member_id,
                                                 {'status': 'accepted'})
        self.assertNotEqual(member['created_at'], member['updated_at'])
        member.pop('id')
        member.pop('created_at')
        member.pop('updated_at')
        expected = {'member': TENANT1, 'image_id': UUID1,
                    'status': 'accepted', 'can_share': False,
                    'deleted': False}
        self.assertEqual(expected, member)

        members = self.db_api.image_member_find(self.context,
                                                member=TENANT1,
                                                image_id=UUID1)
        member = members[0]
        member.pop('id')
        member.pop('created_at')
        member.pop('updated_at')
        self.assertEqual(expected, member)

    def test_image_member_find(self):
        TENANT1 = str(uuid.uuid4())
        TENANT2 = str(uuid.uuid4())
        fixtures = [
            {'member': TENANT1, 'image_id': UUID1},
            {'member': TENANT1, 'image_id': UUID2, 'status': 'rejected'},
            {'member': TENANT2, 'image_id': UUID1, 'status': 'accepted'},
        ]
        for f in fixtures:
            self.db_api.image_member_create(self.context, copy.deepcopy(f))

        def _assertMemberListMatch(list1, list2):
            _simple = lambda x: set([(o['member'], o['image_id'])
                                     for o in x])
            self.assertEqual(_simple(list1), _simple(list2))

        # NOTE(flaper87): Update auth token, otherwise
        # non visible members won't be returned.
        self.context.auth_token = 'user:%s:user' % TENANT1
        output = self.db_api.image_member_find(self.context, member=TENANT1)
        _assertMemberListMatch([fixtures[0], fixtures[1]], output)

        output = self.db_api.image_member_find(self.adm_context,
                                               image_id=UUID1)
        _assertMemberListMatch([fixtures[0], fixtures[2]], output)

        # NOTE(flaper87): Update auth token, otherwise
        # non visible members won't be returned.
        self.context.auth_token = 'user:%s:user' % TENANT2
        output = self.db_api.image_member_find(self.context,
                                               member=TENANT2,
                                               image_id=UUID1)
        _assertMemberListMatch([fixtures[2]], output)

        output = self.db_api.image_member_find(self.context,
                                               status='accepted')
        _assertMemberListMatch([fixtures[2]], output)

        # NOTE(flaper87): Update auth token, otherwise
        # non visible members won't be returned.
        self.context.auth_token = 'user:%s:user' % TENANT1
        output = self.db_api.image_member_find(self.context,
                                               status='rejected')
        _assertMemberListMatch([fixtures[1]], output)

        output = self.db_api.image_member_find(self.context,
                                               status='pending')
        _assertMemberListMatch([fixtures[0]], output)

        output = self.db_api.image_member_find(self.context,
                                               status='pending',
                                               image_id=UUID2)
        _assertMemberListMatch([], output)

        image_id = str(uuid.uuid4())
        output = self.db_api.image_member_find(self.context,
                                               member=TENANT2,
                                               image_id=image_id)
        _assertMemberListMatch([], output)

    def test_image_member_count(self):
        TENANT1 = str(uuid.uuid4())
        self.db_api.image_member_create(self.context,
                                        {'member': TENANT1,
                                         'image_id': UUID1})

        actual = self.db_api.image_member_count(self.context, UUID1)
        self.assertEqual(1, actual)

    def test_image_member_count_invalid_image_id(self):
        TENANT1 = str(uuid.uuid4())
        self.db_api.image_member_create(self.context,
                                        {'member': TENANT1,
                                         'image_id': UUID1})

        self.assertRaises(exception.Invalid,
                          self.db_api.image_member_count,
                          self.context, None)

    def test_image_member_count_empty_image_id(self):
        TENANT1 = str(uuid.uuid4())
        self.db_api.image_member_create(self.context,
                                        {'member': TENANT1,
                                         'image_id': UUID1})

        self.assertRaises(exception.Invalid,
                          self.db_api.image_member_count,
                          self.context, "")

    def test_image_member_delete(self):
        TENANT1 = str(uuid.uuid4())
        # NOTE(flaper87): Update auth token, otherwise
        # non visible members won't be returned.
        self.context.auth_token = 'user:%s:user' % TENANT1
        fixture = {'member': TENANT1, 'image_id': UUID1, 'can_share': True}
        member = self.db_api.image_member_create(self.context, fixture)
        self.assertEqual(1, len(self.db_api.image_member_find(self.context)))
        member = self.db_api.image_member_delete(self.context, member['id'])
        self.assertEqual(0, len(self.db_api.image_member_find(self.context)))


class DriverQuotaTests(test_utils.BaseTestCase):

    def setUp(self):
        super(DriverQuotaTests, self).setUp()
        self.owner_id1 = str(uuid.uuid4())
        self.context1 = context.RequestContext(
            is_admin=False, user=self.owner_id1, tenant=self.owner_id1,
            auth_token='%s:%s:user' % (self.owner_id1, self.owner_id1))
        self.db_api = db_tests.get_db(self.config)
        db_tests.reset_db(self.db_api)
        dt1 = timeutils.utcnow()
        dt2 = dt1 + datetime.timedelta(microseconds=5)
        fixtures = [
            {
                'id': UUID1,
                'created_at': dt1,
                'updated_at': dt1,
                'size': 13,
                'owner': self.owner_id1,
            },
            {
                'id': UUID2,
                'created_at': dt1,
                'updated_at': dt2,
                'size': 17,
                'owner': self.owner_id1,
            },
            {
                'id': UUID3,
                'created_at': dt2,
                'updated_at': dt2,
                'size': 7,
                'owner': self.owner_id1,
            },
        ]
        self.owner1_fixtures = [
            build_image_fixture(**fixture) for fixture in fixtures]

        for fixture in self.owner1_fixtures:
            self.db_api.image_create(self.context1, fixture)

    def test_storage_quota(self):
        total = reduce(lambda x, y: x + y,
                       [f['size'] for f in self.owner1_fixtures])
        x = self.db_api.user_get_storage_usage(self.context1, self.owner_id1)
        self.assertEqual(total, x)

    def test_storage_quota_without_image_id(self):
        total = reduce(lambda x, y: x + y,
                       [f['size'] for f in self.owner1_fixtures])
        total = total - self.owner1_fixtures[0]['size']
        x = self.db_api.user_get_storage_usage(
            self.context1, self.owner_id1,
            image_id=self.owner1_fixtures[0]['id'])
        self.assertEqual(total, x)

    def test_storage_quota_multiple_locations(self):
        dt1 = timeutils.utcnow()
        sz = 53
        new_fixture_dict = {'id': str(uuid.uuid4()), 'created_at': dt1,
                            'updated_at': dt1, 'size': sz,
                            'owner': self.owner_id1}
        new_fixture = build_image_fixture(**new_fixture_dict)
        new_fixture['locations'].append({'url': 'file:///some/path/file',
                                         'metadata': {},
                                         'status': 'active'})
        self.db_api.image_create(self.context1, new_fixture)

        total = reduce(lambda x, y: x + y,
                       [f['size'] for f in self.owner1_fixtures]) + (sz * 2)
        x = self.db_api.user_get_storage_usage(self.context1, self.owner_id1)
        self.assertEqual(total, x)

    def test_storage_quota_deleted_image(self):
        # NOTE(flaper87): This needs to be tested for
        # soft deleted images as well. Currently there's no
        # good way to delete locations.
        dt1 = timeutils.utcnow()
        sz = 53
        image_id = str(uuid.uuid4())
        new_fixture_dict = {'id': image_id, 'created_at': dt1,
                            'updated_at': dt1, 'size': sz,
                            'owner': self.owner_id1}
        new_fixture = build_image_fixture(**new_fixture_dict)
        new_fixture['locations'].append({'url': 'file:///some/path/file',
                                         'metadata': {},
                                         'status': 'active'})
        self.db_api.image_create(self.context1, new_fixture)

        total = reduce(lambda x, y: x + y,
                       [f['size'] for f in self.owner1_fixtures])
        x = self.db_api.user_get_storage_usage(self.context1, self.owner_id1)
        self.assertEqual(total + (sz * 2), x)

        self.db_api.image_destroy(self.context1, image_id)
        x = self.db_api.user_get_storage_usage(self.context1, self.owner_id1)
        self.assertEqual(total, x)


class TaskTests(test_utils.BaseTestCase):

    def setUp(self):
        super(TaskTests, self).setUp()
        self.admin_id = 'admin'
        self.owner_id = 'user'
        self.adm_context = context.RequestContext(
            is_admin=True, auth_token='user:admin:admin',
            tenant=self.admin_id)
        self.context = context.RequestContext(
            is_admin=False, auth_token='user:user:user',
            user=self.owner_id)
        self.db_api = db_tests.get_db(self.config)
        self.fixtures = self.build_task_fixtures()
        db_tests.reset_db(self.db_api)

    def build_task_fixtures(self):
        self.context.tenant = str(uuid.uuid4())
        fixtures = [
            {
                'owner': self.context.owner,
                'type': 'import',
                'input': {'import_from': 'file:///a.img',
                          'import_from_format': 'qcow2',
                          'image_properties': {
                              "name": "GreatStack 1.22",
                              "tags": ["lamp", "custom"]
                          }},
            },
            {
                'owner': self.context.owner,
                'type': 'import',
                'input': {'import_from': 'file:///b.img',
                          'import_from_format': 'qcow2',
                          'image_properties': {
                              "name": "GreatStack 1.23",
                              "tags": ["lamp", "good"]
                          }},
            },
            {
                'owner': self.context.owner,
                "type": "export",
                "input": {
                    "export_uuid": "deadbeef-dead-dead-dead-beefbeefbeef",
                    "export_to": "swift://cloud.foo/myaccount/mycontainer/path",
                    "export_format": "qcow2"
                }
            },
        ]
        return [build_task_fixture(**fixture) for fixture in fixtures]

    def test_task_get_all_with_filter(self):
        for fixture in self.fixtures:
            self.db_api.task_create(self.adm_context,
                                    build_task_fixture(**fixture))

        import_tasks = self.db_api.task_get_all(self.adm_context,
                                                filters={'type': 'import'})
        self.assertTrue(import_tasks)
        self.assertEqual(2, len(import_tasks))

        for task in import_tasks:
            self.assertEqual('import', task['type'])
            self.assertEqual(self.context.owner, task['owner'])

    def test_task_get_all_as_admin(self):
        tasks = []
        for fixture in self.fixtures:
            task = self.db_api.task_create(self.adm_context,
                                           build_task_fixture(**fixture))
            tasks.append(task)
        import_tasks = self.db_api.task_get_all(self.adm_context)
        self.assertTrue(import_tasks)
        self.assertEqual(3, len(import_tasks))

    def test_task_get_all_marker(self):
        for fixture in self.fixtures:
            self.db_api.task_create(self.adm_context,
                                    build_task_fixture(**fixture))
        tasks = self.db_api.task_get_all(self.adm_context, sort_key='id')
        task_ids = [t['id'] for t in tasks]
        tasks = self.db_api.task_get_all(self.adm_context, sort_key='id',
                                         marker=task_ids[0])
        self.assertEqual(2, len(tasks))

    def test_task_get_all_limit(self):
        for fixture in self.fixtures:
            self.db_api.task_create(self.adm_context,
                                    build_task_fixture(**fixture))
        tasks = self.db_api.task_get_all(self.adm_context, limit=2)
        self.assertEqual(2, len(tasks))

        # A limit of None should not equate to zero
        tasks = self.db_api.task_get_all(self.adm_context, limit=None)
        self.assertEqual(3, len(tasks))

        # A limit of zero should actually mean zero
        tasks = self.db_api.task_get_all(self.adm_context, limit=0)
        self.assertEqual(0, len(tasks))

    def test_task_get_all_owned(self):
        then = timeutils.utcnow() + datetime.timedelta(days=365)
        TENANT1 = str(uuid.uuid4())
        ctxt1 = context.RequestContext(is_admin=False,
                                       tenant=TENANT1,
                                       auth_token='user:%s:user' % TENANT1)
        task_values = {'type': 'import', 'status': 'pending',
                       'input': '{"loc": "fake"}', 'owner': TENANT1,
                       'expires_at': then}
        self.db_api.task_create(ctxt1, task_values)

        TENANT2 = str(uuid.uuid4())
        ctxt2 = context.RequestContext(is_admin=False,
                                       tenant=TENANT2,
                                       auth_token='user:%s:user' % TENANT2)
        task_values = {'type': 'export', 'status': 'pending',
                       'input': '{"loc": "fake"}', 'owner': TENANT2,
                       'expires_at': then}
        self.db_api.task_create(ctxt2, task_values)

        tasks = self.db_api.task_get_all(ctxt1)

        task_owners = set([task['owner'] for task in tasks])
        expected = set([TENANT1])
        self.assertEqual(sorted(expected), sorted(task_owners))

    def test_task_get(self):
        expires_at = timeutils.utcnow()
        image_id = str(uuid.uuid4())
        fixture = {
            'owner': self.context.owner,
            'type': 'import',
            'status': 'pending',
            'input': '{"loc": "fake"}',
            'result': "{'image_id': %s}" % image_id,
            'message': 'blah',
            'expires_at': expires_at
        }

        task = self.db_api.task_create(self.adm_context, fixture)

        self.assertIsNotNone(task)
        self.assertIsNotNone(task['id'])

        task_id = task['id']
        task = self.db_api.task_get(self.adm_context, task_id)

        self.assertIsNotNone(task)
        self.assertEqual(task_id, task['id'])
        self.assertEqual(self.context.owner, task['owner'])
        self.assertEqual('import', task['type'])
        self.assertEqual('pending', task['status'])
        self.assertEqual(fixture['input'], task['input'])
        self.assertEqual(fixture['result'], task['result'])
        self.assertEqual(fixture['message'], task['message'])
        self.assertEqual(expires_at, task['expires_at'])

    def test_task_get_all(self):
        now = timeutils.utcnow()
        then = now + datetime.timedelta(days=365)
        image_id = str(uuid.uuid4())
        fixture1 = {
            'owner': self.context.owner,
            'type': 'import',
            'status': 'pending',
            'input': '{"loc": "fake_1"}',
            'result': "{'image_id': %s}" % image_id,
            'message': 'blah_1',
            'expires_at': then,
            'created_at': now,
            'updated_at': now
        }
        fixture2 = {
            'owner': self.context.owner,
            'type': 'import',
            'status': 'pending',
            'input': '{"loc": "fake_2"}',
            'result': "{'image_id': %s}" % image_id,
            'message': 'blah_2',
            'expires_at': then,
            'created_at': now,
            'updated_at': now
        }

        task1 = self.db_api.task_create(self.adm_context, fixture1)
        task2 = self.db_api.task_create(self.adm_context, fixture2)
        self.assertIsNotNone(task1)
        self.assertIsNotNone(task2)

        task1_id = task1['id']
        task2_id = task2['id']
        task_fixtures = {task1_id: fixture1, task2_id: fixture2}
        tasks = self.db_api.task_get_all(self.adm_context)

        self.assertEqual(2, len(tasks))
        self.assertEqual(set((tasks[0]['id'], tasks[1]['id'])),
                         set((task1_id, task2_id)))

        for task in tasks:
            fixture = task_fixtures[task['id']]
            self.assertEqual(self.context.owner, task['owner'])
            self.assertEqual(fixture['type'], task['type'])
            self.assertEqual(fixture['status'], task['status'])
            self.assertEqual(fixture['expires_at'], task['expires_at'])
            self.assertFalse(task['deleted'])
            self.assertIsNone(task['deleted_at'])
            self.assertEqual(fixture['created_at'], task['created_at'])
            self.assertEqual(fixture['updated_at'], task['updated_at'])
            task_details_keys = ['input', 'message', 'result']
            for key in task_details_keys:
                self.assertNotIn(key, task)

    def test_task_soft_delete(self):
        now = timeutils.utcnow()
        then = now + datetime.timedelta(days=365)
        fixture1 = build_task_fixture(id='1', expires_at=now,
                                      owner=self.adm_context.owner)
        fixture2 = build_task_fixture(id='2', expires_at=now,
                                      owner=self.adm_context.owner)
        fixture3 = build_task_fixture(id='3', expires_at=then,
                                      owner=self.adm_context.owner)
        fixture4 = build_task_fixture(id='4', expires_at=then,
                                      owner=self.adm_context.owner)
        task1 = self.db_api.task_create(self.adm_context, fixture1)
        task2 = self.db_api.task_create(self.adm_context, fixture2)
        task3 = self.db_api.task_create(self.adm_context, fixture3)
        task4 = self.db_api.task_create(self.adm_context, fixture4)

        self.assertIsNotNone(task1)
        self.assertIsNotNone(task2)
        self.assertIsNotNone(task3)
        self.assertIsNotNone(task4)

        tasks = self.db_api.task_get_all(
            self.adm_context, sort_key='id', sort_dir='asc')
        self.assertEqual(4, len(tasks))
        self.assertTrue(tasks[0]['deleted'])
        self.assertTrue(tasks[1]['deleted'])
        self.assertFalse(tasks[2]['deleted'])
        self.assertFalse(tasks[3]['deleted'])

    def test_task_create(self):
        task_id = str(uuid.uuid4())
        self.context.tenant = self.context.owner
        values = {
            'id': task_id,
            'owner': self.context.owner,
            'type': 'export',
            'status': 'pending',
        }
        task_values = build_task_fixture(**values)
        task = self.db_api.task_create(self.adm_context, task_values)
        self.assertIsNotNone(task)
        self.assertEqual(task_id, task['id'])
        self.assertEqual(self.context.owner, task['owner'])
        self.assertEqual('export', task['type'])
        self.assertEqual('pending', task['status'])
        self.assertEqual({'ping': 'pong'}, task['input'])

    def test_task_create_with_all_task_info_null(self):
        task_id = str(uuid.uuid4())
        self.context.tenant = str(uuid.uuid4())
        values = {
            'id': task_id,
            'owner': self.context.owner,
            'type': 'export',
            'status': 'pending',
            'input': None,
            'result': None,
            'message': None,
        }
        task_values = build_task_fixture(**values)
        task = self.db_api.task_create(self.adm_context, task_values)
        self.assertIsNotNone(task)
        self.assertEqual(task_id, task['id'])
        self.assertEqual(self.context.owner, task['owner'])
        self.assertEqual('export', task['type'])
        self.assertEqual('pending', task['status'])
        self.assertIsNone(task['input'])
        self.assertIsNone(task['result'])
        self.assertIsNone(task['message'])

    def test_task_update(self):
        self.context.tenant = str(uuid.uuid4())
        result = {'foo': 'bar'}
        task_values = build_task_fixture(owner=self.context.owner,
                                         result=result)
        task = self.db_api.task_create(self.adm_context, task_values)
task_id = task['id'] fixture = { 'status': 'processing', 'message': 'This is a error string', } task = self.db_api.task_update(self.adm_context, task_id, fixture) self.assertEqual(task_id, task['id']) self.assertEqual(self.context.owner, task['owner']) self.assertEqual('import', task['type']) self.assertEqual('processing', task['status']) self.assertEqual({'ping': 'pong'}, task['input']) self.assertEqual(result, task['result']) self.assertEqual('This is a error string', task['message']) self.assertFalse(task['deleted']) self.assertIsNone(task['deleted_at']) self.assertIsNone(task['expires_at']) self.assertEqual(task_values['created_at'], task['created_at']) self.assertGreater(task['updated_at'], task['created_at']) def test_task_update_with_all_task_info_null(self): self.context.tenant = str(uuid.uuid4()) task_values = build_task_fixture(owner=self.context.owner, input=None, result=None, message=None) task = self.db_api.task_create(self.adm_context, task_values) task_id = task['id'] fixture = {'status': 'processing'} task = self.db_api.task_update(self.adm_context, task_id, fixture) self.assertEqual(task_id, task['id']) self.assertEqual(self.context.owner, task['owner']) self.assertEqual('import', task['type']) self.assertEqual('processing', task['status']) self.assertIsNone(task['input']) self.assertIsNone(task['result']) self.assertIsNone(task['message']) self.assertFalse(task['deleted']) self.assertIsNone(task['deleted_at']) self.assertIsNone(task['expires_at']) self.assertEqual(task_values['created_at'], task['created_at']) self.assertGreater(task['updated_at'], task['created_at']) def test_task_delete(self): task_values = build_task_fixture(owner=self.context.owner) task = self.db_api.task_create(self.adm_context, task_values) self.assertIsNotNone(task) self.assertFalse(task['deleted']) self.assertIsNone(task['deleted_at']) task_id = task['id'] self.db_api.task_delete(self.adm_context, task_id) self.assertRaises(exception.TaskNotFound, self.db_api.task_get, 
self.context, task_id) def test_task_delete_as_admin(self): task_values = build_task_fixture(owner=self.context.owner) task = self.db_api.task_create(self.adm_context, task_values) self.assertIsNotNone(task) self.assertFalse(task['deleted']) self.assertIsNone(task['deleted_at']) task_id = task['id'] self.db_api.task_delete(self.adm_context, task_id) del_task = self.db_api.task_get(self.adm_context, task_id, force_show_deleted=True) self.assertIsNotNone(del_task) self.assertEqual(task_id, del_task['id']) self.assertTrue(del_task['deleted']) self.assertIsNotNone(del_task['deleted_at']) class DBPurgeTests(test_utils.BaseTestCase): def setUp(self): super(DBPurgeTests, self).setUp() self.adm_context = context.get_admin_context(show_deleted=True) self.db_api = db_tests.get_db(self.config) db_tests.reset_db(self.db_api) self.image_fixtures, self.task_fixtures = self.build_fixtures() self.create_tasks(self.task_fixtures) self.create_images(self.image_fixtures) def build_fixtures(self): dt1 = timeutils.utcnow() - datetime.timedelta(days=5) dt2 = dt1 + datetime.timedelta(days=1) dt3 = dt2 + datetime.timedelta(days=1) fixtures = [ { 'created_at': dt1, 'updated_at': dt1, 'deleted_at': dt3, 'deleted': True, }, { 'created_at': dt1, 'updated_at': dt2, 'deleted_at': timeutils.utcnow(), 'deleted': True, }, { 'created_at': dt2, 'updated_at': dt2, 'deleted_at': None, 'deleted': False, }, ] return ( [build_image_fixture(**fixture) for fixture in fixtures], [build_task_fixture(**fixture) for fixture in fixtures], ) def create_images(self, images): for fixture in images: self.db_api.image_create(self.adm_context, fixture) def create_tasks(self, tasks): for fixture in tasks: self.db_api.task_create(self.adm_context, fixture) def test_db_purge(self): self.db_api.purge_deleted_rows(self.adm_context, 1, 5) images = self.db_api.image_get_all(self.adm_context) self.assertEqual(len(images), 2) tasks = self.db_api.task_get_all(self.adm_context) self.assertEqual(len(tasks), 2) def 
test_purge_fk_constraint_failure(self): """Test foreign key constraint failure Verify that a foreign key constraint failure during a purge operation raises DBReferenceError. """ session = db_api.get_session() engine = db_api.get_engine() connection = engine.connect() dialect = engine.url.get_dialect() if dialect == sqlite.dialect: # We're seeing issues with foreign key support in SQLite 3.6.20 # SQLAlchemy doesn't support it at all with SQLite < 3.6.19 # It works fine in SQLite 3.7. # So return early to skip this test if running SQLite < 3.7 if test_utils.is_sqlite_version_prior_to(3, 7): self.skipTest( 'sqlite version too old for reliable SQLA foreign_keys') # This is required for enforcing Foreign Key Constraint # in SQLite 3.x connection.execute("PRAGMA foreign_keys = ON") images = sqlalchemyutils.get_table( engine, "images") image_tags = sqlalchemyutils.get_table( engine, "image_tags") # Add a 4th row in images table and set it deleted 15 days ago uuidstr = uuid.uuid4().hex created_time = timeutils.utcnow() - datetime.timedelta(days=20) deleted_time = created_time + datetime.timedelta(days=5) images_row_fixture = { 'id': uuidstr, 'status': 'status', 'created_at': created_time, 'deleted_at': deleted_time, 'deleted': 1, 'visibility': 'public', 'min_disk': 1, 'min_ram': 1, 'protected': 0 } ins_stmt = images.insert().values(**images_row_fixture) connection.execute(ins_stmt) # Add a record in image_tags referencing the above images record # but do not set it as deleted image_tags_row_fixture = { 'image_id': uuidstr, 'value': 'tag_value', 'created_at': created_time, 'deleted': 0 } ins_stmt = image_tags.insert().values(**image_tags_row_fixture) connection.execute(ins_stmt) # Purge all records deleted at least 10 days ago self.assertRaises(db_exception.DBReferenceError, db_api.purge_deleted_rows, self.adm_context, age_in_days=10, max_rows=50) # Verify that no records from images have been deleted # due to DBReferenceError being raised images_rows =
session.query(images).count() self.assertEqual(4, images_rows) class TestVisibility(test_utils.BaseTestCase): def setUp(self): super(TestVisibility, self).setUp() self.db_api = db_tests.get_db(self.config) db_tests.reset_db(self.db_api) self.setup_tenants() self.setup_contexts() self.fixtures = self.build_image_fixtures() self.create_images(self.fixtures) def setup_tenants(self): self.admin_tenant = str(uuid.uuid4()) self.tenant1 = str(uuid.uuid4()) self.tenant2 = str(uuid.uuid4()) def setup_contexts(self): self.admin_context = context.RequestContext( is_admin=True, tenant=self.admin_tenant) self.admin_none_context = context.RequestContext( is_admin=True, tenant=None) self.tenant1_context = context.RequestContext(tenant=self.tenant1) self.tenant2_context = context.RequestContext(tenant=self.tenant2) self.none_context = context.RequestContext(tenant=None) def build_image_fixtures(self): fixtures = [] owners = { 'Unowned': None, 'Admin Tenant': self.admin_tenant, 'Tenant 1': self.tenant1, 'Tenant 2': self.tenant2, } visibilities = ['community', 'private', 'public', 'shared'] for owner_label, owner in owners.items(): for visibility in visibilities: fixture = { 'name': '%s, %s' % (owner_label, visibility), 'owner': owner, 'visibility': visibility, } fixtures.append(fixture) return [build_image_fixture(**f) for f in fixtures] def create_images(self, images): for fixture in images: self.db_api.image_create(self.admin_context, fixture) class VisibilityTests(object): def test_unknown_admin_sees_all_but_community(self): images = self.db_api.image_get_all(self.admin_none_context) self.assertEqual(12, len(images)) def test_unknown_admin_is_public_true(self): images = self.db_api.image_get_all(self.admin_none_context, is_public=True) self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_unknown_admin_is_public_false(self): images = self.db_api.image_get_all(self.admin_none_context, is_public=False) self.assertEqual(8, 
len(images)) for i in images: self.assertTrue(i['visibility'] in ['shared', 'private']) def test_unknown_admin_is_public_none(self): images = self.db_api.image_get_all(self.admin_none_context) self.assertEqual(12, len(images)) def test_unknown_admin_visibility_public(self): images = self.db_api.image_get_all(self.admin_none_context, filters={'visibility': 'public'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_unknown_admin_visibility_shared(self): images = self.db_api.image_get_all(self.admin_none_context, filters={'visibility': 'shared'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('shared', i['visibility']) def test_unknown_admin_visibility_private(self): images = self.db_api.image_get_all(self.admin_none_context, filters={'visibility': 'private'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('private', i['visibility']) def test_unknown_admin_visibility_community(self): images = self.db_api.image_get_all(self.admin_none_context, filters={'visibility': 'community'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('community', i['visibility']) def test_known_admin_sees_all_but_others_community_images(self): images = self.db_api.image_get_all(self.admin_context) self.assertEqual(13, len(images)) def test_known_admin_is_public_true(self): images = self.db_api.image_get_all(self.admin_context, is_public=True) self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_known_admin_is_public_false(self): images = self.db_api.image_get_all(self.admin_context, is_public=False) self.assertEqual(9, len(images)) for i in images: self.assertTrue(i['visibility'] in ['shared', 'private', 'community']) def test_known_admin_is_public_none(self): images = self.db_api.image_get_all(self.admin_context) self.assertEqual(13, len(images)) def test_admin_as_user_true(self): images = 
self.db_api.image_get_all(self.admin_context, admin_as_user=True) self.assertEqual(7, len(images)) for i in images: self.assertTrue(('public' == i['visibility']) or i['owner'] == self.admin_tenant) def test_known_admin_visibility_public(self): images = self.db_api.image_get_all(self.admin_context, filters={'visibility': 'public'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_known_admin_visibility_shared(self): images = self.db_api.image_get_all(self.admin_context, filters={'visibility': 'shared'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('shared', i['visibility']) def test_known_admin_visibility_private(self): images = self.db_api.image_get_all(self.admin_context, filters={'visibility': 'private'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('private', i['visibility']) def test_known_admin_visibility_community(self): images = self.db_api.image_get_all(self.admin_context, filters={'visibility': 'community'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('community', i['visibility']) def test_what_unknown_user_sees(self): images = self.db_api.image_get_all(self.none_context) self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_unknown_user_is_public_true(self): images = self.db_api.image_get_all(self.none_context, is_public=True) self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_unknown_user_is_public_false(self): images = self.db_api.image_get_all(self.none_context, is_public=False) self.assertEqual(0, len(images)) def test_unknown_user_is_public_none(self): images = self.db_api.image_get_all(self.none_context) self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_unknown_user_visibility_public(self): images = self.db_api.image_get_all(self.none_context, filters={'visibility': 'public'}) 
self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_unknown_user_visibility_shared(self): images = self.db_api.image_get_all(self.none_context, filters={'visibility': 'shared'}) self.assertEqual(0, len(images)) def test_unknown_user_visibility_private(self): images = self.db_api.image_get_all(self.none_context, filters={'visibility': 'private'}) self.assertEqual(0, len(images)) def test_unknown_user_visibility_community(self): images = self.db_api.image_get_all(self.none_context, filters={'visibility': 'community'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('community', i['visibility']) def test_what_tenant1_sees(self): images = self.db_api.image_get_all(self.tenant1_context) self.assertEqual(7, len(images)) for i in images: if not ('public' == i['visibility']): self.assertEqual(i['owner'], self.tenant1) def test_tenant1_is_public_true(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=True) self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_tenant1_is_public_false(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=False) self.assertEqual(3, len(images)) for i in images: self.assertEqual(i['owner'], self.tenant1) self.assertTrue(i['visibility'] in ['private', 'shared', 'community']) def test_tenant1_is_public_none(self): images = self.db_api.image_get_all(self.tenant1_context) self.assertEqual(7, len(images)) for i in images: if not ('public' == i['visibility']): self.assertEqual(self.tenant1, i['owner']) def test_tenant1_visibility_public(self): images = self.db_api.image_get_all(self.tenant1_context, filters={'visibility': 'public'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('public', i['visibility']) def test_tenant1_visibility_shared(self): images = self.db_api.image_get_all(self.tenant1_context, filters={'visibility': 'shared'}) self.assertEqual(1, len(images)) 
self.assertEqual('shared', images[0]['visibility']) self.assertEqual(self.tenant1, images[0]['owner']) def test_tenant1_visibility_private(self): images = self.db_api.image_get_all(self.tenant1_context, filters={'visibility': 'private'}) self.assertEqual(1, len(images)) self.assertEqual('private', images[0]['visibility']) self.assertEqual(self.tenant1, images[0]['owner']) def test_tenant1_visibility_community(self): images = self.db_api.image_get_all(self.tenant1_context, filters={'visibility': 'community'}) self.assertEqual(4, len(images)) for i in images: self.assertEqual('community', i['visibility']) def _setup_is_public_red_herring(self): values = { 'name': 'Red Herring', 'owner': self.tenant1, 'visibility': 'shared', 'properties': {'is_public': 'silly'} } fixture = build_image_fixture(**values) self.db_api.image_create(self.admin_context, fixture) def test_is_public_is_a_normal_filter_for_admin(self): self._setup_is_public_red_herring() images = self.db_api.image_get_all(self.admin_context, filters={'is_public': 'silly'}) self.assertEqual(1, len(images)) self.assertEqual('Red Herring', images[0]['name']) def test_is_public_is_a_normal_filter_for_user(self): self._setup_is_public_red_herring() images = self.db_api.image_get_all(self.tenant1_context, filters={'is_public': 'silly'}) self.assertEqual(1, len(images)) self.assertEqual('Red Herring', images[0]['name']) # NOTE(markwash): the following tests are sanity checks to make sure # visibility filtering and is_public=(True|False) do not interact in # unexpected ways. However, using both of the filtering techniques # simultaneously is not an anticipated use case. 
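The NOTE above describes the intended semantics: the legacy ``is_public`` flag and the ``visibility`` filter act as independent predicates that simply intersect. A standalone sketch (hypothetical helper, not part of this test suite) of that behavior for the combined-filter cases checked below:

```python
# Illustrative only: models the intersection semantics the sanity checks
# below assert, using plain dicts instead of Glance's DB API.
def visible_images(images, is_public=None, visibility=None):
    result = images
    if is_public is not None:
        # is_public=True matches only 'public'; False matches the rest
        result = [i for i in result
                  if (i['visibility'] == 'public') == is_public]
    if visibility is not None:
        result = [i for i in result if i['visibility'] == visibility]
    return result

# Mirror the fixtures: 4 owners x 4 visibility classes = 16 images
images = [{'visibility': v}
          for v in ('public', 'shared', 'private', 'community') * 4]

# Agreeing filters intersect down to one visibility class (4 images)...
assert len(visible_images(images, is_public=True, visibility='public')) == 4
assert len(visible_images(images, is_public=False, visibility='shared')) == 4
# ...while contradictory filters intersect to the empty set.
assert len(visible_images(images, is_public=False, visibility='public')) == 0
assert len(visible_images(images, is_public=True, visibility='private')) == 0
```

This matches the counts asserted for the admin context in the tests that follow; per-tenant contexts additionally restrict non-public classes by ownership, which the sketch deliberately ignores.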
def test_admin_is_public_true_and_visibility_public(self): images = self.db_api.image_get_all(self.admin_context, is_public=True, filters={'visibility': 'public'}) self.assertEqual(4, len(images)) def test_admin_is_public_false_and_visibility_public(self): images = self.db_api.image_get_all(self.admin_context, is_public=False, filters={'visibility': 'public'}) self.assertEqual(0, len(images)) def test_admin_is_public_true_and_visibility_shared(self): images = self.db_api.image_get_all(self.admin_context, is_public=True, filters={'visibility': 'shared'}) self.assertEqual(0, len(images)) def test_admin_is_public_false_and_visibility_shared(self): images = self.db_api.image_get_all(self.admin_context, is_public=False, filters={'visibility': 'shared'}) self.assertEqual(4, len(images)) def test_admin_is_public_true_and_visibility_private(self): images = self.db_api.image_get_all(self.admin_context, is_public=True, filters={'visibility': 'private'}) self.assertEqual(0, len(images)) def test_admin_is_public_false_and_visibility_private(self): images = self.db_api.image_get_all(self.admin_context, is_public=False, filters={'visibility': 'private'}) self.assertEqual(4, len(images)) def test_admin_is_public_true_and_visibility_community(self): images = self.db_api.image_get_all(self.admin_context, is_public=True, filters={'visibility': 'community'}) self.assertEqual(0, len(images)) def test_admin_is_public_false_and_visibility_community(self): images = self.db_api.image_get_all(self.admin_context, is_public=False, filters={'visibility': 'community'}) self.assertEqual(4, len(images)) def test_tenant1_is_public_true_and_visibility_public(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=True, filters={'visibility': 'public'}) self.assertEqual(4, len(images)) def test_tenant1_is_public_false_and_visibility_public(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=False, filters={'visibility': 'public'}) self.assertEqual(0, 
len(images)) def test_tenant1_is_public_true_and_visibility_shared(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=True, filters={'visibility': 'shared'}) self.assertEqual(0, len(images)) def test_tenant1_is_public_false_and_visibility_shared(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=False, filters={'visibility': 'shared'}) self.assertEqual(1, len(images)) def test_tenant1_is_public_true_and_visibility_private(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=True, filters={'visibility': 'private'}) self.assertEqual(0, len(images)) def test_tenant1_is_public_false_and_visibility_private(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=False, filters={'visibility': 'private'}) self.assertEqual(1, len(images)) def test_tenant1_is_public_true_and_visibility_community(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=True, filters={'visibility': 'community'}) self.assertEqual(0, len(images)) def test_tenant1_is_public_false_and_visibility_community(self): images = self.db_api.image_get_all(self.tenant1_context, is_public=False, filters={'visibility': 'community'}) self.assertEqual(4, len(images)) class TestMembershipVisibility(test_utils.BaseTestCase): def setUp(self): super(TestMembershipVisibility, self).setUp() self.db_api = db_tests.get_db(self.config) db_tests.reset_db(self.db_api) self._create_contexts() self._create_images() def _create_contexts(self): self.owner1, self.owner1_ctx = self._user_fixture() self.owner2, self.owner2_ctx = self._user_fixture() self.tenant1, self.user1_ctx = self._user_fixture() self.tenant2, self.user2_ctx = self._user_fixture() self.tenant3, self.user3_ctx = self._user_fixture() self.admin_tenant, self.admin_ctx = self._user_fixture(admin=True) def _user_fixture(self, admin=False): tenant_id = str(uuid.uuid4()) ctx = context.RequestContext(tenant=tenant_id, is_admin=admin) return tenant_id, ctx def 
_create_images(self): self.image_ids = {} for owner in [self.owner1, self.owner2]: self._create_image('not_shared', owner) self._create_image('shared-with-1', owner, members=[self.tenant1]) self._create_image('shared-with-2', owner, members=[self.tenant2]) self._create_image('shared-with-both', owner, members=[self.tenant1, self.tenant2]) def _create_image(self, name, owner, members=None): image = build_image_fixture(name=name, owner=owner, visibility='shared') self.image_ids[(owner, name)] = image['id'] self.db_api.image_create(self.admin_ctx, image) for member in members or []: member = {'image_id': image['id'], 'member': member} self.db_api.image_member_create(self.admin_ctx, member) class MembershipVisibilityTests(object): def _check_by_member(self, ctx, member_id, expected): members = self.db_api.image_member_find(ctx, member=member_id) images = [self.db_api.image_get(self.admin_ctx, member['image_id']) for member in members] facets = [(image['owner'], image['name']) for image in images] self.assertEqual(set(expected), set(facets)) def test_owner1_finding_user1_memberships(self): """Owner1 should see images it owns that are shared with User1.""" expected = [ (self.owner1, 'shared-with-1'), (self.owner1, 'shared-with-both'), ] self._check_by_member(self.owner1_ctx, self.tenant1, expected) def test_user1_finding_user1_memberships(self): """User1 should see all images shared with User1 """ expected = [ (self.owner1, 'shared-with-1'), (self.owner1, 'shared-with-both'), (self.owner2, 'shared-with-1'), (self.owner2, 'shared-with-both'), ] self._check_by_member(self.user1_ctx, self.tenant1, expected) def test_user2_finding_user1_memberships(self): """User2 should see no images shared with User1 """ expected = [] self._check_by_member(self.user2_ctx, self.tenant1, expected) def test_admin_finding_user1_memberships(self): """Admin should see all images shared with User1 """ expected = [ (self.owner1, 'shared-with-1'), (self.owner1, 'shared-with-both'), (self.owner2, 
'shared-with-1'), (self.owner2, 'shared-with-both'), ] self._check_by_member(self.admin_ctx, self.tenant1, expected) def _check_by_image(self, context, image_id, expected): members = self.db_api.image_member_find(context, image_id=image_id) member_ids = [member['member'] for member in members] self.assertEqual(set(expected), set(member_ids)) def test_owner1_finding_owner1s_image_members(self): """Owner1 should see all memberships of its image """ expected = [self.tenant1, self.tenant2] image_id = self.image_ids[(self.owner1, 'shared-with-both')] self._check_by_image(self.owner1_ctx, image_id, expected) def test_admin_finding_owner1s_image_members(self): """Admin should see all memberships of owner1's image """ expected = [self.tenant1, self.tenant2] image_id = self.image_ids[(self.owner1, 'shared-with-both')] self._check_by_image(self.admin_ctx, image_id, expected) def test_user1_finding_owner1s_image_members(self): """User1 should see its own membership of owner1's image """ expected = [self.tenant1] image_id = self.image_ids[(self.owner1, 'shared-with-both')] self._check_by_image(self.user1_ctx, image_id, expected) def test_user2_finding_owner1s_image_members(self): """User2 should see its own membership of owner1's image """ expected = [self.tenant2] image_id = self.image_ids[(self.owner1, 'shared-with-both')] self._check_by_image(self.user2_ctx, image_id, expected) def test_user3_finding_owner1s_image_members(self): """User3 should see no memberships of owner1's image """ expected = [] image_id = self.image_ids[(self.owner1, 'shared-with-both')] self._check_by_image(self.user3_ctx, image_id, expected) glance-16.0.0/glance/tests/functional/db/test_sqlalchemy.py0000666000175100017510000001423213245511421023752 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg from oslo_db import options from glance.common import exception import glance.db.sqlalchemy.api from glance.db.sqlalchemy import models as db_models from glance.db.sqlalchemy import models_metadef as metadef_models import glance.tests.functional.db as db_tests from glance.tests.functional.db import base from glance.tests.functional.db import base_metadef CONF = cfg.CONF def get_db(config): options.set_defaults(CONF, connection='sqlite://') config(debug=False) db_api = glance.db.sqlalchemy.api return db_api def reset_db(db_api): db_models.unregister_models(db_api.get_engine()) db_models.register_models(db_api.get_engine()) def reset_db_metadef(db_api): metadef_models.unregister_models(db_api.get_engine()) metadef_models.register_models(db_api.get_engine()) class TestSqlAlchemyDriver(base.TestDriver, base.DriverTests, base.FunctionalInitWrapper): def setUp(self): db_tests.load(get_db, reset_db) super(TestSqlAlchemyDriver, self).setUp() self.addCleanup(db_tests.reset) def test_get_image_with_invalid_long_image_id(self): image_id = '343f9ba5-0197-41be-9543-16bbb32e12aa-xxxxxx' self.assertRaises(exception.NotFound, self.db_api._image_get, self.context, image_id) def test_image_tag_delete_with_invalid_long_image_id(self): image_id = '343f9ba5-0197-41be-9543-16bbb32e12aa-xxxxxx' self.assertRaises(exception.NotFound, self.db_api.image_tag_delete, self.context, image_id, 'fake') def test_image_tag_get_all_with_invalid_long_image_id(self): image_id = '343f9ba5-0197-41be-9543-16bbb32e12aa-xxxxxx' self.assertRaises(exception.NotFound, 
self.db_api.image_tag_get_all, self.context, image_id) def test_user_get_storage_usage_with_invalid_long_image_id(self): image_id = '343f9ba5-0197-41be-9543-16bbb32e12aa-xxxxxx' self.assertRaises(exception.NotFound, self.db_api.user_get_storage_usage, self.context, 'fake_owner_id', image_id) class TestSqlAlchemyVisibility(base.TestVisibility, base.VisibilityTests, base.FunctionalInitWrapper): def setUp(self): db_tests.load(get_db, reset_db) super(TestSqlAlchemyVisibility, self).setUp() self.addCleanup(db_tests.reset) class TestSqlAlchemyMembershipVisibility(base.TestMembershipVisibility, base.MembershipVisibilityTests, base.FunctionalInitWrapper): def setUp(self): db_tests.load(get_db, reset_db) super(TestSqlAlchemyMembershipVisibility, self).setUp() self.addCleanup(db_tests.reset) class TestSqlAlchemyDBDataIntegrity(base.TestDriver, base.FunctionalInitWrapper): """Test class for checking the data integrity in the database. Helpful in testing scenarios specific to the sqlalchemy api. """ def setUp(self): db_tests.load(get_db, reset_db) super(TestSqlAlchemyDBDataIntegrity, self).setUp() self.addCleanup(db_tests.reset) def test_paginate_redundant_sort_keys(self): original_method = self.db_api._paginate_query def fake_paginate_query(query, model, limit, sort_keys, marker, sort_dir, sort_dirs): self.assertEqual(['created_at', 'id'], sort_keys) return original_method(query, model, limit, sort_keys, marker, sort_dir, sort_dirs) self.stubs.Set(self.db_api, '_paginate_query', fake_paginate_query) self.db_api.image_get_all(self.context, sort_key=['created_at']) def test_paginate_non_redundant_sort_keys(self): original_method = self.db_api._paginate_query def fake_paginate_query(query, model, limit, sort_keys, marker, sort_dir, sort_dirs): self.assertEqual(['name', 'created_at', 'id'], sort_keys) return original_method(query, model, limit, sort_keys, marker, sort_dir, sort_dirs) self.stubs.Set(self.db_api, '_paginate_query', fake_paginate_query) 
self.db_api.image_get_all(self.context, sort_key=['name']) class TestSqlAlchemyTask(base.TaskTests, base.FunctionalInitWrapper): def setUp(self): db_tests.load(get_db, reset_db) super(TestSqlAlchemyTask, self).setUp() self.addCleanup(db_tests.reset) class TestSqlAlchemyQuota(base.DriverQuotaTests, base.FunctionalInitWrapper): def setUp(self): db_tests.load(get_db, reset_db) super(TestSqlAlchemyQuota, self).setUp() self.addCleanup(db_tests.reset) class TestDBPurge(base.DBPurgeTests, base.FunctionalInitWrapper): def setUp(self): db_tests.load(get_db, reset_db) super(TestDBPurge, self).setUp() self.addCleanup(db_tests.reset) class TestMetadefSqlAlchemyDriver(base_metadef.TestMetadefDriver, base_metadef.MetadefDriverTests, base.FunctionalInitWrapper): def setUp(self): db_tests.load(get_db, reset_db_metadef) super(TestMetadefSqlAlchemyDriver, self).setUp() self.addCleanup(db_tests.reset) glance-16.0.0/glance/tests/functional/db/test_migrations.py0000666000175100017510000001440113245511421023762 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace # Copyright 2016 Intel Corporation # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os from alembic import command as alembic_command from alembic import script as alembic_script from oslo_db.sqlalchemy import test_migrations from oslo_db.tests.sqlalchemy import base as test_base import sqlalchemy.types as types from glance.db.sqlalchemy import alembic_migrations from glance.db.sqlalchemy.alembic_migrations import versions from glance.db.sqlalchemy import models from glance.db.sqlalchemy import models_metadef import glance.tests.utils as test_utils class AlembicMigrationsMixin(object): def _get_revisions(self, config, head=None): head = head or 'heads' scripts_dir = alembic_script.ScriptDirectory.from_config(config) revisions = list(scripts_dir.walk_revisions(base='base', head=head)) revisions = list(reversed(revisions)) revisions = [rev.revision for rev in revisions] return revisions def _migrate_up(self, config, engine, revision, with_data=False): if with_data: data = None pre_upgrade = getattr(self, '_pre_upgrade_%s' % revision, None) if pre_upgrade: data = pre_upgrade(engine) alembic_command.upgrade(config, revision) if with_data: check = getattr(self, '_check_%s' % revision, None) if check: check(engine, data) def test_walk_versions(self): alembic_config = alembic_migrations.get_alembic_config(self.engine) for revision in self._get_revisions(alembic_config): self._migrate_up(alembic_config, self.engine, revision, with_data=True) class TestMysqlMigrations(test_base.MySQLOpportunisticTestCase, AlembicMigrationsMixin): def test_mysql_innodb_tables(self): test_utils.db_sync(engine=self.engine) total = self.engine.execute( "SELECT COUNT(*) " "FROM information_schema.TABLES " "WHERE TABLE_SCHEMA='%s'" % self.engine.url.database) self.assertGreater(total.scalar(), 0, "No tables found. 
Wrong schema?") noninnodb = self.engine.execute( "SELECT count(*) " "FROM information_schema.TABLES " "WHERE TABLE_SCHEMA='%s' " "AND ENGINE!='InnoDB' " "AND TABLE_NAME!='migrate_version'" % self.engine.url.database) count = noninnodb.scalar() self.assertEqual(0, count, "%d non InnoDB tables created" % count) class TestPostgresqlMigrations(test_base.PostgreSQLOpportunisticTestCase, AlembicMigrationsMixin): pass class TestSqliteMigrations(test_base.DbTestCase, AlembicMigrationsMixin): pass class TestMigrations(test_base.DbTestCase, test_utils.BaseTestCase): def test_no_downgrade(self): migrate_file = versions.__path__[0] for parent, dirnames, filenames in os.walk(migrate_file): for filename in filenames: if filename.split('.')[1] == 'py': model_name = filename.split('.')[0] model = __import__( 'glance.db.sqlalchemy.alembic_migrations.versions.' + model_name) obj = getattr(getattr(getattr(getattr(getattr( model, 'db'), 'sqlalchemy'), 'alembic_migrations'), 'versions'), model_name) func = getattr(obj, 'downgrade', None) self.assertIsNone(func) class ModelsMigrationSyncMixin(object): def get_metadata(self): for table in models_metadef.BASE_DICT.metadata.sorted_tables: models.BASE.metadata._add_table(table.name, table.schema, table) return models.BASE.metadata def get_engine(self): return self.engine def db_sync(self, engine): test_utils.db_sync(engine=engine) # TODO(akamyshikova): remove this method as soon as comparison with Variant # will be implemented in oslo.db or alembic def compare_type(self, ctxt, insp_col, meta_col, insp_type, meta_type): if isinstance(meta_type, types.Variant): meta_orig_type = meta_col.type insp_orig_type = insp_col.type meta_col.type = meta_type.impl insp_col.type = meta_type.impl try: return self.compare_type(ctxt, insp_col, meta_col, insp_type, meta_type.impl) finally: meta_col.type = meta_orig_type insp_col.type = insp_orig_type else: ret = super(ModelsMigrationSyncMixin, self).compare_type( ctxt, insp_col, meta_col, insp_type, 
meta_type) if ret is not None: return ret return ctxt.impl.compare_type(insp_col, meta_col) def include_object(self, object_, name, type_, reflected, compare_to): if name in ['migrate_version'] and type_ == 'table': return False return True class ModelsMigrationsSyncMysql(ModelsMigrationSyncMixin, test_migrations.ModelsMigrationsSync, test_base.MySQLOpportunisticTestCase): pass class ModelsMigrationsSyncPostgres(ModelsMigrationSyncMixin, test_migrations.ModelsMigrationsSync, test_base.PostgreSQLOpportunisticTestCase): pass class ModelsMigrationsSyncSqlite(ModelsMigrationSyncMixin, test_migrations.ModelsMigrationsSync, test_base.DbTestCase): pass glance-16.0.0/glance/tests/functional/db/__init__.py0000666000175100017510000000206513245511421022311 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # NOTE(markwash): These functions are used in the base tests cases to # set up the db api implementation under test. Rather than accessing them # directly, test modules should use the load and reset functions below. get_db = None reset_db = None def load(get_db_fn, reset_db_fn): global get_db, reset_db get_db = get_db_fn reset_db = reset_db_fn def reset(): global get_db, reset_db get_db = None reset_db = None glance-16.0.0/glance/tests/functional/db/base_metadef.py0000666000175100017510000007220313245511421023152 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. 
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

from glance.common import config
from glance.common import exception
from glance import context
import glance.tests.functional.db as db_tests
from glance.tests import utils as test_utils


def build_namespace_fixture(**kwargs):
    namespace = {
        'namespace': u'MyTestNamespace',
        'display_name': u'test-display-name',
        'description': u'test-description',
        'visibility': u'public',
        'protected': 0,
        'owner': u'test-owner'
    }
    namespace.update(kwargs)
    return namespace


def build_resource_type_fixture(**kwargs):
    resource_type = {
        'name': u'MyTestResourceType',
        'protected': 0
    }
    resource_type.update(kwargs)
    return resource_type


def build_association_fixture(**kwargs):
    association = {
        'name': u'MyTestResourceType',
        'properties_target': 'test-properties-target',
        'prefix': 'test-prefix'
    }
    association.update(kwargs)
    return association


def build_object_fixture(**kwargs):
    # Full testing of required and schema done via rest api tests
    object = {
        'namespace_id': 1,
        'name': u'test-object-name',
        'description': u'test-object-description',
        'required': u'fake-required-properties-list',
        'json_schema': u'{fake-schema}'
    }
    object.update(kwargs)
    return object


def build_property_fixture(**kwargs):
    # Full testing of required and schema done via rest api tests
    property = {
        'namespace_id': 1,
        'name': u'test-property-name',
        'json_schema': u'{fake-schema}'
    }
    property.update(kwargs)
    return property


def build_tag_fixture(**kwargs):
    # Full testing of required and schema done via rest api tests
    tag = {
        'namespace_id': 1,
        'name': u'test-tag-name',
    }
    tag.update(kwargs)
    return tag


def build_tags_fixture(tag_name_list):
    tag_list = []
    for tag_name in tag_name_list:
        tag_list.append({'name': tag_name})
    return tag_list


class TestMetadefDriver(test_utils.BaseTestCase):
    """Test Driver class for Metadef tests."""

    def setUp(self):
        """Run before each test method to initialize test environment."""
        super(TestMetadefDriver, self).setUp()
        config.parse_args(args=[])
        context_cls = context.RequestContext
        self.adm_context = context_cls(is_admin=True,
                                       auth_token='user:user:admin')
        self.context = context_cls(is_admin=False,
                                   auth_token='user:user:user')
        self.db_api = db_tests.get_db(self.config)
        db_tests.reset_db(self.db_api)

    def _assert_saved_fields(self, expected, actual):
        for k in expected.keys():
            self.assertEqual(expected[k], actual[k])


class MetadefNamespaceTests(object):

    def test_namespace_create(self):
        fixture = build_namespace_fixture()
        created = self.db_api.metadef_namespace_create(self.context, fixture)
        self.assertIsNotNone(created)
        self._assert_saved_fields(fixture, created)

    def test_namespace_create_duplicate(self):
        fixture = build_namespace_fixture()
        created = self.db_api.metadef_namespace_create(self.context, fixture)
        self.assertIsNotNone(created)
        self._assert_saved_fields(fixture, created)
        self.assertRaises(exception.Duplicate,
                          self.db_api.metadef_namespace_create,
                          self.context, fixture)

    def test_namespace_get(self):
        fixture = build_namespace_fixture()
        created = self.db_api.metadef_namespace_create(self.context, fixture)
        self.assertIsNotNone(created)
        self._assert_saved_fields(fixture, created)

        found = self.db_api.metadef_namespace_get(
            self.context, created['namespace'])
        self.assertIsNotNone(found, "Namespace not found.")

    def test_namespace_get_all_with_resource_types_filter(self):
        ns_fixture = build_namespace_fixture()
        ns_created = self.db_api.metadef_namespace_create(
            self.context, ns_fixture)
        self.assertIsNotNone(ns_created, "Could not create a namespace.")
        self._assert_saved_fields(ns_fixture, ns_created)

        fixture = build_association_fixture()
        created = self.db_api.metadef_resource_type_association_create(
            self.context, ns_created['namespace'], fixture)
        self.assertIsNotNone(created, "Could not create an association.")

        rt_filters = {'resource_types': fixture['name']}
        found = self.db_api.metadef_namespace_get_all(
            self.context, filters=rt_filters, sort_key='created_at')
        self.assertEqual(1, len(found))
        for item in found:
            self._assert_saved_fields(ns_fixture, item)

    def test_namespace_update(self):
        delta = {'owner': u'New Owner'}

        fixture = build_namespace_fixture()
        created = self.db_api.metadef_namespace_create(self.context, fixture)
        self.assertIsNotNone(created['namespace'])
        self.assertEqual(fixture['namespace'], created['namespace'])

        delta_dict = copy.deepcopy(created)
        delta_dict.update(delta.copy())

        updated = self.db_api.metadef_namespace_update(
            self.context, created['id'], delta_dict)
        self.assertEqual(delta['owner'], updated['owner'])

    def test_namespace_delete(self):
        fixture = build_namespace_fixture()
        created = self.db_api.metadef_namespace_create(self.context, fixture)
        self.assertIsNotNone(created, "Could not create a Namespace.")
        self.db_api.metadef_namespace_delete(
            self.context, created['namespace'])
        self.assertRaises(exception.NotFound,
                          self.db_api.metadef_namespace_get,
                          self.context, created['namespace'])

    def test_namespace_delete_with_content(self):
        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(
            self.context, fixture_ns)
        self._assert_saved_fields(fixture_ns, created_ns)

        # Create object content for the namespace
        fixture_obj = build_object_fixture()
        created_obj = self.db_api.metadef_object_create(
            self.context, created_ns['namespace'], fixture_obj)
        self.assertIsNotNone(created_obj)

        # Create property content for the namespace
        fixture_prop = build_property_fixture(namespace_id=created_ns['id'])
        created_prop = self.db_api.metadef_property_create(
            self.context, created_ns['namespace'], fixture_prop)
        self.assertIsNotNone(created_prop)

        # Create associations
        fixture_assn = build_association_fixture()
        created_assn = self.db_api.metadef_resource_type_association_create(
            self.context, created_ns['namespace'], fixture_assn)
        self.assertIsNotNone(created_assn)

        deleted_ns = self.db_api.metadef_namespace_delete(
            self.context, created_ns['namespace'])

        self.assertRaises(exception.NotFound,
                          self.db_api.metadef_namespace_get,
                          self.context, deleted_ns['namespace'])


class MetadefPropertyTests(object):

    def test_property_create(self):
        fixture = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(
            self.context, fixture)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture, created_ns)

        fixture_prop = build_property_fixture(namespace_id=created_ns['id'])
        created_prop = self.db_api.metadef_property_create(
            self.context, created_ns['namespace'], fixture_prop)
        self._assert_saved_fields(fixture_prop, created_prop)

    def test_property_create_duplicate(self):
        fixture = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(
            self.context, fixture)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture, created_ns)

        fixture_prop = build_property_fixture(namespace_id=created_ns['id'])
        created_prop = self.db_api.metadef_property_create(
            self.context, created_ns['namespace'], fixture_prop)
        self._assert_saved_fields(fixture_prop, created_prop)
        self.assertRaises(exception.Duplicate,
                          self.db_api.metadef_property_create,
                          self.context, created_ns['namespace'],
                          fixture_prop)

    def test_property_get(self):
        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(
            self.context, fixture_ns)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture_ns, created_ns)

        fixture_prop = build_property_fixture(namespace_id=created_ns['id'])
        created_prop = self.db_api.metadef_property_create(
            self.context, created_ns['namespace'], fixture_prop)

        found_prop = self.db_api.metadef_property_get(
            self.context, created_ns['namespace'], created_prop['name'])
        self._assert_saved_fields(fixture_prop, found_prop)

    def test_property_get_all(self):
        ns_fixture = build_namespace_fixture()
        ns_created = self.db_api.metadef_namespace_create(
            self.context, ns_fixture)
        self.assertIsNotNone(ns_created, "Could not create a namespace.")
        self._assert_saved_fields(ns_fixture, ns_created)

        fixture1 = build_property_fixture(namespace_id=ns_created['id'])
        created_p1 = self.db_api.metadef_property_create(
            self.context, ns_created['namespace'], fixture1)
        self.assertIsNotNone(created_p1, "Could not create a property.")

        fixture2 = build_property_fixture(namespace_id=ns_created['id'],
                                          name='test-prop-2')
        created_p2 = self.db_api.metadef_property_create(
            self.context, ns_created['namespace'], fixture2)
        self.assertIsNotNone(created_p2, "Could not create a property.")

        found = self.db_api.metadef_property_get_all(
            self.context, ns_created['namespace'])
        self.assertEqual(2, len(found))

    def test_property_update(self):
        delta = {'name': u'New-name', 'json_schema': u'new-schema'}

        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(
            self.context, fixture_ns)
        self.assertIsNotNone(created_ns['namespace'])

        prop_fixture = build_property_fixture(namespace_id=created_ns['id'])
        created_prop = self.db_api.metadef_property_create(
            self.context, created_ns['namespace'], prop_fixture)
        self.assertIsNotNone(created_prop, "Could not create a property.")

        delta_dict = copy.deepcopy(created_prop)
        delta_dict.update(delta.copy())

        updated = self.db_api.metadef_property_update(
            self.context, created_ns['namespace'],
            created_prop['id'], delta_dict)
        self.assertEqual(delta['name'], updated['name'])
        self.assertEqual(delta['json_schema'], updated['json_schema'])

    def test_property_delete(self):
        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(
            self.context, fixture_ns)
        self.assertIsNotNone(created_ns['namespace'])

        prop_fixture = build_property_fixture(namespace_id=created_ns['id'])
        created_prop = self.db_api.metadef_property_create(
            self.context, created_ns['namespace'], prop_fixture)
        self.assertIsNotNone(created_prop, "Could not create a property.")

        self.db_api.metadef_property_delete(
            self.context, created_ns['namespace'], created_prop['name'])
        self.assertRaises(exception.NotFound,
                          self.db_api.metadef_property_get,
                          self.context, created_ns['namespace'],
                          created_prop['name'])

    def test_property_delete_namespace_content(self):
        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(
            self.context, fixture_ns)
        self.assertIsNotNone(created_ns['namespace'])

        prop_fixture = build_property_fixture(namespace_id=created_ns['id'])
        created_prop = self.db_api.metadef_property_create(
            self.context, created_ns['namespace'], prop_fixture)
        self.assertIsNotNone(created_prop, "Could not create a property.")

        self.db_api.metadef_property_delete_namespace_content(
            self.context, created_ns['namespace'])
        self.assertRaises(exception.NotFound,
                          self.db_api.metadef_property_get,
                          self.context, created_ns['namespace'],
                          created_prop['name'])


class MetadefObjectTests(object):

    def test_object_create(self):
        fixture = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture, created_ns)

        fixture_object = build_object_fixture(namespace_id=created_ns['id'])
        created_object = self.db_api.metadef_object_create(
            self.context, created_ns['namespace'], fixture_object)
        self._assert_saved_fields(fixture_object, created_object)

    def test_object_create_duplicate(self):
        fixture = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture, created_ns)

        fixture_object = build_object_fixture(namespace_id=created_ns['id'])
        created_object = self.db_api.metadef_object_create(
            self.context, created_ns['namespace'], fixture_object)
        self._assert_saved_fields(fixture_object, created_object)
        self.assertRaises(exception.Duplicate,
                          self.db_api.metadef_object_create,
                          self.context, created_ns['namespace'],
                          fixture_object)

    def test_object_get(self):
        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture_ns)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture_ns, created_ns)

        fixture_object = build_object_fixture(namespace_id=created_ns['id'])
        created_object = self.db_api.metadef_object_create(
            self.context, created_ns['namespace'], fixture_object)

        found_object = self.db_api.metadef_object_get(
            self.context, created_ns['namespace'], created_object['name'])
        self._assert_saved_fields(fixture_object, found_object)

    def test_object_get_all(self):
        ns_fixture = build_namespace_fixture()
        ns_created = self.db_api.metadef_namespace_create(self.context,
                                                          ns_fixture)
        self.assertIsNotNone(ns_created, "Could not create a namespace.")
        self._assert_saved_fields(ns_fixture, ns_created)

        fixture1 = build_object_fixture(namespace_id=ns_created['id'])
        created_o1 = self.db_api.metadef_object_create(
            self.context, ns_created['namespace'], fixture1)
        self.assertIsNotNone(created_o1, "Could not create an object.")

        fixture2 = build_object_fixture(namespace_id=ns_created['id'],
                                        name='test-object-2')
        created_o2 = self.db_api.metadef_object_create(
            self.context, ns_created['namespace'], fixture2)
        self.assertIsNotNone(created_o2, "Could not create an object.")

        found = self.db_api.metadef_object_get_all(
            self.context, ns_created['namespace'])
        self.assertEqual(2, len(found))

    def test_object_update(self):
        delta = {'name': u'New-name', 'json_schema': u'new-schema',
                 'required': u'new-required'}

        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture_ns)
        self.assertIsNotNone(created_ns['namespace'])

        object_fixture = build_object_fixture(namespace_id=created_ns['id'])
        created_object = self.db_api.metadef_object_create(
            self.context, created_ns['namespace'], object_fixture)
        self.assertIsNotNone(created_object, "Could not create an object.")

        delta_dict = {}
        delta_dict.update(delta.copy())

        updated = self.db_api.metadef_object_update(
            self.context, created_ns['namespace'],
            created_object['id'], delta_dict)
        self.assertEqual(delta['name'], updated['name'])
        self.assertEqual(delta['json_schema'], updated['json_schema'])

    def test_object_delete(self):
        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(
            self.context, fixture_ns)
        self.assertIsNotNone(created_ns['namespace'])

        object_fixture = build_object_fixture(namespace_id=created_ns['id'])
        created_object = self.db_api.metadef_object_create(
            self.context, created_ns['namespace'], object_fixture)
        self.assertIsNotNone(created_object, "Could not create an object.")

        self.db_api.metadef_object_delete(
            self.context, created_ns['namespace'], created_object['name'])
        self.assertRaises(exception.NotFound,
                          self.db_api.metadef_object_get,
                          self.context, created_ns['namespace'],
                          created_object['name'])


class MetadefResourceTypeTests(object):

    def test_resource_type_get_all(self):
        resource_types_orig = self.db_api.metadef_resource_type_get_all(
            self.context)

        fixture = build_resource_type_fixture()
        self.db_api.metadef_resource_type_create(self.context, fixture)

        resource_types = self.db_api.metadef_resource_type_get_all(
            self.context)

        test_len = len(resource_types_orig) + 1
        self.assertEqual(test_len, len(resource_types))


class MetadefResourceTypeAssociationTests(object):

    def test_association_create(self):
        ns_fixture = build_namespace_fixture()
        ns_created = self.db_api.metadef_namespace_create(
            self.context, ns_fixture)
        self.assertIsNotNone(ns_created)
        self._assert_saved_fields(ns_fixture, ns_created)

        assn_fixture = build_association_fixture()
        assn_created = self.db_api.metadef_resource_type_association_create(
            self.context, ns_created['namespace'], assn_fixture)
        self.assertIsNotNone(assn_created)
        self._assert_saved_fields(assn_fixture, assn_created)

    def test_association_create_duplicate(self):
        ns_fixture = build_namespace_fixture()
        ns_created = self.db_api.metadef_namespace_create(
            self.context, ns_fixture)
        self.assertIsNotNone(ns_created)
        self._assert_saved_fields(ns_fixture, ns_created)

        assn_fixture = build_association_fixture()
        assn_created = self.db_api.metadef_resource_type_association_create(
            self.context, ns_created['namespace'], assn_fixture)
        self.assertIsNotNone(assn_created)
        self._assert_saved_fields(assn_fixture, assn_created)
        self.assertRaises(exception.Duplicate,
                          self.db_api.
                          metadef_resource_type_association_create,
                          self.context, ns_created['namespace'],
                          assn_fixture)

    def test_association_delete(self):
        ns_fixture = build_namespace_fixture()
        ns_created = self.db_api.metadef_namespace_create(
            self.context, ns_fixture)
        self.assertIsNotNone(ns_created, "Could not create a namespace.")
        self._assert_saved_fields(ns_fixture, ns_created)

        fixture = build_association_fixture()
        created = self.db_api.metadef_resource_type_association_create(
            self.context, ns_created['namespace'], fixture)
        self.assertIsNotNone(created, "Could not create an association.")

        created_resource = self.db_api.metadef_resource_type_get(
            self.context, fixture['name'])
        self.assertIsNotNone(created_resource, "resource_type not created")

        self.db_api.metadef_resource_type_association_delete(
            self.context, ns_created['namespace'], created_resource['name'])
        self.assertRaises(exception.NotFound,
                          self.db_api.metadef_resource_type_association_get,
                          self.context, ns_created['namespace'],
                          created_resource['name'])

    def test_association_get_all_by_namespace(self):
        ns_fixture = build_namespace_fixture()
        ns_created = self.db_api.metadef_namespace_create(
            self.context, ns_fixture)
        self.assertIsNotNone(ns_created, "Could not create a namespace.")
        self._assert_saved_fields(ns_fixture, ns_created)

        fixture = build_association_fixture()
        created = self.db_api.metadef_resource_type_association_create(
            self.context, ns_created['namespace'], fixture)
        self.assertIsNotNone(created, "Could not create an association.")

        found = (
            self.db_api.metadef_resource_type_association_get_all_by_namespace(
                self.context, ns_created['namespace']))
        self.assertEqual(1, len(found))
        for item in found:
            self._assert_saved_fields(fixture, item)


class MetadefTagTests(object):

    def test_tag_create(self):
        fixture = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture, created_ns)

        fixture_tag = build_tag_fixture(namespace_id=created_ns['id'])
        created_tag = self.db_api.metadef_tag_create(
            self.context, created_ns['namespace'], fixture_tag)
        self._assert_saved_fields(fixture_tag, created_tag)

    def test_tag_create_duplicate(self):
        fixture = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture, created_ns)

        fixture_tag = build_tag_fixture(namespace_id=created_ns['id'])
        created_tag = self.db_api.metadef_tag_create(
            self.context, created_ns['namespace'], fixture_tag)
        self._assert_saved_fields(fixture_tag, created_tag)
        self.assertRaises(exception.Duplicate,
                          self.db_api.metadef_tag_create,
                          self.context, created_ns['namespace'],
                          fixture_tag)

    def test_tag_create_tags(self):
        fixture = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture, created_ns)

        tags = build_tags_fixture(['Tag1', 'Tag2', 'Tag3'])
        created_tags = self.db_api.metadef_tag_create_tags(
            self.context, created_ns['namespace'], tags)
        actual = set([tag['name'] for tag in created_tags])
        expected = set(['Tag1', 'Tag2', 'Tag3'])
        self.assertEqual(expected, actual)

    def test_tag_create_duplicate_tags_1(self):
        fixture = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture, created_ns)

        tags = build_tags_fixture(['Tag1', 'Tag2', 'Tag3', 'Tag2'])
        self.assertRaises(exception.Duplicate,
                          self.db_api.metadef_tag_create_tags,
                          self.context, created_ns['namespace'],
                          tags)

    def test_tag_create_duplicate_tags_2(self):
        fixture = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture, created_ns)

        tags = build_tags_fixture(['Tag1', 'Tag2', 'Tag3'])
        self.db_api.metadef_tag_create_tags(self.context,
                                            created_ns['namespace'], tags)
        dup_tag = build_tag_fixture(namespace_id=created_ns['id'],
                                    name='Tag3')
        self.assertRaises(exception.Duplicate,
                          self.db_api.metadef_tag_create,
                          self.context, created_ns['namespace'], dup_tag)

    def test_tag_get(self):
        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture_ns)
        self.assertIsNotNone(created_ns)
        self._assert_saved_fields(fixture_ns, created_ns)

        fixture_tag = build_tag_fixture(namespace_id=created_ns['id'])
        created_tag = self.db_api.metadef_tag_create(
            self.context, created_ns['namespace'], fixture_tag)

        found_tag = self.db_api.metadef_tag_get(
            self.context, created_ns['namespace'], created_tag['name'])
        self._assert_saved_fields(fixture_tag, found_tag)

    def test_tag_get_all(self):
        ns_fixture = build_namespace_fixture()
        ns_created = self.db_api.metadef_namespace_create(self.context,
                                                          ns_fixture)
        self.assertIsNotNone(ns_created, "Could not create a namespace.")
        self._assert_saved_fields(ns_fixture, ns_created)

        fixture1 = build_tag_fixture(namespace_id=ns_created['id'])
        created_tag1 = self.db_api.metadef_tag_create(
            self.context, ns_created['namespace'], fixture1)
        self.assertIsNotNone(created_tag1, "Could not create tag 1.")

        fixture2 = build_tag_fixture(namespace_id=ns_created['id'],
                                     name='test-tag-2')
        created_tag2 = self.db_api.metadef_tag_create(
            self.context, ns_created['namespace'], fixture2)
        self.assertIsNotNone(created_tag2, "Could not create tag 2.")

        found = self.db_api.metadef_tag_get_all(
            self.context, ns_created['namespace'], sort_key='created_at')
        self.assertEqual(2, len(found))

    def test_tag_update(self):
        delta = {'name': u'New-name'}

        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(self.context,
                                                          fixture_ns)
        self.assertIsNotNone(created_ns['namespace'])

        tag_fixture = build_tag_fixture(namespace_id=created_ns['id'])
        created_tag = self.db_api.metadef_tag_create(
            self.context, created_ns['namespace'], tag_fixture)
        self.assertIsNotNone(created_tag, "Could not create a tag.")

        delta_dict = {}
        delta_dict.update(delta.copy())

        updated = self.db_api.metadef_tag_update(
            self.context, created_ns['namespace'],
            created_tag['id'], delta_dict)
        self.assertEqual(delta['name'], updated['name'])

    def test_tag_delete(self):
        fixture_ns = build_namespace_fixture()
        created_ns = self.db_api.metadef_namespace_create(
            self.context, fixture_ns)
        self.assertIsNotNone(created_ns['namespace'])

        tag_fixture = build_tag_fixture(namespace_id=created_ns['id'])
        created_tag = self.db_api.metadef_tag_create(
            self.context, created_ns['namespace'], tag_fixture)
        self.assertIsNotNone(created_tag, "Could not create a tag.")

        self.db_api.metadef_tag_delete(
            self.context, created_ns['namespace'], created_tag['name'])
        self.assertRaises(exception.NotFound,
                          self.db_api.metadef_tag_get,
                          self.context, created_ns['namespace'],
                          created_tag['name'])


class MetadefDriverTests(MetadefNamespaceTests,
                         MetadefResourceTypeTests,
                         MetadefResourceTypeAssociationTests,
                         MetadefPropertyTests,
                         MetadefObjectTests,
                         MetadefTagTests):
    # collection class
    pass
glance-16.0.0/glance/tests/functional/db/test_registry.py
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_db import options

import glance.db
# NOTE(smcginnis) Need to make sure registry opts are registered
from glance import registry  # noqa
from glance.registry import client  # noqa
import glance.tests.functional.db as db_tests
from glance.tests.functional.db import base
from glance.tests.functional.db import base_metadef

CONF = cfg.CONF


def get_db(config):
    options.set_defaults(CONF, connection='sqlite://')
    config(data_api='glance.db.registry.api')
    return glance.db.get_api()


def reset_db(db_api):
    pass


class FunctionalInitWrapper(base.FunctionalInitWrapper):

    def setUp(self):
        # NOTE(flaper87): We need to start the
        # registry service *before* TestDriver's
        # setup goes on, since it'll create some
        # images that will be later used in tests.
        #
        # Python's request is way too magical and
        # it will make the TestDriver's super call
        # FunctionalTest's without letting us start
        # the server.
        #
        # This setUp will be called by TestDriver
        # and will be used to call FunctionalTest
        # setUp method *and* start the registry
        # service right after it.
        super(FunctionalInitWrapper, self).setUp()
        self.registry_server.deployment_flavor = 'fakeauth'
        self.start_with_retry(self.registry_server,
                              'registry_port', 3,
                              api_version=2)
        self.config(registry_port=self.registry_server.bind_port,
                    use_user_token=True)


class TestRegistryDriver(base.TestDriver,
                         base.DriverTests,
                         FunctionalInitWrapper):

    def setUp(self):
        db_tests.load(get_db, reset_db)
        super(TestRegistryDriver, self).setUp()
        self.addCleanup(db_tests.reset)

    def tearDown(self):
        self.registry_server.stop()
        super(TestRegistryDriver, self).tearDown()


class TestRegistryQuota(base.DriverQuotaTests, FunctionalInitWrapper):

    def setUp(self):
        db_tests.load(get_db, reset_db)
        super(TestRegistryQuota, self).setUp()
        self.addCleanup(db_tests.reset)

    def tearDown(self):
        self.registry_server.stop()
        super(TestRegistryQuota, self).tearDown()


class TestRegistryMetadefDriver(base_metadef.TestMetadefDriver,
                                base_metadef.MetadefDriverTests,
                                FunctionalInitWrapper):

    def setUp(self):
        db_tests.load(get_db, reset_db)
        super(TestRegistryMetadefDriver, self).setUp()
        self.addCleanup(db_tests.reset)

    def tearDown(self):
        self.registry_server.stop()
        super(TestRegistryMetadefDriver, self).tearDown()


class TestTasksDriver(base.TaskTests, FunctionalInitWrapper):

    def setUp(self):
        db_tests.load(get_db, reset_db)
        super(TestTasksDriver, self).setUp()
        self.addCleanup(db_tests.reset)

    def tearDown(self):
        self.registry_server.stop()
        super(TestTasksDriver, self).tearDown()
glance-16.0.0/glance/tests/functional/test_sqlite.py
# Copyright 2012 Red Hat, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Functional test cases for sqlite-specific logic"""

from glance.tests import functional
from glance.tests.utils import depends_on_exe
from glance.tests.utils import execute
from glance.tests.utils import skip_if_disabled


class TestSqlite(functional.FunctionalTest):
    """Functional tests for sqlite-specific logic"""

    @depends_on_exe('sqlite3')
    @skip_if_disabled
    def test_big_int_mapping(self):
        """Ensure BigInteger not mapped to BIGINT"""
        self.cleanup()
        self.start_servers(**self.__dict__.copy())
        cmd = "sqlite3 tests.sqlite '.schema'"
        exitcode, out, err = execute(cmd, raise_error=True)
        self.assertNotIn('BIGINT', out)
        self.stop_servers()
glance-16.0.0/glance/tests/functional/store_utils.py
# Copyright 2011 OpenStack Foundation
# Copyright 2012 Red Hat, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Utility methods to set testcases up for Swift tests.
"""

from __future__ import print_function

import threading

from oslo_utils import units
from six.moves import BaseHTTPServer
from six.moves import http_client as http

FIVE_KB = 5 * units.Ki


class RemoteImageHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_HEAD(self):
        """
        Respond to an image HEAD request fake metadata
        """
        if 'images' in self.path:
            self.send_response(http.OK)
            self.send_header('Content-Type', 'application/octet-stream')
            self.send_header('Content-Length', FIVE_KB)
            self.end_headers()
            return
        else:
            self.send_error(http.NOT_FOUND,
                            'File Not Found: %s' % self.path)
            return

    def do_GET(self):
        """
        Respond to an image GET request with fake image content.
        """
        if 'images' in self.path:
            self.send_response(http.OK)
            self.send_header('Content-Type', 'application/octet-stream')
            self.send_header('Content-Length', FIVE_KB)
            self.end_headers()
            image_data = b'*' * FIVE_KB
            self.wfile.write(image_data)
            self.wfile.close()
            return
        else:
            self.send_error(http.NOT_FOUND,
                            'File Not Found: %s' % self.path)
            return

    def log_message(self, format, *args):
        """
        Simple override to prevent writing crap to stderr...
        """
        pass


def setup_http(test):
    server_class = BaseHTTPServer.HTTPServer
    remote_server = server_class(('127.0.0.1', 0), RemoteImageHandler)
    remote_ip, remote_port = remote_server.server_address

    def serve_requests(httpd):
        httpd.serve_forever()

    threading.Thread(target=serve_requests, args=(remote_server,)).start()
    test.http_server = remote_server
    test.http_ip = remote_ip
    test.http_port = remote_port

    test.addCleanup(test.http_server.shutdown)


def get_http_uri(test, image_id):
    uri = ('http://%(http_ip)s:%(http_port)d/images/' %
           {'http_ip': test.http_ip, 'http_port': test.http_port})
    uri += image_id
    return uri
glance-16.0.0/glance/tests/functional/test_cache_middleware.py
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests a Glance API server which uses the caching middleware that
uses the default SQLite cache driver. We use the filesystem store,
but that is really not relevant, as the image cache is transparent
to the backend store.
"""

import hashlib
import os
import shutil
import sys
import time
import uuid

import httplib2
from oslo_serialization import jsonutils
from oslo_utils import units
from six.moves import http_client
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range

from glance.tests import functional
from glance.tests.functional.store_utils import get_http_uri
from glance.tests.functional.store_utils import setup_http
from glance.tests.utils import execute
from glance.tests.utils import minimal_headers
from glance.tests.utils import skip_if_disabled
from glance.tests.utils import xattr_writes_supported

FIVE_KB = 5 * units.Ki


class BaseCacheMiddlewareTest(object):

    @skip_if_disabled
    def test_cache_middleware_transparent_v1(self):
        """
        We test that putting the cache middleware into the
        application pipeline gives us transparent image caching
        """
        self.cleanup()
        self.start_servers(**self.__dict__.copy())

        # Add an image and verify a 200 OK is returned
        image_data = b"*" * FIVE_KB
        headers = minimal_headers('Image1')
        path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port)
        http = httplib2.Http()
        response, content = http.request(path, 'POST', headers=headers,
                                         body=image_data)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(hashlib.md5(image_data).hexdigest(),
                         data['image']['checksum'])
        self.assertEqual(FIVE_KB, data['image']['size'])
        self.assertEqual("Image1", data['image']['name'])
        self.assertTrue(data['image']['is_public'])

        image_id = data['image']['id']

        # Verify image not in cache
        image_cached_path = os.path.join(self.api_server.image_cache_dir,
                                         image_id)
        self.assertFalse(os.path.exists(image_cached_path))

        # Grab the image
        path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port,
                                              image_id)
        http = httplib2.Http()
        response, content = http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)

        # Verify image now in cache
        image_cached_path = os.path.join(self.api_server.image_cache_dir,
                                         image_id)

        # You might wonder why the heck this is here... well, it's here
        # because it took me forever to figure out that the disk write
        # cache in Linux was causing random failures of the os.path.exists
        # assert directly below this. Basically, since the cache is writing
        # the image file to disk in a different process, the write buffers
        # don't flush the cache file during an os.rename() properly, resulting
        # in a false negative on the file existence check below. This little
        # loop pauses the execution of this process for no more than 1.5
        # seconds. If after that time the cached image file still doesn't
        # appear on disk, something really is wrong, and the assert should
        # trigger...
        i = 0
        while not os.path.exists(image_cached_path) and i < 30:
            time.sleep(0.05)
            i = i + 1

        self.assertTrue(os.path.exists(image_cached_path))

        # Now, we delete the image from the server and verify that
        # the image cache no longer contains the deleted image
        path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port,
                                              image_id)
        http = httplib2.Http()
        response, content = http.request(path, 'DELETE')
        self.assertEqual(http_client.OK, response.status)

        self.assertFalse(os.path.exists(image_cached_path))

        self.stop_servers()

    @skip_if_disabled
    def test_cache_middleware_transparent_v2(self):
        """Ensure the v2 API image transfer calls trigger caching"""
        self.cleanup()
        self.start_servers(**self.__dict__.copy())

        # Add an image and verify success
        path = "http://%s:%d/v2/images" % ("0.0.0.0", self.api_port)
        http = httplib2.Http()
        headers = {'content-type': 'application/json'}
        image_entity = {
            'name': 'Image1',
            'visibility': 'public',
            'container_format': 'bare',
            'disk_format': 'raw',
        }
        response, content = http.request(path, 'POST',
                                         headers=headers,
                                         body=jsonutils.dumps(image_entity))
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['id']

        path = "http://%s:%d/v2/images/%s/file" % ("0.0.0.0", self.api_port,
                                                   image_id)
        headers = {'content-type': 'application/octet-stream'}
        image_data = b"*" * FIVE_KB  # bytes, consistent with the v1 test
        response, content = http.request(path, 'PUT',
                                         headers=headers,
                                         body=image_data)
        self.assertEqual(http_client.NO_CONTENT, response.status)

        # Verify image not in cache
        image_cached_path = os.path.join(self.api_server.image_cache_dir,
                                         image_id)
        self.assertFalse(os.path.exists(image_cached_path))

        # Grab the image
        http = httplib2.Http()
        response, content = http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)

        # Verify image now in cache
        image_cached_path = os.path.join(self.api_server.image_cache_dir,
                                         image_id)
        self.assertTrue(os.path.exists(image_cached_path))

        # Now, we delete the image from the server and verify that
# the image cache no longer contains the deleted image path = "http://%s:%d/v2/images/%s" % ("0.0.0.0", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.NO_CONTENT, response.status) self.assertFalse(os.path.exists(image_cached_path)) self.stop_servers() @skip_if_disabled def test_partially_downloaded_images_are_not_cached_v2_api(self): """ Verify that we do not cache images that were downloaded partially using v2 images API. """ self.cleanup() self.start_servers(**self.__dict__.copy()) # Add an image and verify success path = "http://%s:%d/v2/images" % ("0.0.0.0", self.api_port) http = httplib2.Http() headers = {'content-type': 'application/json'} image_entity = { 'name': 'Image1', 'visibility': 'public', 'container_format': 'bare', 'disk_format': 'raw', } response, content = http.request(path, 'POST', headers=headers, body=jsonutils.dumps(image_entity)) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) image_id = data['id'] path = "http://%s:%d/v2/images/%s/file" % ("0.0.0.0", self.api_port, image_id) headers = {'content-type': 'application/octet-stream'} image_data = b'ABCDEFGHIJKLMNOPQRSTUVWXYZ' response, content = http.request(path, 'PUT', headers=headers, body=image_data) self.assertEqual(http_client.NO_CONTENT, response.status) # Verify that this image is not in cache image_cached_path = os.path.join(self.api_server.image_cache_dir, image_id) self.assertFalse(os.path.exists(image_cached_path)) # partially download this image and verify status 206 http = httplib2.Http() # range download request range_ = 'bytes=3-5' headers = { 'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': str(uuid.uuid4()), 'X-Roles': 'member', 'Range': range_ } response, content = http.request(path, 'GET', headers=headers) self.assertEqual(http_client.PARTIAL_CONTENT, 
response.status) self.assertEqual(b'DEF', content) # content-range download request # NOTE(dharinic): Glance incorrectly supports Content-Range for partial # image downloads in requests. This test is included to ensure that # we prevent regression. content_range = 'bytes 3-5/*' headers = { 'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': str(uuid.uuid4()), 'X-Roles': 'member', 'Content-Range': content_range } response, content = http.request(path, 'GET', headers=headers) self.assertEqual(http_client.PARTIAL_CONTENT, response.status) self.assertEqual(b'DEF', content) # verify that we do not cache the partial image image_cached_path = os.path.join(self.api_server.image_cache_dir, image_id) self.assertFalse(os.path.exists(image_cached_path)) self.stop_servers() @skip_if_disabled def test_partial_download_of_cached_images_v2_api(self): """ Verify that partial download requests for a fully cached image succeeds; we do not serve it from cache. 
""" self.cleanup() self.start_servers(**self.__dict__.copy()) # Add an image and verify success path = "http://%s:%d/v2/images" % ("0.0.0.0", self.api_port) http = httplib2.Http() headers = {'content-type': 'application/json'} image_entity = { 'name': 'Image1', 'visibility': 'public', 'container_format': 'bare', 'disk_format': 'raw', } response, content = http.request(path, 'POST', headers=headers, body=jsonutils.dumps(image_entity)) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) image_id = data['id'] path = "http://%s:%d/v2/images/%s/file" % ("0.0.0.0", self.api_port, image_id) headers = {'content-type': 'application/octet-stream'} image_data = b'ABCDEFGHIJKLMNOPQRSTUVWXYZ' response, content = http.request(path, 'PUT', headers=headers, body=image_data) self.assertEqual(http_client.NO_CONTENT, response.status) # Verify that this image is not in cache image_cached_path = os.path.join(self.api_server.image_cache_dir, image_id) self.assertFalse(os.path.exists(image_cached_path)) # Download the entire image http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual(b'ABCDEFGHIJKLMNOPQRSTUVWXYZ', content) # Verify that the image is now in cache image_cached_path = os.path.join(self.api_server.image_cache_dir, image_id) self.assertTrue(os.path.exists(image_cached_path)) # Modify the data in cache so we can verify the partially downloaded # content was not from cache indeed. 
with open(image_cached_path, 'w') as cache_file: cache_file.write('0123456789') # Partially attempt a download of this image and verify that is not # from cache # range download request range_ = 'bytes=3-5' headers = { 'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': str(uuid.uuid4()), 'X-Roles': 'member', 'Range': range_, 'content-type': 'application/json' } response, content = http.request(path, 'GET', headers=headers) self.assertEqual(http_client.PARTIAL_CONTENT, response.status) self.assertEqual(b'DEF', content) self.assertNotEqual(b'345', content) self.assertNotEqual(image_data, content) # content-range download request # NOTE(dharinic): Glance incorrectly supports Content-Range for partial # image downloads in requests. This test is included to ensure that # we prevent regression. content_range = 'bytes 3-5/*' headers = { 'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': str(uuid.uuid4()), 'X-Roles': 'member', 'Content-Range': content_range, 'content-type': 'application/json' } response, content = http.request(path, 'GET', headers=headers) self.assertEqual(http_client.PARTIAL_CONTENT, response.status) self.assertEqual(b'DEF', content) self.assertNotEqual(b'345', content) self.assertNotEqual(image_data, content) self.stop_servers() @skip_if_disabled def test_cache_remote_image(self): """ We test that caching is no longer broken for remote images """ self.cleanup() self.start_servers(**self.__dict__.copy()) setup_http(self) # Add a remote image and verify a 201 Created is returned remote_uri = get_http_uri(self, '2') headers = {'X-Image-Meta-Name': 'Image2', 'X-Image-Meta-disk_format': 'raw', 'X-Image-Meta-container_format': 'ovf', 'X-Image-Meta-Is-Public': 'True', 'X-Image-Meta-Location': remote_uri} path = "http://%s:%d/v1/images" % ("127.0.0.1", 
self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) self.assertEqual(FIVE_KB, data['image']['size']) image_id = data['image']['id'] path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) # Grab the image http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) # Grab the image again to ensure it can be served out from # cache with the correct size http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual(FIVE_KB, int(response['content-length'])) self.stop_servers() @skip_if_disabled def test_cache_middleware_trans_v1_without_download_image_policy(self): """ Ensure the image v1 API image transfer applied 'download_image' policy enforcement. """ self.cleanup() self.start_servers(**self.__dict__.copy()) # Add an image and verify a 200 OK is returned image_data = b"*" * FIVE_KB headers = minimal_headers('Image1') path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("Image1", data['image']['name']) self.assertTrue(data['image']['is_public']) image_id = data['image']['id'] # Verify image not in cache image_cached_path = os.path.join(self.api_server.image_cache_dir, image_id) self.assertFalse(os.path.exists(image_cached_path)) rules = {"context_is_admin": "role:admin", "default": "", "download_image": "!"} self.set_policy_rules(rules) # Grab the image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = 
httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.FORBIDDEN, response.status) # Now, we delete the image from the server and verify that # the image cache no longer contains the deleted image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) self.assertFalse(os.path.exists(image_cached_path)) self.stop_servers() @skip_if_disabled def test_cache_middleware_trans_v2_without_download_image_policy(self): """ Ensure the image v2 API image transfer applied 'download_image' policy enforcement. """ self.cleanup() self.start_servers(**self.__dict__.copy()) # Add an image and verify success path = "http://%s:%d/v2/images" % ("0.0.0.0", self.api_port) http = httplib2.Http() headers = {'content-type': 'application/json'} image_entity = { 'name': 'Image1', 'visibility': 'public', 'container_format': 'bare', 'disk_format': 'raw', } response, content = http.request(path, 'POST', headers=headers, body=jsonutils.dumps(image_entity)) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) image_id = data['id'] path = "http://%s:%d/v2/images/%s/file" % ("0.0.0.0", self.api_port, image_id) headers = {'content-type': 'application/octet-stream'} image_data = "*" * FIVE_KB response, content = http.request(path, 'PUT', headers=headers, body=image_data) self.assertEqual(http_client.NO_CONTENT, response.status) # Verify image not in cache image_cached_path = os.path.join(self.api_server.image_cache_dir, image_id) self.assertFalse(os.path.exists(image_cached_path)) rules = {"context_is_admin": "role:admin", "default": "", "download_image": "!"} self.set_policy_rules(rules) # Grab the image http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.FORBIDDEN, response.status) # Now, we delete the image from the server and verify that # 
the image cache no longer contains the deleted image path = "http://%s:%d/v2/images/%s" % ("0.0.0.0", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.NO_CONTENT, response.status) self.assertFalse(os.path.exists(image_cached_path)) self.stop_servers() @skip_if_disabled def test_cache_middleware_trans_with_deactivated_image(self): """ Ensure the image v1/v2 API image transfer forbids downloading deactivated images. Image deactivation is not available in v1. So, we'll deactivate the image using v2 but test image transfer with both v1 and v2. """ self.cleanup() self.start_servers(**self.__dict__.copy()) # Add an image and verify a 200 OK is returned image_data = b"*" * FIVE_KB headers = minimal_headers('Image1') path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("Image1", data['image']['name']) self.assertTrue(data['image']['is_public']) image_id = data['image']['id'] # Grab the image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) # Verify image in cache image_cached_path = os.path.join(self.api_server.image_cache_dir, image_id) self.assertTrue(os.path.exists(image_cached_path)) # Deactivate the image using v2 path = "http://%s:%d/v2/images/%s/actions/deactivate" path = path % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'POST') self.assertEqual(http_client.NO_CONTENT, response.status) # Download the image with v1. 
Ensure it is forbidden path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.FORBIDDEN, response.status) # Download the image with v2. This succeeds because # we are in admin context. path = "http://%s:%d/v2/images/%s/file" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) # Reactivate the image using v2 path = "http://%s:%d/v2/images/%s/actions/reactivate" path = path % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'POST') self.assertEqual(http_client.NO_CONTENT, response.status) # Download the image with v1. Ensure it is allowed path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) # Download the image with v2. 
Ensure it is allowed path = "http://%s:%d/v2/images/%s/file" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) # Now, we delete the image from the server and verify that # the image cache no longer contains the deleted image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) self.assertFalse(os.path.exists(image_cached_path)) self.stop_servers() class BaseCacheManageMiddlewareTest(object): """Base test class for testing cache management middleware""" def verify_no_images(self): path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertIn('images', data) self.assertEqual(0, len(data['images'])) def add_image(self, name): """ Adds an image and returns the newly-added image identifier """ image_data = b"*" * FIVE_KB headers = minimal_headers('%s' % name) path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual(name, data['image']['name']) self.assertTrue(data['image']['is_public']) return data['image']['id'] def verify_no_cached_images(self): """ Verify no images in the image cache """ path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) 
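The cache-management helpers parse `GET /v1/cached_images` responses as plain JSON and later require the `last_modified`/`last_accessed` values to parse as floats. A minimal sketch of that validation over an invented sample payload (the image id, sizes, and timestamps below are made up for illustration):

```python
import json

# Invented example of what a GET /v1/cached_images response body can look like
sample = json.dumps({
    "cached_images": [
        {"image_id": "11111111-2222-3333-4444-555555555555",
         "hits": 0,
         "last_modified": 1520000000.25,
         "last_accessed": 1520000100.5,
         "size": 5120}
    ]
})

data = json.loads(sample)
assert "cached_images" in data
for cached in data["cached_images"]:
    # the functional tests require these timestamps to be float-parseable
    for key in ("last_modified", "last_accessed"):
        float(cached[key])  # raises ValueError if not a valid float
print(len(data["cached_images"]))  # 1
```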
self.assertIn('cached_images', data) self.assertEqual([], data['cached_images']) @skip_if_disabled def test_user_not_authorized(self): self.cleanup() self.start_servers(**self.__dict__.copy()) self.verify_no_images() image_id1 = self.add_image("Image1") image_id2 = self.add_image("Image2") # Verify image does not yet show up in cache (we haven't "hit" # it yet using a GET /images/1 ... self.verify_no_cached_images() # Grab the image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id1) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) # Verify image now in cache path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertIn('cached_images', data) cached_images = data['cached_images'] self.assertEqual(1, len(cached_images)) self.assertEqual(image_id1, cached_images[0]['image_id']) # Set policy to disallow access to cache management rules = {"manage_image_cache": '!'} self.set_policy_rules(rules) # Verify an unprivileged user cannot see cached images path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.FORBIDDEN, response.status) # Verify an unprivileged user cannot delete images from the cache path = "http://%s:%d/v1/cached_images/%s" % ("127.0.0.1", self.api_port, image_id1) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.FORBIDDEN, response.status) # Verify an unprivileged user cannot delete all cached images path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.FORBIDDEN, response.status) # Verify an unprivileged user cannot 
queue an image path = "http://%s:%d/v1/queued_images/%s" % ("127.0.0.1", self.api_port, image_id2) http = httplib2.Http() response, content = http.request(path, 'PUT') self.assertEqual(http_client.FORBIDDEN, response.status) self.stop_servers() @skip_if_disabled def test_cache_manage_get_cached_images(self): """ Tests that cached images are queryable """ self.cleanup() self.start_servers(**self.__dict__.copy()) self.verify_no_images() image_id = self.add_image("Image1") # Verify image does not yet show up in cache (we haven't "hit" # it yet using a GET /images/1 ... self.verify_no_cached_images() # Grab the image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) # Verify image now in cache path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertIn('cached_images', data) # Verify the last_modified/last_accessed values are valid floats for cached_image in data['cached_images']: for time_key in ('last_modified', 'last_accessed'): time_val = cached_image[time_key] try: float(time_val) except ValueError: self.fail('%s time %s for cached image %s not a valid ' 'float' % (time_key, time_val, cached_image['image_id'])) cached_images = data['cached_images'] self.assertEqual(1, len(cached_images)) self.assertEqual(image_id, cached_images[0]['image_id']) self.assertEqual(0, cached_images[0]['hits']) # Hit the image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) # Verify image hits increased in output of manage GET path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, 
content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertIn('cached_images', data) cached_images = data['cached_images'] self.assertEqual(1, len(cached_images)) self.assertEqual(image_id, cached_images[0]['image_id']) self.assertEqual(1, cached_images[0]['hits']) self.stop_servers() @skip_if_disabled def test_cache_manage_delete_cached_images(self): """ Tests that cached images may be deleted """ self.cleanup() self.start_servers(**self.__dict__.copy()) self.verify_no_images() ids = {} # Add a bunch of images... for x in range(4): ids[x] = self.add_image("Image%s" % str(x)) # Verify no images in cached_images because no image has been hit # yet using a GET /images/ ... self.verify_no_cached_images() # Grab the images, essentially caching them... for x in range(4): path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, ids[x]) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status, "Failed to find image %s" % ids[x]) # Verify images now in cache path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertIn('cached_images', data) cached_images = data['cached_images'] self.assertEqual(4, len(cached_images)) for x in range(4, 0): # Cached images returned last modified order self.assertEqual(ids[x], cached_images[x]['image_id']) self.assertEqual(0, cached_images[x]['hits']) # Delete third image of the cached images and verify no longer in cache path = "http://%s:%d/v1/cached_images/%s" % ("127.0.0.1", self.api_port, ids[2]) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = 
http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertIn('cached_images', data) cached_images = data['cached_images'] self.assertEqual(3, len(cached_images)) self.assertNotIn(ids[2], [x['image_id'] for x in cached_images]) # Delete all cached images and verify nothing in cache path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertIn('cached_images', data) cached_images = data['cached_images'] self.assertEqual(0, len(cached_images)) self.stop_servers() @skip_if_disabled def test_cache_manage_delete_queued_images(self): """ Tests that all queued images may be deleted at once """ self.cleanup() self.start_servers(**self.__dict__.copy()) self.verify_no_images() ids = {} NUM_IMAGES = 4 # Add and then queue some images for x in range(NUM_IMAGES): ids[x] = self.add_image("Image%s" % str(x)) path = "http://%s:%d/v1/queued_images/%s" % ("127.0.0.1", self.api_port, ids[x]) http = httplib2.Http() response, content = http.request(path, 'PUT') self.assertEqual(http_client.OK, response.status) # Delete all queued images path = "http://%s:%d/v1/queued_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) num_deleted = data['num_deleted'] self.assertEqual(NUM_IMAGES, num_deleted) # Verify a second delete now returns num_deleted=0 path = "http://%s:%d/v1/queued_images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) 
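The prefetcher test that follows renders a `glance-cache.conf` by `%`-interpolating a dict of per-run values into a multi-line template. A minimal sketch of that templating, with placeholder values standing in for the test fixture's attributes (only a few representative options are shown):

```python
import os
import tempfile

# Placeholder values; the real test pulls these from the fixture
options = {
    'lock_path': tempfile.gettempdir(),
    'image_cache_dir': os.path.join(tempfile.gettempdir(), 'cache'),
    'registry_port': 9191,
}

template = """[DEFAULT]
debug = True
lock_path = %(lock_path)s
image_cache_dir = %(image_cache_dir)s
registry_host = 127.0.0.1
registry_port = %(registry_port)s
"""

# %-style named interpolation substitutes every %(key)s from the dict
rendered = template % options
print('registry_port = 9191' in rendered)  # True
```

Writing the rendered string to a per-test file (as the test does) keeps each run's ports and directories isolated.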
data = jsonutils.loads(content) num_deleted = data['num_deleted'] self.assertEqual(0, num_deleted) self.stop_servers() @skip_if_disabled def test_queue_and_prefetch(self): """ Tests that images may be queued and prefetched """ self.cleanup() self.start_servers(**self.__dict__.copy()) cache_config_filepath = os.path.join(self.test_dir, 'etc', 'glance-cache.conf') cache_file_options = { 'image_cache_dir': self.api_server.image_cache_dir, 'image_cache_driver': self.image_cache_driver, 'registry_port': self.registry_server.bind_port, 'log_file': os.path.join(self.test_dir, 'cache.log'), 'lock_path': self.test_dir, 'metadata_encryption_key': "012345678901234567890123456789ab", 'filesystem_store_datadir': self.test_dir } with open(cache_config_filepath, 'w') as cache_file: cache_file.write("""[DEFAULT] debug = True lock_path = %(lock_path)s image_cache_dir = %(image_cache_dir)s image_cache_driver = %(image_cache_driver)s registry_host = 127.0.0.1 registry_port = %(registry_port)s metadata_encryption_key = %(metadata_encryption_key)s log_file = %(log_file)s [glance_store] filesystem_store_datadir=%(filesystem_store_datadir)s """ % cache_file_options) self.verify_no_images() ids = {} # Add a bunch of images... 
        for x in range(4):
            ids[x] = self.add_image("Image%s" % str(x))

        # Queue the first image, verify no images still in cache after
        # queueing, then run the prefetcher and verify that the image is
        # then in the cache
        path = "http://%s:%d/v1/queued_images/%s" % ("127.0.0.1",
                                                     self.api_port, ids[0])
        http = httplib2.Http()
        response, content = http.request(path, 'PUT')
        self.assertEqual(http_client.OK, response.status)

        self.verify_no_cached_images()

        cmd = ("%s -m glance.cmd.cache_prefetcher --config-file %s" %
               (sys.executable, cache_config_filepath))
        exitcode, out, err = execute(cmd)
        self.assertEqual(0, exitcode)
        self.assertEqual(b'', out.strip(), out)

        # Verify first image now in cache
        path = "http://%s:%d/v1/cached_images" % ("127.0.0.1", self.api_port)
        http = httplib2.Http()
        response, content = http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)

        data = jsonutils.loads(content)
        self.assertIn('cached_images', data)
        cached_images = data['cached_images']
        self.assertEqual(1, len(cached_images))
        self.assertIn(ids[0], [r['image_id'] for r in data['cached_images']])

        self.stop_servers()


class TestImageCacheXattr(functional.FunctionalTest,
                          BaseCacheMiddlewareTest):

    """
    Functional tests that exercise the image cache using the xattr driver
    """

    def setUp(self):
        """
        Test to see if the pre-requisites for the image cache are working
        (python-xattr installed and xattr support on the filesystem)
        """
        if getattr(self, 'disabled', False):
            return

        if not getattr(self, 'inited', False):
            try:
                import xattr  # noqa
            except ImportError:
                self.inited = True
                self.disabled = True
                self.disabled_message = ("python-xattr not installed.")
                return

        self.inited = True
        self.disabled = False
        self.image_cache_driver = "xattr"

        super(TestImageCacheXattr, self).setUp()

        self.api_server.deployment_flavor = "caching"

        if not xattr_writes_supported(self.test_dir):
            self.inited = True
            self.disabled = True
            self.disabled_message = ("filesystem does not support xattr")
            return

    def tearDown(self):
        super(TestImageCacheXattr, self).tearDown()
        if os.path.exists(self.api_server.image_cache_dir):
            shutil.rmtree(self.api_server.image_cache_dir)


class TestImageCacheManageXattr(functional.FunctionalTest,
                                BaseCacheManageMiddlewareTest):

    """
    Functional tests that exercise the image cache management
    with the Xattr cache driver
    """

    def setUp(self):
        """
        Test to see if the pre-requisites for the image cache are working
        (python-xattr installed and xattr support on the filesystem)
        """
        if getattr(self, 'disabled', False):
            return

        if not getattr(self, 'inited', False):
            try:
                import xattr  # noqa
            except ImportError:
                self.inited = True
                self.disabled = True
                self.disabled_message = ("python-xattr not installed.")
                return

        self.inited = True
        self.disabled = False
        self.image_cache_driver = "xattr"

        super(TestImageCacheManageXattr, self).setUp()

        self.api_server.deployment_flavor = "cachemanagement"

        if not xattr_writes_supported(self.test_dir):
            self.inited = True
            self.disabled = True
            self.disabled_message = ("filesystem does not support xattr")
            return

    def tearDown(self):
        super(TestImageCacheManageXattr, self).tearDown()
        if os.path.exists(self.api_server.image_cache_dir):
            shutil.rmtree(self.api_server.image_cache_dir)


class TestImageCacheSqlite(functional.FunctionalTest,
                           BaseCacheMiddlewareTest):

    """
    Functional tests that exercise the image cache using the SQLite driver
    """

    def setUp(self):
        """
        Test to see if the pre-requisites for the image cache are working
        (python-xattr installed and xattr support on the filesystem)
        """
        if getattr(self, 'disabled', False):
            return

        if not getattr(self, 'inited', False):
            try:
                import sqlite3  # noqa
            except ImportError:
                self.inited = True
                self.disabled = True
                self.disabled_message = ("python-sqlite3 not installed.")
                return

        self.inited = True
        self.disabled = False

        super(TestImageCacheSqlite, self).setUp()

        self.api_server.deployment_flavor = "caching"

    def tearDown(self):
        super(TestImageCacheSqlite, self).tearDown()
        if os.path.exists(self.api_server.image_cache_dir):
            shutil.rmtree(self.api_server.image_cache_dir)


class TestImageCacheManageSqlite(functional.FunctionalTest,
                                 BaseCacheManageMiddlewareTest):

    """
    Functional tests that exercise the image cache management using the
    SQLite driver
    """

    def setUp(self):
        """
        Test to see if the pre-requisites for the image cache are working
        (python-xattr installed and xattr support on the filesystem)
        """
        if getattr(self, 'disabled', False):
            return

        if not getattr(self, 'inited', False):
            try:
                import sqlite3  # noqa
            except ImportError:
                self.inited = True
                self.disabled = True
                self.disabled_message = ("python-sqlite3 not installed.")
                return

        self.inited = True
        self.disabled = False
        self.image_cache_driver = "sqlite"

        super(TestImageCacheManageSqlite, self).setUp()

        self.api_server.deployment_flavor = "cachemanagement"

    def tearDown(self):
        super(TestImageCacheManageSqlite, self).tearDown()
        if os.path.exists(self.api_server.image_cache_dir):
            shutil.rmtree(self.api_server.image_cache_dir)

glance-16.0.0/glance/tests/functional/v2/test_tasks.py

# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
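The task-creation request exercised by the lifecycle test below POSTs a JSON body of type ``import`` to ``/v2/tasks``. A minimal standalone sketch of that body follows; the helper name is an illustrative assumption, but the payload shape is copied from the functional test itself:

```python
import json

# Sketch of the body POSTed to /v2/tasks in test_task_lifecycle below.
# The helper name is hypothetical; the payload shape mirrors the test.
def build_import_task_body(import_from, import_from_format,
                           disk_format, container_format):
    """Serialize an 'import'-type task request body."""
    return json.dumps({
        "type": "import",
        "input": {
            "import_from": import_from,
            "import_from_format": import_from_format,
            "image_properties": {
                "disk_format": disk_format,
                "container_format": container_format,
            },
        },
    })

body = build_import_task_body("http://example.com", "qcow2", "vhd", "ovf")
task = json.loads(body)
print(task["type"])
```

The server is expected to generate the remaining task fields (``id``, ``status``, ``owner``, timestamps) on creation, which is what the test asserts against.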
import uuid

from oslo_serialization import jsonutils
import requests
from six.moves import http_client as http

from glance.tests import functional

TENANT1 = str(uuid.uuid4())
TENANT2 = str(uuid.uuid4())
TENANT3 = str(uuid.uuid4())
TENANT4 = str(uuid.uuid4())


class TestTasks(functional.FunctionalTest):

    def setUp(self):
        super(TestTasks, self).setUp()
        self.cleanup()
        self.api_server.deployment_flavor = 'noauth'

    def _url(self, path):
        return 'http://127.0.0.1:%d%s' % (self.api_port, path)

    def _headers(self, custom_headers=None):
        base_headers = {
            'X-Identity-Status': 'Confirmed',
            'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
            'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
            'X-Tenant-Id': TENANT1,
            'X-Roles': 'admin',
        }
        base_headers.update(custom_headers or {})
        return base_headers

    def test_task_not_allowed_non_admin(self):
        self.start_servers(**self.__dict__.copy())
        roles = {'X-Roles': 'member'}
        # Task list should be empty
        path = self._url('/v2/tasks')
        response = requests.get(path, headers=self._headers(roles))
        self.assertEqual(http.FORBIDDEN, response.status_code)

    def test_task_lifecycle(self):
        self.start_servers(**self.__dict__.copy())
        # Task list should be empty
        path = self._url('/v2/tasks')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tasks = jsonutils.loads(response.text)['tasks']
        self.assertEqual(0, len(tasks))

        # Create a task
        path = self._url('/v2/tasks')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({
            "type": "import",
            "input": {
                "import_from": "http://example.com",
                "import_from_format": "qcow2",
                "image_properties": {
                    'disk_format': 'vhd',
                    'container_format': 'ovf'
                }
            }
        })
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Returned task entity should have a generated id and status
        task = jsonutils.loads(response.text)
        task_id = task['id']
        self.assertIn('Location', response.headers)
        self.assertEqual(path + '/' + task_id, response.headers['Location'])
        checked_keys = set([u'created_at',
                            u'id',
                            u'input',
                            u'message',
                            u'owner',
                            u'schema',
                            u'self',
                            u'status',
                            u'type',
                            u'result',
                            u'updated_at'])
        self.assertEqual(checked_keys, set(task.keys()))
        expected_task = {
            'status': 'pending',
            'type': 'import',
            'input': {
                "import_from": "http://example.com",
                "import_from_format": "qcow2",
                "image_properties": {
                    'disk_format': 'vhd',
                    'container_format': 'ovf'
                }},
            'schema': '/v2/schemas/task',
        }
        for key, value in expected_task.items():
            self.assertEqual(value, task[key], key)

        # Tasks list should now have one entry
        path = self._url('/v2/tasks')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tasks = jsonutils.loads(response.text)['tasks']
        self.assertEqual(1, len(tasks))
        self.assertEqual(task_id, tasks[0]['id'])

        # Attempt to delete a task
        path = self._url('/v2/tasks/%s' % tasks[0]['id'])
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.METHOD_NOT_ALLOWED, response.status_code)
        self.assertIsNotNone(response.headers.get('Allow'))
        self.assertEqual('GET', response.headers.get('Allow'))

        self.stop_servers()


class TestTasksWithRegistry(TestTasks):
    def setUp(self):
        super(TestTasksWithRegistry, self).setUp()
        self.api_server.data_api = (
            'glance.tests.functional.v2.registry_data_api')
        self.registry_server.deployment_flavor = 'trusted-auth'
        self.include_scrubber = False

glance-16.0.0/glance/tests/functional/v2/test_schemas.py

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_serialization import jsonutils
import requests
from six.moves import http_client as http

from glance.tests import functional


class TestSchemas(functional.FunctionalTest):

    def setUp(self):
        super(TestSchemas, self).setUp()
        self.cleanup()
        self.start_servers(**self.__dict__.copy())

    def test_resource(self):
        # Ensure the image link works and custom properties are loaded
        path = 'http://%s:%d/v2/schemas/image' % ('127.0.0.1', self.api_port)
        response = requests.get(path)
        self.assertEqual(http.OK, response.status_code)
        image_schema = jsonutils.loads(response.text)
        expected = set([
            'id',
            'name',
            'visibility',
            'checksum',
            'created_at',
            'updated_at',
            'tags',
            'size',
            'virtual_size',
            'owner',
            'container_format',
            'disk_format',
            'self',
            'file',
            'status',
            'schema',
            'direct_url',
            'locations',
            'min_ram',
            'min_disk',
            'protected',
        ])
        self.assertEqual(expected, set(image_schema['properties'].keys()))

        # Ensure the images link works and agrees with the image schema
        path = 'http://%s:%d/v2/schemas/images' % ('127.0.0.1', self.api_port)
        response = requests.get(path)
        self.assertEqual(http.OK, response.status_code)
        images_schema = jsonutils.loads(response.text)
        item_schema = images_schema['properties']['images']['items']
        self.assertEqual(item_schema, image_schema)

        self.stop_servers()

glance-16.0.0/glance/tests/functional/v2/test_metadef_namespaces.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import uuid

from oslo_serialization import jsonutils
import requests
from six.moves import http_client as http

from glance.tests import functional

TENANT1 = str(uuid.uuid4())
TENANT2 = str(uuid.uuid4())


class TestNamespaces(functional.FunctionalTest):

    def setUp(self):
        super(TestNamespaces, self).setUp()
        self.cleanup()
        self.api_server.deployment_flavor = 'noauth'
        self.start_servers(**self.__dict__.copy())

    def _url(self, path):
        return 'http://127.0.0.1:%d%s' % (self.api_port, path)

    def _headers(self, custom_headers=None):
        base_headers = {
            'X-Identity-Status': 'Confirmed',
            'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
            'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
            'X-Tenant-Id': TENANT1,
            'X-Roles': 'admin',
        }
        base_headers.update(custom_headers or {})
        return base_headers

    def test_namespace_lifecycle(self):
        # Namespace should not exist
        path = self._url('/v2/metadefs/namespaces/MyNamespace')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # Create a namespace
        path = self._url('/v2/metadefs/namespaces')
        headers = self._headers({'content-type': 'application/json'})
        namespace_name = 'MyNamespace'
        data = jsonutils.dumps({
            "namespace": namespace_name,
            "display_name": "My User Friendly Namespace",
            "description": "My description"
        })
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        namespace_loc_header = response.headers['Location']

        # Returned namespace should match the created namespace with default
        # values of visibility=private, protected=False and owner=Context
        # Tenant
        namespace = jsonutils.loads(response.text)
        checked_keys = set([
            u'namespace',
            u'display_name',
            u'description',
            u'visibility',
            u'self',
            u'schema',
            u'protected',
            u'owner',
            u'created_at',
            u'updated_at'
        ])
        self.assertEqual(set(namespace.keys()), checked_keys)
        expected_namespace = {
            "namespace": namespace_name,
            "display_name": "My User Friendly Namespace",
            "description": "My description",
            "visibility": "private",
            "protected": False,
            "owner": TENANT1,
            "self": "/v2/metadefs/namespaces/%s" % namespace_name,
            "schema": "/v2/schemas/metadefs/namespace"
        }
        for key, value in expected_namespace.items():
            self.assertEqual(namespace[key], value, key)

        # Attempt to insert a duplicate
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CONFLICT, response.status_code)

        # Get the namespace using the returned Location header
        response = requests.get(namespace_loc_header, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        namespace = jsonutils.loads(response.text)
        self.assertEqual(namespace_name, namespace['namespace'])
        self.assertNotIn('object', namespace)
        self.assertEqual(TENANT1, namespace['owner'])
        self.assertEqual('private', namespace['visibility'])
        self.assertFalse(namespace['protected'])

        # The namespace should be mutable
        path = self._url('/v2/metadefs/namespaces/%s' % namespace_name)
        media_type = 'application/json'
        headers = self._headers({'content-type': media_type})
        namespace_name = "MyNamespace-UPDATED"
        data = jsonutils.dumps({
            "namespace": namespace_name,
            "display_name": "display_name-UPDATED",
            "description": "description-UPDATED",
            "visibility": "private",  # Not changed
            "protected": True,
            "owner": TENANT2
        })
        response = requests.put(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code, response.text)

        # Returned namespace should reflect the changes
        namespace = jsonutils.loads(response.text)
        self.assertEqual('MyNamespace-UPDATED', namespace_name)
        self.assertEqual('display_name-UPDATED', namespace['display_name'])
        self.assertEqual('description-UPDATED', namespace['description'])
        self.assertEqual('private', namespace['visibility'])
        self.assertTrue(namespace['protected'])
        self.assertEqual(TENANT2, namespace['owner'])

        # Updates should persist across requests
        path = self._url('/v2/metadefs/namespaces/%s' % namespace_name)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        namespace = jsonutils.loads(response.text)
        self.assertEqual('MyNamespace-UPDATED', namespace['namespace'])
        self.assertEqual('display_name-UPDATED', namespace['display_name'])
        self.assertEqual('description-UPDATED', namespace['description'])
        self.assertEqual('private', namespace['visibility'])
        self.assertTrue(namespace['protected'])
        self.assertEqual(TENANT2, namespace['owner'])

        # Deletion should not work on protected namespaces
        path = self._url('/v2/metadefs/namespaces/%s' % namespace_name)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.FORBIDDEN, response.status_code)

        # Unprotect namespace for deletion
        path = self._url('/v2/metadefs/namespaces/%s' % namespace_name)
        media_type = 'application/json'
        headers = self._headers({'content-type': media_type})
        doc = {
            "namespace": namespace_name,
            "display_name": "My User Friendly Namespace",
            "description": "My description",
            "visibility": "public",
            "protected": False,
            "owner": TENANT2
        }
        data = jsonutils.dumps(doc)
        response = requests.put(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code, response.text)

        # Deletion should work. Deleting namespace MyNamespace
        path = self._url('/v2/metadefs/namespaces/%s' % namespace_name)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Namespace should not exist
        path = self._url('/v2/metadefs/namespaces/MyNamespace')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

    def test_metadef_dont_accept_illegal_bodies(self):
        # Namespace should not exist
        path = self._url('/v2/metadefs/namespaces/bodytest')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # Create a namespace
        path = self._url('/v2/metadefs/namespaces')
        headers = self._headers({'content-type': 'application/json'})
        namespace_name = 'bodytest'
        data = jsonutils.dumps({
            "namespace": namespace_name,
            "display_name": "My User Friendly Namespace",
            "description": "My description"
        })
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Test all the urls that supply data
        data_urls = [
            '/v2/schemas/metadefs/namespace',
            '/v2/schemas/metadefs/namespaces',
            '/v2/schemas/metadefs/resource_type',
            '/v2/schemas/metadefs/resource_types',
            '/v2/schemas/metadefs/property',
            '/v2/schemas/metadefs/properties',
            '/v2/schemas/metadefs/object',
            '/v2/schemas/metadefs/objects',
            '/v2/schemas/metadefs/tag',
            '/v2/schemas/metadefs/tags',
            '/v2/metadefs/resource_types',
        ]
        for value in data_urls:
            path = self._url(value)
            data = jsonutils.dumps(["body"])
            response = requests.get(path, headers=self._headers(), data=data)
            self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Put the namespace into the url
        test_urls = [
            ('/v2/metadefs/namespaces/%s/resource_types', 'get'),
            ('/v2/metadefs/namespaces/%s/resource_types/type', 'delete'),
            ('/v2/metadefs/namespaces/%s', 'get'),
            ('/v2/metadefs/namespaces/%s', 'delete'),
            ('/v2/metadefs/namespaces/%s/objects/name', 'get'),
            ('/v2/metadefs/namespaces/%s/objects/name', 'delete'),
            ('/v2/metadefs/namespaces/%s/properties', 'get'),
            ('/v2/metadefs/namespaces/%s/tags/test', 'get'),
            ('/v2/metadefs/namespaces/%s/tags/test', 'post'),
            ('/v2/metadefs/namespaces/%s/tags/test', 'delete'),
        ]
        for link, method in test_urls:
            path = self._url(link % namespace_name)
            data = jsonutils.dumps(["body"])
            response = getattr(requests, method)(
                path, headers=self._headers(), data=data)
            self.assertEqual(http.BAD_REQUEST, response.status_code)

glance-16.0.0/glance/tests/functional/v2/test_images.py

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
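The image-lifecycle test in this module uploads the 5-byte payload ``'ZZZZZ'`` and expects the checksum ``8f113e38d28a79a5a451b16048cc2b72`` back in the ``Content-MD5`` header. That value can be reproduced standalone, since Glance populates ``checksum`` with the MD5 digest of the stored bytes; the helper name here is an illustrative assumption:

```python
import hashlib

# Reproduce the checksum test_image_lifecycle expects for the payload
# 'ZZZZZ'. The helper name is hypothetical; the digest value comes from
# the functional test itself.
def image_checksum(data):
    """Return the hex MD5 digest Glance reports for uploaded bytes."""
    return hashlib.md5(data).hexdigest()

expected_checksum = '8f113e38d28a79a5a451b16048cc2b72'
assert image_checksum(b'ZZZZZ') == expected_checksum
```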
import os
import signal
import uuid

from oslo_serialization import jsonutils
import requests
import six
from six.moves import http_client as http
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
from six.moves import urllib

from glance.tests import functional
from glance.tests import utils as test_utils

TENANT1 = str(uuid.uuid4())
TENANT2 = str(uuid.uuid4())
TENANT3 = str(uuid.uuid4())
TENANT4 = str(uuid.uuid4())


class TestImages(functional.FunctionalTest):

    def setUp(self):
        super(TestImages, self).setUp()
        self.cleanup()
        self.include_scrubber = False
        self.api_server.deployment_flavor = 'noauth'
        self.api_server.data_api = 'glance.db.sqlalchemy.api'
        for i in range(3):
            ret = test_utils.start_http_server("foo_image_id%d" % i,
                                               "foo_image%d" % i)
            setattr(self, 'http_server%d_pid' % i, ret[0])
            setattr(self, 'http_port%d' % i, ret[1])

    def tearDown(self):
        for i in range(3):
            pid = getattr(self, 'http_server%d_pid' % i, None)
            if pid:
                os.kill(pid, signal.SIGKILL)

        super(TestImages, self).tearDown()

    def _url(self, path):
        return 'http://127.0.0.1:%d%s' % (self.api_port, path)

    def _headers(self, custom_headers=None):
        base_headers = {
            'X-Identity-Status': 'Confirmed',
            'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
            'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
            'X-Tenant-Id': TENANT1,
            'X-Roles': 'member',
        }
        base_headers.update(custom_headers or {})
        return base_headers

    def test_v1_none_properties_v2(self):
        self.api_server.deployment_flavor = 'noauth'
        self.api_server.use_user_token = True
        self.api_server.send_identity_credentials = True
        self.registry_server.deployment_flavor = ''
        # Image list should be empty
        self.start_servers(**self.__dict__.copy())

        # Create an image (with two deployer-defined properties)
        path = self._url('/v1/images')
        headers = self._headers({'content-type': 'application/octet-stream'})
        headers.update(test_utils.minimal_headers('image-1'))
        # NOTE(flaper87): Sending empty string, the server will use None
        headers['x-image-meta-property-my_empty_prop'] = ''
        response = requests.post(path, headers=headers)
        self.assertEqual(http.CREATED, response.status_code)
        data = jsonutils.loads(response.text)
        image_id = data['image']['id']

        # NOTE(flaper87): Get the image using V2 and verify
        # the returned value for `my_empty_prop` is an empty
        # string.
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)
        self.assertEqual('', image['my_empty_prop'])
        self.stop_servers()

    def test_not_authenticated_in_registry_on_ops(self):
        # https://bugs.launchpad.net/glance/+bug/1451850
        # this configuration guarantees that authentication succeeds in
        # glance-api and fails in glance-registry if no token is passed
        self.api_server.deployment_flavor = ''
        # make sure that request will reach registry
        self.api_server.data_api = 'glance.db.registry.api'
        self.registry_server.deployment_flavor = 'fakeauth'
        self.start_servers(**self.__dict__.copy())
        headers = {'content-type': 'application/json'}
        image = {'name': 'image', 'type': 'kernel', 'disk_format': 'qcow2',
                 'container_format': 'bare'}
        # image create should return 401
        response = requests.post(self._url('/v2/images'), headers=headers,
                                 data=jsonutils.dumps(image))
        self.assertEqual(http.UNAUTHORIZED, response.status_code)
        # image list should return 401
        response = requests.get(self._url('/v2/images'))
        self.assertEqual(http.UNAUTHORIZED, response.status_code)
        # image show should return 401
        response = requests.get(self._url('/v2/images/someimageid'))
        self.assertEqual(http.UNAUTHORIZED, response.status_code)
        # image update should return 401
        ops = [{'op': 'replace', 'path': '/protected', 'value': False}]
        media_type = 'application/openstack-images-v2.1-json-patch'
        response = requests.patch(self._url('/v2/images/someimageid'),
                                  headers={'content-type': media_type},
                                  data=jsonutils.dumps(ops))
        self.assertEqual(http.UNAUTHORIZED, response.status_code)
        # image delete should return 401
        response = requests.delete(self._url('/v2/images/someimageid'))
        self.assertEqual(http.UNAUTHORIZED, response.status_code)
        self.stop_servers()

    def test_image_lifecycle(self):
        # Image list should be empty
        self.api_server.show_multiple_locations = True
        self.start_servers(**self.__dict__.copy())
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(0, len(images))

        # Create an image (with two deployer-defined properties)
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel',
                                'foo': 'bar', 'disk_format': 'aki',
                                'container_format': 'aki', 'abc': 'xyz',
                                'protected': True})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        image_location_header = response.headers['Location']

        # Returned image entity should have a generated id and status
        image = jsonutils.loads(response.text)
        image_id = image['id']
        checked_keys = set([
            u'status', u'name', u'tags', u'created_at', u'updated_at',
            u'visibility', u'self', u'protected', u'id', u'file',
            u'min_disk', u'foo', u'abc', u'type', u'min_ram', u'schema',
            u'disk_format', u'container_format', u'owner', u'checksum',
            u'size', u'virtual_size', u'locations',
        ])
        self.assertEqual(checked_keys, set(image.keys()))
        expected_image = {
            'status': 'queued',
            'name': 'image-1',
            'tags': [],
            'visibility': 'shared',
            'self': '/v2/images/%s' % image_id,
            'protected': True,
            'file': '/v2/images/%s/file' % image_id,
            'min_disk': 0,
            'foo': 'bar',
            'abc': 'xyz',
            'type': 'kernel',
            'min_ram': 0,
            'schema': '/v2/schemas/image',
        }
        for key, value in expected_image.items():
            self.assertEqual(value, image[key], key)

        # Image list should now have one entry
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual(image_id, images[0]['id'])

        # Create another image (with two deployer-defined properties)
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-2', 'type': 'kernel',
                                'bar': 'foo', 'disk_format': 'aki',
                                'container_format': 'aki', 'xyz': 'abc'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Returned image entity should have a generated id and status
        image = jsonutils.loads(response.text)
        image2_id = image['id']
        checked_keys = set([
            u'status', u'name', u'tags', u'created_at', u'updated_at',
            u'visibility', u'self', u'protected', u'id', u'file',
            u'min_disk', u'bar', u'xyz', u'type', u'min_ram', u'schema',
            u'disk_format', u'container_format', u'owner', u'checksum',
            u'size', u'virtual_size', u'locations',
        ])
        self.assertEqual(checked_keys, set(image.keys()))
        expected_image = {
            'status': 'queued',
            'name': 'image-2',
            'tags': [],
            'visibility': 'shared',
            'self': '/v2/images/%s' % image2_id,
            'protected': False,
            'file': '/v2/images/%s/file' % image2_id,
            'min_disk': 0,
            'bar': 'foo',
            'xyz': 'abc',
            'type': 'kernel',
            'min_ram': 0,
            'schema': '/v2/schemas/image',
        }
        for key, value in expected_image.items():
            self.assertEqual(value, image[key], key)

        # Image list should now have two entries
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(2, len(images))
        self.assertEqual(image2_id, images[0]['id'])
        self.assertEqual(image_id, images[1]['id'])

        # Image list should list only image-2 as image-1 doesn't contain the
        # property 'bar'
        path = self._url('/v2/images?bar=foo')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual(image2_id, images[0]['id'])

        # Image list should list only image-1 as image-2 doesn't contain the
        # property 'foo'
        path = self._url('/v2/images?foo=bar')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual(image_id, images[0]['id'])

        # The "changes-since" filter shouldn't work on glance v2
        path = self._url('/v2/images?changes-since=20001007T10:10:10')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        path = self._url('/v2/images?changes-since=aaa')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Image list should list only image-1 based on the filter
        # 'protected=true'
        path = self._url('/v2/images?protected=true')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual(image_id, images[0]['id'])

        # Image list should list only image-2 based on the filter
        # 'protected=false'
        path = self._url('/v2/images?protected=false')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual(image2_id, images[0]['id'])

        # Image list should return 400 based on the filter
        # 'protected=False'
        path = self._url('/v2/images?protected=False')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Image list should list only image-1 based on the filter
        # 'foo=bar&abc=xyz'
        path = self._url('/v2/images?foo=bar&abc=xyz')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual(image_id, images[0]['id'])

        # Image list should list only image-2 based on the filter
        # 'bar=foo&xyz=abc'
        path = self._url('/v2/images?bar=foo&xyz=abc')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual(image2_id, images[0]['id'])

        # Image list should not list anything as the filter 'foo=baz&abc=xyz'
        # is not satisfied by either images
        path = self._url('/v2/images?foo=baz&abc=xyz')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(0, len(images))

        # Get the image using the returned Location header
        response = requests.get(image_location_header,
                                headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)
        self.assertEqual(image_id, image['id'])
        self.assertIsNone(image['checksum'])
        self.assertIsNone(image['size'])
        self.assertIsNone(image['virtual_size'])
        self.assertEqual('bar', image['foo'])
        self.assertTrue(image['protected'])
        self.assertEqual('kernel', image['type'])
        self.assertTrue(image['created_at'])
        self.assertTrue(image['updated_at'])
        self.assertEqual(image['updated_at'], image['created_at'])

        # The URI file:// should return a 400 rather than a 500
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        url = ('file://')
        changes = [{
            'op': 'add',
            'path': '/locations/-',
            'value': {
                'url': url,
                'metadata': {}
            }
        }]
        data = jsonutils.dumps(changes)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.BAD_REQUEST, response.status_code,
                         response.text)

        # The image should be mutable, including adding and removing
        # properties
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        data = jsonutils.dumps([
            {'op': 'replace', 'path': '/name', 'value': 'image-2'},
            {'op': 'replace', 'path': '/disk_format', 'value': 'vhd'},
            {'op': 'replace', 'path': '/container_format', 'value': 'ami'},
            {'op': 'replace', 'path': '/foo', 'value': 'baz'},
            {'op': 'add', 'path': '/ping', 'value': 'pong'},
            {'op': 'replace', 'path': '/protected', 'value': True},
            {'op': 'remove', 'path': '/type'},
        ])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code, response.text)

        # Returned image entity should reflect the changes
        image = jsonutils.loads(response.text)
        self.assertEqual('image-2', image['name'])
        self.assertEqual('vhd', image['disk_format'])
        self.assertEqual('baz', image['foo'])
        self.assertEqual('pong', image['ping'])
        self.assertTrue(image['protected'])
        self.assertNotIn('type', image, response.text)

        # Adding 11 image properties should fail since configured limit is 10
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        changes = []
        for i in range(11):
            changes.append({'op': 'add',
                            'path': '/ping%i' % i,
                            'value': 'pong'})
        data = jsonutils.dumps(changes)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.REQUEST_ENTITY_TOO_LARGE, response.status_code,
                         response.text)

        # Adding 3 image locations should fail since configured limit is 2
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        changes = []
        for i in range(3):
            url = ('http://127.0.0.1:%s/foo_image' %
                   getattr(self, 'http_port%d' % i))
            changes.append({'op': 'add', 'path': '/locations/-',
                            'value': {'url': url, 'metadata': {}},
                            })
        data = jsonutils.dumps(changes)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.REQUEST_ENTITY_TOO_LARGE, response.status_code,
                         response.text)

        # Ensure the v2.0 json-patch content type is accepted
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.0-json-patch'
        headers = self._headers({'content-type': media_type})
        data = jsonutils.dumps([{'add': '/ding', 'value': 'dong'}])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code, response.text)

        # Returned image entity should reflect the changes
        image = jsonutils.loads(response.text)
        self.assertEqual('dong', image['ding'])

        # Updates should persist across requests
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)
        self.assertEqual(image_id, image['id'])
        self.assertEqual('image-2', image['name'])
        self.assertEqual('baz', image['foo'])
        self.assertEqual('pong', image['ping'])
        self.assertTrue(image['protected'])
        self.assertNotIn('type', image, response.text)

        # Try to download data before its uploaded
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers()
        response = requests.get(path, headers=headers)
        self.assertEqual(http.NO_CONTENT, response.status_code)

        def _verify_image_checksum_and_status(checksum, status):
            # Checksum should be populated and status should be active
            path = self._url('/v2/images/%s' % image_id)
            response = requests.get(path, headers=self._headers())
            self.assertEqual(http.OK, response.status_code)
            image = jsonutils.loads(response.text)
            self.assertEqual(checksum, image['checksum'])
            self.assertEqual(status, image['status'])

        # Upload some image data
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})
        response = requests.put(path, headers=headers, data='ZZZZZ')
        self.assertEqual(http.NO_CONTENT, response.status_code)
        expected_checksum = '8f113e38d28a79a5a451b16048cc2b72'
        _verify_image_checksum_and_status(expected_checksum, 'active')

        # `disk_format` and `container_format` cannot
        # be replaced when the image is active.
        immutable_paths = ['/disk_format', '/container_format']
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        path = self._url('/v2/images/%s' % image_id)
        for immutable_path in immutable_paths:
            data = jsonutils.dumps([
                {'op': 'replace', 'path': immutable_path, 'value': 'ari'},
            ])
            response = requests.patch(path, headers=headers, data=data)
            self.assertEqual(http.FORBIDDEN, response.status_code)

        # Try to download the data that was just uploaded
        path = self._url('/v2/images/%s/file' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        self.assertEqual(expected_checksum, response.headers['Content-MD5'])
        self.assertEqual('ZZZZZ', response.text)

        # Uploading duplicate data should be rejected with a 409. The
        # original data should remain untouched.
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})
        response = requests.put(path, headers=headers, data='XXX')
        self.assertEqual(http.CONFLICT, response.status_code)
        _verify_image_checksum_and_status(expected_checksum, 'active')

        # Ensure the size is updated to reflect the data uploaded
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        self.assertEqual(5, jsonutils.loads(response.text)['size'])

        # Should be able to deactivate image
        path = self._url('/v2/images/%s/actions/deactivate' % image_id)
        response = requests.post(path, data={}, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Change the image to public so TENANT2 can see it
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.0-json-patch'
        headers = self._headers({'content-type': media_type})
        data = jsonutils.dumps([{"replace": "/visibility", "value": "public"}])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code, response.text)

        # Tenant2 should get Forbidden when deactivating the public image
        path = self._url('/v2/images/%s/actions/deactivate' % image_id)
        response = requests.post(path, data={}, headers=self._headers(
            {'X-Tenant-Id': TENANT2}))
        self.assertEqual(http.FORBIDDEN, response.status_code)

        # Tenant2 should get Forbidden when reactivating the public image
        path = self._url('/v2/images/%s/actions/reactivate' % image_id)
        response = requests.post(path, data={}, headers=self._headers(
            {'X-Tenant-Id': TENANT2}))
        self.assertEqual(http.FORBIDDEN, response.status_code)

        # Deactivating a deactivated image succeeds (no-op)
        path = self._url('/v2/images/%s/actions/deactivate' % image_id)
        response = requests.post(path, data={}, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Can't download a deactivated image
        path = self._url('/v2/images/%s/file' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.FORBIDDEN, response.status_code)

        # Deactivated image should still be in a listing
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(2, len(images))
        self.assertEqual(image2_id, images[0]['id'])
        self.assertEqual(image_id, images[1]['id'])

        # Should be able to reactivate a deactivated image
        path = self._url('/v2/images/%s/actions/reactivate' % image_id)
        response = requests.post(path, data={}, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Reactivating an active image succeeds (no-op)
        path = self._url('/v2/images/%s/actions/reactivate' % image_id)
        response = requests.post(path, data={}, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Deletion should not work on protected images
        path = self._url('/v2/images/%s' % image_id)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.FORBIDDEN, response.status_code)

        # Unprotect image for deletion
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        doc = [{'op': 'replace', 'path': '/protected', 'value': False}]
        data = jsonutils.dumps(doc)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code, response.text)

        # Deletion should work. Delete image-1
        path = self._url('/v2/images/%s' % image_id)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # This image should no longer be directly accessible
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # And neither should its data
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers()
        response = requests.get(path, headers=headers)
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # Image list should now contain just image-2
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual(image2_id, images[0]['id'])

        # Deleting image-2 should work
        path = self._url('/v2/images/%s' % image2_id)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Image list should now be empty
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(0, len(images))

        # Creating an image with body `true` should return 400
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = 'true'
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Creating an image with a bare string body should return 400
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = '"hello"'
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Creating an image with a bare number body should return 400
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = '123'
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        self.stop_servers()

    def test_update_readonly_prop(self):
        self.start_servers(**self.__dict__.copy())

        # Create an image (with two deployer-defined properties)
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-1'})
        response = requests.post(path, headers=headers, data=data)
        image = jsonutils.loads(response.text)
        image_id = image['id']

        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})

        props = ['/id', '/file', '/location', '/schema', '/self']

        for prop in props:
            doc = [{'op': 'replace', 'path': prop, 'value': 'value1'}]
            data = jsonutils.dumps(doc)
            response = requests.patch(path, headers=headers, data=data)
            self.assertEqual(http.FORBIDDEN, response.status_code)

        for prop in props:
            doc = [{'op': 'remove', 'path': prop, 'value': 'value1'}]
            data = jsonutils.dumps(doc)
            response = requests.patch(path, headers=headers, data=data)
            self.assertEqual(http.FORBIDDEN, response.status_code)

        for prop in props:
            doc = [{'op': 'add', 'path': prop, 'value': 'value1'}]
            data = jsonutils.dumps(doc)
            response = requests.patch(path, headers=headers, data=data)
            self.assertEqual(http.FORBIDDEN, response.status_code)

        self.stop_servers()

    def test_methods_that_dont_accept_illegal_bodies(self):
        # Check images can be reached
        self.start_servers(**self.__dict__.copy())
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)

        # Test all the schemas
        schema_urls = [
            '/v2/schemas/images',
            '/v2/schemas/image',
            '/v2/schemas/members',
            '/v2/schemas/member',
        ]
        for value in schema_urls:
            path = self._url(value)
            data = jsonutils.dumps(["body"])
            response = requests.get(path, headers=self._headers(), data=data)
            self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Create image for use with tests
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        image = jsonutils.loads(response.text)
        image_id = image['id']

        test_urls = [
            ('/v2/images/%s', 'get'),
            ('/v2/images/%s/actions/deactivate', 'post'),
            ('/v2/images/%s/actions/reactivate', 'post'),
            ('/v2/images/%s/tags/mytag', 'put'),
            ('/v2/images/%s/tags/mytag', 'delete'),
            ('/v2/images/%s/members', 'get'),
            ('/v2/images/%s/file', 'get'),
            ('/v2/images/%s', 'delete'),
        ]

        for link, method in test_urls:
            path = self._url(link % image_id)
            data = jsonutils.dumps(["body"])
            response = getattr(requests, method)(
                path, headers=self._headers(), data=data)
            self.assertEqual(http.BAD_REQUEST, response.status_code)

        # DELETE /images/imgid without legal json
        path = self._url('/v2/images/%s' % image_id)
        data = '{"hello"]'
        response = requests.delete(path, headers=self._headers(), data=data)
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        # POST /images/imgid/members
        path = self._url('/v2/images/%s/members' % image_id)
        data = jsonutils.dumps({'member': TENANT3})
        response = requests.post(path, headers=self._headers(), data=data)
        self.assertEqual(http.OK, response.status_code)

        # GET /images/imgid/members/memid
        path = self._url('/v2/images/%s/members/%s' % (image_id, TENANT3))
        data = jsonutils.dumps(["body"])
        response = requests.get(path, headers=self._headers(), data=data)
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        # DELETE /images/imgid/members/memid
        path = self._url('/v2/images/%s/members/%s' % (image_id, TENANT3))
        data = jsonutils.dumps(["body"])
        response = requests.delete(path, headers=self._headers(), data=data)
        self.assertEqual(http.BAD_REQUEST, response.status_code)
        self.stop_servers()

    def test_download_random_access_w_range_request(self):
        """
        Test partial download 'Range' requests for images (random image
        access)
        """
        self.start_servers(**self.__dict__.copy())
        # Create an image (with two deployer-defined properties)
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-2', 'type': 'kernel',
                                'bar': 'foo', 'disk_format': 'aki',
                                'container_format': 'aki', 'xyz': 'abc'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        image = jsonutils.loads(response.text)
        image_id = image['id']

        # Upload data to image
        image_data = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})
        response = requests.put(path, headers=headers, data=image_data)
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Test for success on a satisfiable Range request
        range_ = 'bytes=3-10'
        headers = self._headers({'Range': range_})
        path = self._url('/v2/images/%s/file' % image_id)
        response = requests.get(path, headers=headers)
        self.assertEqual(http.PARTIAL_CONTENT, response.status_code)
        self.assertEqual('DEFGHIJK', response.text)

        # Test for failure on an unsatisfiable Range request
        range_ = 'bytes=10-5'
        headers = self._headers({'Range': range_})
        path = self._url('/v2/images/%s/file' % image_id)
        response = requests.get(path, headers=headers)
        self.assertEqual(http.REQUESTED_RANGE_NOT_SATISFIABLE,
                         response.status_code)

        self.stop_servers()

    def test_download_random_access_w_content_range(self):
        """
        Even though Content-Range is incorrect on requests, we support it
        for backward compatibility with clients written for pre-Pike
        Glance. The following test covers 'Content-Range' requests, for
        which we must prevent regression.
        """
        self.start_servers(**self.__dict__.copy())
        # Create another image (with two deployer-defined properties)
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-2', 'type': 'kernel',
                                'bar': 'foo', 'disk_format': 'aki',
                                'container_format': 'aki', 'xyz': 'abc'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        image = jsonutils.loads(response.text)
        image_id = image['id']

        # Upload data to image
        image_data = 'Z' * 15
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})
        response = requests.put(path, headers=headers, data=image_data)
        self.assertEqual(http.NO_CONTENT, response.status_code)

        result_body = ''
        for x in range(15):
            # NOTE(flaper87): Read just 1 byte. Content-Range is
            # 0-indexed and it specifies the first byte to read
            # and the last byte to read.
            content_range = 'bytes %s-%s/15' % (x, x)
            headers = self._headers({'Content-Range': content_range})
            path = self._url('/v2/images/%s/file' % image_id)
            response = requests.get(path, headers=headers)
            self.assertEqual(http.PARTIAL_CONTENT, response.status_code)
            result_body += response.text

        self.assertEqual(result_body, image_data)

        # Test for failure on an unsatisfiable Content-Range request
        content_range = 'bytes 3-16/15'
        headers = self._headers({'Content-Range': content_range})
        path = self._url('/v2/images/%s/file' % image_id)
        response = requests.get(path, headers=headers)
        self.assertEqual(http.REQUESTED_RANGE_NOT_SATISFIABLE,
                         response.status_code)

        self.stop_servers()

    def test_download_policy_when_cache_is_not_enabled(self):
        rules = {'context_is_admin': 'role:admin',
                 'default': '',
                 'add_image': '',
                 'get_image': '',
                 'modify_image': '',
                 'upload_image': '',
                 'delete_image': '',
                 'download_image': '!'}

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'member'})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Returned image entity
        image = jsonutils.loads(response.text)
        image_id = image['id']
        expected_image = {
            'status': 'queued',
            'name': 'image-1',
            'tags': [],
            'visibility': 'shared',
            'self': '/v2/images/%s' % image_id,
            'protected': False,
            'file': '/v2/images/%s/file' % image_id,
            'min_disk': 0,
            'min_ram': 0,
            'schema': '/v2/schemas/image',
        }
        for key, value in six.iteritems(expected_image):
            self.assertEqual(value, image[key], key)

        # Upload data to image
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})
        response = requests.put(path, headers=headers, data='ZZZZZ')
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Getting the image file should fail
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})
        response = requests.get(path, headers=headers)
        self.assertEqual(http.FORBIDDEN, response.status_code)

        # Image deletion should work
        path = self._url('/v2/images/%s' % image_id)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # This image should no longer be directly accessible
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        self.stop_servers()

    def test_download_image_not_allowed_using_restricted_policy(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "get_image": "",
            "modify_image": "",
            "upload_image": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted"
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'member'})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Returned image entity
        image = jsonutils.loads(response.text)
        image_id = image['id']
        expected_image = {
            'status': 'queued',
            'name': 'image-1',
            'tags': [],
            'visibility': 'shared',
            'self': '/v2/images/%s' % image_id,
            'protected': False,
            'file': '/v2/images/%s/file' % image_id,
            'min_disk': 0,
            'min_ram': 0,
            'schema': '/v2/schemas/image',
        }
        for key, value in six.iteritems(expected_image):
            self.assertEqual(value, image[key], key)

        # Upload data to image
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})
        response = requests.put(path, headers=headers, data='ZZZZZ')
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Getting the image file should fail
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream',
                                 'X-Roles': '_member_'})
        response = requests.get(path, headers=headers)
        self.assertEqual(http.FORBIDDEN, response.status_code)

        # Image deletion should work
        path = self._url('/v2/images/%s' % image_id)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # This image should no longer be directly accessible
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        self.stop_servers()

    def test_download_image_allowed_using_restricted_policy(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "get_image": "",
            "modify_image": "",
            "upload_image": "",
            "get_image_location": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted"
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'member'})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Returned image entity
        image = jsonutils.loads(response.text)
        image_id = image['id']
        expected_image = {
            'status': 'queued',
            'name': 'image-1',
            'tags': [],
            'visibility': 'shared',
            'self': '/v2/images/%s' % image_id,
            'protected': False,
            'file': '/v2/images/%s/file' % image_id,
            'min_disk': 0,
            'min_ram': 0,
            'schema': '/v2/schemas/image',
        }
        for key, value in six.iteritems(expected_image):
            self.assertEqual(value, image[key], key)

        # Upload data to image
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})
        response = requests.put(path, headers=headers, data='ZZZZZ')
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Getting the image file should be allowed
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream',
                                 'X-Roles': 'member'})
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)

        # Image deletion should work
        path = self._url('/v2/images/%s' % image_id)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # This image should no longer be directly accessible
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        self.stop_servers()

    def test_download_image_raises_service_unavailable(self):
        """Test image download returns HTTPServiceUnavailable."""
        self.api_server.show_multiple_locations = True
        self.start_servers(**self.__dict__.copy())

        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get image id
        image = jsonutils.loads(response.text)
        image_id = image['id']

        # Update image locations via PATCH
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        http_server_pid, http_port = test_utils.start_http_server(image_id,
                                                                  "image-1")
        values = [{'url': 'http://127.0.0.1:%s/image-1' % http_port,
                   'metadata': {'idx': '0'}}]
        doc = [{'op': 'replace',
                'path': '/locations',
                'value': values}]
        data = jsonutils.dumps(doc)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code)

        # Downloading the image should work
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/json'})
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)

        # Stop the http server used to serve the image location
        os.kill(http_server_pid, signal.SIGKILL)

        # Downloading the image should now raise HTTPServiceUnavailable
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/json'})
        response = requests.get(path, headers=headers)
        self.assertEqual(http.SERVICE_UNAVAILABLE, response.status_code)

        # Image deletion should work
        path = self._url('/v2/images/%s' % image_id)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # This image should no longer be directly accessible
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        self.stop_servers()

    def test_image_modification_works_for_owning_tenant_id(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "get_image": "",
            "modify_image": "tenant:%(owner)s",
            "upload_image": "",
            "get_image_location": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted"
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin'})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the image's ID
        image = jsonutils.loads(response.text)
        image_id = image['id']

        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers['content-type'] = media_type
        del headers['X-Roles']
        data = jsonutils.dumps([
            {'op': 'replace', 'path': '/name', 'value': 'new-name'},
        ])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code)
        self.stop_servers()

    def test_image_modification_fails_on_mismatched_tenant_ids(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "get_image": "",
            "modify_image": "'A-Fake-Tenant-Id':%(owner)s",
            "upload_image": "",
            "get_image_location": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted"
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin'})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the image's ID
        image = jsonutils.loads(response.text)
        image_id = image['id']

        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers['content-type'] = media_type
        del headers['X-Roles']
        data = jsonutils.dumps([
            {'op': 'replace', 'path': '/name', 'value': 'new-name'},
        ])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.FORBIDDEN, response.status_code)

        self.stop_servers()

    def test_member_additions_works_for_owning_tenant_id(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "get_image": "",
            "modify_image": "",
            "upload_image": "",
            "get_image_location": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted",
            "add_member": "tenant:%(owner)s",
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin'})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the image's ID
        image = jsonutils.loads(response.text)
        image_id = image['id']

        # Get the image's members resource
        path = self._url('/v2/images/%s/members' % image_id)
        body = jsonutils.dumps({'member': TENANT3})
        del headers['X-Roles']
        response = requests.post(path, headers=headers, data=body)
        self.assertEqual(http.OK, response.status_code)

        self.stop_servers()

    def test_image_additions_works_only_for_specific_tenant_id(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "'{0}':%(owner)s".format(TENANT1),
            "get_image": "",
            "modify_image": "",
            "upload_image": "",
            "get_image_location": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted",
            "add_member": "",
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin', 'X-Tenant-Id': TENANT1})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        headers['X-Tenant-Id'] = TENANT2
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.FORBIDDEN, response.status_code)

        self.stop_servers()

    def test_owning_tenant_id_can_retrieve_image_information(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "get_image": "tenant:%(owner)s",
            "modify_image": "",
            "upload_image": "",
            "get_image_location": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted",
            "add_member": "",
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin', 'X-Tenant-Id': TENANT1})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Remove the admin role
        del headers['X-Roles']

        # Get the image's ID
        image = jsonutils.loads(response.text)
        image_id = image['id']

        # Can retrieve the image as TENANT1
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)

        # Can retrieve the image's members as TENANT1
        path = self._url('/v2/images/%s/members' % image_id)
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)

        headers['X-Tenant-Id'] = TENANT2
        response = requests.get(path, headers=headers)
        self.assertEqual(http.FORBIDDEN, response.status_code)

        self.stop_servers()

    def test_owning_tenant_can_publicize_image(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "publicize_image": "tenant:%(owner)s",
            "get_image": "tenant:%(owner)s",
            "modify_image": "",
            "upload_image": "",
            "get_image_location": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted",
            "add_member": "",
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin', 'X-Tenant-Id': TENANT1})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the image's ID
        image = jsonutils.loads(response.text)
        image_id = image['id']

        path = self._url('/v2/images/%s' % image_id)
        headers = self._headers({
            'Content-Type': 'application/openstack-images-v2.1-json-patch',
            'X-Tenant-Id': TENANT1,
        })
        doc = [{'op': 'replace', 'path': '/visibility', 'value': 'public'}]
        data = jsonutils.dumps(doc)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code)

    def test_owning_tenant_can_communitize_image(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "communitize_image": "tenant:%(owner)s",
            "get_image": "tenant:%(owner)s",
            "modify_image": "",
            "upload_image": "",
            "get_image_location": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted",
            "add_member": "",
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin', 'X-Tenant-Id': TENANT1})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the image's ID
        image = jsonutils.loads(response.text)
        image_id = image['id']

        path = self._url('/v2/images/%s' % image_id)
        headers = self._headers({
            'Content-Type': 'application/openstack-images-v2.1-json-patch',
            'X-Tenant-Id': TENANT1,
        })
        doc = [{'op': 'replace', 'path': '/visibility', 'value': 'community'}]
        data = jsonutils.dumps(doc)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code)

    def test_owning_tenant_can_delete_image(self):
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "publicize_image": "tenant:%(owner)s",
            "get_image": "tenant:%(owner)s",
            "modify_image": "",
            "upload_image": "",
            "get_image_location": "",
            "delete_image": "",
            "restricted":
                "not ('aki':%(container_format)s and role:_member_)",
            "download_image": "role:admin or rule:restricted",
            "add_member": "",
        }

        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin', 'X-Tenant-Id': TENANT1})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the image's ID
        image = jsonutils.loads(response.text)
        image_id = image['id']

        path = self._url('/v2/images/%s' % image_id)
        response = requests.delete(path, headers=headers)
        self.assertEqual(http.NO_CONTENT, response.status_code)

    def test_list_show_ok_when_get_location_allowed_for_admins(self):
        self.api_server.show_image_direct_url = True
        self.api_server.show_multiple_locations = True
        # Set up policy to allow listing locations by admin only
        rules = {
            "context_is_admin": "role:admin",
            "default": "",
            "add_image": "",
            "get_image": "",
            "modify_image": "",
            "upload_image": "",
            "get_image_location": "role:admin",
            "delete_image": "",
            "restricted": "",
            "download_image": "",
            "add_member": "",
        }
        self.set_policy_rules(rules)
        self.start_servers(**self.__dict__.copy())

        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Tenant-Id': TENANT1})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the image's ID
        image = jsonutils.loads(response.text)
        image_id = image['id']

        # Can retrieve the image as TENANT1
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)

        # Can list images as TENANT1
        path = self._url('/v2/images')
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)

        self.stop_servers()

    def test_image_size_cap(self):
        self.api_server.image_size_cap = 128
        self.start_servers(**self.__dict__.copy())

        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-size-cap-test-image',
                                'type': 'kernel', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        image = jsonutils.loads(response.text)
        image_id = image['id']

        # Try to populate it with oversized data
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})

        class StreamSim(object):
            # Using a one-shot iterator to force chunked transfer in the PUT
            # request
            def __init__(self, size):
                self.size = size

            def __iter__(self):
                yield b'Z' * self.size

        response = requests.put(path, headers=headers, data=StreamSim(
            self.api_server.image_size_cap + 1))
        self.assertEqual(http.REQUEST_ENTITY_TOO_LARGE, response.status_code)

        # hashlib.md5('Z'*129).hexdigest()
        #     == '76522d28cb4418f12704dfa7acd6e7ee'
        # If the image has this checksum, it means that the whole stream was
        # accepted and written to the store, which should not be the case.
path = self._url('/v2/images/{0}'.format(image_id)) headers = self._headers({'content-type': 'application/json'}) response = requests.get(path, headers=headers) image_checksum = jsonutils.loads(response.text).get('checksum') self.assertNotEqual(image_checksum, '76522d28cb4418f12704dfa7acd6e7ee') def test_permissions(self): self.start_servers(**self.__dict__.copy()) # Create an image that belongs to TENANT1 path = self._url('/v2/images') headers = self._headers({'Content-Type': 'application/json'}) data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'raw', 'container_format': 'bare'}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image_id = jsonutils.loads(response.text)['id'] # Upload some image data path = self._url('/v2/images/%s/file' % image_id) headers = self._headers({'Content-Type': 'application/octet-stream'}) response = requests.put(path, headers=headers, data='ZZZZZ') self.assertEqual(http.NO_CONTENT, response.status_code) # TENANT1 should see the image in their list path = self._url('/v2/images') response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(image_id, images[0]['id']) # TENANT1 should be able to access the image directly path = self._url('/v2/images/%s' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) # TENANT2 should not see the image in their list path = self._url('/v2/images') headers = self._headers({'X-Tenant-Id': TENANT2}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(0, len(images)) # TENANT2 should not be able to access the image directly path = self._url('/v2/images/%s' % image_id) headers = self._headers({'X-Tenant-Id': TENANT2}) response = requests.get(path, 
headers=headers) self.assertEqual(http.NOT_FOUND, response.status_code) # TENANT2 should not be able to modify the image, either path = self._url('/v2/images/%s' % image_id) headers = self._headers({ 'Content-Type': 'application/openstack-images-v2.1-json-patch', 'X-Tenant-Id': TENANT2, }) doc = [{'op': 'replace', 'path': '/name', 'value': 'image-2'}] data = jsonutils.dumps(doc) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.NOT_FOUND, response.status_code) # TENANT2 should not be able to delete the image, either path = self._url('/v2/images/%s' % image_id) headers = self._headers({'X-Tenant-Id': TENANT2}) response = requests.delete(path, headers=headers) self.assertEqual(http.NOT_FOUND, response.status_code) # Publicize the image as an admin of TENANT1 path = self._url('/v2/images/%s' % image_id) headers = self._headers({ 'Content-Type': 'application/openstack-images-v2.1-json-patch', 'X-Roles': 'admin', }) doc = [{'op': 'replace', 'path': '/visibility', 'value': 'public'}] data = jsonutils.dumps(doc) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code) # TENANT3 should now see the image in their list path = self._url('/v2/images') headers = self._headers({'X-Tenant-Id': TENANT3}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(image_id, images[0]['id']) # TENANT3 should also be able to access the image directly path = self._url('/v2/images/%s' % image_id) headers = self._headers({'X-Tenant-Id': TENANT3}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) # TENANT3 still should not be able to modify the image path = self._url('/v2/images/%s' % image_id) headers = self._headers({ 'Content-Type': 'application/openstack-images-v2.1-json-patch', 'X-Tenant-Id': TENANT3, }) doc = [{'op': 'replace', 'path': '/name', 'value': 
'image-2'}] data = jsonutils.dumps(doc) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code) # TENANT3 should not be able to delete the image, either path = self._url('/v2/images/%s' % image_id) headers = self._headers({'X-Tenant-Id': TENANT3}) response = requests.delete(path, headers=headers) self.assertEqual(http.FORBIDDEN, response.status_code) # Image data should still be present after the failed delete path = self._url('/v2/images/%s/file' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) self.assertEqual(response.text, 'ZZZZZ') self.stop_servers() def test_property_protections_with_roles(self): # Enable property protection self.api_server.property_protection_file = self.property_file_roles self.start_servers(**self.__dict__.copy()) # Image list should be empty path = self._url('/v2/images') response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(0, len(images)) # Create an image for role member with extra props # Raises 403 since user is not allowed to set 'foo' path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'member'}) data = jsonutils.dumps({'name': 'image-1', 'foo': 'bar', 'disk_format': 'aki', 'container_format': 'aki', 'x_owner_foo': 'o_s_bar'}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code) # Create an image for role member without 'foo' path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'member'}) data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_owner_foo': 'o_s_bar'}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) # Returned image 
entity should have 'x_owner_foo' image = jsonutils.loads(response.text) image_id = image['id'] expected_image = { 'status': 'queued', 'name': 'image-1', 'tags': [], 'visibility': 'shared', 'self': '/v2/images/%s' % image_id, 'protected': False, 'file': '/v2/images/%s/file' % image_id, 'min_disk': 0, 'x_owner_foo': 'o_s_bar', 'min_ram': 0, 'schema': '/v2/schemas/image', } for key, value in expected_image.items(): self.assertEqual(value, image[key], key) # Create an image for role spl_role with extra props path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'spl_role'}) data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'spl_create_prop': 'create_bar', 'spl_create_prop_policy': 'create_policy_bar', 'spl_read_prop': 'read_bar', 'spl_update_prop': 'update_bar', 'spl_delete_prop': 'delete_bar', 'spl_delete_empty_prop': ''}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] # Attempt to replace, add and remove properties which are forbidden path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'spl_role'}) data = jsonutils.dumps([ {'op': 'replace', 'path': '/spl_read_prop', 'value': 'r'}, {'op': 'replace', 'path': '/spl_update_prop', 'value': 'u'}, ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code, response.text) # Attempt to replace, add and remove properties which are forbidden path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'spl_role'}) data = jsonutils.dumps([ {'op': 'add', 'path': '/spl_new_prop', 'value': 'new'}, {'op': 'remove', 'path': 
'/spl_create_prop'}, {'op': 'remove', 'path': '/spl_delete_prop'}, ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code, response.text) # Attempt to replace properties path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'spl_role'}) data = jsonutils.dumps([ # Updating an empty property to verify bug #1332103. {'op': 'replace', 'path': '/spl_update_prop', 'value': ''}, {'op': 'replace', 'path': '/spl_update_prop', 'value': 'u'}, ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) # Returned image entity should reflect the changes image = jsonutils.loads(response.text) # 'spl_update_prop' has update permission for spl_role # hence the value has changed self.assertEqual('u', image['spl_update_prop']) # Attempt to remove properties path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'spl_role'}) data = jsonutils.dumps([ {'op': 'remove', 'path': '/spl_delete_prop'}, # Deleting an empty property to verify bug #1332103. 
{'op': 'remove', 'path': '/spl_delete_empty_prop'}, ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) # Returned image entity should reflect the changes image = jsonutils.loads(response.text) # 'spl_delete_prop' and 'spl_delete_empty_prop' have delete # permission for spl_role hence the property has been deleted self.assertNotIn('spl_delete_prop', image.keys()) self.assertNotIn('spl_delete_empty_prop', image.keys()) # Image Deletion should work path = self._url('/v2/images/%s' % image_id) response = requests.delete(path, headers=self._headers()) self.assertEqual(http.NO_CONTENT, response.status_code) # This image should be no longer be directly accessible path = self._url('/v2/images/%s' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.NOT_FOUND, response.status_code) self.stop_servers() def test_property_protections_with_policies(self): # Enable property protection self.api_server.property_protection_file = self.property_file_policies self.api_server.property_protection_rule_format = 'policies' self.start_servers(**self.__dict__.copy()) # Image list should be empty path = self._url('/v2/images') response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(0, len(images)) # Create an image for role member with extra props # Raises 403 since user is not allowed to set 'foo' path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'member'}) data = jsonutils.dumps({'name': 'image-1', 'foo': 'bar', 'disk_format': 'aki', 'container_format': 'aki', 'x_owner_foo': 'o_s_bar'}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code) # Create an image for role member without 'foo' path = self._url('/v2/images') headers = self._headers({'content-type': 
'application/json', 'X-Roles': 'member'}) data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki'}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) # Returned image entity image = jsonutils.loads(response.text) image_id = image['id'] expected_image = { 'status': 'queued', 'name': 'image-1', 'tags': [], 'visibility': 'shared', 'self': '/v2/images/%s' % image_id, 'protected': False, 'file': '/v2/images/%s/file' % image_id, 'min_disk': 0, 'min_ram': 0, 'schema': '/v2/schemas/image', } for key, value in expected_image.items(): self.assertEqual(value, image[key], key) # Create an image for role spl_role with extra props path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'spl_role, admin'}) data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'spl_creator_policy': 'creator_bar', 'spl_default_policy': 'default_bar'}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] self.assertEqual('creator_bar', image['spl_creator_policy']) self.assertEqual('default_bar', image['spl_default_policy']) # Attempt to replace a property which is permitted path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'admin'}) data = jsonutils.dumps([ # Updating an empty property to verify bug #1332103. 
{'op': 'replace', 'path': '/spl_creator_policy', 'value': ''}, {'op': 'replace', 'path': '/spl_creator_policy', 'value': 'r'}, ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) # Returned image entity should reflect the changes image = jsonutils.loads(response.text) # 'spl_creator_policy' has update permission for admin # hence the value has changed self.assertEqual('r', image['spl_creator_policy']) # Attempt to replace a property which is forbidden path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'spl_role'}) data = jsonutils.dumps([ {'op': 'replace', 'path': '/spl_creator_policy', 'value': 'z'}, ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code, response.text) # Attempt to read properties path = self._url('/v2/images/%s' % image_id) headers = self._headers({'content-type': media_type, 'X-Roles': 'random_role'}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) # 'random_role' is allowed read 'spl_default_policy'. self.assertEqual(image['spl_default_policy'], 'default_bar') # 'random_role' is forbidden to read 'spl_creator_policy'. self.assertNotIn('spl_creator_policy', image) # Attempt to replace and remove properties which are permitted path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'admin'}) data = jsonutils.dumps([ # Deleting an empty property to verify bug #1332103. 
{'op': 'replace', 'path': '/spl_creator_policy', 'value': ''}, {'op': 'remove', 'path': '/spl_creator_policy'}, ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) # Returned image entity should reflect the changes image = jsonutils.loads(response.text) # 'spl_creator_policy' has delete permission for admin # hence the value has been deleted self.assertNotIn('spl_creator_policy', image) # Attempt to read a property that is permitted path = self._url('/v2/images/%s' % image_id) headers = self._headers({'content-type': media_type, 'X-Roles': 'random_role'}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) # Returned image entity should reflect the changes image = jsonutils.loads(response.text) self.assertEqual(image['spl_default_policy'], 'default_bar') # Image Deletion should work path = self._url('/v2/images/%s' % image_id) response = requests.delete(path, headers=self._headers()) self.assertEqual(http.NO_CONTENT, response.status_code) # This image should be no longer be directly accessible path = self._url('/v2/images/%s' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.NOT_FOUND, response.status_code) self.stop_servers() def test_property_protections_special_chars_roles(self): # Enable property protection self.api_server.property_protection_file = self.property_file_roles self.start_servers(**self.__dict__.copy()) # Verify both admin and unknown role can create properties marked with # '@' path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_all_permitted_admin': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] 
expected_image = { 'status': 'queued', 'name': 'image-1', 'tags': [], 'visibility': 'shared', 'self': '/v2/images/%s' % image_id, 'protected': False, 'file': '/v2/images/%s/file' % image_id, 'min_disk': 0, 'x_all_permitted_admin': '1', 'min_ram': 0, 'schema': '/v2/schemas/image', } for key, value in expected_image.items(): self.assertEqual(value, image[key], key) path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'joe_soap'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_all_permitted_joe_soap': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] expected_image = { 'status': 'queued', 'name': 'image-1', 'tags': [], 'visibility': 'shared', 'self': '/v2/images/%s' % image_id, 'protected': False, 'file': '/v2/images/%s/file' % image_id, 'min_disk': 0, 'x_all_permitted_joe_soap': '1', 'min_ram': 0, 'schema': '/v2/schemas/image', } for key, value in expected_image.items(): self.assertEqual(value, image[key], key) # Verify both admin and unknown role can read properties marked with # '@' headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) path = self._url('/v2/images/%s' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertEqual('1', image['x_all_permitted_joe_soap']) headers = self._headers({'content-type': 'application/json', 'X-Roles': 'joe_soap'}) path = self._url('/v2/images/%s' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertEqual('1', image['x_all_permitted_joe_soap']) # Verify both admin and unknown role can update properties marked with # '@' path = self._url('/v2/images/%s' % 
image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'admin'}) data = jsonutils.dumps([ {'op': 'replace', 'path': '/x_all_permitted_joe_soap', 'value': '2'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) image = jsonutils.loads(response.text) self.assertEqual('2', image['x_all_permitted_joe_soap']) path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'joe_soap'}) data = jsonutils.dumps([ {'op': 'replace', 'path': '/x_all_permitted_joe_soap', 'value': '3'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) image = jsonutils.loads(response.text) self.assertEqual('3', image['x_all_permitted_joe_soap']) # Verify both admin and unknown role can delete properties marked with # '@' path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_all_permitted_a': '1', 'x_all_permitted_b': '2' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'admin'}) data = jsonutils.dumps([ {'op': 'remove', 'path': '/x_all_permitted_a'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) image = jsonutils.loads(response.text) self.assertNotIn('x_all_permitted_a', image.keys()) path = self._url('/v2/images/%s' % image_id) media_type = 
'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'joe_soap'}) data = jsonutils.dumps([ {'op': 'remove', 'path': '/x_all_permitted_b'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) image = jsonutils.loads(response.text) self.assertNotIn('x_all_permitted_b', image.keys()) # Verify neither admin nor unknown role can create a property protected # with '!' path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_none_permitted_admin': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code) path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'joe_soap'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_none_permitted_joe_soap': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code) # Verify neither admin nor unknown role can read properties marked with # '!' 
path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_none_read': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] self.assertNotIn('x_none_read', image.keys()) headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) path = self._url('/v2/images/%s' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertNotIn('x_none_read', image.keys()) headers = self._headers({'content-type': 'application/json', 'X-Roles': 'joe_soap'}) path = self._url('/v2/images/%s' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertNotIn('x_none_read', image.keys()) # Verify neither admin nor unknown role can update properties marked # with '!' 
path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_none_update': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] self.assertEqual('1', image['x_none_update']) path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'admin'}) data = jsonutils.dumps([ {'op': 'replace', 'path': '/x_none_update', 'value': '2'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code, response.text) path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'joe_soap'}) data = jsonutils.dumps([ {'op': 'replace', 'path': '/x_none_update', 'value': '3'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.CONFLICT, response.status_code, response.text) # Verify neither admin nor unknown role can delete properties marked # with '!' 
path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_none_delete': '1', }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'admin'}) data = jsonutils.dumps([ {'op': 'remove', 'path': '/x_none_delete'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code, response.text) path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'joe_soap'}) data = jsonutils.dumps([ {'op': 'remove', 'path': '/x_none_delete'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.CONFLICT, response.status_code, response.text) self.stop_servers() def test_property_protections_special_chars_policies(self): # Enable property protection self.api_server.property_protection_file = self.property_file_policies self.api_server.property_protection_rule_format = 'policies' self.start_servers(**self.__dict__.copy()) # Verify both admin and unknown role can create properties marked with # '@' path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_all_permitted_admin': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] expected_image = { 'status': 'queued', 'name': 'image-1', 
'tags': [], 'visibility': 'shared', 'self': '/v2/images/%s' % image_id, 'protected': False, 'file': '/v2/images/%s/file' % image_id, 'min_disk': 0, 'x_all_permitted_admin': '1', 'min_ram': 0, 'schema': '/v2/schemas/image', } for key, value in expected_image.items(): self.assertEqual(value, image[key], key) path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'joe_soap'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_all_permitted_joe_soap': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] expected_image = { 'status': 'queued', 'name': 'image-1', 'tags': [], 'visibility': 'shared', 'self': '/v2/images/%s' % image_id, 'protected': False, 'file': '/v2/images/%s/file' % image_id, 'min_disk': 0, 'x_all_permitted_joe_soap': '1', 'min_ram': 0, 'schema': '/v2/schemas/image', } for key, value in expected_image.items(): self.assertEqual(value, image[key], key) # Verify both admin and unknown role can read properties marked with # '@' headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) path = self._url('/v2/images/%s' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertEqual('1', image['x_all_permitted_joe_soap']) headers = self._headers({'content-type': 'application/json', 'X-Roles': 'joe_soap'}) path = self._url('/v2/images/%s' % image_id) response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertEqual('1', image['x_all_permitted_joe_soap']) # Verify both admin and unknown role can update properties marked with # '@' path = self._url('/v2/images/%s' % image_id) media_type = 
'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'admin'}) data = jsonutils.dumps([ {'op': 'replace', 'path': '/x_all_permitted_joe_soap', 'value': '2'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) image = jsonutils.loads(response.text) self.assertEqual('2', image['x_all_permitted_joe_soap']) path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'joe_soap'}) data = jsonutils.dumps([ {'op': 'replace', 'path': '/x_all_permitted_joe_soap', 'value': '3'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) image = jsonutils.loads(response.text) self.assertEqual('3', image['x_all_permitted_joe_soap']) # Verify both admin and unknown role can delete properties marked with # '@' path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_all_permitted_a': '1', 'x_all_permitted_b': '2' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'admin'}) data = jsonutils.dumps([ {'op': 'remove', 'path': '/x_all_permitted_a'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) image = jsonutils.loads(response.text) self.assertNotIn('x_all_permitted_a', image.keys()) path = self._url('/v2/images/%s' % image_id) media_type = 
'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type, 'X-Roles': 'joe_soap'}) data = jsonutils.dumps([ {'op': 'remove', 'path': '/x_all_permitted_b'} ]) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) image = jsonutils.loads(response.text) self.assertNotIn('x_all_permitted_b', image.keys()) # Verify neither admin nor unknown role can create a property protected # with '!' path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'admin'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_none_permitted_admin': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code) path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json', 'X-Roles': 'joe_soap'}) data = jsonutils.dumps({ 'name': 'image-1', 'disk_format': 'aki', 'container_format': 'aki', 'x_none_permitted_joe_soap': '1' }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.FORBIDDEN, response.status_code) # Verify neither admin nor unknown role can read properties marked with # '!' 
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin'})
        data = jsonutils.dumps({
            'name': 'image-1',
            'disk_format': 'aki',
            'container_format': 'aki',
            'x_none_read': '1'
        })
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        image = jsonutils.loads(response.text)
        image_id = image['id']
        self.assertNotIn('x_none_read', image.keys())
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin'})
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)
        self.assertNotIn('x_none_read', image.keys())
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'joe_soap'})
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)
        self.assertNotIn('x_none_read', image.keys())

        # Verify neither admin nor unknown role can update properties marked
        # with '!'
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin'})
        data = jsonutils.dumps({
            'name': 'image-1',
            'disk_format': 'aki',
            'container_format': 'aki',
            'x_none_update': '1'
        })
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        image = jsonutils.loads(response.text)
        image_id = image['id']
        self.assertEqual('1', image['x_none_update'])
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type,
                                 'X-Roles': 'admin'})
        data = jsonutils.dumps([
            {'op': 'replace', 'path': '/x_none_update', 'value': '2'}
        ])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.FORBIDDEN, response.status_code, response.text)
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type,
                                 'X-Roles': 'joe_soap'})
        data = jsonutils.dumps([
            {'op': 'replace', 'path': '/x_none_update', 'value': '3'}
        ])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.CONFLICT, response.status_code, response.text)

        # Verify neither admin nor unknown role can delete properties marked
        # with '!'
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json',
                                 'X-Roles': 'admin'})
        data = jsonutils.dumps({
            'name': 'image-1',
            'disk_format': 'aki',
            'container_format': 'aki',
            'x_none_delete': '1',
        })
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        image = jsonutils.loads(response.text)
        image_id = image['id']
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type,
                                 'X-Roles': 'admin'})
        data = jsonutils.dumps([
            {'op': 'remove', 'path': '/x_none_delete'}
        ])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.FORBIDDEN, response.status_code, response.text)
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type,
                                 'X-Roles': 'joe_soap'})
        data = jsonutils.dumps([
            {'op': 'remove', 'path': '/x_none_delete'}
        ])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.CONFLICT, response.status_code, response.text)

        self.stop_servers()

    def test_tag_lifecycle(self):
        self.start_servers(**self.__dict__.copy())

        # Create an image with a tag - duplicate should be ignored
        path = self._url('/v2/images')
        headers = self._headers({'Content-Type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-1',
                                'tags': ['sniff', 'sniff']})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)
        image_id = jsonutils.loads(response.text)['id']

        # Image should show a list with a single tag
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(['sniff'], tags)

        # Delete all tags
        for tag in tags:
            path = self._url('/v2/images/%s/tags/%s' % (image_id, tag))
            response = requests.delete(path, headers=self._headers())
            self.assertEqual(http.NO_CONTENT, response.status_code)

        # Update image with too many tags via PUT
        # Configured limit is 10 tags
        for i in range(10):
            path = self._url('/v2/images/%s/tags/foo%i' % (image_id, i))
            response = requests.put(path, headers=self._headers())
            self.assertEqual(http.NO_CONTENT, response.status_code)

        # 11th tag should fail
        path = self._url('/v2/images/%s/tags/fail_me' % image_id)
        response = requests.put(path, headers=self._headers())
        self.assertEqual(http.REQUEST_ENTITY_TOO_LARGE, response.status_code)

        # Make sure the 11th tag was not added
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(10, len(tags))

        # Update image tags via PATCH
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        doc = [
            {
                'op': 'replace',
                'path': '/tags',
                'value': ['foo'],
            },
        ]
        data = jsonutils.dumps(doc)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code)

        # Update image with too many tags via PATCH
        # Configured limit is 10 tags
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        tags = ['foo%d' % i for i in range(11)]
        doc = [
            {
                'op': 'replace',
                'path': '/tags',
                'value': tags,
            },
        ]
        data = jsonutils.dumps(doc)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.REQUEST_ENTITY_TOO_LARGE, response.status_code)

        # Tags should not have changed since request was over limit
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(['foo'], tags)

        # Update image with duplicate tag - it should be ignored
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        doc = [
            {
                'op': 'replace',
                'path': '/tags',
                'value': ['sniff', 'snozz', 'snozz'],
            },
        ]
        data = jsonutils.dumps(doc)
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(['sniff', 'snozz'], sorted(tags))

        # Image should show the appropriate tags
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(['sniff', 'snozz'], sorted(tags))

        # Attempt to tag the image with a duplicate should be ignored
        path = self._url('/v2/images/%s/tags/snozz' % image_id)
        response = requests.put(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Create another more complex tag
        path = self._url('/v2/images/%s/tags/gabe%%40example.com' % image_id)
        response = requests.put(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Double-check that the tags container on the image is populated
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(['gabe@example.com', 'sniff', 'snozz'],
                         sorted(tags))

        # Query images by single tag
        path = self._url('/v2/images?tag=sniff')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual('image-1', images[0]['name'])

        # Query images by multiple tags
        path = self._url('/v2/images?tag=sniff&tag=snozz')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual('image-1', images[0]['name'])

        # Query images by tag and other attributes
        path = self._url('/v2/images?tag=sniff&status=queued')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual('image-1', images[0]['name'])

        # Query images by tag and a nonexistent tag
        path = self._url('/v2/images?tag=sniff&tag=fake')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(0, len(images))

        # The tag should be deletable
        path = self._url('/v2/images/%s/tags/gabe%%40example.com' % image_id)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # List of tags should reflect the deletion
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(['sniff', 'snozz'], sorted(tags))

        # Deleting the same tag should return a 404
        path = self._url('/v2/images/%s/tags/gabe%%40example.com' % image_id)
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # The tags won't be able to query the images after deleting
        path = self._url('/v2/images?tag=gabe%%40example.com')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(0, len(images))

        # Try to add a tag that is too long
        big_tag = 'a' * 300
        path = self._url('/v2/images/%s/tags/%s' % (image_id, big_tag))
        response = requests.put(path, headers=self._headers())
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Tags should not have changed since request was over limit
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(['sniff', 'snozz'], sorted(tags))

        self.stop_servers()

    def test_images_container(self):
        # Image list should be empty and no next link should be present
        self.start_servers(**self.__dict__.copy())
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        first = jsonutils.loads(response.text)['first']
        self.assertEqual(0, len(images))
        self.assertNotIn('next', jsonutils.loads(response.text))
        self.assertEqual('/v2/images', first)

        # Create 7 images
        images = []
        fixtures = [
            {'name': 'image-3', 'type': 'kernel', 'ping': 'pong',
             'container_format': 'ami', 'disk_format': 'ami'},
            {'name': 'image-4', 'type': 'kernel', 'ping': 'pong',
             'container_format': 'bare', 'disk_format': 'ami'},
            {'name': 'image-1', 'type': 'kernel', 'ping': 'pong'},
            {'name': 'image-3', 'type': 'ramdisk', 'ping': 'pong'},
            {'name': 'image-2', 'type': 'kernel', 'ping': 'ding'},
            {'name': 'image-3', 'type': 'kernel', 'ping': 'pong'},
            {'name': 'image-2,image-5', 'type': 'kernel', 'ping': 'pong'},
        ]
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        for fixture in fixtures:
            data = jsonutils.dumps(fixture)
            response = requests.post(path, headers=headers, data=data)
            self.assertEqual(http.CREATED, response.status_code)
            images.append(jsonutils.loads(response.text))

        # Image list should contain 7 images
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        body = jsonutils.loads(response.text)
        self.assertEqual(7, len(body['images']))
        self.assertEqual('/v2/images', body['first'])
        self.assertNotIn('next', jsonutils.loads(response.text))

        # Image list filters by created_at time
        url_template = '/v2/images?created_at=lt:%s'
        path = self._url(url_template % images[0]['created_at'])
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        body = jsonutils.loads(response.text)
        self.assertEqual(0, len(body['images']))
        self.assertEqual(url_template % images[0]['created_at'],
                         urllib.parse.unquote(body['first']))

        # Image list filters by updated_at time
        url_template = '/v2/images?updated_at=lt:%s'
        path = self._url(url_template % images[2]['updated_at'])
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        body = jsonutils.loads(response.text)
        self.assertGreaterEqual(3, len(body['images']))
        self.assertEqual(url_template % images[2]['updated_at'],
                         urllib.parse.unquote(body['first']))

        # Image list filters by updated_at and created_at with invalid value
        url_template = '/v2/images?%s=lt:invalid_value'
        for filter in ['updated_at', 'created_at']:
            path = self._url(url_template % filter)
            response = requests.get(path, headers=self._headers())
            self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Image list filters by updated_at and created_at with invalid operator
        url_template = '/v2/images?%s=invalid_operator:2015-11-19T12:24:02Z'
        for filter in ['updated_at', 'created_at']:
            path = self._url(url_template % filter)
            response = requests.get(path, headers=self._headers())
            self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Image list filters by non-'URL encoding' value
        path = self._url('/v2/images?name=%FF')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        # Image list filters by name with in operator
        url_template = '/v2/images?name=in:%s'
        filter_value = 'image-1,image-2'
        path = self._url(url_template % filter_value)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        body = jsonutils.loads(response.text)
        self.assertGreaterEqual(3, len(body['images']))

        # Image list filters by container_format with in operator
        url_template = '/v2/images?container_format=in:%s'
        filter_value = 'bare,ami'
        path = self._url(url_template % filter_value)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        body = jsonutils.loads(response.text)
        self.assertGreaterEqual(2, len(body['images']))

        # Image list filters by disk_format with in operator
        url_template = '/v2/images?disk_format=in:%s'
        filter_value = 'bare,ami,iso'
        path = self._url(url_template % filter_value)
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        body = jsonutils.loads(response.text)
        self.assertGreaterEqual(2, len(body['images']))

        # Begin pagination after the first image
        template_url = ('/v2/images?limit=2&sort_dir=asc&sort_key=name'
                        '&marker=%s&type=kernel&ping=pong')
        path = self._url(template_url % images[2]['id'])
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        body = jsonutils.loads(response.text)
        self.assertEqual(2, len(body['images']))
        response_ids = [image['id'] for image in body['images']]
        self.assertEqual([images[6]['id'], images[0]['id']], response_ids)

        # Continue pagination using next link from previous request
        path = self._url(body['next'])
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        body = jsonutils.loads(response.text)
        self.assertEqual(2, len(body['images']))
        response_ids = [image['id'] for image in body['images']]
        self.assertEqual([images[5]['id'], images[1]['id']], response_ids)

        # Continue pagination - expect no results
        path = self._url(body['next'])
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        body = jsonutils.loads(response.text)
        self.assertEqual(0, len(body['images']))

        # Delete first image
        path = self._url('/v2/images/%s' % images[0]['id'])
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Ensure bad request for using a deleted image as marker
        path = self._url('/v2/images?marker=%s' % images[0]['id'])
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.BAD_REQUEST, response.status_code)

        self.stop_servers()

    def test_image_visibility_to_different_users(self):
        self.cleanup()
        self.api_server.deployment_flavor = 'fakeauth'
        self.registry_server.deployment_flavor = 'fakeauth'
        kwargs = self.__dict__.copy()
        kwargs['use_user_token'] = True
        self.start_servers(**kwargs)

        owners = ['admin', 'tenant1', 'tenant2', 'none']
        visibilities = ['public', 'private', 'shared', 'community']

        for owner in owners:
            for visibility in visibilities:
                path = self._url('/v2/images')
                headers = self._headers({
                    'content-type': 'application/json',
                    'X-Auth-Token': 'createuser:%s:admin' % owner,
                })
                data = jsonutils.dumps({
                    'name': '%s-%s' % (owner, visibility),
                    'visibility': visibility,
                })
                response = requests.post(path, headers=headers, data=data)
                self.assertEqual(http.CREATED, response.status_code)

        def list_images(tenant, role='', visibility=None):
            auth_token = 'user:%s:%s' % (tenant, role)
            headers = {'X-Auth-Token': auth_token}
            path = self._url('/v2/images')
            if visibility is not None:
                path += '?visibility=%s' % visibility
            response = requests.get(path, headers=headers)
            self.assertEqual(http.OK, response.status_code)
            return jsonutils.loads(response.text)['images']

        # 1. Known user sees public and their own images
        images = list_images('tenant1')
        self.assertEqual(7, len(images))
        for image in images:
            self.assertTrue(image['visibility'] == 'public'
                            or 'tenant1' in image['name'])

        # 2. Known user, visibility=public, sees all public images
        images = list_images('tenant1', visibility='public')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('public', image['visibility'])

        # 3. Known user, visibility=private, sees only their private image
        images = list_images('tenant1', visibility='private')
        self.assertEqual(1, len(images))
        image = images[0]
        self.assertEqual('private', image['visibility'])
        self.assertIn('tenant1', image['name'])

        # 4. Known user, visibility=shared, sees only their shared image
        images = list_images('tenant1', visibility='shared')
        self.assertEqual(1, len(images))
        image = images[0]
        self.assertEqual('shared', image['visibility'])
        self.assertIn('tenant1', image['name'])

        # 5. Known user, visibility=community, sees all community images
        images = list_images('tenant1', visibility='community')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('community', image['visibility'])

        # 6. Unknown user sees only public images
        images = list_images('none')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('public', image['visibility'])

        # 7. Unknown user, visibility=public, sees only public images
        images = list_images('none', visibility='public')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('public', image['visibility'])

        # 8. Unknown user, visibility=private, sees no images
        images = list_images('none', visibility='private')
        self.assertEqual(0, len(images))

        # 9. Unknown user, visibility=shared, sees no images
        images = list_images('none', visibility='shared')
        self.assertEqual(0, len(images))

        # 10. Unknown user, visibility=community, sees only community images
        images = list_images('none', visibility='community')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('community', image['visibility'])

        # 11. Unknown admin sees all images except for community images
        images = list_images('none', role='admin')
        self.assertEqual(12, len(images))

        # 12. Unknown admin, visibility=public, sees only public images
        images = list_images('none', role='admin', visibility='public')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('public', image['visibility'])

        # 13. Unknown admin, visibility=private, sees only private images
        images = list_images('none', role='admin', visibility='private')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('private', image['visibility'])

        # 14. Unknown admin, visibility=shared, sees only shared images
        images = list_images('none', role='admin', visibility='shared')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('shared', image['visibility'])

        # 15. Unknown admin, visibility=community, sees only community images
        images = list_images('none', role='admin', visibility='community')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('community', image['visibility'])

        # 16. Known admin sees all images, except community images owned by
        # others
        images = list_images('admin', role='admin')
        self.assertEqual(13, len(images))

        # 17. Known admin, visibility=public, sees all public images
        images = list_images('admin', role='admin', visibility='public')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('public', image['visibility'])

        # 18. Known admin, visibility=private, sees all private images
        images = list_images('admin', role='admin', visibility='private')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('private', image['visibility'])

        # 19. Known admin, visibility=shared, sees all shared images
        images = list_images('admin', role='admin', visibility='shared')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('shared', image['visibility'])

        # 20. Known admin, visibility=community, sees all community images
        images = list_images('admin', role='admin', visibility='community')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertEqual('community', image['visibility'])

        self.stop_servers()

    def test_update_locations(self):
        self.api_server.show_multiple_locations = True
        self.start_servers(**self.__dict__.copy())
        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Returned image entity should have a generated id and status
        image = jsonutils.loads(response.text)
        image_id = image['id']
        self.assertEqual('queued', image['status'])
        self.assertIsNone(image['size'])
        self.assertIsNone(image['virtual_size'])

        # Update locations for the queued image
        path = self._url('/v2/images/%s' % image_id)
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        url = 'http://127.0.0.1:%s/foo_image' % self.http_port0
        data = jsonutils.dumps([{'op': 'replace', 'path': '/locations',
                                 'value': [{'url': url, 'metadata': {}}]
                                 }])
        response = requests.patch(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code, response.text)

        # The image size should be updated
        path = self._url('/v2/images/%s' % image_id)
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)
        self.assertEqual(10, image['size'])

    def test_update_locations_with_restricted_sources(self):
        self.api_server.show_multiple_locations = True
        self.start_servers(**self.__dict__.copy())
        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-1', 'disk_format': 'aki',
                                'container_format': 'aki'})
        self.ping_server_ipv4 = self.ping_server
        self.ping_server = self.ping_server_ipv6
        self.include_scrubber = False

    def tearDown(self):
        # Cleaning up monkey patch (2).
        self.ping_server = self.ping_server_ipv4
        super(TestImagesIPv6, self).tearDown()
        # Cleaning up monkey patch (1).
        test_utils.get_unused_port = test_utils.get_unused_port_ipv4
        test_utils.get_unused_port_and_socket = (
            test_utils.get_unused_port_and_socket_ipv4)

    def _url(self, path):
        return "http://[::1]:%d%s" % (self.api_port, path)

    def _headers(self, custom_headers=None):
        base_headers = {
            'X-Identity-Status': 'Confirmed',
            'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
            'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
            'X-Tenant-Id': TENANT1,
            'X-Roles': 'member',
        }
        base_headers.update(custom_headers or {})
        return base_headers

    def test_image_list_ipv6(self):
        # Image list should be empty
        self.api_server.data_api = (
            'glance.tests.functional.v2.registry_data_api')
        self.registry_server.deployment_flavor = 'trusted-auth'

        # Setting up configuration parameters properly
        # (bind_host is not needed since it is replaced by monkey patches,
        # but it would be reflected in the configuration file, which is
        # at least improving consistency)
        self.registry_server.bind_host = "::1"
        self.api_server.bind_host = "::1"
        self.api_server.registry_host = "::1"
        self.scrubber_daemon.registry_host = "::1"

        self.start_servers(**self.__dict__.copy())

        requests.get(self._url('/'), headers=self._headers())

        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(200, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(0, len(images))


class TestImageDirectURLVisibility(functional.FunctionalTest):

    def setUp(self):
        super(TestImageDirectURLVisibility, self).setUp()
        self.cleanup()
        self.include_scrubber = False
        self.api_server.deployment_flavor = 'noauth'

    def _url(self, path):
        return 'http://127.0.0.1:%d%s' % (self.api_port, path)

    def _headers(self, custom_headers=None):
        base_headers = {
            'X-Identity-Status': 'Confirmed',
            'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
            'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
            'X-Tenant-Id': TENANT1,
            'X-Roles': 'member',
        }
        base_headers.update(custom_headers or {})
        return base_headers

    def test_v2_not_enabled(self):
        self.api_server.enable_v2_api = False
        self.start_servers(**self.__dict__.copy())
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.MULTIPLE_CHOICES, response.status_code)
        self.stop_servers()

    def test_v2_enabled(self):
        self.api_server.enable_v2_api = True
        self.start_servers(**self.__dict__.copy())
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        self.stop_servers()

    def test_image_direct_url_visible(self):
        self.api_server.show_image_direct_url = True
        self.start_servers(**self.__dict__.copy())

        # Image list should be empty
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(0, len(images))

        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel',
                                'foo': 'bar', 'disk_format': 'aki',
                                'container_format': 'aki',
                                'visibility': 'public'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the image id
        image = jsonutils.loads(response.text)
        image_id = image['id']

        # Image direct_url should not be visible before location is set
        path = self._url('/v2/images/%s' % image_id)
        headers = self._headers({'Content-Type': 'application/json'})
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)
        self.assertNotIn('direct_url', image)

        # Upload some image data, setting the image location
        path = self._url('/v2/images/%s/file' % image_id)
        headers = self._headers({'Content-Type': 'application/octet-stream'})
        response = requests.put(path, headers=headers, data='ZZZZZ')
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # Image direct_url should be visible
        path = self._url('/v2/images/%s' % image_id)
        headers = self._headers({'Content-Type': 'application/json'})
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)
        self.assertIn('direct_url', image)

        # Image direct_url should be visible to non-owner, non-admin user
        path = self._url('/v2/images/%s' % image_id)
        headers = self._headers({'Content-Type': 'application/json',
                                 'X-Tenant-Id': TENANT2})
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)
        self.assertIn('direct_url', image)

        # Image direct_url should be visible in a list
        path = self._url('/v2/images')
        headers = self._headers({'Content-Type': 'application/json'})
        response = requests.get(path, headers=headers)
        self.assertEqual(http.OK, response.status_code)
        image = jsonutils.loads(response.text)['images'][0]
        self.assertIn('direct_url', image)

        self.stop_servers()

    def test_image_multiple_location_url_visible(self):
        self.api_server.show_multiple_locations = True
        self.start_servers(**self.__dict__.copy())

        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel',
                                'foo': 'bar', 'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the image id
        image = jsonutils.loads(response.text)
        image_id = image['id']

        # Image locations should not be visible before location is set
        path = self._url('/v2/images/%s' % image_id)
headers = self._headers({'Content-Type': 'application/json'}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertIn('locations', image) self.assertEqual([], image["locations"]) # Upload some image data, setting the image location path = self._url('/v2/images/%s/file' % image_id) headers = self._headers({'Content-Type': 'application/octet-stream'}) response = requests.put(path, headers=headers, data='ZZZZZ') self.assertEqual(http.NO_CONTENT, response.status_code) # Image locations should be visible path = self._url('/v2/images/%s' % image_id) headers = self._headers({'Content-Type': 'application/json'}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertIn('locations', image) loc = image['locations'] self.assertGreater(len(loc), 0) loc = loc[0] self.assertIn('url', loc) self.assertIn('metadata', loc) self.stop_servers() def test_image_direct_url_not_visible(self): self.api_server.show_image_direct_url = False self.start_servers(**self.__dict__.copy()) # Image list should be empty path = self._url('/v2/images') response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(0, len(images)) # Create an image path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json'}) data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel', 'foo': 'bar', 'disk_format': 'aki', 'container_format': 'aki'}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) # Get the image id image = jsonutils.loads(response.text) image_id = image['id'] # Upload some image data, setting the image location path = self._url('/v2/images/%s/file' % image_id) headers = self._headers({'Content-Type': 
'application/octet-stream'}) response = requests.put(path, headers=headers, data='ZZZZZ') self.assertEqual(http.NO_CONTENT, response.status_code) # Image direct_url should not be visible path = self._url('/v2/images/%s' % image_id) headers = self._headers({'Content-Type': 'application/json'}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertNotIn('direct_url', image) # Image direct_url should not be visible in a list path = self._url('/v2/images') headers = self._headers({'Content-Type': 'application/json'}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text)['images'][0] self.assertNotIn('direct_url', image) self.stop_servers() class TestImageDirectURLVisibilityWithRegistry(TestImageDirectURLVisibility): def setUp(self): super(TestImageDirectURLVisibilityWithRegistry, self).setUp() self.api_server.data_api = ( 'glance.tests.functional.v2.registry_data_api') self.registry_server.deployment_flavor = 'trusted-auth' class TestImageLocationSelectionStrategy(functional.FunctionalTest): def setUp(self): super(TestImageLocationSelectionStrategy, self).setUp() self.cleanup() self.include_scrubber = False self.api_server.deployment_flavor = 'noauth' for i in range(3): ret = test_utils.start_http_server("foo_image_id%d" % i, "foo_image%d" % i) setattr(self, 'http_server%d_pid' % i, ret[0]) setattr(self, 'http_port%d' % i, ret[1]) def tearDown(self): for i in range(3): pid = getattr(self, 'http_server%d_pid' % i, None) if pid: os.kill(pid, signal.SIGKILL) super(TestImageLocationSelectionStrategy, self).tearDown() def _url(self, path): return 'http://127.0.0.1:%d%s' % (self.api_port, path) def _headers(self, custom_headers=None): base_headers = { 'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': 
TENANT1, 'X-Roles': 'member', } base_headers.update(custom_headers or {}) return base_headers def test_image_locations_with_order_strategy(self): self.api_server.show_image_direct_url = True self.api_server.show_multiple_locations = True self.image_location_quota = 10 self.api_server.location_strategy = 'location_order' preference = "http, swift, filesystem" self.api_server.store_type_location_strategy_preference = preference self.start_servers(**self.__dict__.copy()) # Create an image path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json'}) data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel', 'foo': 'bar', 'disk_format': 'aki', 'container_format': 'aki'}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) # Get the image id image = jsonutils.loads(response.text) image_id = image['id'] # Image locations should not be visible before location is set path = self._url('/v2/images/%s' % image_id) headers = self._headers({'Content-Type': 'application/json'}) response = requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertIn('locations', image) self.assertEqual([], image["locations"]) # Update image locations via PATCH path = self._url('/v2/images/%s' % image_id) media_type = 'application/openstack-images-v2.1-json-patch' headers = self._headers({'content-type': media_type}) values = [{'url': 'http://127.0.0.1:%s/foo_image' % self.http_port0, 'metadata': {}}, {'url': 'http://127.0.0.1:%s/foo_image' % self.http_port1, 'metadata': {}}] doc = [{'op': 'replace', 'path': '/locations', 'value': values}] data = jsonutils.dumps(doc) response = requests.patch(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code) # Image locations should be visible path = self._url('/v2/images/%s' % image_id) headers = self._headers({'Content-Type': 'application/json'}) response = 
requests.get(path, headers=headers) self.assertEqual(http.OK, response.status_code) image = jsonutils.loads(response.text) self.assertIn('locations', image) self.assertEqual(values, image['locations']) self.assertIn('direct_url', image) self.assertEqual(values[0]['url'], image['direct_url']) self.stop_servers() class TestImageLocationSelectionStrategyWithRegistry( TestImageLocationSelectionStrategy): def setUp(self): super(TestImageLocationSelectionStrategyWithRegistry, self).setUp() self.api_server.data_api = ( 'glance.tests.functional.v2.registry_data_api') self.registry_server.deployment_flavor = 'trusted-auth' class TestImageMembers(functional.FunctionalTest): def setUp(self): super(TestImageMembers, self).setUp() self.cleanup() self.include_scrubber = False self.api_server.deployment_flavor = 'fakeauth' self.registry_server.deployment_flavor = 'fakeauth' self.start_servers(**self.__dict__.copy()) def _url(self, path): return 'http://127.0.0.1:%d%s' % (self.api_port, path) def _headers(self, custom_headers=None): base_headers = { 'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': TENANT1, 'X-Roles': 'member', } base_headers.update(custom_headers or {}) return base_headers def test_image_member_lifecycle(self): def get_header(tenant, role=''): auth_token = 'user:%s:%s' % (tenant, role) headers = {'X-Auth-Token': auth_token} return headers # Image list should be empty path = self._url('/v2/images') response = requests.get(path, headers=get_header('tenant1')) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(0, len(images)) owners = ['tenant1', 'tenant2', 'admin'] visibilities = ['community', 'private', 'public', 'shared'] image_fixture = [] for owner in owners: for visibility in visibilities: path = self._url('/v2/images') headers = self._headers({ 'content-type': 'application/json', 
'X-Auth-Token': 'createuser:%s:admin' % owner, }) data = jsonutils.dumps({ 'name': '%s-%s' % (owner, visibility), 'visibility': visibility, }) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image_fixture.append(jsonutils.loads(response.text)) # Image list should contain 6 images for tenant1 path = self._url('/v2/images') response = requests.get(path, headers=get_header('tenant1')) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(6, len(images)) # Image list should contain 3 images for TENANT3 path = self._url('/v2/images') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(3, len(images)) # Add Image member for tenant1-shared image path = self._url('/v2/images/%s/members' % image_fixture[3]['id']) body = jsonutils.dumps({'member': TENANT3}) response = requests.post(path, headers=get_header('tenant1'), data=body) self.assertEqual(http.OK, response.status_code) image_member = jsonutils.loads(response.text) self.assertEqual(image_fixture[3]['id'], image_member['image_id']) self.assertEqual(TENANT3, image_member['member_id']) self.assertIn('created_at', image_member) self.assertIn('updated_at', image_member) self.assertEqual('pending', image_member['status']) # Image list should contain 3 images for TENANT3 path = self._url('/v2/images') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(3, len(images)) # Image list should contain 0 shared images for TENANT3 # because default is accepted path = self._url('/v2/images?visibility=shared') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] 
self.assertEqual(0, len(images)) # Image list should contain 4 images for TENANT3 with status pending path = self._url('/v2/images?member_status=pending') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(4, len(images)) # Image list should contain 4 images for TENANT3 with status all path = self._url('/v2/images?member_status=all') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(4, len(images)) # Image list should contain 1 image for TENANT3 with status pending # and visibility shared path = self._url('/v2/images?member_status=pending&visibility=shared') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(1, len(images)) self.assertEqual(images[0]['name'], 'tenant1-shared') # Image list should contain 0 image for TENANT3 with status rejected # and visibility shared path = self._url('/v2/images?member_status=rejected&visibility=shared') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(0, len(images)) # Image list should contain 0 image for TENANT3 with status accepted # and visibility shared path = self._url('/v2/images?member_status=accepted&visibility=shared') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(0, len(images)) # Image list should contain 0 image for TENANT3 with status accepted # and visibility private path = self._url('/v2/images?visibility=private') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, 
response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(0, len(images)) # Image tenant2-shared's image members list should contain no members path = self._url('/v2/images/%s/members' % image_fixture[7]['id']) response = requests.get(path, headers=get_header('tenant2')) self.assertEqual(http.OK, response.status_code) body = jsonutils.loads(response.text) self.assertEqual(0, len(body['members'])) # Tenant 1, who is the owner cannot change status of image member path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'], TENANT3)) body = jsonutils.dumps({'status': 'accepted'}) response = requests.put(path, headers=get_header('tenant1'), data=body) self.assertEqual(http.FORBIDDEN, response.status_code) # Tenant 1, who is the owner can get status of its own image member path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'], TENANT3)) response = requests.get(path, headers=get_header('tenant1')) self.assertEqual(http.OK, response.status_code) body = jsonutils.loads(response.text) self.assertEqual('pending', body['status']) self.assertEqual(image_fixture[3]['id'], body['image_id']) self.assertEqual(TENANT3, body['member_id']) # Tenant 3, who is the member can get status of its own status path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'], TENANT3)) response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) body = jsonutils.loads(response.text) self.assertEqual('pending', body['status']) self.assertEqual(image_fixture[3]['id'], body['image_id']) self.assertEqual(TENANT3, body['member_id']) # Tenant 2, who not the owner cannot get status of image member path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'], TENANT3)) response = requests.get(path, headers=get_header('tenant2')) self.assertEqual(http.NOT_FOUND, response.status_code) # Tenant 3 can change status of image member path = self._url('/v2/images/%s/members/%s' % 
(image_fixture[3]['id'], TENANT3)) body = jsonutils.dumps({'status': 'accepted'}) response = requests.put(path, headers=get_header(TENANT3), data=body) self.assertEqual(http.OK, response.status_code) image_member = jsonutils.loads(response.text) self.assertEqual(image_fixture[3]['id'], image_member['image_id']) self.assertEqual(TENANT3, image_member['member_id']) self.assertEqual('accepted', image_member['status']) # Image list should contain 4 images for TENANT3 because status is # accepted path = self._url('/v2/images') response = requests.get(path, headers=get_header(TENANT3)) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(4, len(images)) # Tenant 3 invalid status change path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'], TENANT3)) body = jsonutils.dumps({'status': 'invalid-status'}) response = requests.put(path, headers=get_header(TENANT3), data=body) self.assertEqual(http.BAD_REQUEST, response.status_code) # Owner cannot change status of image path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'], TENANT3)) body = jsonutils.dumps({'status': 'accepted'}) response = requests.put(path, headers=get_header('tenant1'), data=body) self.assertEqual(http.FORBIDDEN, response.status_code) # Add Image member for tenant2-shared image path = self._url('/v2/images/%s/members' % image_fixture[7]['id']) body = jsonutils.dumps({'member': TENANT4}) response = requests.post(path, headers=get_header('tenant2'), data=body) self.assertEqual(http.OK, response.status_code) image_member = jsonutils.loads(response.text) self.assertEqual(image_fixture[7]['id'], image_member['image_id']) self.assertEqual(TENANT4, image_member['member_id']) self.assertIn('created_at', image_member) self.assertIn('updated_at', image_member) # Add Image member to public image path = self._url('/v2/images/%s/members' % image_fixture[2]['id']) body = jsonutils.dumps({'member': TENANT2}) response = 
requests.post(path, headers=get_header('tenant1'), data=body) self.assertEqual(http.FORBIDDEN, response.status_code) # Add Image member to private image path = self._url('/v2/images/%s/members' % image_fixture[1]['id']) body = jsonutils.dumps({'member': TENANT2}) response = requests.post(path, headers=get_header('tenant1'), data=body) self.assertEqual(http.FORBIDDEN, response.status_code) # Add Image member to community image path = self._url('/v2/images/%s/members' % image_fixture[0]['id']) body = jsonutils.dumps({'member': TENANT2}) response = requests.post(path, headers=get_header('tenant1'), data=body) self.assertEqual(http.FORBIDDEN, response.status_code) # Image tenant1-shared's members list should contain 1 member path = self._url('/v2/images/%s/members' % image_fixture[3]['id']) response = requests.get(path, headers=get_header('tenant1')) self.assertEqual(http.OK, response.status_code) body = jsonutils.loads(response.text) self.assertEqual(1, len(body['members'])) # Admin can see any members path = self._url('/v2/images/%s/members' % image_fixture[3]['id']) response = requests.get(path, headers=get_header('tenant1', 'admin')) self.assertEqual(http.OK, response.status_code) body = jsonutils.loads(response.text) self.assertEqual(1, len(body['members'])) # Image members not found for private image not owned by TENANT 1 path = self._url('/v2/images/%s/members' % image_fixture[7]['id']) response = requests.get(path, headers=get_header('tenant1')) self.assertEqual(http.NOT_FOUND, response.status_code) # Image members forbidden for public image path = self._url('/v2/images/%s/members' % image_fixture[2]['id']) response = requests.get(path, headers=get_header('tenant1')) self.assertIn("Only shared images have members", response.text) self.assertEqual(http.FORBIDDEN, response.status_code) # Image members forbidden for community image path = self._url('/v2/images/%s/members' % image_fixture[0]['id']) response = requests.get(path, headers=get_header('tenant1')) 
self.assertIn("Only shared images have members", response.text) self.assertEqual(http.FORBIDDEN, response.status_code) # Image members forbidden for private image path = self._url('/v2/images/%s/members' % image_fixture[1]['id']) response = requests.get(path, headers=get_header('tenant1')) self.assertIn("Only shared images have members", response.text) self.assertEqual(http.FORBIDDEN, response.status_code) # Image Member Cannot delete Image membership path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'], TENANT3)) response = requests.delete(path, headers=get_header(TENANT3)) self.assertEqual(http.FORBIDDEN, response.status_code) # Delete Image member path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'], TENANT3)) response = requests.delete(path, headers=get_header('tenant1')) self.assertEqual(http.NO_CONTENT, response.status_code) # Now the image has no members path = self._url('/v2/images/%s/members' % image_fixture[3]['id']) response = requests.get(path, headers=get_header('tenant1')) self.assertEqual(http.OK, response.status_code) body = jsonutils.loads(response.text) self.assertEqual(0, len(body['members'])) # Adding 11 image members should fail since configured limit is 10 path = self._url('/v2/images/%s/members' % image_fixture[3]['id']) for i in range(10): body = jsonutils.dumps({'member': str(uuid.uuid4())}) response = requests.post(path, headers=get_header('tenant1'), data=body) self.assertEqual(http.OK, response.status_code) body = jsonutils.dumps({'member': str(uuid.uuid4())}) response = requests.post(path, headers=get_header('tenant1'), data=body) self.assertEqual(http.REQUEST_ENTITY_TOO_LARGE, response.status_code) # Get Image member should return not found for public image path = self._url('/v2/images/%s/members/%s' % (image_fixture[2]['id'], TENANT3)) response = requests.get(path, headers=get_header('tenant1')) self.assertEqual(http.NOT_FOUND, response.status_code) # Get Image member should return not found for 
community image path = self._url('/v2/images/%s/members/%s' % (image_fixture[0]['id'], TENANT3)) response = requests.get(path, headers=get_header('tenant1')) self.assertEqual(http.NOT_FOUND, response.status_code) # Get Image member should return not found for private image path = self._url('/v2/images/%s/members/%s' % (image_fixture[1]['id'], TENANT3)) response = requests.get(path, headers=get_header('tenant1')) self.assertEqual(http.NOT_FOUND, response.status_code) # Delete Image member should return forbidden for public image path = self._url('/v2/images/%s/members/%s' % (image_fixture[2]['id'], TENANT3)) response = requests.delete(path, headers=get_header('tenant1')) self.assertEqual(http.FORBIDDEN, response.status_code) # Delete Image member should return forbidden for community image path = self._url('/v2/images/%s/members/%s' % (image_fixture[0]['id'], TENANT3)) response = requests.delete(path, headers=get_header('tenant1')) self.assertEqual(http.FORBIDDEN, response.status_code) # Delete Image member should return forbidden for private image path = self._url('/v2/images/%s/members/%s' % (image_fixture[1]['id'], TENANT3)) response = requests.delete(path, headers=get_header('tenant1')) self.assertEqual(http.FORBIDDEN, response.status_code) self.stop_servers() class TestImageMembersWithRegistry(TestImageMembers): def setUp(self): super(TestImageMembersWithRegistry, self).setUp() self.api_server.data_api = ( 'glance.tests.functional.v2.registry_data_api') self.registry_server.deployment_flavor = 'trusted-auth' class TestQuotas(functional.FunctionalTest): def setUp(self): super(TestQuotas, self).setUp() self.cleanup() self.include_scrubber = False self.api_server.deployment_flavor = 'noauth' self.registry_server.deployment_flavor = 'trusted-auth' self.user_storage_quota = 100 self.start_servers(**self.__dict__.copy()) def _url(self, path): return 'http://127.0.0.1:%d%s' % (self.api_port, path) def _headers(self, custom_headers=None): base_headers = { 
'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': TENANT1, 'X-Roles': 'member', } base_headers.update(custom_headers or {}) return base_headers def _upload_image_test(self, data_src, expected_status): # Image list should be empty path = self._url('/v2/images') response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) images = jsonutils.loads(response.text)['images'] self.assertEqual(0, len(images)) # Create an image (with a deployer-defined property) path = self._url('/v2/images') headers = self._headers({'content-type': 'application/json'}) data = jsonutils.dumps({'name': 'testimg', 'type': 'kernel', 'foo': 'bar', 'disk_format': 'aki', 'container_format': 'aki'}) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) image = jsonutils.loads(response.text) image_id = image['id'] # upload data path = self._url('/v2/images/%s/file' % image_id) headers = self._headers({'Content-Type': 'application/octet-stream'}) response = requests.put(path, headers=headers, data=data_src) self.assertEqual(expected_status, response.status_code) # Deletion should work path = self._url('/v2/images/%s' % image_id) response = requests.delete(path, headers=self._headers()) self.assertEqual(http.NO_CONTENT, response.status_code) def test_image_upload_under_quota(self): data = b'x' * (self.user_storage_quota - 1) self._upload_image_test(data, http.NO_CONTENT) def test_image_upload_exceed_quota(self): data = b'x' * (self.user_storage_quota + 1) self._upload_image_test(data, http.REQUEST_ENTITY_TOO_LARGE) def test_chunked_image_upload_under_quota(self): def data_gen(): yield b'x' * (self.user_storage_quota - 1) self._upload_image_test(data_gen(), http.NO_CONTENT) def test_chunked_image_upload_exceed_quota(self): def data_gen(): yield b'x' * (self.user_storage_quota + 1) 
self._upload_image_test(data_gen(), http.REQUEST_ENTITY_TOO_LARGE) class TestQuotasWithRegistry(TestQuotas): def setUp(self): super(TestQuotasWithRegistry, self).setUp() self.api_server.data_api = ( 'glance.tests.functional.v2.registry_data_api') self.registry_server.deployment_flavor = 'trusted-auth' glance-16.0.0/glance/tests/functional/v2/test_metadef_objects.py0000666000175100017510000002634213245511421024675 0ustar zuulzuul00000000000000# Copyright (c) 2014 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
import uuid from oslo_serialization import jsonutils import requests from six.moves import http_client as http from glance.tests import functional TENANT1 = str(uuid.uuid4()) class TestMetadefObjects(functional.FunctionalTest): def setUp(self): super(TestMetadefObjects, self).setUp() self.cleanup() self.api_server.deployment_flavor = 'noauth' self.start_servers(**self.__dict__.copy()) def _url(self, path): return 'http://127.0.0.1:%d%s' % (self.api_port, path) def _headers(self, custom_headers=None): base_headers = { 'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': TENANT1, 'X-Roles': 'admin', } base_headers.update(custom_headers or {}) return base_headers def test_metadata_objects_lifecycle(self): # Namespace should not exist path = self._url('/v2/metadefs/namespaces/MyNamespace') response = requests.get(path, headers=self._headers()) self.assertEqual(http.NOT_FOUND, response.status_code) # Create a namespace path = self._url('/v2/metadefs/namespaces') headers = self._headers({'content-type': 'application/json'}) namespace_name = 'MyNamespace' data = jsonutils.dumps({ "namespace": namespace_name, "display_name": "My User Friendly Namespace", "description": "My description", "visibility": "public", "protected": False, "owner": "The Test Owner" } ) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) # Metadata objects should not exist path = self._url('/v2/metadefs/namespaces/MyNamespace/objects/object1') response = requests.get(path, headers=self._headers()) self.assertEqual(http.NOT_FOUND, response.status_code) # Create a object path = self._url('/v2/metadefs/namespaces/MyNamespace/objects') headers = self._headers({'content-type': 'application/json'}) metadata_object_name = "object1" data = jsonutils.dumps( { "name": metadata_object_name, "description": "object1 description.", "required": [ 
"property1" ], "properties": { "property1": { "type": "integer", "title": "property1", "description": "property1 description", "operators": [""], "default": 100, "minimum": 100, "maximum": 30000369 }, "property2": { "type": "string", "title": "property2", "description": "property2 description ", "default": "value2", "minLength": 2, "maxLength": 50 } } } ) response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CREATED, response.status_code) # Attempt to insert a duplicate response = requests.post(path, headers=headers, data=data) self.assertEqual(http.CONFLICT, response.status_code) # Get the metadata object created above path = self._url('/v2/metadefs/namespaces/%s/objects/%s' % (namespace_name, metadata_object_name)) response = requests.get(path, headers=self._headers()) self.assertEqual(http.OK, response.status_code) metadata_object = jsonutils.loads(response.text) self.assertEqual("object1", metadata_object['name']) # Returned object should match the created object metadata_object = jsonutils.loads(response.text) checked_keys = set([ u'name', u'description', u'properties', u'required', u'self', u'schema', u'created_at', u'updated_at' ]) self.assertEqual(set(metadata_object.keys()), checked_keys) expected_metadata_object = { "name": metadata_object_name, "description": "object1 description.", "required": [ "property1" ], "properties": { 'property1': { 'type': 'integer', "title": "property1", 'description': 'property1 description', 'operators': [''], 'default': 100, 'minimum': 100, 'maximum': 30000369 }, "property2": { "type": "string", "title": "property2", "description": "property2 description ", "default": "value2", "minLength": 2, "maxLength": 50 } }, "self": "/v2/metadefs/namespaces/%(" "namespace)s/objects/%(object)s" % {'namespace': namespace_name, 'object': metadata_object_name}, "schema": "v2/schemas/metadefs/object" } # Simple key values checked_values = set([ u'name', u'description', ]) for key, value in 
expected_metadata_object.items(): if(key in checked_values): self.assertEqual(metadata_object[key], value, key) # Complex key values - properties for key, value in ( expected_metadata_object["properties"]['property2'].items()): self.assertEqual( metadata_object["properties"]["property2"][key], value, key ) # The metadata_object should be mutable path = self._url('/v2/metadefs/namespaces/%s/objects/%s' % (namespace_name, metadata_object_name)) media_type = 'application/json' headers = self._headers({'content-type': media_type}) metadata_object_name = "object1-UPDATED" data = jsonutils.dumps( { "name": metadata_object_name, "description": "desc-UPDATED", "required": [ "property2" ], "properties": { 'property1': { 'type': 'integer', "title": "property1", 'description': 'p1 desc-UPDATED', 'default': 500, 'minimum': 500, 'maximum': 1369 }, "property2": { "type": "string", "title": "property2", "description": "p2 desc-UPDATED", 'operators': [''], "default": "value2-UPDATED", "minLength": 5, "maxLength": 150 } } } ) response = requests.put(path, headers=headers, data=data) self.assertEqual(http.OK, response.status_code, response.text) # Returned metadata_object should reflect the changes metadata_object = jsonutils.loads(response.text) self.assertEqual('object1-UPDATED', metadata_object['name']) self.assertEqual('desc-UPDATED', metadata_object['description']) self.assertEqual('property2', metadata_object['required'][0]) updated_property1 = metadata_object['properties']['property1'] updated_property2 = metadata_object['properties']['property2'] self.assertEqual('integer', updated_property1['type']) self.assertEqual('p1 desc-UPDATED', updated_property1['description']) self.assertEqual('500', updated_property1['default']) self.assertEqual(500, updated_property1['minimum']) self.assertEqual(1369, updated_property1['maximum']) self.assertEqual([''], updated_property2['operators']) self.assertEqual('string', updated_property2['type']) self.assertEqual('p2 desc-UPDATED', 
updated_property2['description']) self.assertEqual('value2-UPDATED', updated_property2['default']) self.assertEqual(5, updated_property2['minLength']) self.assertEqual(150, updated_property2['maxLength']) # Updates should persist across requests path = self._url('/v2/metadefs/namespaces/%s/objects/%s' % (namespace_name, metadata_object_name)) response = requests.get(path, headers=self._headers()) self.assertEqual(200, response.status_code) self.assertEqual('object1-UPDATED', metadata_object['name']) self.assertEqual('desc-UPDATED', metadata_object['description']) self.assertEqual('property2', metadata_object['required'][0]) updated_property1 = metadata_object['properties']['property1'] updated_property2 = metadata_object['properties']['property2'] self.assertEqual('integer', updated_property1['type']) self.assertEqual('p1 desc-UPDATED', updated_property1['description']) self.assertEqual('500', updated_property1['default']) self.assertEqual(500, updated_property1['minimum']) self.assertEqual(1369, updated_property1['maximum']) self.assertEqual([''], updated_property2['operators']) self.assertEqual('string', updated_property2['type']) self.assertEqual('p2 desc-UPDATED', updated_property2['description']) self.assertEqual('value2-UPDATED', updated_property2['default']) self.assertEqual(5, updated_property2['minLength']) self.assertEqual(150, updated_property2['maxLength']) # Deletion of metadata_object object1 path = self._url('/v2/metadefs/namespaces/%s/objects/%s' % (namespace_name, metadata_object_name)) response = requests.delete(path, headers=self._headers()) self.assertEqual(http.NO_CONTENT, response.status_code) # metadata_object object1 should not exist path = self._url('/v2/metadefs/namespaces/%s/objects/%s' % (namespace_name, metadata_object_name)) response = requests.get(path, headers=self._headers()) self.assertEqual(http.NOT_FOUND, response.status_code) 
glance-16.0.0/glance/tests/functional/v2/test_metadef_properties.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import uuid

from oslo_serialization import jsonutils
import requests
from six.moves import http_client as http

from glance.tests import functional

TENANT1 = str(uuid.uuid4())


class TestNamespaceProperties(functional.FunctionalTest):

    def setUp(self):
        super(TestNamespaceProperties, self).setUp()
        self.cleanup()
        self.api_server.deployment_flavor = 'noauth'
        self.start_servers(**self.__dict__.copy())

    def _url(self, path):
        return 'http://127.0.0.1:%d%s' % (self.api_port, path)

    def _headers(self, custom_headers=None):
        base_headers = {
            'X-Identity-Status': 'Confirmed',
            'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
            'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
            'X-Tenant-Id': TENANT1,
            'X-Roles': 'admin',
        }
        base_headers.update(custom_headers or {})
        return base_headers

    def test_properties_lifecycle(self):
        # Namespace should not exist
        path = self._url('/v2/metadefs/namespaces/MyNamespace')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # Create a namespace
        path = self._url('/v2/metadefs/namespaces')
        headers = self._headers({'content-type': 'application/json'})
        namespace_name = 'MyNamespace'
        resource_type_name = 'MyResourceType'
        resource_type_prefix = 'MyPrefix'
        data = jsonutils.dumps({
            "namespace": namespace_name,
            "display_name": "My User Friendly Namespace",
            "description": "My description",
            "visibility": "public",
            "protected": False,
            "owner": "The Test Owner",
            "resource_type_associations": [
                {
                    "name": resource_type_name,
                    "prefix": resource_type_prefix
                }
            ]
        })
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Property1 should not exist
        path = self._url('/v2/metadefs/namespaces/MyNamespace/properties'
                         '/property1')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # Create a property
        path = self._url('/v2/metadefs/namespaces/MyNamespace/properties')
        headers = self._headers({'content-type': 'application/json'})
        property_name = "property1"
        data = jsonutils.dumps(
            {
                "name": property_name,
                "type": "integer",
                "title": "property1",
                "description": "property1 description",
                "default": 100,
                "minimum": 100,
                "maximum": 30000369,
                "readonly": False,
            }
        )
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Attempt to insert a duplicate
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CONFLICT, response.status_code)

        # Get the property created above
        path = self._url('/v2/metadefs/namespaces/%s/properties/%s' %
                         (namespace_name, property_name))
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        property_object = jsonutils.loads(response.text)
        self.assertEqual("integer", property_object['type'])
        self.assertEqual("property1", property_object['title'])
        self.assertEqual("property1 description",
                         property_object['description'])
        self.assertEqual('100', property_object['default'])
        self.assertEqual(100, property_object['minimum'])
        self.assertEqual(30000369, property_object['maximum'])

        # Get the property with specific resource type association
        path = self._url('/v2/metadefs/namespaces/%s/properties/%s%s' % (
            namespace_name, property_name,
            '='.join(['?resource_type', resource_type_name])))
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # Get the property with prefix and specific resource type association
        property_name_with_prefix = ''.join([resource_type_prefix,
                                             property_name])
        path = self._url('/v2/metadefs/namespaces/%s/properties/%s%s' % (
            namespace_name, property_name_with_prefix,
            '='.join(['?resource_type', resource_type_name])))
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        property_object = jsonutils.loads(response.text)
        self.assertEqual("integer", property_object['type'])
        self.assertEqual("property1", property_object['title'])
        self.assertEqual("property1 description",
                         property_object['description'])
        self.assertEqual('100', property_object['default'])
        self.assertEqual(100, property_object['minimum'])
        self.assertEqual(30000369, property_object['maximum'])
        self.assertFalse(property_object['readonly'])

        # Returned property should match the created property
        property_object = jsonutils.loads(response.text)
        checked_keys = set([
            u'name',
            u'type',
            u'title',
            u'description',
            u'default',
            u'minimum',
            u'maximum',
            u'readonly',
        ])
        self.assertEqual(set(property_object.keys()), checked_keys)
        expected_metadata_property = {
            "type": "integer",
            "title": "property1",
            "description": "property1 description",
            "default": '100',
            "minimum": 100,
            "maximum": 30000369,
            "readonly": False,
        }

        for key, value in expected_metadata_property.items():
            self.assertEqual(property_object[key], value, key)

        # The property should be mutable
        path = self._url('/v2/metadefs/namespaces/%s/properties/%s' %
                         (namespace_name, property_name))
        media_type = 'application/json'
        headers = self._headers({'content-type': media_type})
        property_name = "property1-UPDATED"
        data = jsonutils.dumps(
            {
                "name": property_name,
                "type": "string",
                "title": "string property",
                "description": "desc-UPDATED",
                "operators": [""],
                "default": "value-UPDATED",
                "minLength": 5,
                "maxLength": 10,
                "readonly": True,
            }
        )
        response = requests.put(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code, response.text)

        # Returned property should reflect the changes
        property_object = jsonutils.loads(response.text)
        self.assertEqual('string', property_object['type'])
        self.assertEqual('desc-UPDATED', property_object['description'])
        self.assertEqual('value-UPDATED', property_object['default'])
        self.assertEqual([""], property_object['operators'])
        self.assertEqual(5, property_object['minLength'])
        self.assertEqual(10, property_object['maxLength'])
        self.assertTrue(property_object['readonly'])

        # Updates should persist across requests
        path = self._url('/v2/metadefs/namespaces/%s/properties/%s' %
                         (namespace_name, property_name))
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        # Re-parse the response so the assertions below check the freshly
        # fetched property rather than the object returned by the earlier PUT.
        property_object = jsonutils.loads(response.text)
        self.assertEqual('string', property_object['type'])
        self.assertEqual('desc-UPDATED', property_object['description'])
        self.assertEqual('value-UPDATED', property_object['default'])
        self.assertEqual([""], property_object['operators'])
        self.assertEqual(5, property_object['minLength'])
        self.assertEqual(10, property_object['maxLength'])

        # Deletion of property property1
        path = self._url('/v2/metadefs/namespaces/%s/properties/%s' %
                         (namespace_name, property_name))
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # property1 should not exist
        path = self._url('/v2/metadefs/namespaces/%s/properties/%s' %
                         (namespace_name, property_name))
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

glance-16.0.0/glance/tests/functional/v2/test_metadef_tags.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import uuid

from oslo_serialization import jsonutils
import requests
from six.moves import http_client as http

from glance.tests import functional

TENANT1 = str(uuid.uuid4())


class TestMetadefTags(functional.FunctionalTest):

    def setUp(self):
        super(TestMetadefTags, self).setUp()
        self.cleanup()
        self.api_server.deployment_flavor = 'noauth'
        self.start_servers(**self.__dict__.copy())

    def _url(self, path):
        return 'http://127.0.0.1:%d%s' % (self.api_port, path)

    def _headers(self, custom_headers=None):
        base_headers = {
            'X-Identity-Status': 'Confirmed',
            'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
            'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
            'X-Tenant-Id': TENANT1,
            'X-Roles': 'admin',
        }
        base_headers.update(custom_headers or {})
        return base_headers

    def test_metadata_tags_lifecycle(self):
        # Namespace should not exist
        path = self._url('/v2/metadefs/namespaces/MyNamespace')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # Create a namespace
        path = self._url('/v2/metadefs/namespaces')
        headers = self._headers({'content-type': 'application/json'})
        namespace_name = 'MyNamespace'
        data = jsonutils.dumps({
            "namespace": namespace_name,
            "display_name": "My User Friendly Namespace",
            "description": "My description",
            "visibility": "public",
            "protected": False,
            "owner": "The Test Owner"
        })
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Metadata tag should not exist
        metadata_tag_name = "tag1"
        path = self._url('/v2/metadefs/namespaces/%s/tags/%s' %
                         (namespace_name, metadata_tag_name))
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # Create the metadata tag
        headers = self._headers({'content-type': 'application/json'})
        response = requests.post(path, headers=headers)
        self.assertEqual(http.CREATED, response.status_code)

        # Get the metadata tag created above
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        metadata_tag = jsonutils.loads(response.text)
        self.assertEqual(metadata_tag_name, metadata_tag['name'])

        # Returned tag should match the created tag
        metadata_tag = jsonutils.loads(response.text)
        checked_keys = set([
            u'name',
            u'created_at',
            u'updated_at'
        ])
        self.assertEqual(checked_keys, set(metadata_tag.keys()))
        expected_metadata_tag = {
            "name": metadata_tag_name
        }

        # Simple key values
        checked_values = set([
            u'name'
        ])
        for key, value in expected_metadata_tag.items():
            if key in checked_values:
                self.assertEqual(metadata_tag[key], value, key)

        # Try to create a duplicate metadata tag
        headers = self._headers({'content-type': 'application/json'})
        response = requests.post(path, headers=headers)
        self.assertEqual(http.CONFLICT, response.status_code)

        # The metadata_tag should be mutable
        path = self._url('/v2/metadefs/namespaces/%s/tags/%s' %
                         (namespace_name, metadata_tag_name))
        media_type = 'application/json'
        headers = self._headers({'content-type': media_type})
        metadata_tag_name = "tag1-UPDATED"
        data = jsonutils.dumps(
            {
                "name": metadata_tag_name
            }
        )
        response = requests.put(path, headers=headers, data=data)
        self.assertEqual(http.OK, response.status_code, response.text)

        # Returned metadata_tag should reflect the changes
        metadata_tag = jsonutils.loads(response.text)
        self.assertEqual('tag1-UPDATED', metadata_tag['name'])

        # Updates should persist across requests
        path = self._url('/v2/metadefs/namespaces/%s/tags/%s' %
                         (namespace_name, metadata_tag_name))
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        # Re-parse the response so the assertion below checks the freshly
        # fetched tag rather than the object returned by the earlier PUT.
        metadata_tag = jsonutils.loads(response.text)
        self.assertEqual('tag1-UPDATED', metadata_tag['name'])

        # Deletion of metadata_tag_name
        path = self._url('/v2/metadefs/namespaces/%s/tags/%s' %
                         (namespace_name, metadata_tag_name))
        response = requests.delete(path, headers=self._headers())
        self.assertEqual(http.NO_CONTENT, response.status_code)

        # metadata_tag_name should not exist
        path = self._url('/v2/metadefs/namespaces/%s/tags/%s' %
                         (namespace_name, metadata_tag_name))
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.NOT_FOUND, response.status_code)

        # Create multiple tags.
        path = self._url('/v2/metadefs/namespaces/%s/tags' %
                         (namespace_name))
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps(
            {"tags": [{"name": "tag1"}, {"name": "tag2"}, {"name": "tag3"}]}
        )
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # List out the three new tags.
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(3, len(tags))

        # Attempt to create bogus duplicate tag4
        data = jsonutils.dumps(
            {"tags": [{"name": "tag4"}, {"name": "tag5"}, {"name": "tag4"}]}
        )
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CONFLICT, response.status_code)

        # Verify the previous 3 still exist
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        tags = jsonutils.loads(response.text)['tags']
        self.assertEqual(3, len(tags))

glance-16.0.0/glance/tests/functional/v2/test_metadef_resourcetypes.py

# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
import six
from six.moves import http_client as http
import webob.exc
from wsme.rest import json

from glance.api import policy
from glance.api.v2.model.metadef_resource_type import ResourceType
from glance.api.v2.model.metadef_resource_type import ResourceTypeAssociation
from glance.api.v2.model.metadef_resource_type import ResourceTypeAssociations
from glance.api.v2.model.metadef_resource_type import ResourceTypes
from glance.common import exception
from glance.common import wsgi
import glance.db
import glance.gateway
from glance.i18n import _, _LE
import glance.notifier
import glance.schema

LOG = logging.getLogger(__name__)


class ResourceTypeController(object):
    def __init__(self, db_api=None, policy_enforcer=None):
        self.db_api = db_api or glance.db.get_api()
        self.policy = policy_enforcer or policy.Enforcer()
        self.gateway = glance.gateway.Gateway(db_api=self.db_api,
                                              policy_enforcer=self.policy)

    def index(self, req):
        try:
            filters = {'namespace': None}
            rs_type_repo = self.gateway.get_metadef_resource_type_repo(
                req.context)
            db_resource_type_list = rs_type_repo.list(filters=filters)
            resource_type_list = [ResourceType.to_wsme_model(
                resource_type) for resource_type in db_resource_type_list]
            resource_types = ResourceTypes()
            resource_types.resource_types = resource_type_list
        except exception.Forbidden as e:
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(e)
            raise webob.exc.HTTPInternalServerError(e)
        return resource_types

    def show(self, req, namespace):
        try:
            filters = {'namespace': namespace}
            rs_type_repo = self.gateway.get_metadef_resource_type_repo(
                req.context)
            db_resource_type_list = rs_type_repo.list(filters=filters)
            resource_type_list = [ResourceTypeAssociation.to_wsme_model(
                resource_type) for resource_type in db_resource_type_list]
            resource_types = ResourceTypeAssociations()
            resource_types.resource_type_associations = resource_type_list
        except exception.Forbidden as e:
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except Exception as e:
            LOG.error(e)
            raise webob.exc.HTTPInternalServerError(e)
        return resource_types

    def create(self, req, resource_type, namespace):
        rs_type_factory = self.gateway.get_metadef_resource_type_factory(
            req.context)
        rs_type_repo = self.gateway.get_metadef_resource_type_repo(req.context)
        try:
            new_resource_type = rs_type_factory.new_resource_type(
                namespace=namespace, **resource_type.to_dict())
            rs_type_repo.add(new_resource_type)
        except exception.Forbidden as e:
            msg = (_LE("Forbidden to create resource type. "
                       "Reason: %(reason)s")
                   % {'reason': encodeutils.exception_to_unicode(e)})
            LOG.error(msg)
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        except exception.Duplicate as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except Exception as e:
            LOG.error(e)
            raise webob.exc.HTTPInternalServerError()
        return ResourceTypeAssociation.to_wsme_model(new_resource_type)

    def delete(self, req, namespace, resource_type):
        rs_type_repo = self.gateway.get_metadef_resource_type_repo(req.context)
        try:
            filters = {}
            found = False
            filters['namespace'] = namespace
            db_resource_type_list = rs_type_repo.list(filters=filters)
            for db_resource_type in db_resource_type_list:
                if db_resource_type.name == resource_type:
                    db_resource_type.delete()
                    rs_type_repo.remove(db_resource_type)
                    found = True
            if not found:
                raise exception.NotFound()
        except exception.Forbidden as e:
            raise webob.exc.HTTPForbidden(explanation=e.msg)
        except exception.NotFound as e:
            msg = (_("Failed to find resource type %(resourcetype)s to "
                     "delete") % {'resourcetype': resource_type})
            LOG.error(msg)
            raise webob.exc.HTTPNotFound(explanation=msg)
        except Exception as e:
            LOG.error(e)
            raise webob.exc.HTTPInternalServerError()


class RequestDeserializer(wsgi.JSONRequestDeserializer):
    _disallowed_properties = ['created_at', 'updated_at']

    def __init__(self, schema=None):
        super(RequestDeserializer, self).__init__()
        self.schema = schema or get_schema()

    def _get_request_body(self, request):
        output = super(RequestDeserializer, self).default(request)
        if 'body' not in output:
            msg = _('Body expected in request.')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        return output['body']

    @classmethod
    def _check_allowed(cls, image):
        for key in cls._disallowed_properties:
            if key in image:
                msg = _("Attribute '%s' is read-only.") % key
                raise webob.exc.HTTPForbidden(
                    explanation=encodeutils.exception_to_unicode(msg))

    def create(self, request):
        body = self._get_request_body(request)
        self._check_allowed(body)
        try:
            self.schema.validate(body)
        except exception.InvalidObject as e:
            raise webob.exc.HTTPBadRequest(explanation=e.msg)
        resource_type = json.fromjson(ResourceTypeAssociation, body)
        return dict(resource_type=resource_type)


class ResponseSerializer(wsgi.JSONResponseSerializer):
    def __init__(self, schema=None):
        super(ResponseSerializer, self).__init__()
        self.schema = schema

    def show(self, response, result):
        resource_type_json = json.tojson(ResourceTypeAssociations, result)
        body = jsonutils.dumps(resource_type_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def index(self, response, result):
        resource_type_json = json.tojson(ResourceTypes, result)
        body = jsonutils.dumps(resource_type_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def create(self, response, result):
        resource_type_json = json.tojson(ResourceTypeAssociation, result)
        response.status_int = http.CREATED
        body = jsonutils.dumps(resource_type_json, ensure_ascii=False)
        response.unicode_body = six.text_type(body)
        response.content_type = 'application/json'

    def delete(self, response, result):
        response.status_int = http.NO_CONTENT


def _get_base_properties():
    return {
        'name': {
            'type': 'string',
            'description': _('Resource type names should be aligned with Heat '
                             'resource types whenever possible: '
                             'http://docs.openstack.org/developer/heat/'
                             'template_guide/openstack.html'),
            'maxLength': 80,
        },
        'prefix': {
            'type': 'string',
            'description': _('Specifies the prefix to use for the given '
                             'resource type. Any properties in the namespace '
                             'should be prefixed with this prefix when being '
                             'applied to the specified resource type. Must '
                             'include prefix separator (e.g. a colon :).'),
            'maxLength': 80,
        },
        'properties_target': {
            'type': 'string',
            'description': _('Some resource types allow more than one key / '
                             'value pair per instance. For example, Cinder '
                             'allows user and image metadata on volumes. Only '
                             'the image properties metadata is evaluated by '
                             'Nova (scheduling or drivers). This property '
                             'allows a namespace target to remove the '
                             'ambiguity.'),
            'maxLength': 80,
        },
        "created_at": {
            "type": "string",
            "readOnly": True,
            "description": _("Date and time of resource type association"),
            "format": "date-time"
        },
        "updated_at": {
            "type": "string",
            "readOnly": True,
            "description": _("Date and time of the last resource type "
                             "association modification"),
            "format": "date-time"
        }
    }


def get_schema():
    properties = _get_base_properties()
    mandatory_attrs = ResourceTypeAssociation.get_mandatory_attrs()
    schema = glance.schema.Schema(
        'resource_type_association',
        properties,
        required=mandatory_attrs,
    )
    return schema


def get_collection_schema():
    resource_type_schema = get_schema()
    return glance.schema.CollectionSchema('resource_type_associations',
                                          resource_type_schema)


def create_resource():
    """ResourceTypeAssociation resource factory method"""
    schema = get_schema()
    deserializer = RequestDeserializer(schema)
    serializer = ResponseSerializer(schema)
    controller = ResourceTypeController()
    return wsgi.Resource(controller, deserializer, serializer)

glance-16.0.0/glance/tests/functional/v2/__init__.py

glance-16.0.0/glance/tests/functional/v2/registry_data_api.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from glance.db.registry.api import *  # noqa

from glance.common.rpc import RPCClient
from glance.registry.client.v2 import api
from glance.registry.client.v2 import client


def patched_bulk_request(self, commands):
    # We add some auth headers which are typically
    # added by keystone. This is required when testing
    # without keystone, otherwise the tests fail.
    # We use the 'trusted-auth' deployment flavour
    # for testing so that these headers are interpreted
    # as expected (ie the same way as if keystone was
    # present)
    body = self._serializer.to_json(commands)
    headers = {"X-Identity-Status": "Confirmed", 'X-Roles': 'member'}
    if self.context.user is not None:
        headers['X-User-Id'] = self.context.user
    if self.context.tenant is not None:
        headers['X-Tenant-Id'] = self.context.tenant
    response = super(RPCClient, self).do_request('POST',
                                                 self.base_path,
                                                 body,
                                                 headers=headers)
    return self._deserializer.from_json(response.read())


def client_wrapper(func):
    def call(context):
        reg_client = func(context)
        reg_client.context = context
        return reg_client
    return call


client.RegistryClient.bulk_request = patched_bulk_request

api.get_registry_client = client_wrapper(api.get_registry_client)

glance-16.0.0/glance/tests/functional/test_wsgi.py

# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for `glance.wsgi`."""

import socket
import time

from oslo_config import cfg
import testtools

from glance.common import wsgi

CONF = cfg.CONF


class TestWSGIServer(testtools.TestCase):
    """WSGI server tests."""

    def test_client_socket_timeout(self):
        """Verify connections are timed out as per 'client_socket_timeout'"""
        CONF.set_default("workers", 0)
        CONF.set_default("client_socket_timeout", 1)
        greetings = b'Hello, World!!!'

        def hello_world(env, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [greetings]

        server = wsgi.Server()
        server.start(hello_world, 0)
        port = server.sock.getsockname()[1]

        def get_request(delay=0.0):
            sock = socket.socket()
            sock.connect(('127.0.0.1', port))
            time.sleep(delay)
            sock.send(b'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
            return sock.recv(1024)

        # Should succeed - no timeout
        self.assertIn(greetings, get_request())

        # Should fail - connection timed out so we get nothing from the server
        self.assertFalse(get_request(delay=1.1))

glance-16.0.0/glance/tests/functional/test_glance_manage.py

# Copyright 2012 Red Hat, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Functional test cases for glance-manage"""

import os
import sys

from oslo_config import cfg
from oslo_db import options as db_options

from glance.common import utils
from glance.db import migration as db_migration
from glance.db.sqlalchemy import alembic_migrations
from glance.db.sqlalchemy.alembic_migrations import data_migrations
from glance.db.sqlalchemy import api as db_api
from glance.tests import functional
from glance.tests.utils import depends_on_exe
from glance.tests.utils import execute
from glance.tests.utils import skip_if_disabled

CONF = cfg.CONF


class TestGlanceManage(functional.FunctionalTest):
    """Functional tests for glance-manage"""

    def setUp(self):
        super(TestGlanceManage, self).setUp()
        conf_dir = os.path.join(self.test_dir, 'etc')
        utils.safe_mkdirs(conf_dir)
        self.conf_filepath = os.path.join(conf_dir, 'glance-manage.conf')
        self.db_filepath = os.path.join(self.test_dir, 'tests.sqlite')
        self.connection = ('sql_connection = sqlite:///%s' %
                           self.db_filepath)
        db_options.set_defaults(CONF, connection='sqlite:///%s' %
                                self.db_filepath)

    def _db_command(self, db_method):
        with open(self.conf_filepath, 'w') as conf_file:
            conf_file.write('[DEFAULT]\n')
            conf_file.write(self.connection)
            conf_file.flush()

        cmd = ('%s -m glance.cmd.manage --config-file %s db %s' %
               (sys.executable, self.conf_filepath, db_method))
        return execute(cmd, raise_error=True)

    def _check_db(self, expected_exitcode):
        with open(self.conf_filepath, 'w') as conf_file:
            conf_file.write('[DEFAULT]\n')
            conf_file.write(self.connection)
            conf_file.flush()

        cmd = ('%s -m glance.cmd.manage --config-file %s db check' %
               (sys.executable, self.conf_filepath))
        exitcode, out, err = execute(cmd, raise_error=True,
                                     expected_exitcode=expected_exitcode)
        return exitcode, out

    def _assert_table_exists(self, db_table):
        cmd = ("sqlite3 {0} \"SELECT name FROM sqlite_master WHERE "
               "type='table' AND name='{1}'\"").format(self.db_filepath,
                                                       db_table)
        exitcode, out, err = execute(cmd, raise_error=True)
        msg = "Expected table {0} was not found in the schema".format(db_table)
        self.assertEqual(out.rstrip().decode("utf-8"), db_table, msg)

    @depends_on_exe('sqlite3')
    @skip_if_disabled
    def test_db_creation(self):
        """Test schema creation by db_sync on a fresh DB"""
        self._db_command(db_method='sync')

        for table in ['images', 'image_tags', 'image_locations',
                      'image_members', 'image_properties']:
            self._assert_table_exists(table)

    @depends_on_exe('sqlite3')
    @skip_if_disabled
    def test_sync(self):
        """Test DB sync which internally calls EMC"""
        self._db_command(db_method='sync')
        contract_head = alembic_migrations.get_alembic_branch_head(
            db_migration.CONTRACT_BRANCH)

        cmd = ("sqlite3 {0} \"SELECT version_num FROM alembic_version\""
               ).format(self.db_filepath)
        exitcode, out, err = execute(cmd, raise_error=True)
        self.assertEqual(contract_head, out.rstrip().decode("utf-8"))

    @depends_on_exe('sqlite3')
    @skip_if_disabled
    def test_check(self):
        exitcode, out = self._check_db(3)
        self.assertEqual(3, exitcode)

        self._db_command(db_method='expand')
        if data_migrations.has_pending_migrations(db_api.get_engine()):
            exitcode, out = self._check_db(4)
            self.assertEqual(4, exitcode)

            self._db_command(db_method='migrate')

        exitcode, out = self._check_db(5)
        self.assertEqual(5, exitcode)

        self._db_command(db_method='contract')
        exitcode, out = self._check_db(0)
        self.assertEqual(0, exitcode)

    @depends_on_exe('sqlite3')
    @skip_if_disabled
    def test_expand(self):
        """Test DB expand"""
        self._db_command(db_method='expand')
        expand_head = alembic_migrations.get_alembic_branch_head(
            db_migration.EXPAND_BRANCH)

        cmd = ("sqlite3 {0} \"SELECT version_num FROM alembic_version\""
               ).format(self.db_filepath)
        exitcode, out, err = execute(cmd, raise_error=True)
        self.assertEqual(expand_head, out.rstrip().decode("utf-8"))

        exitcode, out, err = self._db_command(db_method='expand')
        self.assertIn('Database expansion is up to date. '
                      'No expansion needed.', out)

    @depends_on_exe('sqlite3')
    @skip_if_disabled
    def test_migrate(self):
        """Test DB migrate"""
        self._db_command(db_method='expand')
        if data_migrations.has_pending_migrations(db_api.get_engine()):
            self._db_command(db_method='migrate')

        expand_head = alembic_migrations.get_alembic_branch_head(
            db_migration.EXPAND_BRANCH)
        cmd = ("sqlite3 {0} \"SELECT version_num FROM alembic_version\""
               ).format(self.db_filepath)
        exitcode, out, err = execute(cmd, raise_error=True)
        self.assertEqual(expand_head, out.rstrip().decode("utf-8"))
        self.assertEqual(False, data_migrations.has_pending_migrations(
            db_api.get_engine()))

        if data_migrations.has_pending_migrations(db_api.get_engine()):
            exitcode, out, err = self._db_command(db_method='migrate')
            self.assertIn('Database migration is up to date. No migration '
                          'needed.', out)

    @depends_on_exe('sqlite3')
    @skip_if_disabled
    def test_contract(self):
        """Test DB contract"""
        self._db_command(db_method='expand')
        if data_migrations.has_pending_migrations(db_api.get_engine()):
            self._db_command(db_method='migrate')
        self._db_command(db_method='contract')
        contract_head = alembic_migrations.get_alembic_branch_head(
            db_migration.CONTRACT_BRANCH)

        cmd = ("sqlite3 {0} \"SELECT version_num FROM alembic_version\""
               ).format(self.db_filepath)
        exitcode, out, err = execute(cmd, raise_error=True)
        self.assertEqual(contract_head, out.rstrip().decode("utf-8"))

        exitcode, out, err = self._db_command(db_method='contract')
        self.assertIn('Database is up to date. No migrations needed.', out)

glance-16.0.0/glance/tests/functional/test_logging.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Functional test case that tests logging output"""

import os
import stat

import httplib2
from six.moves import http_client as http

from glance.tests import functional


class TestLogging(functional.FunctionalTest):

    """Functional tests for Glance's logging output"""

    def test_debug(self):
        """
        Test logging output proper when debug is on.
        """
        self.cleanup()
        self.start_servers()

        # The default functional test case has debug on for both servers.
        # Let's verify that debug statements appear in both the API and
        # registry logs.
        self.assertTrue(os.path.exists(self.api_server.log_file))
        with open(self.api_server.log_file, 'r') as f:
            api_log_out = f.read()
        self.assertIn('DEBUG glance', api_log_out)

        self.assertTrue(os.path.exists(self.registry_server.log_file))
        with open(self.registry_server.log_file, 'r') as freg:
            registry_log_out = freg.read()
        self.assertIn('DEBUG glance', registry_log_out)
        self.stop_servers()

    def test_no_debug(self):
        """
        Test logging output proper when debug is off.
""" self.cleanup() self.start_servers(debug=False) self.assertTrue(os.path.exists(self.api_server.log_file)) with open(self.api_server.log_file, 'r') as f: api_log_out = f.read() self.assertNotIn('DEBUG glance', api_log_out) self.assertTrue(os.path.exists(self.registry_server.log_file)) with open(self.registry_server.log_file, 'r') as freg: registry_log_out = freg.read() self.assertNotIn('DEBUG glance', registry_log_out) self.stop_servers() def assertNotEmptyFile(self, path): self.assertTrue(os.path.exists(path)) self.assertNotEqual(os.stat(path)[stat.ST_SIZE], 0) def test_logrotate(self): """ Test that we notice when our log file has been rotated """ self.cleanup() self.start_servers() self.assertNotEmptyFile(self.api_server.log_file) os.rename(self.api_server.log_file, self.api_server.log_file + ".1") path = "http://%s:%d/" % ("127.0.0.1", self.api_port) response, content = httplib2.Http().request(path, 'GET') self.assertEqual(http.MULTIPLE_CHOICES, response.status) self.assertNotEmptyFile(self.api_server.log_file) self.stop_servers() glance-16.0.0/glance/tests/functional/__init__.py0000666000175100017510000010561313245511421021727 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Base test class for running non-stubbed tests (functional tests) The FunctionalTest class contains helper methods for starting the API and Registry server, grabbing the logs of each, cleaning up pidfiles, and spinning down the servers. """ import atexit import datetime import errno import os import platform import shutil import signal import socket import sys import tempfile import time import fixtures from oslo_serialization import jsonutils # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range import six.moves.urllib.parse as urlparse import testtools from glance.common import utils from glance.db.sqlalchemy import api as db_api from glance import tests as glance_tests from glance.tests import utils as test_utils execute, get_unused_port = test_utils.execute, test_utils.get_unused_port tracecmd_osmap = {'Linux': 'strace', 'FreeBSD': 'truss'} class Server(object): """ Class used to easily manage starting and stopping a server during functional test runs. """ def __init__(self, test_dir, port, sock=None): """ Creates a new Server object. :param test_dir: The directory where all test stuff is kept. This is passed from the FunctionalTestCase. :param port: The port to start a server up on. 
""" self.debug = True self.no_venv = False self.test_dir = test_dir self.bind_port = port self.conf_file_name = None self.conf_base = None self.paste_conf_base = None self.exec_env = None self.deployment_flavor = '' self.show_image_direct_url = False self.show_multiple_locations = False self.property_protection_file = '' self.enable_v1_api = True self.enable_v2_api = True # TODO(rosmaita): remove in Queens when the option is removed # also, don't forget to remove it from ApiServer.conf_base self.enable_image_import = False self.enable_v1_registry = True self.enable_v2_registry = True self.needs_database = False self.log_file = None self.sock = sock self.fork_socket = True self.process_pid = None self.server_module = None self.stop_kill = False self.use_user_token = True self.send_identity_credentials = False def write_conf(self, **kwargs): """ Writes the configuration file for the server to its intended destination. Returns the name of the configuration file and the over-ridden config content (may be useful for populating error messages). """ if not self.conf_base: raise RuntimeError("Subclass did not populate config_base!") conf_override = self.__dict__.copy() if kwargs: conf_override.update(**kwargs) # A config file and paste.ini to use just for this test...we don't want # to trample on currently-running Glance servers, now do we? 
conf_dir = os.path.join(self.test_dir, 'etc') conf_filepath = os.path.join(conf_dir, "%s.conf" % self.server_name) if os.path.exists(conf_filepath): os.unlink(conf_filepath) paste_conf_filepath = conf_filepath.replace(".conf", "-paste.ini") if os.path.exists(paste_conf_filepath): os.unlink(paste_conf_filepath) utils.safe_mkdirs(conf_dir) def override_conf(filepath, overridden): with open(filepath, 'w') as conf_file: conf_file.write(overridden) conf_file.flush() return conf_file.name overridden_core = self.conf_base % conf_override self.conf_file_name = override_conf(conf_filepath, overridden_core) overridden_paste = '' if self.paste_conf_base: overridden_paste = self.paste_conf_base % conf_override override_conf(paste_conf_filepath, overridden_paste) overridden = ('==Core config==\n%s\n==Paste config==\n%s' % (overridden_core, overridden_paste)) return self.conf_file_name, overridden def start(self, expect_exit=True, expected_exitcode=0, **kwargs): """ Starts the server. Any kwargs passed to this method will override the configuration value in the conf file used in starting the servers. 
""" # Ensure the configuration file is written self.write_conf(**kwargs) self.create_database() cmd = ("%(server_module)s --config-file %(conf_file_name)s" % {"server_module": self.server_module, "conf_file_name": self.conf_file_name}) cmd = "%s -m %s" % (sys.executable, cmd) # close the sock and release the unused port closer to start time if self.exec_env: exec_env = self.exec_env.copy() else: exec_env = {} pass_fds = set() if self.sock: if not self.fork_socket: self.sock.close() self.sock = None else: fd = os.dup(self.sock.fileno()) exec_env[utils.GLANCE_TEST_SOCKET_FD_STR] = str(fd) pass_fds.add(fd) self.sock.close() self.process_pid = test_utils.fork_exec(cmd, logfile=os.devnull, exec_env=exec_env, pass_fds=pass_fds) self.stop_kill = not expect_exit if self.pid_file: pf = open(self.pid_file, 'w') pf.write('%d\n' % self.process_pid) pf.close() if not expect_exit: rc = 0 try: os.kill(self.process_pid, 0) except OSError: raise RuntimeError("The process did not start") else: rc = test_utils.wait_for_fork( self.process_pid, expected_exitcode=expected_exitcode) # avoid an FD leak if self.sock: os.close(fd) self.sock = None return (rc, '', '') def reload(self, expect_exit=True, expected_exitcode=0, **kwargs): """ Start and stop the service to reload Any kwargs passed to this method will override the configuration value in the conf file used in starting the servers. 
""" self.stop() return self.start(expect_exit=expect_exit, expected_exitcode=expected_exitcode, **kwargs) def create_database(self): """Create database if required for this server""" if self.needs_database: conf_dir = os.path.join(self.test_dir, 'etc') utils.safe_mkdirs(conf_dir) conf_filepath = os.path.join(conf_dir, 'glance-manage.conf') with open(conf_filepath, 'w') as conf_file: conf_file.write('[DEFAULT]\n') conf_file.write('sql_connection = %s' % self.sql_connection) conf_file.flush() glance_db_env = 'GLANCE_DB_TEST_SQLITE_FILE' if glance_db_env in os.environ: # use the empty db created and cached as a tempfile # instead of spending the time creating a new one db_location = os.environ[glance_db_env] os.system('cp %s %s/tests.sqlite' % (db_location, self.test_dir)) else: cmd = ('%s -m glance.cmd.manage --config-file %s db sync' % (sys.executable, conf_filepath)) execute(cmd, no_venv=self.no_venv, exec_env=self.exec_env, expect_exit=True) # copy the clean db to a temp location so that it # can be reused for future tests (osf, db_location) = tempfile.mkstemp() os.close(osf) os.system('cp %s/tests.sqlite %s' % (self.test_dir, db_location)) os.environ[glance_db_env] = db_location # cleanup the temp file when the test suite is # complete def _delete_cached_db(): try: os.remove(os.environ[glance_db_env]) except Exception: glance_tests.logger.exception( "Error cleaning up the file %s" % os.environ[glance_db_env]) atexit.register(_delete_cached_db) def stop(self): """ Spin down the server. """ if not self.process_pid: raise Exception('why is this being called? 
%s' % self.server_name) if self.stop_kill: os.kill(self.process_pid, signal.SIGTERM) rc = test_utils.wait_for_fork(self.process_pid, raise_error=False) return (rc, '', '') def dump_log(self): if not self.log_file: return "log_file not set for {name}".format(name=self.server_name) elif not os.path.exists(self.log_file): return "{log_file} for {name} did not exist".format( log_file=self.log_file, name=self.server_name) with open(self.log_file, 'r') as fptr: return fptr.read().strip() class ApiServer(Server): """ Server object that starts/stops/manages the API server """ def __init__(self, test_dir, port, policy_file, delayed_delete=False, pid_file=None, sock=None, **kwargs): super(ApiServer, self).__init__(test_dir, port, sock=sock) self.server_name = 'api' self.server_module = 'glance.cmd.%s' % self.server_name self.default_store = kwargs.get("default_store", "file") self.bind_host = "127.0.0.1" self.registry_host = "127.0.0.1" self.key_file = "" self.cert_file = "" self.metadata_encryption_key = "012345678901234567890123456789ab" self.image_dir = os.path.join(self.test_dir, "images") self.pid_file = pid_file or os.path.join(self.test_dir, "api.pid") self.log_file = os.path.join(self.test_dir, "api.log") self.image_size_cap = 1099511627776 self.delayed_delete = delayed_delete self.owner_is_tenant = True self.workers = 0 self.scrub_time = 5 self.image_cache_dir = os.path.join(self.test_dir, 'cache') self.image_cache_driver = 'sqlite' self.policy_file = policy_file self.policy_default_rule = 'default' self.property_protection_rule_format = 'roles' self.image_member_quota = 10 self.image_property_quota = 10 self.image_tag_quota = 10 self.image_location_quota = 2 self.disable_path = None self.needs_database = True default_sql_connection = 'sqlite:////%s/tests.sqlite' % self.test_dir self.sql_connection = os.environ.get('GLANCE_TEST_SQL_CONNECTION', default_sql_connection) self.data_api = kwargs.get("data_api", "glance.db.sqlalchemy.api") self.user_storage_quota = '0' 
self.lock_path = self.test_dir self.location_strategy = 'location_order' self.store_type_location_strategy_preference = "" self.send_identity_headers = False self.conf_base = """[DEFAULT] debug = %(debug)s default_log_levels = eventlet.wsgi.server=DEBUG bind_host = %(bind_host)s bind_port = %(bind_port)s key_file = %(key_file)s cert_file = %(cert_file)s metadata_encryption_key = %(metadata_encryption_key)s registry_host = %(registry_host)s registry_port = %(registry_port)s use_user_token = %(use_user_token)s send_identity_credentials = %(send_identity_credentials)s log_file = %(log_file)s image_size_cap = %(image_size_cap)d delayed_delete = %(delayed_delete)s owner_is_tenant = %(owner_is_tenant)s workers = %(workers)s scrub_time = %(scrub_time)s send_identity_headers = %(send_identity_headers)s image_cache_dir = %(image_cache_dir)s image_cache_driver = %(image_cache_driver)s data_api = %(data_api)s sql_connection = %(sql_connection)s show_image_direct_url = %(show_image_direct_url)s show_multiple_locations = %(show_multiple_locations)s user_storage_quota = %(user_storage_quota)s enable_v1_api = %(enable_v1_api)s enable_v2_api = %(enable_v2_api)s enable_image_import = %(enable_image_import)s lock_path = %(lock_path)s property_protection_file = %(property_protection_file)s property_protection_rule_format = %(property_protection_rule_format)s image_member_quota=%(image_member_quota)s image_property_quota=%(image_property_quota)s image_tag_quota=%(image_tag_quota)s image_location_quota=%(image_location_quota)s location_strategy=%(location_strategy)s allow_additional_image_properties = True [oslo_policy] policy_file = %(policy_file)s policy_default_rule = %(policy_default_rule)s [paste_deploy] flavor = %(deployment_flavor)s [store_type_location_strategy] store_type_preference = %(store_type_location_strategy_preference)s [glance_store] filesystem_store_datadir=%(image_dir)s default_store = %(default_store)s """ self.paste_conf_base = """[pipeline:glance-api] pipeline = 
cors healthcheck versionnegotiation gzip unauthenticated-context rootapp [pipeline:glance-api-caching] pipeline = cors healthcheck versionnegotiation gzip unauthenticated-context cache rootapp [pipeline:glance-api-cachemanagement] pipeline = cors healthcheck versionnegotiation gzip unauthenticated-context cache cache_manage rootapp [pipeline:glance-api-fakeauth] pipeline = cors healthcheck versionnegotiation gzip fakeauth context rootapp [pipeline:glance-api-noauth] pipeline = cors healthcheck versionnegotiation gzip context rootapp [composite:rootapp] paste.composite_factory = glance.api:root_app_factory /: apiversions /v1: apiv1app /v2: apiv2app [app:apiversions] paste.app_factory = glance.api.versions:create_resource [app:apiv1app] paste.app_factory = glance.api.v1.router:API.factory [app:apiv2app] paste.app_factory = glance.api.v2.router:API.factory [filter:healthcheck] paste.filter_factory = oslo_middleware:Healthcheck.factory backends = disable_by_file disable_by_file_path = %(disable_path)s [filter:versionnegotiation] paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory [filter:gzip] paste.filter_factory = glance.api.middleware.gzip:GzipMiddleware.factory [filter:cache] paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory [filter:cache_manage] paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory [filter:context] paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory [filter:unauthenticated-context] paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory [filter:fakeauth] paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory [filter:cors] paste.filter_factory = oslo_middleware.cors:filter_factory allowed_origin=http://valid.example.com """ class RegistryServer(Server): """ Server object that starts/stops/manages the Registry server """ def __init__(self, test_dir, port, policy_file, 
sock=None): super(RegistryServer, self).__init__(test_dir, port, sock=sock) self.server_name = 'registry' self.server_module = 'glance.cmd.%s' % self.server_name self.needs_database = True default_sql_connection = 'sqlite:////%s/tests.sqlite' % self.test_dir self.sql_connection = os.environ.get('GLANCE_TEST_SQL_CONNECTION', default_sql_connection) self.bind_host = "127.0.0.1" self.pid_file = os.path.join(self.test_dir, "registry.pid") self.log_file = os.path.join(self.test_dir, "registry.log") self.owner_is_tenant = True self.workers = 0 self.api_version = 1 self.user_storage_quota = '0' self.metadata_encryption_key = "012345678901234567890123456789ab" self.policy_file = policy_file self.policy_default_rule = 'default' self.disable_path = None self.conf_base = """[DEFAULT] debug = %(debug)s bind_host = %(bind_host)s bind_port = %(bind_port)s log_file = %(log_file)s sql_connection = %(sql_connection)s sql_idle_timeout = 3600 api_limit_max = 1000 limit_param_default = 25 owner_is_tenant = %(owner_is_tenant)s enable_v2_registry = %(enable_v2_registry)s workers = %(workers)s user_storage_quota = %(user_storage_quota)s metadata_encryption_key = %(metadata_encryption_key)s [oslo_policy] policy_file = %(policy_file)s policy_default_rule = %(policy_default_rule)s [paste_deploy] flavor = %(deployment_flavor)s """ self.paste_conf_base = """[pipeline:glance-registry] pipeline = healthcheck unauthenticated-context registryapp [pipeline:glance-registry-fakeauth] pipeline = healthcheck fakeauth context registryapp [pipeline:glance-registry-trusted-auth] pipeline = healthcheck context registryapp [app:registryapp] paste.app_factory = glance.registry.api:API.factory [filter:healthcheck] paste.filter_factory = oslo_middleware:Healthcheck.factory backends = disable_by_file disable_by_file_path = %(disable_path)s [filter:context] paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory [filter:unauthenticated-context] paste.filter_factory = 
glance.api.middleware.context:UnauthenticatedContextMiddleware.factory [filter:fakeauth] paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory """ class ScrubberDaemon(Server): """ Server object that starts/stops/manages the Scrubber server """ def __init__(self, test_dir, policy_file, daemon=False, **kwargs): # NOTE(jkoelker): Set the port to 0 since we actually don't listen super(ScrubberDaemon, self).__init__(test_dir, 0) self.server_name = 'scrubber' self.server_module = 'glance.cmd.%s' % self.server_name self.daemon = daemon self.registry_host = "127.0.0.1" self.image_dir = os.path.join(self.test_dir, "images") self.scrub_time = 5 self.pid_file = os.path.join(self.test_dir, "scrubber.pid") self.log_file = os.path.join(self.test_dir, "scrubber.log") self.metadata_encryption_key = "012345678901234567890123456789ab" self.lock_path = self.test_dir default_sql_connection = 'sqlite:////%s/tests.sqlite' % self.test_dir self.sql_connection = os.environ.get('GLANCE_TEST_SQL_CONNECTION', default_sql_connection) self.policy_file = policy_file self.policy_default_rule = 'default' self.send_identity_headers = False self.admin_role = 'admin' self.conf_base = """[DEFAULT] debug = %(debug)s log_file = %(log_file)s daemon = %(daemon)s wakeup_time = 2 scrub_time = %(scrub_time)s registry_host = %(registry_host)s registry_port = %(registry_port)s metadata_encryption_key = %(metadata_encryption_key)s lock_path = %(lock_path)s sql_connection = %(sql_connection)s sql_idle_timeout = 3600 send_identity_headers = %(send_identity_headers)s admin_role = %(admin_role)s [glance_store] filesystem_store_datadir=%(image_dir)s [oslo_policy] policy_file = %(policy_file)s policy_default_rule = %(policy_default_rule)s """ def start(self, expect_exit=True, expected_exitcode=0, **kwargs): if 'daemon' in kwargs: expect_exit = False return super(ScrubberDaemon, self).start( expect_exit=expect_exit, expected_exitcode=expected_exitcode, **kwargs) class 
FunctionalTest(test_utils.BaseTestCase):

    """
    Base test class for any test that wants to test the actual servers
    and clients and not just the stubbed out interfaces
    """

    inited = False
    disabled = False
    launched_servers = []

    def setUp(self):
        super(FunctionalTest, self).setUp()
        self.test_dir = self.useFixture(fixtures.TempDir()).path

        self.api_protocol = 'http'
        self.api_port, api_sock = test_utils.get_unused_port_and_socket()
        self.registry_port, reg_sock = test_utils.get_unused_port_and_socket()

        # NOTE: The Scrubber is enabled by default for the functional tests.
        # Please disable it by explicitly setting 'self.include_scrubber' to
        # False in the setUp of tests that do not require the Scrubber to run.
        self.include_scrubber = True

        self.tracecmd = tracecmd_osmap.get(platform.system())

        conf_dir = os.path.join(self.test_dir, 'etc')
        utils.safe_mkdirs(conf_dir)
        self.copy_data_file('schema-image.json', conf_dir)
        self.copy_data_file('policy.json', conf_dir)
        self.copy_data_file('property-protections.conf', conf_dir)
        self.copy_data_file('property-protections-policies.conf', conf_dir)
        self.property_file_roles = os.path.join(conf_dir,
                                                'property-protections.conf')
        property_policies = 'property-protections-policies.conf'
        self.property_file_policies = os.path.join(conf_dir,
                                                   property_policies)
        self.policy_file = os.path.join(conf_dir, 'policy.json')

        self.api_server = ApiServer(self.test_dir,
                                    self.api_port,
                                    self.policy_file,
                                    sock=api_sock)

        self.registry_server = RegistryServer(self.test_dir,
                                              self.registry_port,
                                              self.policy_file,
                                              sock=reg_sock)

        self.scrubber_daemon = ScrubberDaemon(self.test_dir, self.policy_file)

        self.pid_files = [self.api_server.pid_file,
                          self.registry_server.pid_file,
                          self.scrubber_daemon.pid_file]
        self.files_to_destroy = []
        self.launched_servers = []

        # Keep track of servers we've logged so we don't double-log them.
self._attached_server_logs = [] self.addOnException(self.add_log_details_on_exception) if not self.disabled: # We destroy the test data store between each test case, # and recreate it, which ensures that we have no side-effects # from the tests self.addCleanup( self._reset_database, self.registry_server.sql_connection) self.addCleanup( self._reset_database, self.api_server.sql_connection) self.addCleanup(self.cleanup) self._reset_database(self.registry_server.sql_connection) self._reset_database(self.api_server.sql_connection) def set_policy_rules(self, rules): fap = open(self.policy_file, 'w') fap.write(jsonutils.dumps(rules)) fap.close() def _reset_database(self, conn_string): conn_pieces = urlparse.urlparse(conn_string) if conn_string.startswith('sqlite'): # We leave behind the sqlite DB for failing tests to aid # in diagnosis, as the file size is relatively small and # won't interfere with subsequent tests as it's in a per- # test directory (which is blown-away if the test is green) pass elif conn_string.startswith('mysql'): # We can execute the MySQL client to destroy and re-create # the MYSQL database, which is easier and less error-prone # than using SQLAlchemy to do this via MetaData...trust me. database = conn_pieces.path.strip('/') loc_pieces = conn_pieces.netloc.split('@') host = loc_pieces[1] auth_pieces = loc_pieces[0].split(':') user = auth_pieces[0] password = "" if len(auth_pieces) > 1: if auth_pieces[1].strip(): password = "-p%s" % auth_pieces[1] sql = ("drop database if exists %(database)s; " "create database %(database)s;") % {'database': database} cmd = ("mysql -u%(user)s %(password)s -h%(host)s " "-e\"%(sql)s\"") % {'user': user, 'password': password, 'host': host, 'sql': sql} exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) def cleanup(self): """ Makes sure anything we created or started up in the tests are destroyed or spun down """ # NOTE(jbresnah) call stop on each of the servers instead of # checking the pid file. 
stop() will wait until the child # server is dead. This eliminates the possibility of a race # between a child process listening on a port actually dying # and a new process being started servers = [self.api_server, self.registry_server, self.scrubber_daemon] for s in servers: try: s.stop() except Exception: pass for f in self.files_to_destroy: if os.path.exists(f): os.unlink(f) def start_server(self, server, expect_launch, expect_exit=True, expected_exitcode=0, **kwargs): """ Starts a server on an unused port. Any kwargs passed to this method will override the configuration value in the conf file used in starting the server. :param server: the server to launch :param expect_launch: true iff the server is expected to successfully start :param expect_exit: true iff the launched process is expected to exit in a timely fashion :param expected_exitcode: expected exitcode from the launcher """ self.cleanup() # Start up the requested server exitcode, out, err = server.start(expect_exit=expect_exit, expected_exitcode=expected_exitcode, **kwargs) if expect_exit: self.assertEqual(expected_exitcode, exitcode, "Failed to spin up the requested server. " "Got: %s" % err) self.launched_servers.append(server) launch_msg = self.wait_for_servers([server], expect_launch) self.assertTrue(launch_msg is None, launch_msg) def start_with_retry(self, server, port_name, max_retries, expect_launch=True, **kwargs): """ Starts a server, with retries if the server launches but fails to start listening on the expected port. 
:param server: the server to launch :param port_name: the name of the port attribute :param max_retries: the maximum number of attempts :param expect_launch: true iff the server is expected to successfully start :param expect_exit: true iff the launched process is expected to exit in a timely fashion """ launch_msg = None for i in range(max_retries): exitcode, out, err = server.start(expect_exit=not expect_launch, **kwargs) name = server.server_name self.assertEqual(0, exitcode, "Failed to spin up the %s server. " "Got: %s" % (name, err)) launch_msg = self.wait_for_servers([server], expect_launch) if launch_msg: server.stop() server.bind_port = get_unused_port() setattr(self, port_name, server.bind_port) else: self.launched_servers.append(server) break self.assertTrue(launch_msg is None, launch_msg) def start_servers(self, **kwargs): """ Starts the API and Registry servers (glance-control api start & glance-control registry start) on unused ports. glance-control should be installed into the python path Any kwargs passed to this method will override the configuration value in the conf file used in starting the servers. """ self.cleanup() # Start up the API and default registry server # We start the registry server first, as the API server config # depends on the registry port - this ordering allows for # retrying the launch on a port clash self.start_with_retry(self.registry_server, 'registry_port', 3, **kwargs) kwargs['registry_port'] = self.registry_server.bind_port self.start_with_retry(self.api_server, 'api_port', 3, **kwargs) if self.include_scrubber: exitcode, out, err = self.scrubber_daemon.start(**kwargs) self.assertEqual(0, exitcode, "Failed to spin up the Scrubber daemon. " "Got: %s" % err) def ping_server(self, port): """ Simple ping on the port. If responsive, return True, else return False. :note We use raw sockets, not ping here, since ping uses ICMP and has no concept of ports... 
""" s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.connect(("127.0.0.1", port)) return True except socket.error: return False finally: s.close() def ping_server_ipv6(self, port): """ Simple ping on the port. If responsive, return True, else return False. :note We use raw sockets, not ping here, since ping uses ICMP and has no concept of ports... The function uses IPv6 (therefore AF_INET6 and ::1). """ s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) try: s.connect(("::1", port)) return True except socket.error: return False finally: s.close() def wait_for_servers(self, servers, expect_launch=True, timeout=30): """ Tight loop, waiting for the given server port(s) to be available. Returns when all are pingable. There is a timeout on waiting for the servers to come up. :param servers: Glance server ports to ping :param expect_launch: Optional, true iff the server(s) are expected to successfully start :param timeout: Optional, defaults to 30 seconds :returns: None if launch expectation is met, otherwise an assertion message """ now = datetime.datetime.now() timeout_time = now + datetime.timedelta(seconds=timeout) replied = [] while (timeout_time > now): pinged = 0 for server in servers: if self.ping_server(server.bind_port): pinged += 1 if server not in replied: replied.append(server) if pinged == len(servers): msg = 'Unexpected server launch status' return None if expect_launch else msg now = datetime.datetime.now() time.sleep(0.05) failed = list(set(servers) - set(replied)) msg = 'Unexpected server launch status for: ' for f in failed: msg += ('%s, ' % f.server_name) if os.path.exists(f.pid_file): pid = f.process_pid trace = f.pid_file.replace('.pid', '.trace') if self.tracecmd: cmd = '%s -p %d -o %s' % (self.tracecmd, pid, trace) try: execute(cmd, raise_error=False, expect_exit=False) except OSError as e: if e.errno == errno.ENOENT: raise RuntimeError('No executable found for "%s" ' 'command.' 
% self.tracecmd) else: raise time.sleep(0.5) if os.path.exists(trace): msg += ('\n%s:\n%s\n' % (self.tracecmd, open(trace).read())) self.add_log_details(failed) return msg if expect_launch else None def stop_server(self, server): """ Called to stop a single server in a normal fashion using the glance-control stop method to gracefully shut the server down. :param server: the server to stop """ # Spin down the requested server server.stop() def stop_servers(self): """ Called to stop the started servers in a normal fashion. Note that cleanup() will stop the servers using a fairly draconian method of sending a SIGTERM signal to the servers. Here, we use the glance-control stop method to gracefully shut the server down. This method also asserts that the shutdown was clean, and so it is meant to be called during a normal test case sequence. """ # Spin down the API and default registry server self.stop_server(self.api_server) self.stop_server(self.registry_server) if self.include_scrubber: self.stop_server(self.scrubber_daemon) self._reset_database(self.registry_server.sql_connection) def run_sql_cmd(self, sql): """ Provides a crude mechanism to run manual SQL commands for backend DB verification within the functional tests. The raw result set is returned. 
""" engine = db_api.get_engine() return engine.execute(sql) def copy_data_file(self, file_name, dst_dir): src_file_name = os.path.join('glance/tests/etc', file_name) shutil.copy(src_file_name, dst_dir) dst_file_name = os.path.join(dst_dir, file_name) return dst_file_name def add_log_details_on_exception(self, *args, **kwargs): self.add_log_details() def add_log_details(self, servers=None): for s in servers or self.launched_servers: if s.log_file not in self._attached_server_logs: self._attached_server_logs.append(s.log_file) self.addDetail( s.server_name, testtools.content.text_content(s.dump_log())) glance-16.0.0/glance/tests/functional/test_cors_middleware.py0000666000175100017510000000610213245511421024363 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests cors middleware.""" import httplib2 from six.moves import http_client from glance.tests import functional class TestCORSMiddleware(functional.FunctionalTest): '''Provide a basic smoke test to ensure CORS middleware is active. The tests below provide minimal confirmation that the CORS middleware is active, and may be configured. For comprehensive tests, please consult the test suite in oslo_middleware. ''' def setUp(self): super(TestCORSMiddleware, self).setUp() # Cleanup is handled in teardown of the parent class. 
        self.start_servers(**self.__dict__.copy())
        self.http = httplib2.Http()
        self.api_path = "http://%s:%d/v1/images" % ("127.0.0.1",
                                                    self.api_port)

    def test_valid_cors_options_request(self):
        (r_headers, content) = self.http.request(
            self.api_path,
            'OPTIONS',
            headers={
                'Origin': 'http://valid.example.com',
                'Access-Control-Request-Method': 'GET'
            })
        self.assertEqual(http_client.OK, r_headers.status)
        self.assertIn('access-control-allow-origin', r_headers)
        self.assertEqual('http://valid.example.com',
                         r_headers['access-control-allow-origin'])

    def test_invalid_cors_options_request(self):
        (r_headers, content) = self.http.request(
            self.api_path,
            'OPTIONS',
            headers={
                'Origin': 'http://invalid.example.com',
                'Access-Control-Request-Method': 'GET'
            })
        self.assertEqual(http_client.OK, r_headers.status)
        self.assertNotIn('access-control-allow-origin', r_headers)

    def test_valid_cors_get_request(self):
        (r_headers, content) = self.http.request(
            self.api_path,
            'GET',
            headers={
                'Origin': 'http://valid.example.com'
            })
        self.assertEqual(http_client.OK, r_headers.status)
        self.assertIn('access-control-allow-origin', r_headers)
        self.assertEqual('http://valid.example.com',
                         r_headers['access-control-allow-origin'])

    def test_invalid_cors_get_request(self):
        (r_headers, content) = self.http.request(
            self.api_path,
            'GET',
            headers={
                'Origin': 'http://invalid.example.com'
            })
        self.assertEqual(http_client.OK, r_headers.status)
        self.assertNotIn('access-control-allow-origin', r_headers)


glance-16.0.0/glance/tests/functional/test_scrubber.py

# Copyright 2011-2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import sys import time import httplib2 from oslo_serialization import jsonutils from oslo_utils import units from six.moves import http_client # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range from glance.tests import functional from glance.tests.utils import execute TEST_IMAGE_DATA = '*' * 5 * units.Ki TEST_IMAGE_META = { 'name': 'test_image', 'is_public': False, 'disk_format': 'raw', 'container_format': 'ovf', } class TestScrubber(functional.FunctionalTest): """Test that delayed_delete works and the scrubber deletes""" def _send_http_request(self, path, method, body=None): headers = { 'x-image-meta-name': 'test_image', 'x-image-meta-is_public': 'true', 'x-image-meta-disk_format': 'raw', 'x-image-meta-container_format': 'ovf', 'content-type': 'application/octet-stream' } return httplib2.Http().request(path, method, body, headers) def test_delayed_delete(self): """ test that images don't get deleted immediately and that the scrubber scrubs them """ self.cleanup() self.start_servers(delayed_delete=True, daemon=True, metadata_encryption_key='') path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) response, content = self._send_http_request(path, 'POST', body='XXX') self.assertEqual(http_client.CREATED, response.status) image = jsonutils.loads(content)['image'] self.assertEqual('active', image['status']) path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image['id']) response, content = self._send_http_request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) response, 
content = self._send_http_request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual('pending_delete', response['x-image-meta-status']) self.wait_for_scrub(path) self.stop_servers() def test_delayed_delete_with_trustedauth_registry(self): """ test that images don't get deleted immediately and that the scrubber scrubs them when registry is operating in trustedauth mode """ self.cleanup() self.api_server.deployment_flavor = 'noauth' self.registry_server.deployment_flavor = 'trusted-auth' self.start_servers(delayed_delete=True, daemon=True, metadata_encryption_key='', send_identity_headers=True) base_headers = { 'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': 'deae8923-075d-4287-924b-840fb2644874', 'X-Roles': 'admin', } headers = { 'x-image-meta-name': 'test_image', 'x-image-meta-is_public': 'true', 'x-image-meta-disk_format': 'raw', 'x-image-meta-container_format': 'ovf', 'content-type': 'application/octet-stream', } headers.update(base_headers) path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', body='XXX', headers=headers) self.assertEqual(http_client.CREATED, response.status) image = jsonutils.loads(content)['image'] self.assertEqual('active', image['status']) image_id = image['id'] path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE', headers=base_headers) self.assertEqual(http_client.OK, response.status) response, content = http.request(path, 'HEAD', headers=base_headers) self.assertEqual(http_client.OK, response.status) self.assertEqual('pending_delete', response['x-image-meta-status']) self.wait_for_scrub(path, headers=base_headers) self.stop_servers() def test_scrubber_app(self): """ test that the glance-scrubber script runs successfully when not in 
daemon mode """ self.cleanup() self.start_servers(delayed_delete=True, daemon=False, metadata_encryption_key='') path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) response, content = self._send_http_request(path, 'POST', body='XXX') self.assertEqual(http_client.CREATED, response.status) image = jsonutils.loads(content)['image'] self.assertEqual('active', image['status']) path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image['id']) response, content = self._send_http_request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) response, content = self._send_http_request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual('pending_delete', response['x-image-meta-status']) # wait for the scrub time on the image to pass time.sleep(self.api_server.scrub_time) # scrub images and make sure they get deleted exe_cmd = "%s -m glance.cmd.scrubber" % sys.executable cmd = ("%s --config-file %s" % (exe_cmd, self.scrubber_daemon.conf_file_name)) exitcode, out, err = execute(cmd, raise_error=False) self.assertEqual(0, exitcode) self.wait_for_scrub(path) self.stop_servers() def test_scrubber_app_with_trustedauth_registry(self): """ test that the glance-scrubber script runs successfully when not in daemon mode and with a registry that operates in trustedauth mode """ self.cleanup() self.api_server.deployment_flavor = 'noauth' self.registry_server.deployment_flavor = 'trusted-auth' self.start_servers(delayed_delete=True, daemon=False, metadata_encryption_key='', send_identity_headers=True) base_headers = { 'X-Identity-Status': 'Confirmed', 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96', 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e', 'X-Tenant-Id': 'deae8923-075d-4287-924b-840fb2644874', 'X-Roles': 'admin', } headers = { 'x-image-meta-name': 'test_image', 'x-image-meta-is_public': 'true', 'x-image-meta-disk_format': 'raw', 'x-image-meta-container_format': 'ovf', 'content-type': 
'application/octet-stream', } headers.update(base_headers) path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', body='XXX', headers=headers) self.assertEqual(http_client.CREATED, response.status) image = jsonutils.loads(content)['image'] self.assertEqual('active', image['status']) image_id = image['id'] path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE', headers=base_headers) self.assertEqual(http_client.OK, response.status) response, content = http.request(path, 'HEAD', headers=base_headers) self.assertEqual(http_client.OK, response.status) self.assertEqual('pending_delete', response['x-image-meta-status']) # wait for the scrub time on the image to pass time.sleep(self.api_server.scrub_time) # scrub images and make sure they get deleted exe_cmd = "%s -m glance.cmd.scrubber" % sys.executable cmd = ("%s --config-file %s" % (exe_cmd, self.scrubber_daemon.conf_file_name)) exitcode, out, err = execute(cmd, raise_error=False) self.assertEqual(0, exitcode) self.wait_for_scrub(path, headers=base_headers) self.stop_servers() def test_scrubber_delete_handles_exception(self): """ Test that the scrubber handles the case where an exception occurs when _delete() is called. The scrubber should not write out queue files in this case. """ # Start servers. self.cleanup() self.start_servers(delayed_delete=True, daemon=False, default_store='file') # Check that we are using a file backend. 
self.assertEqual(self.api_server.default_store, 'file') # add an image path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) response, content = self._send_http_request(path, 'POST', body='XXX') self.assertEqual(http_client.CREATED, response.status) image = jsonutils.loads(content)['image'] self.assertEqual('active', image['status']) # delete the image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image['id']) response, content = self._send_http_request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) # ensure the image is marked pending delete response, content = self._send_http_request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual('pending_delete', response['x-image-meta-status']) # Remove the file from the backend. file_path = os.path.join(self.api_server.image_dir, image['id']) os.remove(file_path) # Wait for the scrub time on the image to pass time.sleep(self.api_server.scrub_time) # run the scrubber app, and ensure it doesn't fall over exe_cmd = "%s -m glance.cmd.scrubber" % sys.executable cmd = ("%s --config-file %s" % (exe_cmd, self.scrubber_daemon.conf_file_name)) exitcode, out, err = execute(cmd, raise_error=False) self.assertEqual(0, exitcode) self.wait_for_scrub(path) self.stop_servers() def test_scrubber_app_queue_errors_not_daemon(self): """ test that the glance-scrubber exits with an exit code > 0 when it fails to lookup images, indicating a configuration error when not in daemon mode. Related-Bug: #1548289 """ # Don't start the registry server to cause intended failure # Don't start the api server to save time exitcode, out, err = self.scrubber_daemon.start( delayed_delete=True, daemon=False, registry_port=28890) self.assertEqual(0, exitcode, "Failed to spin up the Scrubber daemon. 
" "Got: %s" % err) # Run the Scrubber exe_cmd = "%s -m glance.cmd.scrubber" % sys.executable cmd = ("%s --config-file %s" % (exe_cmd, self.scrubber_daemon.conf_file_name)) exitcode, out, err = execute(cmd, raise_error=False) self.assertEqual(1, exitcode) self.assertIn('Can not get scrub jobs from queue', str(err)) self.stop_server(self.scrubber_daemon) def wait_for_scrub(self, path, headers=None): """ NOTE(jkoelker) The build servers sometimes take longer than 15 seconds to scrub. Give it up to 5 min, checking checking every 15 seconds. When/if it flips to deleted, bail immediately. """ http = httplib2.Http() wait_for = 300 # seconds check_every = 15 # seconds for _ in range(wait_for // check_every): time.sleep(check_every) response, content = http.request(path, 'HEAD', headers=headers) if (response['x-image-meta-status'] == 'deleted' and response['x-image-meta-deleted'] == 'True'): break else: continue else: self.fail('image was never scrubbed') glance-16.0.0/glance/tests/functional/test_client_redirects.py0000666000175100017510000001167213245511421024552 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
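The `wait_for_scrub` helper above polls an image's status every 15 seconds for up to 5 minutes and bails out as soon as it flips to `deleted`. That sleep-then-check pattern can be sketched generically with only the standard library (the `wait_until` name and its callers below are illustrative, not part of the Glance test suite):

```python
import time


def wait_until(check, timeout=300, interval=15):
    """Poll ``check`` until it returns True or ``timeout`` seconds elapse.

    Mirrors the wait_for_scrub loop: sleep first, then test the condition,
    and return as soon as it holds. Returns False on timeout instead of
    failing the test, so callers decide how to report the error.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        time.sleep(interval)
        if check():
            return True
    return False


# Example: a condition that only becomes true on the third poll.
calls = {'n': 0}


def fake_scrubbed():
    calls['n'] += 1
    return calls['n'] >= 3


assert wait_until(fake_scrubbed, timeout=1, interval=0.01) is True
```

Sleeping before the first check (as the original loop does) gives the scrubber a head start and avoids hammering the API immediately after the DELETE.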
"""Functional test cases testing glance client redirect-following.""" import eventlet.patcher from six.moves import http_client as http import webob.dec import webob.exc from glance.common import client from glance.common import exception from glance.common import wsgi from glance.tests import functional from glance.tests import utils eventlet.patcher.monkey_patch(socket=True) def RedirectTestApp(name): class App(object): """ Test WSGI application which can respond with multiple kinds of HTTP redirects and is used to verify Glance client redirects. """ def __init__(self): """ Initialize app with a name and port. """ self.name = name @webob.dec.wsgify def __call__(self, request): """ Handles all requests to the application. """ base = "http://%s" % request.host path = request.path_qs if path == "/": return "root" elif path == "/302": url = "%s/success" % base raise webob.exc.HTTPFound(location=url) elif path == "/302?with_qs=yes": url = "%s/success?with_qs=yes" % base raise webob.exc.HTTPFound(location=url) elif path == "/infinite_302": raise webob.exc.HTTPFound(location=request.url) elif path.startswith("/redirect-to"): url = "http://127.0.0.1:%s/success" % path.split("-")[-1] raise webob.exc.HTTPFound(location=url) elif path == "/success": return "success_from_host_%s" % self.name elif path == "/success?with_qs=yes": return "success_with_qs" return "fail" return App class TestClientRedirects(functional.FunctionalTest): def setUp(self): super(TestClientRedirects, self).setUp() self.port_one = utils.get_unused_port() self.port_two = utils.get_unused_port() server_one = wsgi.Server() server_two = wsgi.Server() self.config(bind_host='127.0.0.1') self.config(workers=0) server_one.start(RedirectTestApp("one")(), self.port_one) server_two.start(RedirectTestApp("two")(), self.port_two) self.client = client.BaseClient("127.0.0.1", self.port_one) def test_get_without_redirect(self): """ Test GET with no redirect """ response = self.client.do_request("GET", "/") 
self.assertEqual(http.OK, response.status) self.assertEqual(b"root", response.read()) def test_get_with_one_redirect(self): """ Test GET with one 302 FOUND redirect """ response = self.client.do_request("GET", "/302") self.assertEqual(http.OK, response.status) self.assertEqual(b"success_from_host_one", response.read()) def test_get_with_one_redirect_query_string(self): """ Test GET with one 302 FOUND redirect w/ a query string """ response = self.client.do_request("GET", "/302", params={'with_qs': 'yes'}) self.assertEqual(http.OK, response.status) self.assertEqual(b"success_with_qs", response.read()) def test_get_with_max_redirects(self): """ Test we don't redirect forever. """ self.assertRaises(exception.MaxRedirectsExceeded, self.client.do_request, "GET", "/infinite_302") def test_post_redirect(self): """ Test POST with 302 redirect """ response = self.client.do_request("POST", "/302") self.assertEqual(http.OK, response.status) self.assertEqual(b"success_from_host_one", response.read()) def test_redirect_to_new_host(self): """ Test redirect to one host and then another. """ url = "/redirect-to-%d" % self.port_two response = self.client.do_request("POST", url) self.assertEqual(http.OK, response.status) self.assertEqual(b"success_from_host_two", response.read()) response = self.client.do_request("POST", "/success") self.assertEqual(http.OK, response.status) self.assertEqual(b"success_from_host_one", response.read()) glance-16.0.0/glance/tests/functional/test_bin_glance_cache_manage.py0000666000175100017510000002716113245511421025744 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Functional test case that utilizes the bin/glance-cache-manage CLI tool""" import datetime import hashlib import os import sys import httplib2 from oslo_serialization import jsonutils from oslo_utils import units from six.moves import http_client # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range from glance.tests import functional from glance.tests.utils import execute from glance.tests.utils import minimal_headers FIVE_KB = 5 * units.Ki class TestBinGlanceCacheManage(functional.FunctionalTest): """Functional tests for the bin/glance CLI tool""" def setUp(self): self.image_cache_driver = "sqlite" super(TestBinGlanceCacheManage, self).setUp() self.api_server.deployment_flavor = "cachemanagement" # NOTE(sirp): This is needed in case we are running the tests under an # environment in which OS_AUTH_STRATEGY=keystone. The test server we # spin up won't have keystone support, so we need to switch to the # NoAuth strategy. os.environ['OS_AUTH_STRATEGY'] = 'noauth' os.environ['OS_AUTH_URL'] = '' def add_image(self, name): """ Adds an image with supplied name and returns the newly-created image identifier. 
""" image_data = b"*" * FIVE_KB headers = minimal_headers(name) path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual(name, data['image']['name']) self.assertTrue(data['image']['is_public']) return data['image']['id'] def is_image_cached(self, image_id): """ Return True if supplied image ID is cached, False otherwise """ exe_cmd = '%s -m glance.cmd.cache_manage' % sys.executable cmd = "%s --port=%d list-cached" % (exe_cmd, self.api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) out = out.decode('utf-8') return image_id in out def iso_date(self, image_id): """ Return True if supplied image ID is cached, False otherwise """ exe_cmd = '%s -m glance.cmd.cache_manage' % sys.executable cmd = "%s --port=%d list-cached" % (exe_cmd, self.api_port) exitcode, out, err = execute(cmd) out = out.decode('utf-8') return datetime.datetime.utcnow().strftime("%Y-%m-%d") in out def test_no_cache_enabled(self): """ Test that cache index command works """ self.cleanup() self.api_server.deployment_flavor = '' self.start_servers() # Not passing in cache_manage in pipeline... 
api_port = self.api_port # Verify decent error message returned exe_cmd = '%s -m glance.cmd.cache_manage' % sys.executable cmd = "%s --port=%d list-cached" % (exe_cmd, api_port) exitcode, out, err = execute(cmd, raise_error=False) self.assertEqual(1, exitcode) self.assertIn(b'Cache management middleware not enabled on host', out.strip()) self.stop_servers() def test_cache_index(self): """ Test that cache index command works """ self.cleanup() self.start_servers(**self.__dict__.copy()) api_port = self.api_port # Verify no cached images exe_cmd = '%s -m glance.cmd.cache_manage' % sys.executable cmd = "%s --port=%d list-cached" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) self.assertIn(b'No cached images', out.strip()) ids = {} # Add a few images and cache the second one of them # by GETing the image... for x in range(4): ids[x] = self.add_image("Image%s" % x) path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", api_port, ids[1]) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertTrue(self.is_image_cached(ids[1]), "%s is not cached." % ids[1]) self.assertTrue(self.iso_date(ids[1])) self.stop_servers() def test_queue(self): """ Test that we can queue and fetch images using the CLI utility """ self.cleanup() self.start_servers(**self.__dict__.copy()) api_port = self.api_port # Verify no cached images exe_cmd = '%s -m glance.cmd.cache_manage' % sys.executable cmd = "%s --port=%d list-cached" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) self.assertIn(b'No cached images', out.strip()) # Verify no queued images cmd = "%s --port=%d list-queued" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) self.assertIn(b'No queued images', out.strip()) ids = {} # Add a few images and cache the second one of them # by GETing the image... 
for x in range(4): ids[x] = self.add_image("Image%s" % x) # Queue second image and then cache it cmd = "%s --port=%d --force queue-image %s" % ( exe_cmd, api_port, ids[1]) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) # Verify queued second image cmd = "%s --port=%d list-queued" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) out = out.decode('utf-8') self.assertIn(ids[1], out, 'Image %s was not queued!' % ids[1]) # Cache images in the queue by running the prefetcher cache_config_filepath = os.path.join(self.test_dir, 'etc', 'glance-cache.conf') cache_file_options = { 'image_cache_dir': self.api_server.image_cache_dir, 'image_cache_driver': self.image_cache_driver, 'registry_port': self.registry_server.bind_port, 'lock_path': self.test_dir, 'log_file': os.path.join(self.test_dir, 'cache.log'), 'metadata_encryption_key': "012345678901234567890123456789ab", 'filesystem_store_datadir': self.test_dir } with open(cache_config_filepath, 'w') as cache_file: cache_file.write("""[DEFAULT] debug = True lock_path = %(lock_path)s image_cache_dir = %(image_cache_dir)s image_cache_driver = %(image_cache_driver)s registry_host = 127.0.0.1 registry_port = %(registry_port)s metadata_encryption_key = %(metadata_encryption_key)s log_file = %(log_file)s [glance_store] filesystem_store_datadir=%(filesystem_store_datadir)s """ % cache_file_options) cmd = ("%s -m glance.cmd.cache_prefetcher --config-file %s" % (sys.executable, cache_config_filepath)) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) self.assertEqual(b'', out.strip(), out) # Verify no queued images cmd = "%s --port=%d list-queued" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) self.assertIn(b'No queued images', out.strip()) # Verify second image now cached cmd = "%s --port=%d list-cached" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) out = out.decode('utf-8') 
self.assertIn(ids[1], out, 'Image %s was not cached!' % ids[1]) # Queue third image and then delete it from queue cmd = "%s --port=%d --force queue-image %s" % ( exe_cmd, api_port, ids[2]) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) # Verify queued third image cmd = "%s --port=%d list-queued" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) out = out.decode('utf-8') self.assertIn(ids[2], out, 'Image %s was not queued!' % ids[2]) # Delete the image from the queue cmd = ("%s --port=%d --force " "delete-queued-image %s") % (exe_cmd, api_port, ids[2]) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) # Verify no queued images cmd = "%s --port=%d list-queued" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) self.assertIn(b'No queued images', out.strip()) # Queue all images for x in range(4): cmd = ("%s --port=%d --force " "queue-image %s") % (exe_cmd, api_port, ids[x]) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) # Verify queued third image cmd = "%s --port=%d list-queued" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) self.assertIn(b'Found 3 queued images', out) # Delete the image from the queue cmd = ("%s --port=%d --force " "delete-all-queued-images") % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) # Verify nothing in queue anymore cmd = "%s --port=%d list-queued" % (exe_cmd, api_port) exitcode, out, err = execute(cmd) self.assertEqual(0, exitcode) self.assertIn(b'No queued images', out.strip()) # verify two image id when queue-image cmd = ("%s --port=%d --force " "queue-image %s %s") % (exe_cmd, api_port, ids[0], ids[1]) exitcode, out, err = execute(cmd, raise_error=False) self.assertEqual(1, exitcode) self.assertIn(b'Please specify one and only ID of ' b'the image you wish to ', out.strip()) # verify two image id when delete-queued-image cmd = ("%s --port=%d 
--force delete-queued-image " "%s %s") % (exe_cmd, api_port, ids[0], ids[1]) exitcode, out, err = execute(cmd, raise_error=False) self.assertEqual(1, exitcode) self.assertIn(b'Please specify one and only ID of ' b'the image you wish to ', out.strip()) # verify two image id when delete-cached-image cmd = ("%s --port=%d --force delete-cached-image " "%s %s") % (exe_cmd, api_port, ids[0], ids[1]) exitcode, out, err = execute(cmd, raise_error=False) self.assertEqual(1, exitcode) self.assertIn(b'Please specify one and only ID of ' b'the image you wish to ', out.strip()) self.stop_servers() glance-16.0.0/glance/tests/functional/test_ssl.py0000666000175100017510000000562013245511421022025 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
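The cache-manage tests above repeatedly build a shell command string, run it through a test helper, and assert on the exit code and captured output. A minimal standard-library sketch of that pattern (the `run_cmd` helper here is illustrative; it is not the actual `glance.tests.utils.execute` implementation):

```python
import shlex
import subprocess
import sys


def run_cmd(cmd, raise_error=True):
    """Run a command, returning (exitcode, stdout, stderr) as bytes."""
    proc = subprocess.Popen(shlex.split(cmd),
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if raise_error and proc.returncode != 0:
        raise RuntimeError('command failed (%d): %s'
                           % (proc.returncode, err))
    return proc.returncode, out, err


# Example: invoke the current interpreter the same way the tests invoke
# "python -m glance.cmd.cache_manage".
exitcode, out, err = run_cmd('%s -c "print(42)"' % sys.executable)
assert exitcode == 0
assert out.strip() == b'42'
```

Passing `raise_error=False`, as the tests do when probing error paths, lets the caller assert on a non-zero exit code instead of treating it as a failure.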
import os
import unittest

import httplib2
import six
from six.moves import http_client as http

from glance.tests import functional

TEST_VAR_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__),
                                            '..', 'var'))


class TestSSL(functional.FunctionalTest):

    """Functional tests verifying SSL communication"""

    def setUp(self):
        super(TestSSL, self).setUp()

        if getattr(self, 'inited', False):
            return

        self.inited = False
        self.disabled = True

        # NOTE (stevelle): Test key/cert/CA file created as per:
        #   http://nrocco.github.io/2013/01/25/
        #   self-signed-ssl-certificate-chains.html
        # For these tests certificate.crt must be created with 'Common Name'
        # set to 127.0.0.1
        self.key_file = os.path.join(TEST_VAR_DIR, 'privatekey.key')
        if not os.path.exists(self.key_file):
            self.disabled_message = ("Could not find private key file %s"
                                     % self.key_file)
            self.inited = True
            return

        self.cert_file = os.path.join(TEST_VAR_DIR, 'certificate.crt')
        if not os.path.exists(self.cert_file):
            self.disabled_message = ("Could not find certificate file %s"
                                     % self.cert_file)
            self.inited = True
            return

        self.ca_file = os.path.join(TEST_VAR_DIR, 'ca.crt')
        if not os.path.exists(self.ca_file):
            self.disabled_message = ("Could not find CA file %s"
                                     % self.ca_file)
            self.inited = True
            return

        self.inited = True
        self.disabled = False

    def tearDown(self):
        super(TestSSL, self).tearDown()
        if getattr(self, 'inited', False):
            return

    @unittest.skipIf(six.PY3, 'SSL handshakes are broken in PY3')
    def test_ssl_ok(self):
        """Make sure the public API works with HTTPS."""
        self.cleanup()
        self.start_servers(**self.__dict__.copy())

        path = "https://%s:%d/versions" % ("127.0.0.1", self.api_port)
        https = httplib2.Http(ca_certs=self.ca_file)
        response, content = https.request(path, 'GET')
        self.assertEqual(http.OK, response.status)


glance-16.0.0/glance/tests/functional/test_client_exceptions.py

# Copyright 2011 OpenStack Foundation
# Copyright 2012 Red Hat, Inc
# All
Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Functional test asserting strongly typed exceptions from glance client""" import eventlet.patcher import httplib2 from six.moves import http_client import webob.dec import webob.exc from glance.common import client from glance.common import exception from glance.common import wsgi from glance.tests import functional from glance.tests import utils eventlet.patcher.monkey_patch(socket=True) class ExceptionTestApp(object): """ Test WSGI application which can respond with multiple kinds of HTTP status codes """ @webob.dec.wsgify def __call__(self, request): path = request.path_qs if path == "/rate-limit": request.response = webob.exc.HTTPRequestEntityTooLarge() elif path == "/rate-limit-retry": request.response.retry_after = 10 request.response.status = http_client.REQUEST_ENTITY_TOO_LARGE elif path == "/service-unavailable": request.response = webob.exc.HTTPServiceUnavailable() elif path == "/service-unavailable-retry": request.response.retry_after = 10 request.response.status = http_client.SERVICE_UNAVAILABLE elif path == "/expectation-failed": request.response = webob.exc.HTTPExpectationFailed() elif path == "/server-error": request.response = webob.exc.HTTPServerError() elif path == "/server-traceback": raise exception.ServerError() class TestClientExceptions(functional.FunctionalTest): def setUp(self): super(TestClientExceptions, self).setUp() self.port = utils.get_unused_port() server = wsgi.Server() 
self.config(bind_host='127.0.0.1') self.config(workers=0) server.start(ExceptionTestApp(), self.port) self.client = client.BaseClient("127.0.0.1", self.port) def _do_test_exception(self, path, exc_type): try: self.client.do_request("GET", path) self.fail('expected %s' % exc_type) except exc_type as e: if 'retry' in path: self.assertEqual(10, e.retry_after) def test_rate_limited(self): """ Test rate limited response """ self._do_test_exception('/rate-limit', exception.LimitExceeded) def test_rate_limited_retry(self): """ Test rate limited response with retry """ self._do_test_exception('/rate-limit-retry', exception.LimitExceeded) def test_service_unavailable(self): """ Test service unavailable response """ self._do_test_exception('/service-unavailable', exception.ServiceUnavailable) def test_service_unavailable_retry(self): """ Test service unavailable response with retry """ self._do_test_exception('/service-unavailable-retry', exception.ServiceUnavailable) def test_expectation_failed(self): """ Test expectation failed response """ self._do_test_exception('/expectation-failed', exception.UnexpectedStatus) def test_server_error(self): """ Test server error response """ self._do_test_exception('/server-error', exception.ServerError) def test_server_traceback(self): """ Verify that the wsgi server does not return tracebacks to the client on 500 errors (bug 1192132) """ http = httplib2.Http() path = ('http://%s:%d/server-traceback' % ('127.0.0.1', self.port)) response, content = http.request(path, 'GET') self.assertNotIn(b'ServerError', content) self.assertEqual(http_client.INTERNAL_SERVER_ERROR, response.status) glance-16.0.0/glance/tests/functional/test_reload.py0000666000175100017510000002264513245511421022500 0ustar zuulzuul00000000000000# Copyright 2014 Hewlett-Packard Development Company, L.P. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import re import time import unittest import psutil import requests import six from six.moves import http_client as http from glance.tests import functional from glance.tests.utils import execute TEST_VAR_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '../', 'var')) def set_config_value(filepath, key, value): """Set 'key = value' in config file""" replacement_line = '%s = %s\n' % (key, value) match = re.compile('^%s\s+=' % key).match with open(filepath, 'r+') as f: lines = f.readlines() f.seek(0, 0) f.truncate() for line in lines: f.write(line if not match(line) else replacement_line) class TestReload(functional.FunctionalTest): """Test configuration reload""" def setUp(self): self.workers = 1 super(TestReload, self).setUp() def tearDown(self): self.stop_servers() super(TestReload, self).tearDown() def ticker(self, message, seconds=60, tick=0.01): """ Allows repeatedly testing for an expected result for a finite amount of time. :param message: Message to display on timeout :param seconds: Time in seconds after which we timeout :param tick: Time to sleep before rechecking for expected result :returns: 'True' or fails the test with 'message' on timeout """ # We default to allowing 60 seconds timeout but # typically only a few hundredths of a second # are needed. 
num_ticks = seconds * (1.0 / tick) count = 0 while count < num_ticks: count += 1 time.sleep(tick) yield self.fail(message) def _get_children(self, server): pid = None pid = self._get_parent(server) process = psutil.Process(pid) try: # psutils version >= 2 children = process.children() except AttributeError: # psutils version < 2 children = process.get_children() pids = set() for child in children: pids.add(child.pid) return pids def _get_parent(self, server): if server == 'api': return self.api_server.process_pid elif server == 'registry': return self.registry_server.process_pid def _conffile(self, service): conf_dir = os.path.join(self.test_dir, 'etc') conf_filepath = os.path.join(conf_dir, '%s.conf' % service) return conf_filepath def _url(self, protocol, path): return '%s://127.0.0.1:%d%s' % (protocol, self.api_port, path) @unittest.skipIf(six.PY3, 'SSL handshakes are broken in PY3') def test_reload(self): """Test SIGHUP picks up new config values""" def check_pids(pre, post=None, workers=2): if post is None: if len(pre) == workers: return True else: return False if len(post) == workers: # Check new children have different pids if post.intersection(pre) == set(): return True return False self.api_server.fork_socket = False self.registry_server.fork_socket = False self.start_servers(fork_socket=False, **vars(self)) pre_pids = {} post_pids = {} # Test changing the workers value creates all new children # This recycles the existing socket msg = 'Start timeout' for _ in self.ticker(msg): for server in ('api', 'registry'): pre_pids[server] = self._get_children(server) if check_pids(pre_pids['api'], workers=1): if check_pids(pre_pids['registry'], workers=1): break for server in ('api', 'registry'): # Labour costs have fallen set_config_value(self._conffile(server), 'workers', '2') cmd = "kill -HUP %s" % self._get_parent(server) execute(cmd, raise_error=True) msg = 'Worker change timeout' for _ in self.ticker(msg): for server in ('api', 'registry'): post_pids[server] = 
self._get_children(server) if check_pids(pre_pids['registry'], post_pids['registry']): if check_pids(pre_pids['api'], post_pids['api']): break # Test changing from http to https # This recycles the existing socket path = self._url('http', '/') response = requests.get(path) self.assertEqual(http.MULTIPLE_CHOICES, response.status_code) del response # close socket so that process audit is reliable pre_pids['api'] = self._get_children('api') key_file = os.path.join(TEST_VAR_DIR, 'privatekey.key') set_config_value(self._conffile('api'), 'key_file', key_file) cert_file = os.path.join(TEST_VAR_DIR, 'certificate.crt') set_config_value(self._conffile('api'), 'cert_file', cert_file) cmd = "kill -HUP %s" % self._get_parent('api') execute(cmd, raise_error=True) msg = 'http to https timeout' for _ in self.ticker(msg): post_pids['api'] = self._get_children('api') if check_pids(pre_pids['api'], post_pids['api']): break ca_file = os.path.join(TEST_VAR_DIR, 'ca.crt') path = self._url('https', '/') response = requests.get(path, verify=ca_file) self.assertEqual(http.MULTIPLE_CHOICES, response.status_code) del response # Test https restart # This recycles the existing socket pre_pids['api'] = self._get_children('api') cmd = "kill -HUP %s" % self._get_parent('api') execute(cmd, raise_error=True) msg = 'https restart timeout' for _ in self.ticker(msg): post_pids['api'] = self._get_children('api') if check_pids(pre_pids['api'], post_pids['api']): break ca_file = os.path.join(TEST_VAR_DIR, 'ca.crt') path = self._url('https', '/') response = requests.get(path, verify=ca_file) self.assertEqual(http.MULTIPLE_CHOICES, response.status_code) del response # Test changing the https bind_host # This requires a new socket pre_pids['api'] = self._get_children('api') set_config_value(self._conffile('api'), 'bind_host', '127.0.0.1') cmd = "kill -HUP %s" % self._get_parent('api') execute(cmd, raise_error=True) msg = 'https bind_host timeout' for _ in self.ticker(msg): post_pids['api'] = 
self._get_children('api') if check_pids(pre_pids['api'], post_pids['api']): break path = self._url('https', '/') response = requests.get(path, verify=ca_file) self.assertEqual(http.MULTIPLE_CHOICES, response.status_code) del response # Test https -> http # This recycles the existing socket pre_pids['api'] = self._get_children('api') set_config_value(self._conffile('api'), 'key_file', '') set_config_value(self._conffile('api'), 'cert_file', '') cmd = "kill -HUP %s" % self._get_parent('api') execute(cmd, raise_error=True) msg = 'https to http timeout' for _ in self.ticker(msg): post_pids['api'] = self._get_children('api') if check_pids(pre_pids['api'], post_pids['api']): break path = self._url('http', '/') response = requests.get(path) self.assertEqual(http.MULTIPLE_CHOICES, response.status_code) del response # Test changing the http bind_host # This requires a new socket pre_pids['api'] = self._get_children('api') set_config_value(self._conffile('api'), 'bind_host', '127.0.0.1') cmd = "kill -HUP %s" % self._get_parent('api') execute(cmd, raise_error=True) msg = 'http bind_host timeout' for _ in self.ticker(msg): post_pids['api'] = self._get_children('api') if check_pids(pre_pids['api'], post_pids['api']): break path = self._url('http', '/') response = requests.get(path) self.assertEqual(http.MULTIPLE_CHOICES, response.status_code) del response # Test logging configuration change # This recycles the existing socket conf_dir = os.path.join(self.test_dir, 'etc') log_file = conf_dir + 'new.log' self.assertFalse(os.path.exists(log_file)) set_config_value(self._conffile('api'), 'log_file', log_file) cmd = "kill -HUP %s" % self._get_parent('api') execute(cmd, raise_error=True) msg = 'No new log file created' for _ in self.ticker(msg): if os.path.exists(log_file): break glance-16.0.0/glance/tests/functional/v1/0000775000175100017510000000000013245511661020142 5ustar 
glance-16.0.0/glance/tests/functional/v1/test_multiprocessing.py

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

import httplib2
import psutil
from six.moves import http_client
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range

from glance.tests import functional
from glance.tests.utils import execute


class TestMultiprocessing(functional.FunctionalTest):
    """Functional tests for the bin/glance CLI tool"""

    def setUp(self):
        self.workers = 2
        super(TestMultiprocessing, self).setUp()

    def test_multiprocessing(self):
        """Spin up the api servers with multiprocessing on"""
        self.cleanup()
        self.start_servers(**self.__dict__.copy())

        path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port)
        http = httplib2.Http()
        response, content = http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual(b'{"images": []}', content)

        self.stop_servers()

    def _get_children(self):
        api_pid = self.api_server.process_pid
        process = psutil.Process(api_pid)
        try:
            # psutil version >= 2
            children = process.children()
        except AttributeError:
            # psutil version < 2
            children = process.get_children()
        pids = [str(child.pid) for child in children]
        return pids

    def test_interrupt_avoids_respawn_storm(self):
        """
        Ensure an interrupt signal does not cause a respawn storm.
See bug #978130 """ self.start_servers(**self.__dict__.copy()) children = self._get_children() cmd = "kill -INT %s" % ' '.join(children) execute(cmd, raise_error=True) for _ in range(9): # Yeah. This totally isn't a race condition. Randomly fails # set at 0.05. Works most of the time at 0.10 time.sleep(0.10) # ensure number of children hasn't grown self.assertGreaterEqual(len(children), len(self._get_children())) for child in self._get_children(): # ensure no new children spawned self.assertIn(child, children, child) self.stop_servers() glance-16.0.0/glance/tests/functional/v1/test_copy_to_file.py0000666000175100017510000002706713245511421024236 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2012 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests copying images to a Glance API server which uses a filesystem- based storage backend. 
""" import hashlib import tempfile import time import httplib2 from oslo_serialization import jsonutils from oslo_utils import units from six.moves import http_client # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range from glance.tests import functional from glance.tests.functional.store_utils import get_http_uri from glance.tests.functional.store_utils import setup_http from glance.tests.utils import skip_if_disabled FIVE_KB = 5 * units.Ki class TestCopyToFile(functional.FunctionalTest): """ Functional tests for copying images from the HTTP storage backend to file """ def _do_test_copy_from(self, from_store, get_uri): """ Ensure we can copy from an external image in from_store. """ self.cleanup() self.start_servers(**self.__dict__.copy()) setup_http(self) # POST /images with public image to be stored in from_store, # to stand in for the 'external' image image_data = b"*" * FIVE_KB headers = {'Content-Type': 'application/octet-stream', 'X-Image-Meta-Name': 'external', 'X-Image-Meta-Store': from_store, 'X-Image-Meta-disk_format': 'raw', 'X-Image-Meta-container_format': 'ovf', 'X-Image-Meta-Is-Public': 'True'} path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status, content) data = jsonutils.loads(content) original_image_id = data['image']['id'] copy_from = get_uri(self, original_image_id) # POST /images with public image copied from_store (to file) headers = {'X-Image-Meta-Name': 'copied', 'X-Image-Meta-disk_format': 'raw', 'X-Image-Meta-container_format': 'ovf', 'X-Image-Meta-Is-Public': 'True', 'X-Glance-API-Copy-From': copy_from} path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status, content) data = 
jsonutils.loads(content) copy_image_id = data['image']['id'] self.assertNotEqual(copy_image_id, original_image_id) # GET image and make sure image content is as expected path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, copy_image_id) def _await_status(expected_status): for i in range(100): time.sleep(0.01) http = httplib2.Http() response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) if response['x-image-meta-status'] == expected_status: return self.fail('unexpected image status %s' % response['x-image-meta-status']) _await_status('active') http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual(str(FIVE_KB), response['content-length']) self.assertEqual(image_data, content) self.assertEqual(hashlib.md5(image_data).hexdigest(), hashlib.md5(content).hexdigest()) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("copied", data['image']['name']) # DELETE original image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, original_image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) # GET image again to make sure the existence of the original # image in from_store is not depended on path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, copy_image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual(str(FIVE_KB), response['content-length']) self.assertEqual(image_data, content) self.assertEqual(hashlib.md5(image_data).hexdigest(), hashlib.md5(content).hexdigest()) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("copied", data['image']['name']) # DELETE copied image path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, copy_image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') 
self.assertEqual(http_client.OK, response.status) self.stop_servers() @skip_if_disabled def test_copy_from_http_store(self): """ Ensure we can copy from an external image in HTTP store. """ self._do_test_copy_from('file', get_http_uri) @skip_if_disabled def test_copy_from_http_exists(self): """Ensure we can copy from an external image in HTTP.""" self.cleanup() self.start_servers(**self.__dict__.copy()) setup_http(self) copy_from = get_http_uri(self, 'foobar') # POST /images with public image copied from HTTP (to file) headers = {'X-Image-Meta-Name': 'copied', 'X-Image-Meta-disk_format': 'raw', 'X-Image-Meta-container_format': 'ovf', 'X-Image-Meta-Is-Public': 'True', 'X-Glance-API-Copy-From': copy_from} path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status, content) data = jsonutils.loads(content) copy_image_id = data['image']['id'] self.assertEqual('queued', data['image']['status'], content) path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, copy_image_id) def _await_status(expected_status): for i in range(100): time.sleep(0.01) http = httplib2.Http() response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) if response['x-image-meta-status'] == expected_status: return self.fail('unexpected image status %s' % response['x-image-meta-status']) _await_status('active') # GET image and make sure image content is as expected http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual(str(FIVE_KB), response['content-length']) self.assertEqual(b"*" * FIVE_KB, content) self.assertEqual(hashlib.md5(b"*" * FIVE_KB).hexdigest(), hashlib.md5(content).hexdigest()) # DELETE copied image http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, 
response.status) self.stop_servers() @skip_if_disabled def test_copy_from_http_nonexistent_location_url(self): # Ensure HTTP 404 response returned when try to create # image with non-existent http location URL. self.cleanup() self.start_servers(**self.__dict__.copy()) setup_http(self) uri = get_http_uri(self, 'foobar') copy_from = uri.replace('images', 'snafu') # POST /images with public image copied from HTTP (to file) headers = {'X-Image-Meta-Name': 'copied', 'X-Image-Meta-disk_format': 'raw', 'X-Image-Meta-container_format': 'ovf', 'X-Image-Meta-Is-Public': 'True', 'X-Glance-API-Copy-From': copy_from} path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers) self.assertEqual(http_client.NOT_FOUND, response.status, content) expected = 'HTTP datastore could not find image at URI.' self.assertIn(expected, content.decode()) self.stop_servers() @skip_if_disabled def test_copy_from_file(self): """ Ensure we can't copy from file """ self.cleanup() self.start_servers(**self.__dict__.copy()) with tempfile.NamedTemporaryFile() as image_file: image_file.write(b"XXX") image_file.flush() copy_from = 'file://' + image_file.name # POST /images with public image copied from file (to file) headers = {'X-Image-Meta-Name': 'copied', 'X-Image-Meta-disk_format': 'raw', 'X-Image-Meta-container_format': 'ovf', 'X-Image-Meta-Is-Public': 'True', 'X-Glance-API-Copy-From': copy_from} path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers) self.assertEqual(http_client.BAD_REQUEST, response.status, content) expected = 'External sources are not supported: \'%s\'' % copy_from msg = 'expected "%s" in "%s"' % (expected, content) self.assertIn(expected, content.decode(), msg) self.stop_servers() @skip_if_disabled def test_copy_from_swift_config(self): """ Ensure we can't copy from swift+config """ self.cleanup() 
self.start_servers(**self.__dict__.copy()) # POST /images with public image copied from file (to file) headers = {'X-Image-Meta-Name': 'copied', 'X-Image-Meta-disk_format': 'raw', 'X-Image-Meta-container_format': 'ovf', 'X-Image-Meta-Is-Public': 'True', 'X-Glance-API-Copy-From': 'swift+config://xxx'} path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers) self.assertEqual(http_client.BAD_REQUEST, response.status, content) expected = 'External sources are not supported: \'swift+config://xxx\'' msg = 'expected "%s" in "%s"' % (expected, content) self.assertIn(expected, content.decode(), msg) self.stop_servers() glance-16.0.0/glance/tests/functional/v1/test_misc.py0000666000175100017510000001111513245511421022501 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import hashlib import os import httplib2 from oslo_serialization import jsonutils from oslo_utils import units from six.moves import http_client from glance.tests import functional from glance.tests.utils import minimal_headers FIVE_KB = 5 * units.Ki FIVE_GB = 5 * units.Gi class TestMiscellaneous(functional.FunctionalTest): """Some random tests for various bugs and stuff""" def setUp(self): super(TestMiscellaneous, self).setUp() # NOTE(sirp): This is needed in case we are running the tests under an # environment in which OS_AUTH_STRATEGY=keystone. 
The test server we # spin up won't have keystone support, so we need to switch to the # NoAuth strategy. os.environ['OS_AUTH_STRATEGY'] = 'noauth' os.environ['OS_AUTH_URL'] = '' def test_api_response_when_image_deleted_from_filesystem(self): """ A test for LP bug #781410 -- glance should fail more gracefully on requests for images that have been removed from the fs """ self.cleanup() self.start_servers() # 1. POST /images with public image named Image1 # attribute and no custom properties. Verify a 200 OK is returned image_data = b"*" * FIVE_KB headers = minimal_headers('Image1') path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("Image1", data['image']['name']) self.assertTrue(data['image']['is_public']) # 2. REMOVE the image from the filesystem image_path = "%s/images/%s" % (self.test_dir, data['image']['id']) os.remove(image_path) # 3. HEAD /images/1 # Verify image found now path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, data['image']['id']) http = httplib2.Http() response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual("Image1", response['x-image-meta-name']) # 4. GET /images/1 # Verify the api throws the appropriate 404 error path = "http://%s:%d/v1/images/1" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.NOT_FOUND, response.status) self.stop_servers() def test_exception_not_eaten_from_registry_to_api(self): """ A test for LP bug #704854 -- Exception thrown by registry server is consumed by API server. We start both servers daemonized. 
We then use Glance API to try adding an image that does not meet validation requirements on the registry server and test that the error returned from the API server is appropriate """ self.cleanup() self.start_servers() api_port = self.api_port path = 'http://127.0.0.1:%d/v1/images' % api_port http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual(b'{"images": []}', content) headers = {'Content-Type': 'application/octet-stream', 'X-Image-Meta-Name': 'ImageName', 'X-Image-Meta-Disk-Format': 'Invalid', } ignored, content = http.request(path, 'POST', headers=headers) self.assertIn(b'Invalid disk format', content, "Could not find 'Invalid disk format' " "in output: %s" % content) self.stop_servers() glance-16.0.0/glance/tests/functional/v1/__init__.py0000666000175100017510000000000013245511421022235 0ustar zuulzuul00000000000000glance-16.0.0/glance/tests/functional/v1/test_api.py0000666000175100017510000012376213245511421022333 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
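# The config-reload tests earlier in this tree (test_reload.py) rely on a
# small set_config_value helper that rewrites a ``key = value`` line in a
# config file in place before sending SIGHUP. A self-contained sketch of
# that pattern, exercised against a throwaway temp file (the file contents
# below are illustrative, not taken from a real glance-api.conf):

import re
import tempfile


def set_config_value(filepath, key, value):
    """Set 'key = value' in config file, replacing any existing line."""
    replacement_line = '%s = %s\n' % (key, value)
    match = re.compile(r'^%s\s+=' % key).match
    with open(filepath, 'r+') as f:
        lines = f.readlines()
        f.seek(0, 0)
        f.truncate()
        for line in lines:
            # keep non-matching lines untouched, swap in the new setting
            f.write(line if not match(line) else replacement_line)


# Illustrative usage: bump the worker count the way the reload test does.
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write('workers  = 1\nbind_host = 0.0.0.0\n')
    path = f.name

set_config_value(path, 'workers', '2')
with open(path) as f:
    print(f.read())

# Note the regex requires whitespace before '=', matching the helper as it
# appears in test_reload.py; lines for other keys pass through unchanged.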
"""Functional test case that utilizes httplib2 against the API server""" import hashlib import httplib2 import sys from oslo_serialization import jsonutils from oslo_utils import units from six.moves import http_client # NOTE(jokke): simplified transition to py3, behaves like py2 xrange from six.moves import range from glance.tests import functional from glance.tests.utils import minimal_headers from glance.tests.utils import skip_if_disabled FIVE_KB = 5 * units.Ki FIVE_GB = 5 * units.Gi class TestApi(functional.FunctionalTest): """Functional tests using httplib2 against the API server""" def _check_image_create(self, headers, status=http_client.CREATED, image_data="*" * FIVE_KB): # performs image_create request, checks the response and returns # content http = httplib2.Http() path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) response, content = http.request( path, 'POST', headers=headers, body=image_data) self.assertEqual(status, response.status) return content.decode() def test_checksum_32_chars_at_image_create(self): self.cleanup() self.start_servers(**self.__dict__.copy()) headers = minimal_headers('Image1') image_data = b"*" * FIVE_KB # checksum can be no longer that 32 characters (String(32)) headers['X-Image-Meta-Checksum'] = 'x' * 42 content = self._check_image_create(headers, http_client.BAD_REQUEST) self.assertIn("Invalid checksum", content) # test positive case as well headers['X-Image-Meta-Checksum'] = hashlib.md5(image_data).hexdigest() self._check_image_create(headers) def test_param_int_too_large_at_create(self): # currently 2 params min_disk/min_ram can cause DBError on save self.cleanup() self.start_servers(**self.__dict__.copy()) # Integer field can't be greater than max 8-byte signed integer for param in ['min_disk', 'min_ram']: headers = minimal_headers('Image1') # check that long numbers result in 400 headers['X-Image-Meta-%s' % param] = str(sys.maxsize + 1) content = self._check_image_create(headers, http_client.BAD_REQUEST) 
self.assertIn("'%s' value out of range" % param, content) # check that integers over 4 byte result in 400 headers['X-Image-Meta-%s' % param] = str(2 ** 31) content = self._check_image_create(headers, http_client.BAD_REQUEST) self.assertIn("'%s' value out of range" % param, content) # verify positive case as well headers['X-Image-Meta-%s' % param] = str((2 ** 31) - 1) self._check_image_create(headers) def test_updating_is_public(self): """Verify that we can update the is_public attribute.""" self.cleanup() self.start_servers(**self.__dict__.copy()) # Verify no public images path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual('{"images": []}', content.decode()) # Verify no public images path = "http://%s:%d/v1/images/detail" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual('{"images": []}', content.decode()) # POST /images with private image named Image1 # attribute and no custom properties. 
Verify a 200 OK is returned image_data = b"*" * FIVE_KB headers = minimal_headers('Image1', public=False) path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) image_id = data['image']['id'] self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("Image1", data['image']['name']) self.assertFalse(data['image']['is_public']) # Retrieve image again to verify it was created appropriately path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) expected_image_headers = { 'x-image-meta-id': image_id, 'x-image-meta-name': 'Image1', 'x-image-meta-is_public': 'False', 'x-image-meta-status': 'active', 'x-image-meta-disk_format': 'raw', 'x-image-meta-container_format': 'ovf', 'x-image-meta-size': str(FIVE_KB)} expected_std_headers = { 'content-length': str(FIVE_KB), 'content-type': 'application/octet-stream'} for expected_key, expected_value in expected_image_headers.items(): self.assertEqual(expected_value, response[expected_key], "For key '%s' expected header value '%s'. " "Got '%s'" % (expected_key, expected_value, response[expected_key])) for expected_key, expected_value in expected_std_headers.items(): self.assertEqual(expected_value, response[expected_key], "For key '%s' expected header value '%s'. 
" "Got '%s'" % (expected_key, expected_value, response[expected_key])) self.assertEqual(image_data, content) self.assertEqual(hashlib.md5(image_data).hexdigest(), hashlib.md5(content).hexdigest()) # PUT image with custom properties to make public and then # Verify 200 returned headers = {'X-Image-Meta-is_public': 'True'} path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.OK, response.status) image = jsonutils.loads(content) is_public = image['image']['is_public'] self.assertTrue( is_public, "Expected image to be public but received %s" % is_public) # PUT image with custom properties to make private and then # Verify 200 returned headers = {'X-Image-Meta-is_public': 'False'} path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.OK, response.status) image = jsonutils.loads(content) is_public = image['image']['is_public'] self.assertFalse( is_public, "Expected image to be private but received %s" % is_public) @skip_if_disabled def test_get_head_simple_post(self): """ We test the following sequential series of actions: 0. GET /images - Verify no public images 1. GET /images/detail - Verify no public images 2. POST /images with public image named Image1 and no custom properties - Verify 201 returned 3. HEAD image - Verify HTTP headers have correct information we just added 4. GET image - Verify all information on image we just added is correct 5. GET /images - Verify the image we just added is returned 6. GET /images/detail - Verify the image we just added is returned 7. PUT image with custom properties of "distro" and "arch" - Verify 200 returned 8. PUT image with too many custom properties - Verify 413 returned 9. GET image - Verify updated information about image was stored 10. 
PUT image - Remove a previously existing property. 11. PUT image - Add a previously deleted property. 12. PUT image/members/member1 - Add member1 to image 13. PUT image/members/member2 - Add member2 to image 14. GET image/members - List image members 15. DELETE image/members/member1 - Delete image member1 16. PUT image/members - Attempt to replace members with an overlimit amount 17. PUT image/members/member11 - Attempt to add a member while at limit 18. POST /images with another public image named Image2 - attribute and three custom properties, "distro", "arch" & "foo" - Verify a 200 OK is returned 19. HEAD image2 - Verify image2 found now 20. GET /images - Verify 2 public images 21. GET /images with filter on user-defined property "distro". - Verify both images are returned 22. GET /images with filter on user-defined property 'distro' but - with non-existent value. Verify no images are returned 23. GET /images with filter on non-existent user-defined property - "boo". Verify no images are returned 24. GET /images with filter 'arch=i386' - Verify only image2 is returned 25. GET /images with filter 'arch=x86_64' - Verify only image1 is returned 26. GET /images with filter 'foo=bar' - Verify only image2 is returned 27. DELETE image1 - Delete image 28. GET image/members - List deleted image members 29. PUT image/members/member2 - Update existing member2 of deleted image 30. PUT image/members/member3 - Add member3 to deleted image 31. DELETE image/members/member2 - Delete member2 from deleted image 32. DELETE image2 - Delete image 33. GET /images - Verify no images are listed """ self.cleanup() self.start_servers(**self.__dict__.copy()) # 0. GET /images # Verify no public images path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual('{"images": []}', content.decode()) # 1. 
GET /images/detail # Verify no public images path = "http://%s:%d/v1/images/detail" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual('{"images": []}', content.decode()) # 2. POST /images with public image named Image1 # attribute and no custom properties. Verify a 200 OK is returned image_data = b"*" * FIVE_KB headers = minimal_headers('Image1') path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) image_id = data['image']['id'] self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("Image1", data['image']['name']) self.assertTrue(data['image']['is_public']) # 3. HEAD image # Verify image found now path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual("Image1", response['x-image-meta-name']) # 4. 
GET image # Verify all information on image we just added is correct path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) expected_image_headers = { 'x-image-meta-id': image_id, 'x-image-meta-name': 'Image1', 'x-image-meta-is_public': 'True', 'x-image-meta-status': 'active', 'x-image-meta-disk_format': 'raw', 'x-image-meta-container_format': 'ovf', 'x-image-meta-size': str(FIVE_KB)} expected_std_headers = { 'content-length': str(FIVE_KB), 'content-type': 'application/octet-stream'} for expected_key, expected_value in expected_image_headers.items(): self.assertEqual(expected_value, response[expected_key], "For key '%s' expected header value '%s'. " "Got '%s'" % (expected_key, expected_value, response[expected_key])) for expected_key, expected_value in expected_std_headers.items(): self.assertEqual(expected_value, response[expected_key], "For key '%s' expected header value '%s'. " "Got '%s'" % (expected_key, expected_value, response[expected_key])) self.assertEqual(image_data, content) self.assertEqual(hashlib.md5(image_data).hexdigest(), hashlib.md5(content).hexdigest()) # 5. GET /images # Verify one public image path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) expected_result = {"images": [ {"container_format": "ovf", "disk_format": "raw", "id": image_id, "name": "Image1", "checksum": "c2e5db72bd7fd153f53ede5da5a06de3", "size": 5120}]} self.assertEqual(expected_result, jsonutils.loads(content)) # 6. 
GET /images/detail # Verify image and all its metadata path = "http://%s:%d/v1/images/detail" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) expected_image = { "status": "active", "name": "Image1", "deleted": False, "container_format": "ovf", "disk_format": "raw", "id": image_id, "is_public": True, "deleted_at": None, "properties": {}, "size": 5120} image = jsonutils.loads(content) for expected_key, expected_value in expected_image.items(): self.assertEqual(expected_value, image['images'][0][expected_key], "For key '%s' expected header value '%s'. " "Got '%s'" % (expected_key, expected_value, image['images'][0][expected_key])) # 7. PUT image with custom properties of "distro" and "arch" # Verify 200 returned headers = {'X-Image-Meta-Property-Distro': 'Ubuntu', 'X-Image-Meta-Property-Arch': 'x86_64'} path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual("x86_64", data['image']['properties']['arch']) self.assertEqual("Ubuntu", data['image']['properties']['distro']) # 8. PUT image with too many custom properties # Verify 413 returned headers = {} for i in range(11): # configured limit is 10 headers['X-Image-Meta-Property-foo%d' % i] = 'bar' path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE, response.status) # 9. 
GET /images/detail # Verify image and all its metadata path = "http://%s:%d/v1/images/detail" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) expected_image = { "status": "active", "name": "Image1", "deleted": False, "container_format": "ovf", "disk_format": "raw", "id": image_id, "is_public": True, "deleted_at": None, "properties": {'distro': 'Ubuntu', 'arch': 'x86_64'}, "size": 5120} image = jsonutils.loads(content) for expected_key, expected_value in expected_image.items(): self.assertEqual(expected_value, image['images'][0][expected_key], "For key '%s' expected header value '%s'. " "Got '%s'" % (expected_key, expected_value, image['images'][0][expected_key])) # 10. PUT image and remove a previously existing property. headers = {'X-Image-Meta-Property-Arch': 'x86_64'} path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.OK, response.status) path = "http://%s:%d/v1/images/detail" % ("127.0.0.1", self.api_port) response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content)['images'][0] self.assertEqual(1, len(data['properties'])) self.assertEqual("x86_64", data['properties']['arch']) # 11. PUT image and add a previously deleted property. 
headers = {'X-Image-Meta-Property-Distro': 'Ubuntu', 'X-Image-Meta-Property-Arch': 'x86_64'} path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) path = "http://%s:%d/v1/images/detail" % ("127.0.0.1", self.api_port) response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content)['images'][0] self.assertEqual(2, len(data['properties'])) self.assertEqual("x86_64", data['properties']['arch']) self.assertEqual("Ubuntu", data['properties']['distro']) self.assertNotEqual(data['created_at'], data['updated_at']) # 12. Add member to image path = ("http://%s:%d/v1/images/%s/members/pattieblack" % ("127.0.0.1", self.api_port, image_id)) http = httplib2.Http() response, content = http.request(path, 'PUT') self.assertEqual(http_client.NO_CONTENT, response.status) # 13. Add member to image path = ("http://%s:%d/v1/images/%s/members/pattiewhite" % ("127.0.0.1", self.api_port, image_id)) http = httplib2.Http() response, content = http.request(path, 'PUT') self.assertEqual(http_client.NO_CONTENT, response.status) # 14. List image members path = ("http://%s:%d/v1/images/%s/members" % ("127.0.0.1", self.api_port, image_id)) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(2, len(data['members'])) self.assertEqual('pattieblack', data['members'][0]['member_id']) self.assertEqual('pattiewhite', data['members'][1]['member_id']) # 15. Delete image member path = ("http://%s:%d/v1/images/%s/members/pattieblack" % ("127.0.0.1", self.api_port, image_id)) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.NO_CONTENT, response.status) # 16. 
Attempt to replace members with an overlimit amount # Adding 11 image members should fail since configured limit is 10 path = ("http://%s:%d/v1/images/%s/members" % ("127.0.0.1", self.api_port, image_id)) memberships = [] for i in range(11): member_id = "foo%d" % i memberships.append(dict(member_id=member_id)) http = httplib2.Http() body = jsonutils.dumps(dict(memberships=memberships)) response, content = http.request(path, 'PUT', body=body) self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE, response.status) # 17. Attempt to add a member while at limit # Adding an 11th member should fail since configured limit is 10 path = ("http://%s:%d/v1/images/%s/members" % ("127.0.0.1", self.api_port, image_id)) memberships = [] for i in range(10): member_id = "foo%d" % i memberships.append(dict(member_id=member_id)) http = httplib2.Http() body = jsonutils.dumps(dict(memberships=memberships)) response, content = http.request(path, 'PUT', body=body) self.assertEqual(http_client.NO_CONTENT, response.status) path = ("http://%s:%d/v1/images/%s/members/fail_me" % ("127.0.0.1", self.api_port, image_id)) http = httplib2.Http() response, content = http.request(path, 'PUT') self.assertEqual(http_client.REQUEST_ENTITY_TOO_LARGE, response.status) # 18. POST /images with another public image named Image2 # attribute and three custom properties, "distro", "arch" & "foo". 
# Verify a 200 OK is returned image_data = b"*" * FIVE_KB headers = minimal_headers('Image2') headers['X-Image-Meta-Property-Distro'] = 'Ubuntu' headers['X-Image-Meta-Property-Arch'] = 'i386' headers['X-Image-Meta-Property-foo'] = 'bar' path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) image2_id = data['image']['id'] self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("Image2", data['image']['name']) self.assertTrue(data['image']['is_public']) self.assertEqual('Ubuntu', data['image']['properties']['distro']) self.assertEqual('i386', data['image']['properties']['arch']) self.assertEqual('bar', data['image']['properties']['foo']) # 19. HEAD image2 # Verify image2 found now path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image2_id) http = httplib2.Http() response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual("Image2", response['x-image-meta-name']) # 20. GET /images # Verify 2 public images path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) images = jsonutils.loads(content)['images'] self.assertEqual(2, len(images)) self.assertEqual(image2_id, images[0]['id']) self.assertEqual(image_id, images[1]['id']) # 21. GET /images with filter on user-defined property 'distro'. 
# Verify both images are returned path = "http://%s:%d/v1/images?property-distro=Ubuntu" % ( "127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) images = jsonutils.loads(content)['images'] self.assertEqual(2, len(images)) self.assertEqual(image2_id, images[0]['id']) self.assertEqual(image_id, images[1]['id']) # 22. GET /images with filter on user-defined property 'distro' but # with non-existent value. Verify no images are returned path = "http://%s:%d/v1/images?property-distro=fedora" % ( "127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) images = jsonutils.loads(content)['images'] self.assertEqual(0, len(images)) # 23. GET /images with filter on non-existent user-defined property # 'boo'. Verify no images are returned path = "http://%s:%d/v1/images?property-boo=bar" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) images = jsonutils.loads(content)['images'] self.assertEqual(0, len(images)) # 24. GET /images with filter 'arch=i386' # Verify only image2 is returned path = "http://%s:%d/v1/images?property-arch=i386" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) images = jsonutils.loads(content)['images'] self.assertEqual(1, len(images)) self.assertEqual(image2_id, images[0]['id']) # 25. 
GET /images with filter 'arch=x86_64' # Verify only image1 is returned path = "http://%s:%d/v1/images?property-arch=x86_64" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) images = jsonutils.loads(content)['images'] self.assertEqual(1, len(images)) self.assertEqual(image_id, images[0]['id']) # 26. GET /images with filter 'foo=bar' # Verify only image2 is returned path = "http://%s:%d/v1/images?property-foo=bar" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) images = jsonutils.loads(content)['images'] self.assertEqual(1, len(images)) self.assertEqual(image2_id, images[0]['id']) # 27. DELETE image1 path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) # 28. Try to list members of deleted image path = ("http://%s:%d/v1/images/%s/members" % ("127.0.0.1", self.api_port, image_id)) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.NOT_FOUND, response.status) # 29. Try to update member of deleted image path = ("http://%s:%d/v1/images/%s/members" % ("127.0.0.1", self.api_port, image_id)) http = httplib2.Http() fixture = [{'member_id': 'pattieblack', 'can_share': 'false'}] body = jsonutils.dumps(dict(memberships=fixture)) response, content = http.request(path, 'PUT', body=body) self.assertEqual(http_client.NOT_FOUND, response.status) # 30. Try to add member to deleted image path = ("http://%s:%d/v1/images/%s/members/chickenpattie" % ("127.0.0.1", self.api_port, image_id)) http = httplib2.Http() response, content = http.request(path, 'PUT') self.assertEqual(http_client.NOT_FOUND, response.status) # 31. 
Try to delete member of deleted image path = ("http://%s:%d/v1/images/%s/members/pattieblack" % ("127.0.0.1", self.api_port, image_id)) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.NOT_FOUND, response.status) # 32. DELETE image2 path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image2_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) # 33. GET /images # Verify no images are listed path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) images = jsonutils.loads(content)['images'] self.assertEqual(0, len(images)) # 34. HEAD /images/detail path = "http://%s:%d/v1/images/detail" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'HEAD') self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status) self.assertEqual('GET', response.get('allow')) self.stop_servers() def test_download_non_exists_image_raises_http_forbidden(self): """ We test the following sequential series of actions:: 0. POST /images with public image named Image1 and no custom properties - Verify 201 returned 1. HEAD image - Verify HTTP headers have correct information we just added 2. GET image - Verify all information on image we just added is correct 3. DELETE image1 - Delete the newly added image 4. 
GET image - Verify that 403 HTTPForbidden exception is raised prior to 404 HTTPNotFound """ self.cleanup() self.start_servers(**self.__dict__.copy()) image_data = b"*" * FIVE_KB headers = minimal_headers('Image1') path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) image_id = data['image']['id'] self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("Image1", data['image']['name']) self.assertTrue(data['image']['is_public']) # 1. HEAD image # Verify image found now path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual("Image1", response['x-image-meta-name']) # 2. GET /images # Verify one public image path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) expected_result = {"images": [ {"container_format": "ovf", "disk_format": "raw", "id": image_id, "name": "Image1", "checksum": "c2e5db72bd7fd153f53ede5da5a06de3", "size": 5120}]} self.assertEqual(expected_result, jsonutils.loads(content)) # 3. DELETE image1 path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) # 4. 
GET image # Verify that 403 HTTPForbidden exception is raised prior to # 404 HTTPNotFound rules = {"download_image": '!'} self.set_policy_rules(rules) path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.FORBIDDEN, response.status) self.stop_servers() def test_download_non_exists_image_raises_http_not_found(self): """ We test the following sequential series of actions: 0. POST /images with public image named Image1 and no custom properties - Verify 201 returned 1. HEAD image - Verify HTTP headers have correct information we just added 2. GET image - Verify all information on image we just added is correct 3. DELETE image1 - Delete the newly added image 4. GET image - Verify that 404 HTTPNotFound exception is raised """ self.cleanup() self.start_servers(**self.__dict__.copy()) image_data = b"*" * FIVE_KB headers = minimal_headers('Image1') path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'POST', headers=headers, body=image_data) self.assertEqual(http_client.CREATED, response.status) data = jsonutils.loads(content) image_id = data['image']['id'] self.assertEqual(hashlib.md5(image_data).hexdigest(), data['image']['checksum']) self.assertEqual(FIVE_KB, data['image']['size']) self.assertEqual("Image1", data['image']['name']) self.assertTrue(data['image']['is_public']) # 1. HEAD image # Verify image found now path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual("Image1", response['x-image-meta-name']) # 2. 
GET /images # Verify one public image path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) expected_result = {"images": [ {"container_format": "ovf", "disk_format": "raw", "id": image_id, "name": "Image1", "checksum": "c2e5db72bd7fd153f53ede5da5a06de3", "size": 5120}]} self.assertEqual(expected_result, jsonutils.loads(content)) # 3. DELETE image1 path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) # 4. GET image # Verify that 404 HTTPNotFound exception is raised path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image_id) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.NOT_FOUND, response.status) self.stop_servers() def test_status_cannot_be_manipulated_directly(self): self.cleanup() self.start_servers(**self.__dict__.copy()) headers = minimal_headers('Image1') # Create a 'queued' image http = httplib2.Http() headers = {'Content-Type': 'application/octet-stream', 'X-Image-Meta-Disk-Format': 'raw', 'X-Image-Meta-Container-Format': 'bare'} path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) response, content = http.request(path, 'POST', headers=headers, body=None) self.assertEqual(http_client.CREATED, response.status) image = jsonutils.loads(content)['image'] self.assertEqual('queued', image['status']) # Ensure status of 'queued' image can't be changed path = "http://%s:%d/v1/images/%s" % ("127.0.0.1", self.api_port, image['id']) http = httplib2.Http() headers = {'X-Image-Meta-Status': 'active'} response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.FORBIDDEN, response.status) response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) 
self.assertEqual('queued', response['x-image-meta-status']) # We allow 'setting' to the same status http = httplib2.Http() headers = {'X-Image-Meta-Status': 'queued'} response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.OK, response.status) response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual('queued', response['x-image-meta-status']) # Make image active http = httplib2.Http() headers = {'Content-Type': 'application/octet-stream'} response, content = http.request(path, 'PUT', headers=headers, body='data') self.assertEqual(http_client.OK, response.status) image = jsonutils.loads(content)['image'] self.assertEqual('active', image['status']) # Ensure status of 'active' image can't be changed http = httplib2.Http() headers = {'X-Image-Meta-Status': 'queued'} response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.FORBIDDEN, response.status) response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual('active', response['x-image-meta-status']) # We allow 'setting' to the same status http = httplib2.Http() headers = {'X-Image-Meta-Status': 'active'} response, content = http.request(path, 'PUT', headers=headers) self.assertEqual(http_client.OK, response.status) response, content = http.request(path, 'HEAD') self.assertEqual(http_client.OK, response.status) self.assertEqual('active', response['x-image-meta-status']) # Create a 'queued' image, ensure 'status' header is ignored http = httplib2.Http() path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port) headers = {'Content-Type': 'application/octet-stream', 'X-Image-Meta-Status': 'active'} response, content = http.request(path, 'POST', headers=headers, body=None) self.assertEqual(http_client.CREATED, response.status) image = jsonutils.loads(content)['image'] self.assertEqual('queued', image['status']) # Create an 'active' image, 
        # ensure 'status' header is ignored
        http = httplib2.Http()
        path = "http://%s:%d/v1/images" % ("127.0.0.1", self.api_port)
        headers = {'Content-Type': 'application/octet-stream',
                   'X-Image-Meta-Disk-Format': 'raw',
                   'X-Image-Meta-Status': 'queued',
                   'X-Image-Meta-Container-Format': 'bare'}
        response, content = http.request(path, 'POST', headers=headers,
                                         body='data')
        self.assertEqual(http_client.CREATED, response.status)
        image = jsonutils.loads(content)['image']
        self.assertEqual('active', image['status'])

        self.stop_servers()
glance-16.0.0/glance/tests/functional/test_gzip_middleware.py
# Copyright 2013 Red Hat, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests gzip middleware."""

import httplib2

from glance.tests import functional
from glance.tests import utils


class GzipMiddlewareTest(functional.FunctionalTest):

    @utils.skip_if_disabled
    def test_gzip_requests(self):
        self.cleanup()
        self.start_servers(**self.__dict__.copy())

        def request(path, headers=None):
            # We don't care what version we're using here so,
            # sticking with latest
            url = 'http://127.0.0.1:%s/v2/%s' % (self.api_port, path)
            http = httplib2.Http()
            return http.request(url, 'GET', headers=headers)

        # Accept-Encoding: Identity
        headers = {'Accept-Encoding': 'identity'}
        response, content = request('images', headers=headers)
        self.assertIsNone(response.get("-content-encoding"))

        # Accept-Encoding: gzip
        headers = {'Accept-Encoding': 'gzip'}
        response, content = request('images', headers=headers)
        self.assertEqual('gzip', response.get("-content-encoding"))

        self.stop_servers()
glance-16.0.0/glance/tests/functional/test_glance_replicator.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Functional test cases for glance-replicator"""

import sys

from glance.tests import functional
from glance.tests.utils import execute


class TestGlanceReplicator(functional.FunctionalTest):
    """Functional tests for glance-replicator"""

    def test_compare(self):
        # Test for issue: https://bugs.launchpad.net/glance/+bug/1598928
        cmd = ('%s -m glance.cmd.replicator '
               'compare az1:9292 az2:9292 --debug' % (sys.executable,))
        exitcode, out, err = execute(cmd, raise_error=False)
        self.assertIn(
            b'Request: GET http://az1:9292/v1/images/detail?is_public=None',
            err
        )
glance-16.0.0/glance/tests/functional/test_healthcheck_middleware.py
# Copyright 2015 Hewlett Packard
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests healthcheck middleware."""

import tempfile

import httplib2
from six.moves import http_client

from glance.tests import functional
from glance.tests import utils


class HealthcheckMiddlewareTest(functional.FunctionalTest):

    def request(self):
        url = 'http://127.0.0.1:%s/healthcheck' % self.api_port
        http = httplib2.Http()
        return http.request(url, 'GET')

    @utils.skip_if_disabled
    def test_healthcheck_enabled(self):
        self.cleanup()
        self.start_servers(**self.__dict__.copy())

        response, content = self.request()
        self.assertEqual(b'OK', content)
        self.assertEqual(http_client.OK, response.status)

        self.stop_servers()

    def test_healthcheck_disabled(self):
        with tempfile.NamedTemporaryFile() as test_disable_file:
            self.cleanup()
            self.api_server.disable_path = test_disable_file.name
            self.start_servers(**self.__dict__.copy())

            response, content = self.request()
            self.assertEqual(b'DISABLED BY FILE', content)
            self.assertEqual(http_client.SERVICE_UNAVAILABLE, response.status)

            self.stop_servers()
glance-16.0.0/glance/tests/functional/test_api.py
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Version-independent api tests"""

import httplib2
from oslo_serialization import jsonutils
from six.moves import http_client

from glance.tests import functional


# TODO(rosmaita): all the EXPERIMENTAL stuff in this file can be ripped out
# when v2.6 becomes CURRENT in Queens

def _generate_v1_versions(url):
    v1_versions = {'versions': [
        {
            'id': 'v1.1',
            'status': 'DEPRECATED',
            'links': [{'rel': 'self', 'href': url % '1'}],
        },
        {
            'id': 'v1.0',
            'status': 'DEPRECATED',
            'links': [{'rel': 'self', 'href': url % '1'}],
        },
    ]}
    return v1_versions


def _generate_v2_versions(url):
    version_list = []
    version_list.extend([
        {
            'id': 'v2.6',
            'status': 'CURRENT',
            'links': [{'rel': 'self', 'href': url % '2'}],
        },
        {
            'id': 'v2.5',
            'status': 'SUPPORTED',
            'links': [{'rel': 'self', 'href': url % '2'}],
        },
        {
            'id': 'v2.4',
            'status': 'SUPPORTED',
            'links': [{'rel': 'self', 'href': url % '2'}],
        },
        {
            'id': 'v2.3',
            'status': 'SUPPORTED',
            'links': [{'rel': 'self', 'href': url % '2'}],
        },
        {
            'id': 'v2.2',
            'status': 'SUPPORTED',
            'links': [{'rel': 'self', 'href': url % '2'}],
        },
        {
            'id': 'v2.1',
            'status': 'SUPPORTED',
            'links': [{'rel': 'self', 'href': url % '2'}],
        },
        {
            'id': 'v2.0',
            'status': 'SUPPORTED',
            'links': [{'rel': 'self', 'href': url % '2'}],
        }
    ])
    v2_versions = {'versions': version_list}
    return v2_versions


def _generate_all_versions(url):
    v1 = _generate_v1_versions(url)
    v2 = _generate_v2_versions(url)
    all_versions = {'versions': v2['versions'] + v1['versions']}
    return all_versions


class TestApiVersions(functional.FunctionalTest):

    def test_version_configurations(self):
        """Test that versioning is handled properly through all channels"""
        # v1 and v2 api enabled
        self.start_servers(**self.__dict__.copy())

        url = 'http://127.0.0.1:%d/v%%s/' % self.api_port
        versions = _generate_all_versions(url)

        # Verify version choices returned.
        path = 'http://%s:%d' % ('127.0.0.1', self.api_port)
        http = httplib2.Http()
        response, content_json = http.request(path, 'GET')
        self.assertEqual(http_client.MULTIPLE_CHOICES, response.status)
        content = jsonutils.loads(content_json.decode())
        self.assertEqual(versions, content)

    def test_v2_api_configuration(self):
        self.api_server.enable_v1_api = False
        self.api_server.enable_v2_api = True
        self.start_servers(**self.__dict__.copy())
        url = 'http://127.0.0.1:%d/v%%s/' % self.api_port
        versions = _generate_v2_versions(url)

        # Verify version choices returned.
        path = 'http://%s:%d' % ('127.0.0.1', self.api_port)
        http = httplib2.Http()
        response, content_json = http.request(path, 'GET')
        self.assertEqual(http_client.MULTIPLE_CHOICES, response.status)
        content = jsonutils.loads(content_json.decode())
        self.assertEqual(versions, content)

    def test_v1_api_configuration(self):
        self.api_server.enable_v1_api = True
        self.api_server.enable_v2_api = False
        self.start_servers(**self.__dict__.copy())
        url = 'http://127.0.0.1:%d/v%%s/' % self.api_port
        versions = _generate_v1_versions(url)

        # Verify version choices returned.
        path = 'http://%s:%d' % ('127.0.0.1', self.api_port)
        http = httplib2.Http()
        response, content_json = http.request(path, 'GET')
        self.assertEqual(http_client.MULTIPLE_CHOICES, response.status)
        content = jsonutils.loads(content_json.decode())
        self.assertEqual(versions, content)


class TestApiPaths(functional.FunctionalTest):

    def setUp(self):
        super(TestApiPaths, self).setUp()
        self.start_servers(**self.__dict__.copy())
        url = 'http://127.0.0.1:%d/v%%s/' % self.api_port
        self.versions = _generate_all_versions(url)
        images = {'images': []}
        self.images_json = jsonutils.dumps(images)

    def test_get_root_path(self):
        """Assert GET / with `no Accept:` header.
        Verify version choices returned.
Bug lp:803260 no Accept header causes a 500 in glance-api """ path = 'http://%s:%d' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content_json = http.request(path, 'GET') self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) def test_get_images_path(self): """Assert GET /images with `no Accept:` header. Verify version choices returned. """ path = 'http://%s:%d/images' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content_json = http.request(path, 'GET') self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) def test_get_v1_images_path(self): """GET /v1/images with `no Accept:` header. Verify empty images list returned. """ path = 'http://%s:%d/v1/images' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) def test_get_root_path_with_unknown_header(self): """Assert GET / with Accept: unknown header Verify version choices returned. Verify message in API log about unknown accept header. 
""" path = 'http://%s:%d/' % ('127.0.0.1', self.api_port) http = httplib2.Http() headers = {'Accept': 'unknown'} response, content_json = http.request(path, 'GET', headers=headers) self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) def test_get_root_path_with_openstack_header(self): """Assert GET / with an Accept: application/vnd.openstack.images-v1 Verify empty image list returned """ path = 'http://%s:%d/images' % ('127.0.0.1', self.api_port) http = httplib2.Http() headers = {'Accept': 'application/vnd.openstack.images-v1'} response, content = http.request(path, 'GET', headers=headers) self.assertEqual(http_client.OK, response.status) self.assertEqual(self.images_json, content.decode()) def test_get_images_path_with_openstack_header(self): """Assert GET /images with a `Accept: application/vnd.openstack.compute-v1` header. Verify version choices returned. Verify message in API log about unknown accept header. 
""" path = 'http://%s:%d/images' % ('127.0.0.1', self.api_port) http = httplib2.Http() headers = {'Accept': 'application/vnd.openstack.compute-v1'} response, content_json = http.request(path, 'GET', headers=headers) self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) def test_get_v10_images_path(self): """Assert GET /v1.0/images with no Accept: header Verify version choices returned """ path = 'http://%s:%d/v1.a/images' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) def test_get_v1a_images_path(self): """Assert GET /v1.a/images with no Accept: header Verify version choices returned """ path = 'http://%s:%d/v1.a/images' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) def test_get_va1_images_path(self): """Assert GET /va.1/images with no Accept: header Verify version choices returned """ path = 'http://%s:%d/va.1/images' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content_json = http.request(path, 'GET') self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) def test_get_versions_path(self): """Assert GET /versions with no Accept: header Verify version choices returned """ path = 'http://%s:%d/versions' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content_json = http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) def test_get_versions_path_with_openstack_header(self): """Assert GET /versions with the `Accept: application/vnd.openstack.images-v1` header. Verify version choices returned. 
""" path = 'http://%s:%d/versions' % ('127.0.0.1', self.api_port) http = httplib2.Http() headers = {'Accept': 'application/vnd.openstack.images-v1'} response, content_json = http.request(path, 'GET', headers=headers) self.assertEqual(http_client.OK, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) def test_get_v1_versions_path(self): """Assert GET /v1/versions with `no Accept:` header Verify 404 returned """ path = 'http://%s:%d/v1/versions' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content = http.request(path, 'GET') self.assertEqual(http_client.NOT_FOUND, response.status) def test_get_versions_choices(self): """Verify version choices returned""" path = 'http://%s:%d/v10' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content_json = http.request(path, 'GET') self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) def test_get_images_path_with_openstack_v2_header(self): """Assert GET /images with a `Accept: application/vnd.openstack.compute-v2` header. Verify version choices returned. Verify message in API log about unknown version in accept header. 
""" path = 'http://%s:%d/images' % ('127.0.0.1', self.api_port) http = httplib2.Http() headers = {'Accept': 'application/vnd.openstack.images-v10'} response, content_json = http.request(path, 'GET', headers=headers) self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) def test_get_v12_images_path(self): """Assert GET /v1.2/images with `no Accept:` header Verify version choices returned """ path = 'http://%s:%d/v1.2/images' % ('127.0.0.1', self.api_port) http = httplib2.Http() response, content_json = http.request(path, 'GET') self.assertEqual(http_client.MULTIPLE_CHOICES, response.status) content = jsonutils.loads(content_json.decode()) self.assertEqual(self.versions, content) glance-16.0.0/glance/tests/var/0000775000175100017510000000000013245511661016242 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/var/testserver-not-tar.ova0000666000175100017510000050104113245511421022536 0ustar zuulzuul00000000000000‰PNG  IHDRÐК8ÄybKGDÿÿÿ ½§“ IDATxœìÝ{°ßõß÷×ç§£÷ƒ±àn²À‹#Ë—µM@™É?Ùf:Ó‹·N›]Ï&›ŒÁN“íÎtšië’Nþj“i;“Ä©/é:»$í&m³Û­7‹¸-`ƒ±± „$0F@ÂB Ëù}¿Ÿwÿ8Gn¶Á #¡Çã?~çwÄ÷óŸ†'¯Ï7à”Ó–û83Ü}ý³+²æâ!ýâ±Õúiµƒ ™<´¢¦ç´¾bl3õÂtºâ†Iµb÷CO<±çËI_îçÎ:ïÈ=›7Χ³Õ'siã|2™Mj6­Í¥j>Él’¹$óIÖ¼îן?2Öycòü˜vyÿ;ÕBUûfUýz’ôªïTÚúJík­=>Vîo}ü“é»úÕd<‰GÎ:I’;¶nY¹Ϻ†¹´6ŸÔlªÍ¥Õ|Òf“šK-}ÞÚlÞÅß®ŽŒuîØòüXí²$“$ UõͪöëIÒ+÷÷dK’•Iª6¤ÕŽ¡ò`Kî™™¬þÃÿè±Çöü‚ÏD@x_ûÖ– ç¯&—%}i)~,€_‰WfÓ2—ÅÅøÉöü‘ÞÏÒööj—æhDó;Õòù$«}§R7$Y]É*µ¾§¿0övvÒ†¡çžÖê_ÍMûïÿùÝ»/À÷ à4RÉäÞÍ/yÃÕé­æR9~mzÕü»´?^8ÒûÙ=íÅáÄ%ú ½ª}g<ÑX©+{òÂØëìžÇjC*/©ÿu&+þù¾cÇËËz"à´$ ,³G7mZµo2½¬M†ùªÉlª^·Ïü2¯ÄO‚¶ïHÕê±²o¬]¢O{µ¯¦ê¯&icåÁJ®K²&-õªK*õ“iÏYimz I?0T›«Ô?X™™ÿQHÞà]VI»wóÆ¹Ÿ±_\Š'ó9=Vâ'ÁbD •‹“Ì${µ¯§êóIZ*K½''uñ‰}¬šVµƒCõ¹¡å;½ò•ßܱëöå=pºÐ~wlÝ:³rÿžu5 Ç×áÕæÒj>i³IÍ¥Ž­ÆO—«ÓOAmß‘W}²LÖåhDïùZ’Ï'™TòÐX¹6ÉÚ£½'û‡êkZµÉX9ÐÓÆÞ/é-ß_Hß1þëÛÿñÞå<pêЀ3Ö7®¿`eÍ\šô¥¥ø‰×¦×ñ÷‰[‰ŸdmßBõUCo/¿&¢W¾žÊ¯çu=Éãcúº¤½¦- ýXD¯^ùj*9oˆèÙYUçT2L[ 
­ç¼iÕž$«‡ž³§-WzM+ÃØë~i×3ÿb9œt8CÜþ¹¬˜ß±ñâ†ã1ü­Wâ®NçÔÓòòBï}¨¶0V;?‹7T%ÿ[õüZ#úc½ê’J›­ÔÎJήʸšN*ç•ç*µfè9{ly|L¯¡O†±ò¾¸k×?_æËL@€ÓØ=7ž›5ãIŸ­šÌ¦êuKņ̃2›–¹,.Æátw`Z}˜VÆjçe)¢Ÿ¸DOËãc¯‹OŒè=écÕ‘žÌöžW²vèuÖbD¯ cM§™ü㿱k×íË|>` èp ©drïæ—¤KצóÉd6­cøÑkÓ«æ­Ä9cUL[™öô—èKý¿H²²Z{Ö%¹ðØ=iÓª•\T=Ïöž³‡Ôš±Õö!É8Öt¬ö•/îÞý{Ëz>`Ùèð{pË–•‡\Þ&ÃüVâo¼:ýÂ,Æ@à§©Xh92ôªÞÛ¹Õ²&I¥ÚW{ÕbDOÛ>T}°%Vú®žœUi“±×þjY7öö£¤ÎžöZÓ[m[ÚÐÛ˜É?¹í©§~w¹œ|:¼÷lÞ8Ÿ>Î&}i)~âµéu|)žÌÇJÞ­ràHÕ‘i%U휣½÷|%É_N²º*O ÉE¯èC¯ýiY×{{¦§.+ROõTz-ô6óOo}ê©o,ó€“L@€$wlÝ:³rÿžu5 Çcxµ¹´š“•¸«Óáq4¢Õ&CjMK;+Iz¯¯'í?ÍbDßÙSgWÚÅIíSkzÚÌPõRK懞•\4Vzoµ»WjÚ³0¶Éÿ~ÛSO}}™œD:ï[÷êêó†Ã¹ü +ñcKņ̃2›–¹,.ÆÓÐRD?i³IÍ¥Ž­Æ],«£}H[;T[™£=ù§Õó'Y[•gÇů_–×FôgSùÐØóx’¹iêȘþìBo“¤{ÿýÛv=ý¿,×Ù€÷–€p†úÖ– ç¯&—%}i)~âµéK+qW§§¯C U/•sÆj©ä¼$©Ê7«òï'9§’IOåòJžë­Æôœ7my¶U]=T©Ê•Cå@OÿwCµ“ýç_Þºk×ÿ¼ÌçÞ:ÀûD%“{7o¼ä5+ñLfÓjñ=âG¯M¯š·Î‡ŽT½Ô+ç •$í¼$Iå›ýxDnLÆ#zUÎ*Ï´Ô‡‡Ê÷{åC½jÿ˜z~¬V Éáˆèð¾$ œÂÝ´iÕ¾Éô²6æ«&³©zÝR<óVâ?Õ¡iÕ¾±·ó¦©:Ñ{꟥·¿”äÜ¥ˆ>¤rE%Ï­†T.*Ï$õ‘^ù^U6L«~2&/Œ©*‡û˜uÛîÝÿ`™Ï¼‹t€“¨’vïæséãÒµéãü›¬Ä—âÉ|¬ÄÞ ‡*û†žóÆ–žÊùI’^¿ÛÓþ½$ç&yn¨L“\™ä¹¡ÕЪ>0íÙ‘–_êUWµ«¦U/õäÅ!½/T;Rcþ¯/îÞý÷—ñlÀ»H@øݱuëÌÌÞç®8¶OͦÚ\ZÍ'm6©¹Ô±Õ¸«Ó–CåðBÕ¿«}`hEôÔïöÞ~%Éy­åùé˜i¹*És=5­ÔECÕ“•vmïõpO®*ûz²whÕz^í•ÿûK;wÿOËy<àÝ! 
¼‰;n\ÁÊš¹4éKKñ¯M¯ãï·8}¼6¢©\°øqý^õö“œ—ä…¡²?ÉÕIíéÉB¥.šV=™´M½×w—"úÞžÚ7$m¡çÕ$ÿßm;wÿ½å<ð‹Ð€3ÂíŸËŠù/®a8Ã_³_ âUóVâïk Ó^{†´‹¦•iËRD¯üUùóI.Ì ½’ç«ê@M27VßÞ«]7ööPR×LÓ÷TòÊ´2•C½òG":œÞtà´õè¦M«öM¦—»:½êuKņ̃2›–¹,rH%Ó¡×sC²n¨É¡¤.\ü¼ýŸÕkk’ «òbO^ªäÃ-ya¬z¹&™›Vßžj7Œ•ûS¹nšþãJ^VzïupHûÖm;wÿË{BàЀSF%íÞÍçÒÇ¥kÓÇùd2›V‹1ÜJ€wÉш>&ë†j+ù@’ô´ßO¯›“| '{{eo’´ä…!µ?-óÓªÇSù豈^ýÙ1íPO CÕá±ç[·ízúï.ï €wB@ÞSnÙ²òÐÂËßt%ž6›Ô\êØjüÂ$«—û™8c,L«žª]2T^ÍRDoi¿?.Eô$?’=©üÒ±ˆž\:T=VYŒèU¹~¨ü(©CCjœVê}òÇ·îÚõw–ópÀÛ' oÛÝ×_1Û&«æ“¾´?ñÚô:¾Oæc%À©maÚë™!í²#zUþeUnJrQ#ús©\›äÅ!õ“–\>­z$ÉÇÆ1÷WËõcå™Jž¦g¬v`èíOnݵëï´¤–ñ|ÀÛ  ¹ýsY1¿cãÅ5 Çcxµ¹´š“•¸«Óx¿Y˜özfL.ööjZ.JÞÑ÷ɳUٔňþRK®ª¾WÉ–^¹¯W>:Tž®Ô‘izÆ´WÆv×—víúïEt8=èð>uÿ§®>o8œËß°?¶Ï|*³i™ËâbÎdÓ…ÔîqÌåCµWŽEôÔ¿®Þ¶$¹¤’—ÇÊ’lJ²wHí;!¢|¬üi*›§UOö´>fì=íÕ¡Ú=_xj×+¢À©O@€ÓD%“{7o¼$}\ âã|2™M«Å~ôÚôªy+qxG¦C²{ÚsåúIª­K’^íR}sÒæ–"ú3I®K²·§öV²~šþpª}r¬ÜÊ–iÕ“•´iËBõ\HîÙûÔ®ÿîËI_Ö?•€ËèÁ-[VZ8py› óU“ÙT½î}âK+ñž0Éêå~fxŸ‡Ê®iÕÓÊK-íâ$©ÊVÕI›ki¯½vTËGÓò“1µ'•«ŽEôž»“||HßÞÓV ÕÆ>98MýéÞ»ÿö—Et8e èð.»góÆùôqö W§·¶¸?ºOæc%§¢qÚkט\9TöVÚ%I’ÊöÊG“Ì·´W§©'S¹ñ„ˆ~õÐóPµúÔш>Mß>ö¶bl™Ž©W†žûDt8u èð3ܱuëÌÊý{ÖÕ0áÕæÒj>i³IÍ¥ŽErW§ÀûÃ8ôÚ9MÖ•}9ºDOý¿ÕÛ 9ÑŸHes’ý½Õ«²q¬z 'Ÿî½îª´OL«?>V›ZUyeZ“{÷îÜ)¢À)H@àŒô­-Î_;L.û©+ñÊlZ沸Î<ãB剱êê¡ÚÞ$‹KôÔ½]ßo“98T¶'¯èCÕý•|¦÷vW¥>9­þX%+’±W,T=8·óéßþÕd\¾ã¯' ð¾PÉäÞÍ/I—‚ø8ŸLfÓjñ=âG¯M¯š·Þ†£ýÃCM^Hjnéó?Ç\×Z.Mrp¬<^ÉÇ’ì[žMÕ5ÓÊýI}¦z»«§~yÚë‘jµfÚ2Ž=ÆÊC/^±þ·¿¼mÛ°ŒçN  pÊztÓ¦Uû&ÓËÚd˜¯šÌ¦êuKñÌ[‰'Á8Vm_¨|dZíÅvt‰^ù“^uUÒ®LrhZy¬-Fô—ÇÔ’\3Tî¯Ôg{oÛ*õ™iÕ÷zêÜ¡åÐÐÛÁªzè…+ÖÿW":œtNšJÚ½›7ÎýŒ•øâR|ñJT+qàT1޽¶O“Ó´?KåÒ$©äŽôúP¥­OrdH¾ŸÊ'óšˆ^÷UrÓш>T=<¦Î[MÇv°W}÷Å+×ÿ–ˆËO@àrÇÖ­3+÷ïYWÃp|^m.­æ“6›Ô\êØjÜÕéÀé¬UM+×LÓþ¬-Eô$ÛzÏ•I>”daH¾w4¢÷Ô3•\;TýébD¯m•ö™!õp¯œ¿ÐêPõž¦=´÷ò+ÿ¦ˆËK@à ¾µeÃùk‡ÉeI_ZŠŸxmzŸ¸•8pæéÓÊcCÕ5cÚžª\–$-Ù6޹"-’,Œ•‡+ùT%zêé¼1¢vH}·WΟ¶~dì“Wªò½³þÍ¿öÐCÓå="œ¹t€3ÀíŸËŠù/~ÍJüW§/Æq+q€Ÿ¥†Ê‡ªk‡"z%wÖ˜ËFô¡òp’O%íÕ±Æií†!uwUn®^ÛzÚg‡ª‡zËì:2öv`¬|föÂÿRD€å! 
œ¦Ý´iÕ¾Éô²6æ«&³©zÝR<ó©Ì¦e.‹€wOU?œV®VÛ“,EôÊ]©\ZÉUI¦cå¡J~9i¯ö6>YÕ>z4¢÷Þî¬ÔMCõ*í¢iËá…Þ_©Þ~ØWÌ|éK;vYÞ#À™G@8ETÒîݼq.}\º6}œ·8¥ÕXõƒi庡Ú+¹|éãûƱͶ–k’ŒcåþJ>ûšˆÞëîj¹yìíΤnªî¯äƒ “Ç~ ×äñq²âVN.à=ôà–-+-¸üØJ<5›jsi5Ÿ´Ù¤æRÇVã&Y½ÜÏ ÀÛRc¯L“ë‡jÏTre’¤åþqÈoч6>Ùª}tèýîjí„%zÝ?¦æz²ñJ÷É£":œ\:ÀÛtÇë/XY3—&}i)~âµéu|)žÌÇJàLp,¢O«=“¥ˆ^Éé9¯’_J2•û“|6-ÇÔöTn«ßÕÓné©;«·›¦•û+57M½ÜSGz{tXnû[Ï>{hYOg8ãÝþ¹¬˜ß±ñâ†ã1ü5+qW§ð3ÕPydZuÃPy&i‹Kô´ª×¹•\›dÓ›NŒèCrWUÝÒÓî¬^7 É}UuéBòʘ~`èmûta¸UD€÷ž€¼¯üñ5×| ­®ìcqíÌäܤÏVMfSõº¥xæS™MË\9ü¢j¨|wZõ±#zKûöØëì$›’ôžúÓ^m1¢WmO²y¬lë©­Ç®s﹯Z]:­Z½Ò{ž8²0|AD€÷–€œ–ܲeåþéþë[ÕÇ{˦V“MI]3¦žn•Ï$Ù³zÒ>Ø’™å~VÎ(5T¾=T}rzâ½ÚwzÕYYŒè5Vî®äæ´ì•Ç+õ±£½*w÷Êg‡ª{zåŠ!yylõê0Ô®vö9íÖG}eYOïc:pÚ¸çº OÚp$íãi¹9o~•úX©;ªòÒòüêÖÎoÉê“ý¬œÙzòðB¯‡ÊÓ•¶>IªçÁJÖæõ=94&%õ±±ú¶žöÚˆÞ–"zÕ«Õkw_{ÎoŠèðÞЀSÚ7\}ÙÊÖ>_U¿–äª$éÉ“c¯sªµ¹·øµªô»«ÚÍI^XÕÚ¹“æ½åœ\K}ó´Ú3I®Xúø{½ZRuc×êwg1¢R´ä=}ÛXmk¯Ü]•Ï ©{Çʆ!õâXux¬ìlk-Ñའ §¤{nøÈŸKúoUò—’¬xýÏ+yj¡×Yíç‰è•WMÚ9":'Û˜zxÚ³y¡Ú®–|(Iªåû[¯Ôæ$Õ“»zå–$GÆÔ#I>1¦oëÕ¶Vrwª®ZH^轎 ÉSkW®ù뿱}ûe= ¼ÏèÀ)㮾zõ¹kë×Ò&·&uÃÏñ+ÏN+UÉåoñóªêwWÚÍ•¶oִͤ´³ÞÍg€Ÿe¬zxZÙ>¸²Õï´´¿úvÿ¼Jž›V†¼õ=Uµ­’­-yi夭š$g¿³§€wfL¾=íõÉiÚÎÔ⽪~˜LUÕ'’¤W¶õ¥%ú˜þpÒ>5¤ßYÕné©ûª·OôÔ=cå#ÓÊ‹CËáÞóLÆßüÂ3ϼ´œç€ÓÝŠå~àÌóo>ú‘Oÿa’ÿ,Éê~´²§]?™ä-ÙüvþÌ–œ;iyµ·ìKrÁ›~§µõiÙVÉ/õÊ«+ÒÆÖ²êÞ¦Ir餵oWÕuUÙÖ.h­­k­½R•'“\ÞZÖ'ÙVÉU“´u•zp’ÉgÓêΤÝÔªHÚ§“<ÜÿÒ…Lê¬Þ&·üÊYgÿÉ8phY §18i¾œLþöuWÝÖ’ßKrÉ[|mE¯\›Ê×&-›ó6nÌiɹ+ª­½uDO;Ñ“ƒ€“¬%—®hy¨’k+í©$&¹(É¡–lÏRD¯–;³Ñ/>ÑÓîLËM­êÖÚ§[òpkùP’ƒi9§­˜¹ù/Šèð޹Â8)îØ´þ’±M~¯ÒnÎâ;où¿2ÎLÚ×'UŸÏÛÿ;Ëó •I®z«/Te[¥¶¶äåU­¥µc×ÈÀI1¤z¶L«=•¥¿»V剪¼˜ä3I2&wVå–$Ó1y(©_+wöÔÍÕëž¶e¬º·'›†ô熴ƒcµçkåê߸íñÇ÷.ãñà´d¼çþøú«?ÝÓþ8i×/}´¾µvg’+óÖq|Ò+7$ùú¤å£?å{oæìIËØ“?Ëâšç ¯ÅlÛ*ÙØ“#“äHkmõ›}Þ “´ùIËCI6õ¥%zkù@ZNo§åŠI²¾’;“l˜$s•vÿ¤åÏ¥åî´ö™–ö–|*-ßmiZêpoY•qü ¿rñ%ßúƒ_´D€·A@ÞSÿÏuþgZý£,¾›ñ˜JÖWÚ·Zˆü”ˆ^É I¾ñv#z[ŠèUÙ“ö³#ú˜,LZ·´5?ï¿~Qm1¢ßŸä†¤í¨ä–ö¶"GªÚcI®˜¿ÎýC“üÿìÝYg~ß÷ïïÿ¼ïi,±£±4€&$Á!9 ‡#’ÃFŠfF‘JQWÅ®J.ⲕÊU.å"IM*²<²d•#—e«ì*—"%–¥X–¬Y<œEØH€7œ!@€Hb™!b#èóþsqN/hœntsAsÀßç®û¼}Îóžœ/ÿÏÃÊD;> ÚòáÏ’ú$â¹Dë£Éw€YM{è§ié²ï}ýôé‹3{—fffff?9ÐÍÌÌìó­{7ü%Ü$çCœãúsÉ×gƉ5LÑïô‡zw½i’“Eôî—‘ÛÉUGt3333»Ù„=•ä}©‘3ÑGp%3Z0ˆØšˆ¾ Ø)ñ¨’)’x^©O ž Åz²¹DhöÐÕö¾²`Á–ož=ûÖ ß¦™™™™ÙOt333û@|ãÞ 
_%ùg@-HtQâŒ`þ5*×"v­fâ8®ýß!>ÄT×!˜ÓèÇ‹{_£AÁÖìDô¡è¬ÕÝÌÌÌÌn‰áˆþ±D‡èDôE’®6™?Z+¶% u#ºìHxHbRŸ€|NÒ.Rèk'_øGt3333³)™ò—ÏffffSõõ»7üO4ü4c‚xÂÊ¡DGÇ_ŸÉg“|h&yZµÉÿ~(ó€¡é¬G°¤‘œø"}ŽÎÙ’·]i²jàìt^ÃÌÌÌÌì½ ñP-íª”Gº¿ÞP‚5©ü~÷šÇ”ÚD Ï<Òg ¥IôD÷Gè÷2ýIt8L¢Kƒ)u·sÏvH…'ÑÍÌÌÌìæ 1Ò®Î$:‡@‹@ $R»u!ÖJ¹#Ñšrd]O¤ø4°WðŠgîF\@ªß/þWý·oýO§Ï_˜éû43333û°r@733³÷Íظq]¡ùÏêœ7.²WDgn¢ö„xBb€É¶sOîOôÇ¡éGô"Ê,àšAI[îlgf ós¦úfffffï•`@Ò.à¾áíÜó%êìFt¡µ©ÜZh@âIàÑv&|*`ð€ÄsJÝ…x+¥ÖÕv|áW–ÍsD73333›€º™™™½/þâÞõ«#s‹Ðq‰%@M'¢¯—ØBç¼Æasµ%N꺈žkv V2qWÂòßtÓûL3«­3ADm6´I9¢›™™™ÙÍ0äS ÷eçsó|‰šÔßë­%õDŠ5B«OíJñI‰ý™ÜK²Ghâ- 5ÔŽ/ü·K—mû‹Ó§ÑÍÌÌÌÌÆq@733³÷ì/7mšGò]`SÂê„gC,¦ÑÉd°WDoPp¢;±>ÖjÄ.¡L:a®Aü»Pnªi,yVÕ‰è/Ëz>³D¹ ´¾„йpD7333³›HÒ@‘žjÈ‘It`>¢O©]Àz‰5 ;U©tm< z xé%`3©=À&Ä ïJ3ô3¿°´Û7ÑÍÌÌÌÌ®1å-OÍÌÌÌ&2¤¡ßƒæ­áŸj’—€·‡×4|>aë¸?]2„ÿœ™<Üdî†&{í6Íß’þ?àòtÖœ0¿†ÕÀ&ºFÄc;}W3ç¶“ÓÓy 3333³÷JðP_è©Z¹Ap¨û»EnNx ˆ‡‹x h—ä³’¶ñSE¹;àžz¹¯Rû*´®ˆ>)–´¯^ù7ÿvppp&ïÏÌÌÌÌìÃÆèffföžüù¦ ¿„øZÂÊ@;5 ËíqТóËAÄV];‰>§A8Ngj}¬ÕÀSÒä“è™ÜüÇë™Î$ºè+0« ™h­%Ù¸#!Bœò$º™™™™Ý4‚"=y_#·‡˜›™[AÕÙÅé©„‚Á$·‹xTʧÝ8$¸[Ò‰{Qžj]/þòüùOþÕ¹sggú>ÍÌÌÌÌ> ÐÍÌÌì]ûËM›æ5Ñü&°ˆÎ—uÚ1&÷§r¿Ð\M-¢{×6ó"z3YDW'¢§¸£ÝP"8ëˆnffff7ÓHDÏüX‡@‹€Û$ͧa ¢ÑéDô@#ånÐ}ˆC$wö| š3TCâ ¿|û‚ŽèfffffÞÂÝÌÌÌÞƒK\ý¿®6ùó™<ÑýUÝÂÈÏd꾆<œ0ºÅ{Ãçzlç¾ø*Z¼8þu~ª!Ÿ®N¶ž6ü7môuàÒ´nDÜ^KwH’Ҷ៛äÄðÏ$|¬cÀÈÙŠMÃç2ÙÞyxäºùC «Sü`ük'̓™¹¸<Ùø¥«ð}àÂôî.çÖ3ÝŸ›èŠ ´+AC™‹Ûä¦÷fffffïR·B»+rÈáíÜ—…ø)ÄòAÁ>àrÀc¶Jÿ³Kæ?ûí7Ͻ1Ó÷iffffv39 ›™™Ù´ü‚ù¿‹ø[‰v…XÔ™lr‹Ð  nÑôJ(Ë5Ü!i5c#:šâ`íÜW¦x^LÑ6¥øN$ËÑ´#ú‚É"ºèm˜ðãæMã5ÌÌÌÌÌÞ‰UщèwK#g¢ÏEêoà‚{+%ö$,ll–ØlH餒5RìW'ÆŸ‚¬•úéŸ_¼ä¹o½ùæ©™½K3333³›ÇÝÌÌ̦ì7¬‚ø*°Xñ7!–U¦C¹4È":°ôJˆº]k³gD¹¡æE¡ñç’¯Dì™BDߨdn)‰õÙ]×C]Ä¢6ùhU¯ †#:°ª9‚“!Ý>×03333{O:§Ø :Üè³C¬Lø:p`¥ÄÞND×À¡‡%ö$ýˆÌÕ’^Ü8 šægþËÅóžÿÆ™ó¯ÏèMš™™™™Ý$èfff6e_^¸ð73y ¤“Àâ$W'ì i!Pgjpü$zQléFtÁ¶îã$,IôêøˆÞc½ÕóŠšÒ#¢ öÐ]Ä‹—6d²-Ät#zU¤%Mò,b∎þXÙÀ툓#º™™™™Ý<’V(èfffvCÿjݺ;¥üu`.h ™ß—XšÓ@«ÀaÐR ?Åþ@sV&c·s§éѯ™DO&z=tmDOñÔuÍ/Ò>®›DÏ~Ä~u×0Ém­mÐÓ…ìCÝךšRD£Et=,qD7333³2P¤Ý™y·¤W²Ñû$îüYÂ}‚~‰—†#º:ÿƒëC’4°*‚ód.Rè(É=ÍIêB>ö¥…‹|ÓÝÌÌÌÌnAèfffvC_^8ÿï6ɧ$ÌGZzJäŠ1ýeÐ2 ŸàÀpD'5¨ñ½3Ý2Óèl¡û8°0Ñ©©Dô @ûtývîýÀéÆâÙx7ú±› #:+=Ÿˆ¾@Ò±€ùÓx 3333³÷D0PÐó y'ŠW»½…XŸÉŸ2ÑCV 
»‡#zt"úJI2s¡¤cJÝ%å©”Š2ûÊ‚…/}ãìÙã3{—fffffï/t333»¡¯,\ø[ÀæLÞ‰è°zlDoì  –eÒ/8€4GЗ©AR[‡£y¯ˆ.Ø2Ù™ ¢“üD?£Ÿaê„E!ž¬·ìþ„!Íú&º·„µMÆó…¬‘æMãm)V´aWç½èA¬í7É"Ä«Lã5ÌÌÌÌÌÞ‰EÚÓdÞ ñZ %±¡Í¿ºO°4B¯dfÝ l>ÒÁVéR¦æ„8Úy*¥hà±_œ¿àà×ÑÍÌÌÌìâ€nfff“úÝEQô+Bw·gòfHÑ…f·ÉY…8,KèOtPb¶:{øLôµ€®?#ýúˆÎ!Þ‰èiïÝã"zÕ ¥¡|F¨WD?Òl&‰èˆ5ôbÓ‰èQÄÊ;Öô|j±i°°I“z5„#º™™™™Ý4‚•ô\C®WèµLuHÈüÐýÀ…^É$46¢‡7É¢€!Ð,Ä„6IyJR4â³_™¿àÐ7Ξ=6³wifffföþp@733³I}yñâÿ:áÛ%V‹²áíáNH?¨BÌÎdØÑ9?% "m1F£:hAC¼ä¥1ÓáBÏ"–U÷w¥AKƒ|ZÒø-Õ—‡CT Ù“Üêª6¹¿’€iW«±“‰":¬”Ø—°°Å™¼ZäIt3333»©V†ô|“¹AÒkÝíÜkIw6ÉŸî,‘ôj£ýS¡x¥!B¶A}Jý´å)PiÄ£_Z<ÿåo¾yÎÝÌÌÌÌ~â9 ›™™Ù¤¾´hÁ?>ž°6•ß4ˆ¸=áí"Þ¤³äêDÐ/˜“ʹ Ë¥‰^ (ˆÙt&Яèhkh8šè,Ht.‚ótÃvÂ*¡çzDôeDôΔeòˆ®U ¼T”W™ÞVëQ`U“ìD½#:hÒ“œ—h‰·s7333³›M°²»û:àXv>Ö!6Žè¯5 @w'l|*ˆWR9_Ð¤Ô ô:h£R§R”lâ±/-^ðÚ·Þ<{t†oÓÌÌÌÌì=q@733³IýÜüy¿#ô#IË@ë’|":úöL.ÅhDHØÐÌBÌ ôC:}I¢WŠ2º{­’݉vå¸íÜA]ÑoÏäB ÎÑ™z'aU&ÏK,bj}IJ¯˜l}e# òЦÑŪL=™DtÁòûr^¦–¯„'ÑÍÌÌÌì&¬¬ÐžÖI:‘ÿIµqW¦þà‰Å'3¹ÒfÐV”Ÿ x5ÅüP4)j¥^Gl$uªD*S~yÑücß ¸XÃÐÇ»ÏóR‹…„Ø!ñpç)¡Û„>K÷³ŠÐÖ€ÏYÚ‰"†ÄèVé žÜÃØÉrÑ®’]R>2þÞ”z ±¤;½>!Á •˜+¸cšo_^M¶'<6É5û2s0a^‘ŽÕºnbÞÌÌÌÌì] ZUE+jZ¥Õ.ª.×Q E©¨$‰¨BQ)UP#€D×~W˜4(ÛMf»Ù\n_½òÎPûÍS—/¬;ùÂÑ+æ$¨IÎ7äÕ¡dÙùÂ4M“C™ñÛ¿zäÈã3rûfffffï’º™™™M蟬Yó/%þ~ˆÇ?×ýõ…å•^òÎ mè\šOt/0 ¸XÁ~¡Ot;Ü"æ¦èˆàIÁCtwÈ)°=¤GèÛF¢:p¢Wsì”·Øb0wÌïÚ5¹ ¸.¢‡Cº X6é¡üA *Ц©½s#¦Ñ×bÝ4_ÃÌÌÌÌ>"**êV‹¥©«ÖÅVÔWŠªT‰R¥ µP%²\Ã?™yñÜÕw®¿pfèõËÏ$ÙÊ\:?h JšvÆïÿÚáÃÿáƒ^‹™™™™ÙûÅÝÌÌÌ&ôÛk×|ø2€à»!¾Ø}èí¢8ÜP`§ÆDô:ÜÙù;=äpD¿\¥öJ<p¸B£]<)ñiº[±´=Ä#t'Ó•±=”£Q]y²".'98²èd 4.¢Wä.õˆèB‡¥œ êŸì½ò5QA¾ï=áE2—',¬¥ÃÅÝÌÌÌì#!€*ZÔuM_´.ÕÑz§’²D • *¥jA…:Ÿ‰?¬Co]½zîà…Sg_¿ôÖ‚6¼0d“C4úçÿèÑ¿œé5š™™™™M…º™™™MèkkWÿÇ’|‘Δ6H[ 9¼•úÛEqx@°«ˆOµ’Ó¥p:ac籉#ºàH…æÐè"wFèA†#zh{äØˆÎöF~^¯Ä`ýðº…ö¹¾GDß)xtü}N5¢£<Ø"ÚwMãmÈ+°œt}fö',ô$º™™™ÙO®**ꨩKÕÔ¥u±Vu¥Ž:‹DwÛtTÓÙyé–ün®É¼xê·ßÚwîÇ/^j·É&‡Úðû¿zø•?Ÿéµ™™™™™ÝÈ-ù!ÝÌÌÌÞ_[;pFè°Ð‚y!mÕ˜ˆŠƒêѳUp2áîÎcz&ÈÍt"ú• =/øt÷ï®è‘;ƒÑˆ.ØQI#g¤G²]c"ºàTç€ Ãkr/¡uÀmcn©©É'Ñõt$È9è“èä¡Z1ôn"úP²­¹ö,÷ñö7Ù,-*âp-9¢›™™™Í°*júªE…¾Òº\©ºTQ¨’TªÈÎQD6F’g¯¾sò©ÓÇ^|§ÝÖPòû¿vøoçnffffjèfffÖS‚¾¶v ô‚ÐÚ^]p1ˆˆwÞ]Ä}@p6"O ÝÝ}žgDÞ̦ÑŸëž{ŽÄÑ:™•Òr‰èŸ¢ã¯èb»òÚˆâœÆDt‚}wpmDÏJlyÝ4x&GK0 
:k˜ˆàåZºy÷ôÞÕGt%ûÛt#:z¹ŽÑÉz3333{T´J•ê¦*ÕÅVô]®K¤TJE¤Z¢ݲSâ7ÙЉKo½üìéÇ/Kÿê×ù³™^™™™™ÙDü333ëéO¡¼¼và*ÝÏ ‘z6:Ñ m•nÑçy\èžîSï)È=—½‚>Ð ”»Š4<ÑNÀ“E9#=2vH9Õ%Þ8£îùëtþh_À Ýøß••šíB×Et‰£b =y¹’Þ’òþ©¾§Ã†’- |~¢Çh²½ThQH/·äˆnfff6™:*ZUE…VÔWê¨ß¹nJ\AIäïÃfH.ïyóäó¯^¸ðÿóÑ£ÿv¦×cffffÖ‹ÿÁ`fff=ýÁ'?YŸ~ãÇÿYÐÛòÖ‹¥–¹URψʧÝ Ìœ+Êc)mî^»'È  ë#:«™äêÎóh—”#½ˆ'ƒÑˆ®Œ16¢ÃéoŽè/H¬e|Df»²wDú4ò' x­†7@DÏ—šÌ%êlç~¨–6Lt­™™™Ù­&ª¨è«[Tª³Õ¥ˆz¨.!BT‡Tè~´Ÿg¯\yeûë¯ýã¿÷òË0Ók13333ÏÝÌÌÌzúêæÍ­ú­sƒJpM Þ_ˆ~†#zg;÷Ç . í—ô‰îcOy/sÏÏ—à5 ÑÅÞÈ\ßèWkxô™ÎkäñZ¥ŽèÀî#ÛÂhg#gñ7 IDAT¤<©±QNWâtÂÆ‘/†r9há˜Ûœx ‘ «Ç?víu«áôÑ9œ™·ËÑÍÌÌìVЊuUÓŠêrUZoWª²R¨”*BQµP>OüV×nòÒ¾óoþo_ÞóÌ?鵘™™™™å€nfff=}uóæVß[ç.§ØšI0&PÇþ –%,±MðY:Ÿ-.zq8¢ =S”›‰èz•NTÑÛ<%ô0@‰–Ô¦°v+øX÷y(hgLÑg‹8©däœr‰ýRöèEÍöèÑãEjnÑÇ[p x`Êop×&Ñ“ÌMè/p¨Gt333ûp¸fJ<êl©º*CuUe(ЧÄm òì•Ëÿìž[ÿáL/ÄÌÌÌÌl˜º™™™õô{6ô½}õw±]©ùGôí‚Gé|¾¸\{;×Ꙣ¼˜ \ˆÐ+ŽèÉ^)× Ý´k´ x:½–†k$ž1<ÑN¤v•`d{wÁÎ@#Q8W‹ã Ã篃ØbÝuwe%¶‰Î™î×HN(ÔÖ#ú‰ZüXÙÙÂ~:¦Ñ=‰nfff **úºg‰WQÚu´.UQš¡(óÄ}–¸½¿Ú4ÿnõÖïþw3½33333p@733³ |uppV_]þ9“íI–ÐØ­Òã@KÇEôGèL])ŠçéL®#ñl»Ù Þ8Hwb[°OäýB»ºÏƒàDºJ²¶û<×DtI»*F#zÀN‹è•8FwëøÎsê€"—2ˆ¡!º!búq-^ù齿ã]M¶&\ÿÚ#knŽ´a©þ"©ÅÓ} 333ûhŠê¨é‹¾¦UÕoWQÚ…º]¢HŠªH•D ž·™£Ì¿Z±í;¿8Óë03333s@733³ž~w``öåÂű¿ëFô*4¨E(0£#Ø®œ ¢Ãž 7Hš;>¢ûó7‰Nr².º2&¢?bx¢» 9#]°+ÐHTÎqL×Dt^R°„k#:¹Eê1 .N„FCþ$^dgZZnÑ!6älRý‡ZÞÎÝÌÌì#*˜]µ¨K‹¢hZ¥ïb¯)qPà n?IRßZ¹íÛ_žée˜™™™ÙG›º™™™õôÕ•+çôÕñöøß§ØÞNª >ÅÈÖéz© ‘=é$:ì r½¤¹ÀÅ€‘mϯèÒN’G»÷z%]Ö(x&軫k"ºv×lï~>Äk×Dtñ’`1bñØ{¬”[Ô{Kõ“£!"‚Sµt˜*¢gÎ-î™èþ@gffö“¯-ꪦŽ:û¢¾î,qIE¡Š¤ÌôZÍ>HÊüÓÛ¾ó·fzfffföÑåï[ÍÌ̬§ßîïŸ;4«~«×c ¹Tb²ˆ;$f4¢?<Ô¹–½1zîù5]ä Bk»½©3žD9ÑOUÒ9`€Ä)7Í(ðtH£Û»»ƒ;™~¾ˆW{GnFå\PÿØ{¬Ä‘Ÿïqû¯Ç˜?¡äVpøÌ¤×õ0”lm&‹è޶ÕîZQàPåˆnffö¡TEM«ÔTQ·ûªÖ…Ze(JQPJQT"ªPVÙÙ:Ýÿ97ëJñµU[ÿ_gzfffföÑ䜙™™YO_]ºô¶¾9}&z¼!w µ"¹Ÿn WÄmÀ2€€'% ã":°¯häÜó‹E¹éÝ¿{QÉrÄB k´x ®è{ˆéDt½]ÈCˆû‡ïEÁaÑ#¢Ã©gD?ÒynÑáLKùУ7¸î:7ŠèI#E’žD733»9ÆL‰ÓõÅPŠª¨él›î)q³÷CU6¿Ò¿í»>Ók1333³Ïjfff=Ý( 4ð¤”E© Ô‡ÍôC7¢Ã§p5ÏjLD¯Ä ióKEùâpDö—¤8¢W°]è±îcoÔÒY†'Ña¯"×Gt¡gª1g¤OÆFu½]"’#ç¯CpDäM'¢£sˆIÏ!œ­•û@Ÿìº^n8‰Ç›ÌXè`ÜéwfffS@_5‹º´¨Mõ;õP]*BD! 
Rí³ÄÍfÄP]µ7/ýÞ÷^šé…˜™™™ÙG‹¿c533³ž~kÓ’yÍ;³ÎOáÒ'QEr£úšˆ.ØðàpD/Ä3¨³µyÀ¾è—Cì•x°ûÜ“Gttqg÷5ö*X'¸­ó¼z¦Œ‰èOk\DȃÑÉ£!f!-{ƒEl ²WÈ~#ÐÙ)Dôs-±'»SôÓ1[›ä1&øÜ–p¢Él V¬CŽèffö‘@-꺦/Z—êh½SIY¢…ŠJ•Rµ BŽâfjÉVn{|ÅL/ÃÌÌÌÌ>Züýª™™™õôÕ nï»úι©\›°3EÑš3ÑÅÎÈ‘ˆÞ.ÄStÎH'àÅ+€Àåì>ÝyöGgKøE@Vh»F·s?]¡ÓˆÝר'qÇpD±§¨ÙÞ}|DG\,p@Ýó×»ë>*5="znÞÓàoĘ?Á¹JÚ#rú=µ½!e’ˆž™CÀš€C­Ð¤AßÌÌì'Épï«[Ôª<%nöÓ$8°íñÿa¦×affffèfffÖÓ×Ö­›Ÿí+g§z}ÂNDÑ":| ¨v¨ìùŒDôåÀBÁ•ž§ÑJ²”8[ÁIIww¯ÝWF'Ú‘bOa4¢ ž 4º½ûD=š>Ð5Ó.¹Uº>¢ N ½y£ˆ¼])Ÿw±{›ÜÞNMÑéL¢kûB7Z‹™™ÙŒ©¢¢¯ê£¨ÐWZ—+U—}–¸™õ”äÕrõÁµý×ÏÌôRÌÌÌÌì£ÁÝÌÌÌzúêàà‚¾:3¿i`RDæÇèj‘Gƒ2{4¢ç®@Ÿd$¢k—Ð#ÇØ_%ËR,êFôç>/](™KA‹€,hÛ˜‰ðs5:NgËv¼°–nDGì­ÐºÑÉt=p7c"zûáó×!9…V½Ç‰¶sœ–t:Óð“x»Vîú™©¼§cu#ú#L<]w"3¯&¬õ$º™™ÝLc§Ä+ÕÙŠê’§ÄÍì}tlåÖÇWßø23333³÷ÎÝÌÌÌzúÇkÖ,DÍ›Óý»v)U¤¼—k#ú,ÁòîÏ»B½#zˆýe‚ˆ.x)È%ýBÛ46¢KÇ€ÍçÑ‹R.ì ]Ñ÷lævÿþR/jlD‡ã4L=¢ŸE:©NœŸÌÅí”òƒˆè¯7™oëÑÍÌì½ ZU¡ŽuT—«h]¬£jêRe(ª@u*ªUâIq3û`)ó×WlûÎ?éu˜™™™Ù­ÏÝÌÌÌzúÝE— §ßÍßf²;EÜÅH ÖÑ‚F#ºØpÐÇøIt8äRI‹€«%x–k#úbÐb€ ¶}¾ûØùJñäæîkìÑO7¢ öUÒ Ã“éÄÞB®]#—#ro¤¹8Q‚6ã"z%¶ªGDÎ…t:Óð“¸ÔBO üâ ®»N»Ñ޶òa&è€õ!µäˆnffDgëô ¦Ä•HuHO‰›Ù‡ÏÛ+¶>~» ™é…˜™™™Ù­ÍÝÌÌÌzz/ ÅîlTB¹‰1½F} +àúˆ^¤ G»OñR!— Gôž|¦ûØá€ÛËàúˆ^+^MòÞîµûKä²îÔzˆÎÞ³½;p¥ ž'»ç¯Ó‰è ÖŒ½ÇBn ñצ:âhó ަ˕x"à 7¸î:Sˆè§Ú™çjŽèff· ŠŠ¾Vç,ñVÔWê¨ß‰JT„¢tΧJ"`f?Ñ”ÍWWlûîÿ1Óë0333³[›ÿñlfff=ýæªU‹Ué÷ò O' en”Fõ±ªÓ|ž–¸˜4Ezr8¢Çð´¹´h‡xJâáîó.É\:æ´%àóЉèEzUpo÷ç#ç§#ôb+ ºïºãšˆ.ž‡Ñˆp‚à*³ÕG5Û}–ë?Wé5º[ÊODp¥RîšvDoÄ“C Ÿªž$o4™g$éwL÷5ÌÌìæºfJ<êl©º*CuUe(ЧÄÍì#-¹¸bÛãó<…nffff$t333ëéwV®\rµŽSïõyžnȈÔHD:^¤†ìl‹>>¢GèIew=9\‚¹‚~ Ê]Rg«wÆEô mQ7¢#Þ®á èÎ×GôZ,Ï1Û» jt2ýJ¥Ñó×»ws2BWÑ#r{I¥wD•nÈŸÄ•Z|_ðó7|CÇ™bD?ƒ¸Ó“èff3§-ꨩë2TѺX•új•J(JF¥PªŠÿ¡nf6±þÁò¿~ü÷fzffffvëò¿ËÍÌ̬§ßܰ|©®V¯¿ÏÕ™Dϸ4®è‚gBl¦Ñ3ˆíêlp¸¨wD‰dN¯ItÄÛÔpDO^Š’K†#zÀþ"–ùyLÑ%^—¸¬{An/⮟œjD¿Úß¾4…·ôCÉÎd‚ˆžp:›|q§àå¾Ðú^×™™ÙT­¨¨«š:êì‹ÚSâff7‹xyå–Çý?…š™™™ÙÆÝÌÌÌzúíõýˆ†ê¿_Ï—è™¤ÑØˆ:Q¤¶®è÷³,Ävº]p¤ˆ9 GôÈ]¢;‰ž-0{LDßð¹îK_¬áÒÇ»+9\I·¥:秇Ø_èˆK£±Û»_ ]sþ:À©ÎÁµÓÜGt½]‚ƒ™É×2Ù,òH_ÄÓ} 3³^è«fQ—•¢©£~'¢ªKEˆ(DAª}–¸™™½YšÿeÕ÷¿û;3½3333»õ8 ›™™YO¿µzõÊ&òøùÃ=ÐjN¢¿^o%¬P²/”w Gô¢ØF7†“¯Š2W3n]p¼ &ÉÕڪῃ+•ô¼àÓGC×Fô"Í O¦¿°dlD¯‚]ä5Û¹Ÿà$p÷Ø{ ØY”Ò#¢‡ØúÄÞ¦B~»èÝDôÜ=„>Ìšà’óMæ«À½ #³Gt3ëixJ¼¨ÐWZ—+U—=%nff3)ÑÎU[¿ýðL¯ÃÌÌÌÌn=èfffÖÓo ¬*…cø 
%ûšÈTj`lD/Ä`=€Ä>Á `×M¢s¢Hm:¡üšIô„# oçþXçϸRKÏÓèÀ±*Dv§Ö#µ˜3Ñ/Z ,î>Þ®B;Éîùëç"8Ü3öCì,ôŒè—Cì=È䲆oKL;¢·áévr/“Dôvæ«‚{%é“#ºÙGEA}ÔQ5uÕºX«ºR—ª E‘¢ ¢’¨ñYâffö!”ðÖª­Ï›éu˜™™™Ù­Ç_‚˜™™YO¿µjÕ@Séµ›ñZ)öÑ )—ºçsªçŽèð‚ÄÚnD'ˆ-Ÿï>v¢’†’\dˆmR'°èÛ ú,݈^Ás’ê®ãx%5Ã× ŽVbd28\ÁÜ™L§)Ê'…ÆGôcÀæ±÷ØèŸêq·?åˆ^¤o”ws&ú #ºÞÎlJt¿ÄÑ>ipº¯af3¯ŠŠ¾ªÏSâffö‘±bÙü>ýÙŸ]™éu˜™™™Ù­ÅÝÌÌÌzúõ+W—¡xõf½^ƒ^„¼dÿ˜©ïSq.aC÷²+±˜2¢k¬T¬mM2QOÒªêf¢š¼ÔªêRW-ª¨êÎyâž%.IÒJį¶üöÖO¿§$I’tý¼$I’úÆ=÷ÜG”wF½Ž„_fЮ`coëô€Ó5ÕGÀƒÝÛïVÁàN€šê ³àùñDÄ'Igj}Þ$ú‰Š¹)õ^ªˆ¯- ™^†ø;Ÿ¨b.¸GÄÑVßyêÀ‘:¢ômïN+ú¶Žï¸oƼˆ^Á«uä—€µó~SW¼’Ïò)jøë:ø{ÜhDO~Ö†{¹ö$ú•’ùðTÀS;#ºƒSâ’$é†}²íàwnõ"$I’´²Ð%IÒ@¶sççš*ßõ:ÞM¸X›?%¢¯î‚ù“Á©^DŠƒs½bnJ½‚—kâ):½LD¼ÌFôœÉàþî÷<>AœËèFu8ZEôŸ‘NÍ\°ïºXÕ¼Aòxÿcüu½ªâæ¶”¿¦‰Š‘|Œ‹ ?k'»6^ã.Ó™å§I|5ˆ§‚íFt]Ÿ¹)ñ:ª2YO]4%Qa—$I7߹ÖA%I’¤›b@—$I}cÇŽû©ykÔëèón€-1ÑÏÔTG€/uo_уê`ÕÑ[ÁIมˆž­ˆ¾Þý:×Áiæ"ú‰ â켈Þfv{÷½âR•ü‚¸:¢ümåþ ÖÏ{ì×Ñk8PÇGô~^’]×Ñélç¾-:?­2UU1QM29YOœoE«iM´Úu´¢ŠhEVU0QÕ…¬ü…C’$-šàʶß™’$I’ô™x=K’$ 4†„C "ØÔ;<àL+«Ã<Ô½}¨ Ö1xýt‹üˆˆªŠƒÑÑ[ômõÞ‰èO ˆèUÅ'Ñ›h‡­ˆ30ÕFôà@\½û•ªÊ×è„úYümÍàˆ^S½œ‘_û´ŸSùÝšø:7¸~Ñμ;‰[®q—éRÊOˆx:‚#“wÑ—¿ ˜j­¹Æ”8UMU1ᔸ$I;tI’$-º$IèÏvìø|SóËQ¯c¾„C¥³ûÆkEô„÷ZÁàn€ˆ8X½ˆ~¦Ž<Dgj=¸jJ½fnJ½†W*â zx1¢Ñód]ÅÜD;|\GœŽ¾ˆÄLÌ‘>(¢OGÅO¾Úÿ#òµšüÜ€ˆ^^Šˆë‰èÿ_Mìãw7šÌ»®/¢Ç‘ÉÀˆ>†&«ÉÞYâLU«¨ÛUUѪ&èl›uTÑ"©G½VI’¤›f@—$IÒ"0 K’¤þä¾íDoŽzƒå{ q¾6’³úlMõaЉèUð^0Ñ눃ôEôVäaˆÞ}F7¢œ® ïlõx¥³½†«ˆNDNבAôÎaÿ¸Ž˜L>ª©®@îî­¼Šþ­ãkEtòµ:¸/`þ™ŽY/2ò¯­Žü^MìáÆ÷ÓÉÀ­×øüL«™¹‡Œ£SwFÑSÿ”x+ª2QM\®ª‰öD픸$IZ½®l?h@—$IÒpÐ%IÒ@zï¶/”R½1êu\Kïep±ÎœJ:çg[Ä®* Ù 'Ñ[‡ÉîÖï}g¢œò£è†ñ*âGuò0äDðÂlŒŸÑ“<9Uÿdúñ*ã1»F*®šz˜®*~<Ýÿ~VGÞ;(¢ÇUÓð×V%ßoU|•ή[f¼9CÞÁµ#z“É’ÜkD¿9½)ñ:j¦êÉ+­h]qJ\’$é†\Þvð;kG½I’$­,tI’4ПìÞöÅÈê£^ǧ8\ˆÓ5¹öZ8RW’0{&ú×é¼:Ûªâ2†Ù3Ñ{Ÿ;Ó"dw«÷*ãGuðe:SíWEô€3UÍ’Î}át]ÅGdgŠ8Q3wFz÷> "z?‰¸:¢×ˆ¾;aã¼ÇžUÄ‹ðé=ॉà)n"¢OgÞqíˆ^’W Ÿ%ãèTÍ«y;÷ hU“LML2-§Ä%I’Ÿ]’$ICg@—$Iýé®]š¿õ:®Ã‘&âT¹–¹óÇÏÕÄû׊èÕ ìc@DàÅ*øƒ":ùãšêa:Õˆ|¾û=ÏDÍáèN´Wpºªâht£:ƒ"ú¼íÜf"x•`ÏU0øù$¹kPDˆã:"z|¿OÒ™¢¿n ¿œ.Üþë":™¯x–ˆ£k‚ÛÉùã¬Uµ˜jM9%.I’4†.o7 K’$iÈ è’$i ?¹çž/E”×G½Žët$ƒS‘ÌFtàB‹ê-àQ€€£UEÓ7‰þ"s¡üB¼ð•î}_¬*ž¥3%|¦¦ï¼tâÇ5 ŒèÀÙºŽÉ|ˆÎ:31»M<ðqÅUg¤/œD¯hjøAÂÞþXÁ/Z‘w'Ü2ï±_DÏü~«Š'zk¿^ 
ïÎ$€;¯q—¦d¾<pl²Š[bL#zÿ”x+&r²j]rJ\’$i¹ÊËÛþµ]’$ICe@—$I}óž{Ê(?õ:nÀÑ N¬!gõ‚ˆ^WÑÎÌ{:·««˜ åZðѽog½÷¹³5|^¯Vð%`-@+8Ä: IDATóÝïy¶®âèL´ˆè'+âÔ§Eô ^ží€¼1AÞ5(¢WÄ \½%ü@üpá¦"z®‡¸ëwiJæËÀ׎OElŽœv¿YU‹Éî”ød51ÝÙ:}Þ”xPÔIøúW’$i…ˆäÒÖ¾³nÔë$IÒÊâDI’4Ð2 èG8US׊èÀÑ:br,Œè5¼݈^E¼‘}}.Œñj«/¢×ÁªÑÎÕïGgRàìDÄl€NÖÄIàÞÂk8sg³CEÉ+WGô Þ¨£Ü 1KõëŽè5ñ£:òáÞÚ¯W¡™Ìu¿&¢—’ùŸ1¢WT­Ó¯1%YUDLT5N‰K’$­f—¶4 K’$i¸ è’$i ¹sçÃU•?õ:nÂÑvp²•L1¨/¶¨Þˆà£Š˜žèQ½Xu¦½«€‹5¼Iôî›ß«"ö2 ¢Wð·ÜÄzX0‰~¡Ž|‹ˆ^¸?;ñ½¨œ®ˆ"y°·ð*x¡bölöND‡Wbþ$:ñf+š;Dt"â@Àóó?¾@Æ&«|Øð©÷íÿ×®'¢—|‰¸:¢·h15Ù™oUu3QM^jUu©«UTugRÜ)qI’$ݺ$I’†Î‹“’$i oîØñå¬ymÔë¸ %|R‘Ñ ÔWEtàx+8—ð9ènÙNõ©=à\E¼­ˆ^Q¬#{Sàó"zu¡Ey;"¾Òýü™*âH$_ê­½Žx1ÈÞÙìÝíÜãe:›Ä›u4wÄ ˆNˆøôˆ^?jÅMEô÷Û…õܾð“YZuëüšzÍ•Ézbj}kÍÚõ“k[a—$IÒð]Üvð;ëG½I’$­,^È”$IýË{whà¹Ò™Ï„©’<Ò™pïFô̇²»M|!×—à~‚Þé”̯5É+@»û¡È†}puOª{Û— µð'Ï%ùò§ýœ&'&wlß´ã–»7ܵe²ž¼ŽŸ¬$I’4Ñym/I’$ •]’$ TekEô®Û&‚MföÎ.&›ÌG3èEô-íd+98¢xl6¢ö”ä§À`]C<8ÑóÁéÈcs=¿^à{ÌEôGg#:L5iøÎ„;¹>“Ïg_DOØÛîL®ÏFôRø: "zìng}yPDÏ俦á¯REu~离›{7îØ¸Æp.I’¤eÀz.I’¤Å`@—$I­ 3Я’°¥±»dWEôò(ĺ··4°-“¿ëü;ùL&?f&xœà€Lž*Ékt"úÚ†x0»øb;òp  ÉÜWÈïÑÙŽ}²$’ùƒîº¦f öGôB>@ôÎWà™î¤zD8põcÌÝ3D;ˆÃóç2y©ÿc›&ÖOß¿eç† ­up.I’¤e£²¡K’$iÐ%IÒ@+m ÷~ [êˆ{»º¨'Û4å\Dß\`;Éë…Ü“™½-Û'Ú…§’NˆÎä©,ù3à2°¶eö¶dç‹íÈÌFtöòût#z“ñXFö¾çäLáÑÙÛɺ¦sFúlDo’gÊÕRxžyŒ3I.|ü¹·$ë¨ÏíÚ¸5·o¼k²Û¹$I’–ë¹$I’ƒ]’$ TµVæzŸÍ5ÜÛtÞ(0Ñ›y½I_D§éEôºIžâûÏÅ“Yòçt"úT_ÙsοЎ<‘ä\D/s½4ñ8}½]x ú"zäú¦Ú)É3 ‰øÙ ˆè ÛÛuÀ‚íÜ×µ¦ž¸ÿ–×M¬]éϳ$I’V(Ï@—$IÒb0 K’¤š\¹è}6×p_H¢¨'šÇg6l*ÉÎ~P2ö$¥wîy=“¹';å݈?.wÏK”þˆÏFô`_C¾D'¢O4M6 ¢—`[“UiEýÑŽwåÖõ··Vû#$I’´²­Ž÷üJ’$i©Ð%IÒ@5+~ ÷~›jâs™´ú#zC>Å\ÐÞX ÷?…ND/¥ôÎ=ïDôÌïdæ%ãïèFôËnDO¸¯MžÏä@Cî-Áèœi^—ÂÓÀ÷{khgy*çnOø Ñ™j(äS%³ó;+O7¢gðd)¥wîyÝ{û#zÂ/!/$L漈ÞD^ ÑKîmw&ËÛ@Ýö0Íëvæž¾ >Ùd>šWEôxª@/æw>VâY:ÓíTÓŸß¼mÝšzbØ?3I’$idVÕ[~%I’´d è’$i ju^ŽZ__,…йm×ë6åé¾)ðõ™<ðèFt²Ñ«†Ø›Ä÷Já+MÆ;½ˆ^àñþˆ>y±Ñ3y¦É«#zß÷¬g’=™s½d>Ú·FJòdCü‚þ3Ñ“=kê‰#÷m¼{rmkj~\’$IÒ%N K’$iè è’$i ²ú&Ð{Ö×Á—JRÓÑÊìTxu$_˜è™Oòuz=óÙ„Hi2ÞIò<0Qàq w®ù½mòɯº_÷™væéDðºödò½ÞÚ°‡˜½=Ùõ­‘Ì|¬$¯u×AD´·­»}û†Éµ‹ðc’$I’FË~.I’¤Å`@—$IU´Vk@XWu#z^=‰>;Þ‹è$½-ÛóuàM²¯?¢g‰w{½§²»-|»›*/÷"zžvæ«ô"z²·?¢Ï”Ü;/¢÷y "ù¨ 1;‰°½M©{=K>Y’ŸÓ‹èÍU“èÑN¾Ú›D_Ûš¼¸sã­û‡!I’$ƒt]’$I‹À€.I’ 
œ@Ÿg2à±äªIôªMé?Ÿ|²ÑãI>’Ä;tÎ=6<—Á:Ÿ|°IŽõ"zCìÉÈïu>ŶND瀆|²ù:ý“è½sÙ£|m¢žø»6ßµÅ'M’$I«FXÐ%I’4|tI’4èMVðdMFoò<ÚQö•«"z>6ÑË#%ãÝnD§)E’$IZÅÂ-Ü%I’4|tI’4Pzúͨ‰|&3ëd6¢Ó&ŸÏ¾ˆ^`oD'¢øbf9N7¢gá¹Þ™èîm’K$¿¢!ö$¼Ðܳþ÷n—$IÒªæ/,’$IZ tI’4è7­®ˆ½…¬½6WGôª]x6»Û²øb“å™§º·g#zÂî&¹’äÇ@xæžõ[¦'+_ÆI’$iµs]’$IÃç•WI’¤á«jâÙ’T,ˆèÑ»%Ù׋èÀ |< ¢'°«$—!±uÝÆµK÷P$I’$I’$iõ0 K’¤Š;"~VQûš¤Ê«"zÓ?‰%ÙWÈ’|`~DOx±ó©ØÑ”Œ[&×||Û¤ý\’$IÊt]’$IÃg@—$IUž> Qûrà$ú\DÏŒÙIô$hÈÓ Çšäë%ùA÷¾Û·®Ý´e ×/I’$­èìÖ$I’$ •]’$ T<}X¢ ö•øõ½$û voß—YÎÇ ì)É‹¹}í†ÖR.^’$I[a@—$IÒðÐ%IÒ@N UTɾD½H¾ ¢7™_ïEô„ûš,—€O ì»mÍÚ“kkû¹$I’Pìç’$IZtI’4PÉÚ€>\Q%_(\Ñ Ù»}UDv7YNgw}ÓÄšMK»dI’$i¬YÐ%I’4ttI’4PU»…û"ˆHžKÈþˆ^ÈçæEôç toß[²\Œˆ[׬Ÿ\òK’$Ic*º$I’†Î€.I’´Äjx>~}D§É|>3_éÞ¼w"ªrûÔÚ%_«$I’$I’$­&tI’4PÁ ôÅð|B漈ž}½{ ^XW·¦ªð)‘$I’zÒ_Y$I’´ è’$i ÈôjÔ"«áù2o½Y8‰þ ™n[³fj4«”$I’ÆUº…»$I’†Î€.I’J'ЗÄìvîÁA ¡·;/õîÓÏojMyþ¹$I’Ô§ê¾~–$I’†É€.I’ ú’ x>“,Á‹ÌFô²·Pf'Ñ·L®©G¶@I’$i 9.I’¤Å`@—$I¥[¸/©ž'É’|ÙˆÎsI~`ó¤è’$IÒU"Lè’$I:º$IÈ ô¥WÁs…Èô>ÖÏnš˜x"|Ù&I’$õ gÐ%I’´¼+I’4F*x.3®dg;÷p©”[F¼,I’$I’$IZ è’$i 'ÐG'à¹LJ€\W×—G½&I’$iÜxè”$I’ƒ]’$ ”Uå娪ไéB~²ª}.$I’¤ù ná.I’¤¡3 K’¤œ@½ žbªÕ*£^‹$I’4vÂ3Ð%I’4|tI’4PÐÇBE<{ßúÍ£^‡$I’4v" è’$I:º$I(ÒÇÅmkZ£^ƒ$I’$I’$­tI’4èã#)>’$IÒ<™N K’$iø è’$Ic.ð $I’4_P|,I’¤¡3 K’¤"@áK6I’$I’$IZ^•$I<}l8€.I’$-T¾P–$IÒÐÐ%IÒ@•g ôÍ ’$IÒÖsI’$-º$I¨Tôqá!I’$ dC—$IÒÐÐ%IÒ@Ae·G/I’$ ’tI’$ ]’$ ”>ð™$I’1 K’$iè è’$i pìY’$IÒX º$I’†Î€.I’4î¼,(I’$I’$IK€.I’JÒ ô1á!I’$-”YF½I’$­@tI’4PØm%I’$±·p—$IÒðÐ%IÒ@éèã#|.$I’¤ è’$I:º$IÈ tI’$Ic- è’$I>º$I(*Ï@>’$IÒ|Q è’$I>º$I(­¶’$I’ÆX:.I’¤E`@—$Is¾`“$I’ð-¿’$IZ^•$I•—£$I’$I’$I«Š]’$ ”é<Ǹ(ÎÖH’$Iƒ¸…»$I’†Î€.I’ £íØŸ I’$i¡’tI’$ ]’$ ”¤Õvlx]P’$IZ ÂÊ’$I:º$I(ÜÂ}løDH’$I„ï4•$IÒðÐ%IÒ@ž»=F|&$I’¤ÜÂ]’$IÃg@—$I’$I’´ü'Ð%I’4|tI’4PåÜóØŸ I’$i!Ï@—$IÒ"0 K’¤J•VÛqáeAI’$I’$IZtI’4P8>6‚.I’$-N K’$iÐ%IÒ@™•}L$>’$IÒBÅ€.I’¤¡3 K’¤œ@N K’$I ™Ï%I’´ è’$i 4  ¯ J’$IƒT¾T–$IÒÐÐ%I’$I’$-GtI’$ ]’$ ¤èãÂË‚’$IÒBá&î’$I>º$I(Ó-ÜÇET>’$I’$I’´ è’$i ð ô±‘Y|.$I’¤ùÒ3Ð%I’4|tI’4PTôqá!I’$ d@—$IÒÐÐ%IÒ@éècħB’$IZÈ3Ð%I’4|tI’4Pxúøð™$I’Èp]’$IÃg@—$I’$I’´ìÔž.I’¤E`@—$I%•sÏcÂ'B’$IÈ€.I’¤¡3 K’¤Ân;>¼,(I’$I’$IK€.I’Jú Ÿ I’$iâ[M%I’4ttI’4PdmÇD†×%I’¤½š$I’4|tI’4è’$I’ÆYÉÊ€.I’¤¡3 
K’¤Á*º$I’¤qæî’$I>º$I’$I’¤åÈ€.I’¤¡3 K’¤ÜÂ}|„O…$I’´g K’$iÐ%IÒ@‘iµ>’$I’$I’´$ è’$i 'ÐLjof$I’ˆR•Q¯A’$I+]’$ ô1âS!I’$-Å-Ü%I’4ttI’4PRYmÇ„O„$I’´Pxº$I’]’$iÜYÐ%I’¤€.I’¤¡3 K’¤"Ͷ’$I’ÆX© è’$I:º$I(Iú˜HßÌ I’–@™IÒ,©e¡ñ¿TI’$ ]kÔ $Iã)*òXÉ8Òªó]fêC¥ÕzkÛo½1è |c׎£ÀÖ›^qgd|œ‘›€É›ý:+ÁUý{gŸk·ù¿?<Äïì¼ðEŒ$I’¤eÈ€.I’>Õþ÷Þ;ýÍûîûÍl¦ÿ Ø3êõ¬:½ƒW%IZæ2ƒSâUƇ%y/¢ùh²T¯_nŸÿÙΗ_>5¤oû™·xÎÈÛÉ8I•I#zu%;ÿ›ïTû ÿþÄQ~ãÎíK½<­2Ñ ú/P’$Iúl è’$éºüówß=³ÿþûsjæò_ÏŒz=’$i<$PÁ pâB™„™åxÀѨªw28TµÛoÝýâÿûF !hßÄ?»ÈÛ(Ft輯¯bn—œA óÝógø`ÃFîY·i)—¦UÇ-Ü%I’4|tI’tÝö¿ýöÙu×]¿Ñ^3ñï€çG½žÕÂD%IK*Iª_?%NÕ:2ÓL¾¶û…oõr?U’CÛÉ%ò¶LNE°aµGôÞ$zF÷ÿdÌoÿê(¿wï&<]‹%†ôI’$©]’$Ý?8v쿺ë®ßj¯™ø·À8êõH’¤ëÖN¸\Á…$Îd”ŽCu´Ž|'‚CÉôÛw|þ`{Ô‹š!¶€[»}=ÉÔ0¿ör@Mg½É…[  ?B’$I«€]’$Ý”ýG^Ü¿mÛ?œš¨ÿÈ¿?êõH’´ÙNâ2pòlRˆÈã4|”åÝV›wÉÖ™A IDATÛwÞ½ñø‹¿˜õz—‡L†¶‡{ÿ—å:ç½gÀšáƒå£‚¹’^FôŸëDtièÜÂ]’$I‹À€.I’nÚþ£G/þëûïÿ­ 3—ÿOà·F½ž•*2;{£J’–›Ù)ñ .§JäñáL‰ëzEçoÒE‘°¹êFt`í"}›e¡êÿ‡yý§§NòÄæÛ _ÏhèÜÂ]’$IÃg@—$IŸÉï¿ýö•}ÿý¿s¡}å/ÈüíQ¯G’¤ÅA»À•HÎw¦Ä9¿¢á£Œx¿Eó–Sâã#a ÷y_sÂÙn6¢1/¢7>ž¹Ì“«úÇ#I’$i™0 K’¤Ïì÷ß~ûÊþ‡ú© §ÿ‚Œ8êõ¬8kIÒ¢ ˜!¹ä”øŠµèg$lJ8 „=ìþCé¾}á½óg¹ãÖUý£Ñ¢hœ@—$IÒÐÐ%IÒPìýõéý=ôŸ®9æÿHøG½ž•Å‚.I7¡¼Lr!£:['“üäG–vë—“Mûí;&gÞˆ.z±ZDA.Å)É›ˆ<q‰U>‰tBzD矛„_œ?ÃS·Þ5â•i¥)YÐ%I’4ttI’44û_}úß<ñÄï~òñ±ÿ=á?õzV ó¹$]ß”8—›.n™xçóù—gG½^‘%Éç=±òÄE`ÝÒ}ßñÓÝŨ.µÛ”L*ÏA×EÄ¢ï0!I’¤ÕÇ€.I’†ê¿~õÕ™?‡ß}{׎ÿ5à?õz$Iã+ ÉäJF\ˆÈ³$'#óWYÕA9\ªê͸Ô~{Ûô'oÆ«¯^õzµl-ñ„jlÌŒóiD§ûFÀ "áJiX[{)JCåº$I’†ÎßZ$IÒÐý.4þþáôήøÏG½že/³3¾%Ic,êSâuĉ†x‡Š7×,y`‹È À’ó–úû“Þvî0m@×Eº$I’†ÎßZ$IÒ¢èFôß{g׎þ‹Q¯g9Ë󹤑H(ÌP¸@äi"ÎQâhùa;9Ñ|4Yª×/·ÏÿlçË/Ÿõz¥Á2Gt ÊzˆÝ€´ÓÖ©!kœ@—$IÒðÐ%IÒ¢éFôÿê{v"ÿËQ¯g¹2žKªÎyÐ3—#ò|v&ÅOÖ…cTõ”|¯M9Ô°æ»_øÖ¡Q/Wúìb”m=ÁEà°q„ë¹fJ3êeh…©*Ï@—$IÒðÐ%IÒ¢ú]höðá?žºg{ñ{£^$­4ƒ¦Ä yºÊø°$ï9%.1êÀ¶¸Á¹ÌÕÑ%I’$i90 K’¤E·J~päc×öKAü7£^Ïr“8….­& ÏÏ /…æŸ%>=Õ>tï§G½fi¬Ewß…ÑZ—ÉE‚sÑ¥¡)QþO·$I’Vº$IZ™ïù'ßܵ=!þÛQ¯gY ƒ ÿ’>ƒÞ”x®”ˆ³yÒ)qi‰ŒC>ïX×}cÌÙH6z1ÒÊЌ˟oI’$­ tI’´dºýŸ~s׎üÓQ¯g¹éÉ­’®)¡Dr™àœSâÒX§¿I×FRq:É-£^Œ´ì5¾R–$IÒðÐ%IÒ’êDôÿÿÍÝ; Éïz=’ÔL;%.­,9f…m*ÉȈӑFté³Èˆ2ê5H’$iå1 K’¤%ùÞáö§÷ì,ùÏF½I+×õN‰.¼o—V¦¯ ôžÉÈ$‚O2¹eÔ‹‘–¯2޾%I’´ÌÐ%IÒH$|øßýÉ®ƒü£Q¯gœe£^…4®gJœªud¦™|m÷ ß:4êõJ½ r|ŽA¿Êd&|BÑ¥›ná.I’¤á3 K’¤‘ú£÷?üãoìÚž<굌­À|®í×M‰·“ãU•oæ•|Ý)qI7£eŒÿ"ìþ-ÿ 
Ñ¥•éî’$I>º$I¹?|ÿÈÿ]Û Ä¿õZÆS6t-7…ÈiàRÀ™ NgÆÇ™yø°Uåû¥Ä/¹0õwÛ_ýÖÇ£^®¤•+‚1@Ÿ5Iç/z#º$I’$º$I øþ‘ÿá»w’ÿqÔk‘t’$8‘?'9˜É.¬o½òù¿ü˳£^š$d!—ÁQ(À†€S ·Žz1ÒrÑbÌß#I’¤eÉ€.I’ÆÆ¾wxÿ7îÙq‰à›£^ËXI@×øI>þ· ëë?3–KgË&°M$l&ù˜àöQ/FZJ4ËåÏ·$I’–º$I+øÁá?ý“{vd:굌ë¹ÆCç u¼ÌtóO¶¿üï2âåHÒõZN­&¸…àÉ£^Œ4î"<]’$IÃWz’$Ióýчÿ,‰?õ:ÆÅ2ØvV«Að^»šyrûÁoï5žKZf–S@¨Inàø¨"¿z¹ýù–$IÒ2àº$I™óħNœØ° Ú·$ÕV*¶·y o‘ùùQ¯sÔ:GL[Ñ5AÎdÍïoûî_ÿ/£^‹$ݤåØêLn#8Nrç¨#«’ËòÏ·$I’Æœ]’$ ÕþÝ»·¬)e;pKÒl b[VÜl%Ùq äV`Û©­!zWµ»ãô_K¯‡I£”ĉéœzz÷w¿uhÔk‘¤›—e™‡R“Üp<1¢K’$IÒR1 K’¤_ëÏ¡>´sç]%bkÐîð¸%«^ïÆqz“ãí59{::=üªn¿aËôº¿–µàÕmwlÚñÓ£^Š$}&ËyFµN¸òÄ]£^Œ4nªh<]’$ICg@—$I üÉ=÷<[Uù/2ósïÀÝ:‰WswZ¾¢%}šŒ¿ÜvðÛÿ`Ôˤ¡(ä2#Zq'ÉQ‚m£^Œ´"h(Lgäà,Ÿ|ÉÑ$—ˆ_r¥ürëô'oŽz­’$IZy è’$i€fkf|r=ĆQ¯fÕ[ÞýµÌDðï¶üöoz’44±"Þö[£`DײÕ¸q!áL‰Ìr<"ŽÔo3S*­Ö[Û|ëQ/T’$I«›]’$- Ü—Ä!Ècá–¡Òªä+Ûüµñ\ÒJ³:tÞRgDר˜!¹”Á%àT‰<…Sù^’oSµŽÌ4“¯í~á[‡F½VI’$éFÐ%IÒ  à^à½4¢K+^À±­ŸÝ=ê¥HÒPäJ)èÌEô#Àö¯E+P’”LfJa& Ó¥p¹is¡Ýæb3ÃÙ™+œmOsæÊeÚoÿó÷>|8º¿;H’$I+…]’$-³—šc7ä‘€_%Ü=ºU­^‘¸»[ûRÝ~:ØßõB$iØråL ÷ t#º>U’4%™ÉB; Wº1üJ)œmOs®}…s33\hO3]n°ƒg>øÍÝ;^ßÿÞá/ï_CH’$iÅ0 K’¤ªy“ZAl'9aD ã¹Y|ßw¿ûþ¨×!I‹#ÊÊk舞p$Œè«RɤM2S¦›†+¥ÍÅvÃùf† 33œk¦ùdæ —Û‹Üµ“/NíÚñóýï~x¿]’$I+„]’$-P2ª«?–ÁöHŽü #ús]‹(ù`û ßþ³Q/C’KFÉÈù÷htãùÀ=£^Œn^’’R`º¦K›+¥áb»Í¹f†KÍ4g®LsjæòO‰//LíÚñúþ÷?´ßˆ.I’¤À€.I’¨¢É¤Zðñ ¶A|yرô+“4l™ÿhÔk¤Å%r…¿íž„÷vz!ºZɤɤÉÂt.µÛ\.m.4m.ÌLsnf†³3ÓüÿìÝi¬mçyöçYkŸáLj¢8\Š“D ilËSã!”ä1‰Ó¦­] CÚ h4@>°­Áhù!–ì/ P(Œ m“-’¦M'©íÄìØ±-‹–,‰’(qæ%ej ÅáòÞsÎÞëé‡{i[:ûJ$ï»ÏZgïß ?ÐåC¾ëp½çù¯ç}Ÿ›ï]j w ÑXtà!¾ÑœVÝP_è*ΖýHTäš÷ýKU}úõÿú_üúØu¬RE¬çüùŸp1<¯Ç"Rˆ¾BCD,†‹w‰ï‹8¿˜ÇÞbçñü|/Î/ñÌÞ^¼8߻ԱÑX tà.¿öô¯—7TÖ—¢Ò$:có¾ÿëc×°j™ëwúr)D *êâ}âCÅ~-bâÂbçæóxiqÏìÅóó‹G¨Ïc’ǧOÍ]»·Üøé{òí÷ Ñ8¦èÀùÍ»ƒ•¯Ï¨/™OdÅÍGPÐPE~éÌ¿úå_»€#°!zÄ¥ýш¼uÜ:ÆSQ±*êâ¤øÞ¥0|oâùù~¼0ß‹âÜ|ª÷‰{•yçö™º÷±'ßzoÄ…±ë€WK€RõJZÍùº¨øJd<·¬¾²MUq§±Ìáÿ»€#²AzÄÅð|½BtG§?yËö™¢p ЀC2²^Q‚q} Ñ ÑWÅ è¬ÂÎv~pìŽÈ†èykT<·]Éå,?:ý ^:XĹÁÑéë #oÙ¢p ЀC^éúɸ>*gõHE¼ee…MTÄ‹×ÿê¯>1vG"sˆÚÄ =n«Š‡sB!úo}ù ÑGÆsó½±KáÈä-Û·Þô¹{={×½Btމnì€ééùºÌuMD^—´¯h+?9vGgØÀôü¢Ì¸-*»Ž—½8ßžo ¬¸yçÌMŸ¿7bwìZà• ‡ 1M抺¦"¯˜N£8¬*>2v Gfx-®‘ŒÛÊÞŒñÝ´{æ¦Ïÿâ;ßyrìBà› ‡ q%æº:"^µ0YÙÕo]ÀQÉÜÄ;пVFÜV‘]›­"nzöËO? 
ø¨dRü2bøOxëÍÍ÷^ó?õZ"""""~¼psúÁ­7ÿ›’þños­á3Á’ó½¦ËÕ\s×ôú£^F\B¼hÑ·íÙóè¨×އ=ÁÊ/L‰>)Ί³§ÄG«³_jáíS@ðV¿0ÀÜ6×5ö·>cÀ|b8i|Pfh;0S+U'ØtöÇßlí}†]å0p ûmoPƒn«p¬ÂgµrƒáÕÎÜYíƒÆ“ï²µ³ÊÓÕÞf±Kf9£è€~Î<‰èqy³ý?|ÿõ7ÿÞ¨×ñ«d="""æôoá~öµKŒ>ND?òŽÇ8ßT›lá±@ÊÞ ko¤v³A¼[eò0†Ÿ3%>8~êh•s¦ÄÑYÿ8_4Ò6txïç“è°bPýö˜ÊãUç^SàáFèÌ&`øŽ3qåpgxÞ®ë‹m]ew;šÂÆjxXà ö‰~¥'mkñtSØÜVžèÁÖÖÚ×WÙ4 >Wà6£¶Þ©•o6âE›Ðp½Ê»Šµ1]í†)ÁÀýó>~‘oÿtýh×—;I÷o½Yÿõ7ÿî¨×ñeæÑ·É1ŸüèÖ›ÿš¥ÿçk~˜ÏÈ$ú×– ô8ßÔ,Ú²òÀžGF½Žˆzfݺ±÷Ëà•v…]–aŸsžøüš¿*¼8¨zGòVøüØô_ŒÁ'ˆÕ_rÍö̈€Wóp-àAÕ4bÐÕÊ#ÀÖjTØ|Ôâw´xóPkžÄ<ÐÚÓ†‡ö“ˆ5¼œ¬Ë«8ÚV¯V¼óóIt¼§â­ÆS e‚õîŽ}e¿Ñcäj©ÿã½úÖ4êuDDDDDÌ%è1'—¢³Î0ýí,!ýëûÚ_†ˆ³õszÄeÐÞ koúªSâïsrÀžíÄ:õGt~N‰_ ¾ÕÇ à€àA†¯_,€zæ%‰;æ¸fKlñ}À¢á£Z\Š©ï"®ëOthw5;Já!wì-b›¬ƒ¾¯‡Ôâ—zh¢•§ØÒÁÁšhíé¾ôÐ?ÑÀÚÞSá—¥rK¯èåÎ~ X»€FSÀ¤Ñ¨“Tv«pÈ£è×ãá½ñ:â2Wjù;?ºm¥¾ûÚgÔk‰ˆˆˆˆ8WzDDDÌI óÔmÑ¿¦ôóˆˆÑûÙ¦Mýã3Ÿ¬œsJ-ß„‡?ß#]C× ®*Ãúí/þ…®Ë¬Šÿ†Já[}ÓÍØ‹´è®oúâEà[_¸FÞܳ¶ètD÷0¢wæ—‚ëqgníƒ:ÑÅu3Fc_ѵ¹Ÿuæ8u¼7\ýsÃ}<Ñ”ѤÄÃ^YÛ?3ÜÓHMÅÏ5Ö„íé"¶Tû§§&Ó{Èþx;-¿ë¢wkõÍôªaÓðìsOkªâɆ²ŸR·vf_.æ}<—Åu¿,æ:ç7mŒí¿ñÃU+õ½#oüQ¯%""""â”ôˆˆˆ˜Óx1u ö§HWœç{ÉËzœomSÐã’ô“;{½÷Þºõó)q¼ ë&äsL‰/£ë†qõÌ)qà<açQ#î¼0Fôžq~팡/¿ ´æ ×À蜈n‰Îõç {bGk¦ “El©hOƒ·c~Öš{ º¹Ïö m­Ó¥hK­ÃˆÞáÝ=ôp‹¯è®"ÿ²4z·ë¼¢'i­à tkJx²ýˆ­Õìk¤ÇŒ7^Ü;y6™ë+¼_`Y"zŒ”ý×´j¥¾{ä¿>ê¥DDDDD@¾AŠˆˆˆ/ñ£Ûnù›†ÿë¼`ûÒÒóþq/a¿¿ìZ®ëzq)é/ž¼ußîÝ£^FÄW1}ï­ËTÆV@ŸÝ6}Ž)q>Ÿ6ŽKQ…æ}Á&†àÃ>¼%q÷\טuœñ{Cp¼sýt@‹vÙìjµöao7<Ú™»™îî\wW´µ«züPgí®Ô­-~ tWKý¥-Us…Å{µrc‡Ÿ´=i4UíIS÷w°¹šý’–jÄ@æ}”ˆó€õ¿÷úmÔˈˆˆˆˆÈ7G1§ÞvËßþÏ òÁÑ#¿·ìZ®O@ó¨ßßyÓþé©Q¯#.O?Ù¹³×ÿèíånÛÓ1üK¦Ä‘–‘(g¨ðB[õòé3Îá£1qX7ç5æÑîŸzLp¼Ã`VtÖ®ÊlD¯ÞÚ^Í_7¬ëÌWóˆÍÖïªfÛž¯¿]«Ô™¥6nèÌ“¦NOUk²£0lªø¡Å³o)™÷eV^'ŠÑråÿýþGÿê¨×—·|csúÑ­·üÛÿÇ…úø†c‚Dô¯ èq¾õÇ–|û¦½S?õ:âÒñç›nÿÆâ¶Üò+§ÄÍ2ÄM 'Æ#~kÈîîåô,>oð%Ýæ‰Á0†/=ã±–ß?Ñ[k—‡Ý®LvTôdµWjů÷t°«Ú9€ÙVñ®ÎL ð#6ë-­vïTD¯°¼ÚOï0LW3Q©ÔaD?(©'óà½i_=»{"zŒZåŸ|ï£ÿƨ——¯|SsúÁ­7ÿmIÿûþ4Ÿ9ý×Èzœo èñëüÉÒ¬xií gM‰S–!ß4‡Û¦Û+2%£PÍsû˜¤Ógœ#úëÀ=s]cóÄ Üqæø 3¶ßcx ­Ùe´“3"ºí'[t»ªºm­ïÌ®ÊéˆÞÙ»*L´p Újá :Æ:XbóAËM}ºš 
Þj¶[þYgßWñ£ó%¢c>|µQ^/Š‘’øgß=rô/zqyÊ7D1§‹ÐÁC™DÿUþÒÕײ|,=Î÷Æ÷¶ýÓ1êuÄŵgíÚ+ïn…ºÌ.˰ϙgE¦Äc!éÌs->&ÎŒè:6Vüæþ¹®©èɽú̈.´öÏ+ZØeÏFt3m³ÃðTk¾)¤VÝ‹²ÖW³«Ã]ÇĶjïê`¢¥0ÚØŠ×u*¢Ã‡]õõU~Æf›+{«ØfùÑξxÔROöÈ#ºáãWæLô5ÉþÝ#oþިחŸ|3súÑ­7ÿ;–þ·‹ô鎑íÜ¿Ô_Zv-Ë3ç‘Ç–üÞm{§þ|Ô눯ÇPönX{#µ›Ý6½[‘)ñ¸ÜTó\gféŒ3Îu¬'¿X`ýœ×à'[ëvÎÞgPíŸ3GD¯•iFôάB4ý<°áÔ$z5ûm¶ŸžDFôN¼îÊ¢ZG|ÜV_÷yD‡½Õl3~¬Ã÷«b~L¢ÃGÀUäu£9ý‹ï½öÆ_õ*""""âò’o„"""bN?Xuó¿+ë½hŸÐ|вû\~wٵ܀çQ¯¿ø÷WìÛýÏG½Žø¢gÖ­{¿ nQiWüÊ)ñáϯzÍ£Vá¹¶2cÞž]ðÙ˜xÞ°aÎkÌS-|“/Fô·[ZkÊ0Ép;÷݆IÃsÕ¾ÑÒ¢joì\wU´£š}6Û;Y9ç”8Z¾ >5ž)ñˆ‹ÄðÂIs²œ1Y>Ñ 6ÎuMOWsÛý-ÄmMuÈ•]vZî*Ë¥Uõ [vÔ]õŒˆÞâ)›-L?Ô™×ÀKZ ªžia©í—«ý ÅO«yÈp¨£ÞméiÙó ¢ƒ>&=æ‡]ß{íèïŒzqéË7?1§Þ¶ò?ÿO£ùì‰ègúÝ«¯á†ô8Jü_¾eßôÿ7êuÌ'{6¬]Aí–A?sÛtŸ>OõM ±¨p° Í‚g;ùN¡—@{ÀÛ/æ}œÃRÁ1ãÅ ùß‘¸4ØlùÁª[ùþ‘£_ø³q>$ GDDÄœ4ª’¸ù|yoçîÌ ÇyÖ•:/ÂÇŸü!ÍŠ—ÖÞðåSâ³AÜ^´ŒÊ0ŠgJ<"~"îêóósÉ6Zf˜ _Ñ;ÇÄá™á_)§"zSÄmÕz¼ºàI¤Ý™ø|]ìl¤WÚjÊÆôäÉΚ¢°ÃUSàI‰ÝXÛ[¼§[:ôB)¾ÆUn䮬*Ò£àlÿ¬¢M å¹ÖÝj•úŠ)Ó²'.Ö=œ‹a)pŒDô1™´ê–Ÿ~÷ÈÑ/ì*ñu% GDDÄœl4ÚôYÖ•ˆO0—uD8Ÿz¾po9ðÐWµ'Xù…)ñÏ'ÅYY†¸‰çYfº_1%~êáùð—QD,4w÷áÙó i8Yn›1ÇÄ`Ë—ÝÙGÏÏØHgEôUgDôv™®òöRØÙÁ”*“ôjgÓ ÍÚÛÈ“ ©ZTU7ûÑ ›t¸«Ü®R_…2&ѵøL0ž3Ñãb+ò"i°¸0X$nÿ'koûÿúó¯ý×£^WDDDD\ZÐ#""bNóbý”a<ÿ.ψžÙÚ8ïüÕ'Ð e7ž5%NY†<ŒáçL‰Ž3÷Yâ謌ˆ¸$îCÏÍT¿«"N–÷gÌæ~aŸÌÖ/^ãµcâ…™Ê"ºÍˆÕM©˜éж50i˜&‹8Rí_6hK…½ LZž¢°SU»؉™FL€§í<ß_íªA÷E7–Zž¬ªT9T¤û$^èìUÈG@ÓŒxXb8aS"z| ‹%/* µKÇ—–R—ˆÁ"©+jÆe®•XÄðYÄØì¥˜å£ýDDDDÄ¥(="""æ¤bÙóªr] þtž‰>¯¾q hK›¾wÍí*í »,Ã>ç<ñÓSâ{`Ý9Sâø‹ïìÈ”xDÌSwõžt~éTo•‡ÆÄ^`Û®5c…W¦…Ï]#i•í—Õ˜Àž®h› “T¦ šl¤#ê/е¥³÷öT&;Õ) ;]½‹¢ªšFLv7x¢ƒç‹XVÍLÏî×¢±ž¢ø^Óպ¯¡¼Ø¹®2¼†4òíÜqÁ †!3=Ä"á%Eƒq©]Ôpb±8¹DªãE'iQa¼§2&ûŠÙ(~*ˆ/ùò|±~ è± è ãOu™EôœçÛ»'?ø¿QÅžígÆñL‰GÄ%¨ {ŸpýH”SQ¼™1[úb¯æŽè·÷Å+Û fnŠ´ªºÝÙˆ ÉÓ]=Ñ'ëh'ÞjÐË{p™¦ja§ªw•¢³“ä;Z4ŒèâùF\×V>)öbS–Õ§«}o§*º§¡yµ£ÞŒkƒ5;É>RãÀI O"ú%«‡XÒà1©S9±¸øä’FÝ8:±¤q7®Òï•1q%pgO‰I÷yy®1Ç[ú"""""¾¶ôˆˆˆ˜ÓùyIëü›çŸ—MD—óº`DDÄ×&îSsxP«‡g¢ë ˜-}iæ8[|ÑõÊÀ~çìˆ^ÖœŠèÅLXÚSí­§":b²7;ñ–¬­’÷40ikªœšDG;-íéÙ;*ì&*n Ë»ÊǼ¸«eyQ}¦âuE<]+÷4p¤C7"»Ý¨ì¸˜·q‹€Ø}¤fÄk‰¯h¼ˆ1¹..¥/šY$N,iJ·DŒ©t‹E¬hq%c ¿7èÏþ¸,UŠˆˆˆˆËGzDDDÌIÍãÉç+@ƒ¯õB.†yûUˆ…+¿©"â2UðýR† ý4Œè­½µ/ï}YDõœˆ^†çÀw5ÔíPöTΈè0ÙÀ[8Z¬mUìîáÉΚª…IUM5xREû¨LT<]`Âöá¦hy­ú¨)^R«®³xÁÕw•Â3µ–u 
õHÅ7XETvQØy1ïãÆ‘NÈÆ‰è#1»u:ã…ºD,*:±HtW4šG3‹5ã¢×“Ʊ”Ó[ï†Q|éHóHzDDDÌÉžŸè§ùªË%¢Ïû/EDDÄ2Œè:<èNGtC™±¶õÅnÁ&ºßì‰#­ýsЧ>Twž·YÛ¨nìë¬ψè;zðv ¯3ÑáÝ4 ì¦xG­š*ö$xh{ÅÓHVñ måÃFºÊ¨-…—Úê;g#úÝ…úZ…åþ‰Ñï\ÔùEã–Ødýüè ÑuqÑ`q)3‹D»¸Ñ`‰T úBc¥,>µ;Sa¸+À¢.ý¢)ÊÛ#"""âüK@ˆˆˆ9 4ÿ_òU—Å™èéçq¾9¿©"âòvj}¦V}Å50=±»ÌÑ ZÕ—Ž \ÏŒèZ~Þ°¶ÀVäÓÝì¶™hðÏ;Êk e¢Rw7h0ýyDW™”ëþÂ!æG IDATmï`OnçþT¯øæ¶úÃúF+Ñ^®µ®mÐså®B}½…ë †ÛÂ_´›8·>Ò@“ˆ~.‹Š,nh« úR»´af±ÔŽI^"4V4^`ñìÖéðk£øüÖ±$ GDDÄœ¼@ ›Ð˜OÑ¥{&zÍèqž©Ý¨W1ZE¾sQѳ'm$¿-t öWDtáU}•#3®§þ}IZcûyf#ºåýÕÚ,±ÃxZh{_þå ã•¢2Qaª‘'±f#º§P™,xßl é‚wTëé^ñMµòA®î(¶ôªÝ­)â…Î县Ùáë ô*üD0êIô>h0ûÏ—|D°¤ˆñ‹¤n¬hf\tKÍ,’‹%7}•^K4Ü.†¯Éåu¹¯m¼ç7""""œLIL~á¼j‘tt`Nýû*ÒšŠŸÆÜÓÀÃÂû;´¹ ·dŸ¶µ­WPk^.°£¢©FžTeš¢‰Š¦šÊ¤ä˜­Ú¼£CO«°‚Êûê² šW[êí¿\Ñêz§£»²Àmó!¢{¸ûø`¶l.¸ˆ~jëô1ðxSÚñB7.Í,. ±Hª‹ ½>ºBú<ˆÃð׺ø‹1Ïå"""""’ôˆˆˆ˜“ŒÆ ú.ሾ¾±@ä7UDÄç ºo¬èÙûHoh6¢†ç—O¾Ñ[x³ãœˆŽÖU #zƒtÖ¦"Mx6¢7  /˜4ì¢h'ö4x¢¢áç“ິ¼£ÂÓµ°ÂUïõåkZpƒŽn-ÖËëíÊ/:êÒ‚VUø±àÛï.~Ñéˆnñvîgn>^äÅ…®/µKJ,‘Û±¢ºXÒ"i¬‹Îˆâb81ÞÆGö ˆˆˆˆˆˆ‹*="""æTA -±ÍÆóO€+G½–ó*[¸GDD\P³“è/œì¸B…×[N¢Oö »dvž{M7 Þš1§¢;œÑa â@g6IšOƒ¶ „y´³à]h6¢OT³»ØH?ÃÞìvžîV¶o÷`y‹Þ½êÊ"½Z«¿y*¢ËåvËó"¢Û´E ÌY“Ú_ÛÙ[§wã*ƒEE3K¼Xò˜ ‡ú¥0^N¿&NoŸž(¾À™š'ÊqÞ% GDDÄœÊÂ=PðJð1ÐÒQ/$b¾ ôOwDÄ$´f¼ñ Ç­+ñæ6€Aeç—EtÁŠ1ñÖÀŸGw8ÑñSÀ½o‘ÊÁÖ¾OÒ„Ñžbo­Û/íì*b'fòvÃÌöaDg‹ðn`GO—F·ºúÍžuC'Þ3å»ÞܽښU=Ê»Õ%…r{µÿBâ;í&ÎAR¯B-üúˆ>;!NvR|I¡ëSKÍnN7^hzhIÑY¯iÅçØB="""""â«K@ˆˆˆ9Õb-¨=ÜÏ¢¥˜cˆK#¢§uFDD\$Z3.¿0c]QÐãU0Ña—4wDï‹·œŽî ÏD¿§zÑEÝÜ›èEÞn³¯ÀCEjZx¶˜F»wbíéäm‚=Ål=dØ ÞÑÁÓÝJõ› Z1¨~»QùEçzs×*º­¡yÏÔñ"ÖvxOAÛ/Ú-œƒ Ý7zepµh7¥ohÇQ·¸¡Ž!-*ôzÒbNž(¿^Þ@õ"""b~Òé/&1Œè—€Zö—"æ¡w@CDÄÅ#´fLþ´º.^>õøvÚìšûVŒÙcˆ×Î|¸H÷ñÌðß©››áTú ¶ ®èÁÍÏv ¦ŠØ^о"¶•ÂÞ"mlàéj¤= ÜÓÀQÝŒüf¿hvmT~Y`EO-ðåÐ+è›–þüÂݱ¯ÆÐ¿s¼Ù·éŠ~¹{q3vûX³dÅX¹rYS¾±´ÑU=i ýùgDDDDD\Ð#""âÒu©Dôœç‰¨PkõR""æ5¡5c¥|h×+0/z|;«™šû"Ý46œ”~ùœÿcf#z4òÓÌFtà °´ ÏP¦ IÁT#¶7è€àá¢aDï¡gŠÙÜX{ws´‘V"ŽöŠn–ݪ”w1Ë‹xǧ#z‘½ÎòÈ#zÄyWê©S1Ÿ% GDDÄœìKdìY,Õ¥Ñ#æ¢*X…®Z‰‚O©|äïÕ¼=ó oœüˆ×N~À‘“ïóÚ‰÷8ÖõÊ#"æ=ᵋÔ|hÕ+ÏŒè-L~iD‡å}qçDt¡uFONGôRx¸ –4b%ðtA“ 'Ñ·6è§#öIlè=Sĵ§ˆ»liÄJÄѦhe#Eå#Á²F¼\Õ£tˆNÖ:‹~aîVDDDDDÄ¥#g GDDÄœüîgðð,ôc°@ÏDÏ\ÍåEÃ÷¸VÀ‚êJëJ´µeàŽ¶kié¨Ù "âÂ’×.róüIuWËåEÄ·`Ñ{0U`ò —Àõ}A[yÉâŽS7â~ÛOÝ_àɶÖÝÀÃ¥°¿VèKßl«Ÿh¤É 
SEž´µy‹á@55ÒÓàX{‘·UëéF^Õ¡×ûV½^\°ë2Ä/1×7ÖñNî0÷"þ9æ÷.â]ŒˆˆˆˆˆXPÐ#""bN¾ôNI^jøL°dÔ ùMùûB\v$*Pƒjflš¢Òº£«•[fº–J¶Vˆ˜w䵋hžŸQ]†õ° 5“=}yDï4¨¼Ä]*÷cž0¾_°©ˆÇª¹ xXâ€íM½¢;øP1“M5ò¤ðþ=XÄ#T”ôÌ€º±‘÷[+<ÕÈ·Wëµ¾µ•^E¸^méÝÎ\×3'Z¹Å¬·ùg¿ÑîaÄbå­¦qþ% GDDÄœt fÛÙxþ 0¢Ç|2ÜŸ¡ahkå˜Íg]ljÚñq;àÃÁ Ǻ–™9Îpçâ1®ˉJ ƒ×öÑ ûZKÏ î‚aD/0Õ;8g÷ÁuýƒÊç“ë÷Û:^ßàBO´p‡ÄÁjoêÁšV<^Ì$x7” TtÖƒ*>HÕæžÊ³­ë†FÞ‡µµš§KñêÚñbOþV /I’í«Šün­º¶'[ù„`c"zDDDDDÄÜÐ#""bN6ºÔFÐg-Á|†NDwÍ`ÍÅ`: ópJüx­œ¬•“6Ÿ´>é|Ú¶|Òh¿æÖéŸamãº^"zDÄB XÓ/ze¦órŠžî¨0Yaw æŽè:7¢±ÞÖ!ãõE¾¿çòd‹W#ºÚ¾¯k[xÊŽbO£²Ýª`=`|´¹çòl‡×ƒ ?XáùÒè[ÈηZëÅF\ÝU®,…:kicÑ©7l¢òOUøË÷NFœWy¢ç]zDDD\~ěϴ@"ú¥·ÀEò…­Ó‡1ü³®ãÓÚq¼ðÑ åƒvfÎ)ñ ÍÀác3¬Y<ÆòL¢GD,‚ÛÇzzi¦e¹ ÏëZ³£ÁÓ´ (ç\sm¿ÀÀ|¾ý;€ÄzÐã¶7ÕûzðdµnGÞ,ù`­º¯'ÖÌc’&f#ú6TZ‹(Õ?£h“¬Ã-Üüü`ÅÏ—FßêÎŒè…ë:ÜïÁ±Ö,m\@õ…ÍNDˆˆˆˆˆ8KzDDDÌIçLP]j$–`Ž#z-¿Ö%ºÀoîëo>½p|1Æõ‰è ‚Ì‹^ž©Ü€x¸ CøË#ú˜hNšÏ·Ÿ}|ƒÅ㘠îC~²Z·m.…ŸÕÊ=}±®³"My—mEü´-l*Ö£oj¬Ã†{ oîðá¦Ñš®ãùžXÛV=_ÐòªÚoà³Î,é¹ô:ê‡6þ©HDˆˆˆˆˆ€ôˆˆˆø*èkîR=ÿ ãù `|ÔKùU.õ¯Ã—m~¬v|ÒÎp¼V> ø´Œz©Ì Çg¨ã†E‰è Äê1ñòIs“ΉèÂÓ…/Ftàêq¡ó¬g·(hƒåÇ=Ñ%?ÕZ«€NEôFº¿…ŸÊšÀÞ#±µ G«¼¡‡¨Ü×§¼ÒºÞ:us…盆µµãù^am[y¾¡ÜPe ïÌx¡¹ªÒ}(Øbø3à.Òý‹8/J¶pˆˆˆˆ ="""æd|©¡Ÿ2Î<è î«0Ï·NŸ ¼tbHDˆX0ÄêEðòŒ¹Y…C6ëZk¢X{zÅ[9'¢¾1&˜1Ïxvû÷á‡Òf#ºàÞF~ª›è‚G ëz°¾4h»ñ>ÄCXU|_Sx²«¾¯§òŠj½³SùxS5Ï•†µ}¸WtgW}¸ˆ;4(øDÅã…æªŽî]`k…?+‰èq™K@ˆˆˆ9Éè2ç˜×Ýóa]¢3´ œì:ŽÕa_¨[§Ï7§"ºJŸåýfÔˉˆˆ¯B¬îÃ+ƒÊm.ÒlD¯òöÎìkăœóÚ‹á}Ð §ÏP~(m@b³éö}^:#¢¯kÍ“EÜ&ü‰a¼CÇ{æêV~³Íð§‚¿rocDDDDDļ‘€qÚ¸à¤aѨr¦¯<€ž­Ó//~6€Dôˆˆ…C¬§¼rÂuUƒmèÌÃèWFô[ð´ÏˆèÀFà °YpWn1Eºßâ WîhÄæÎì+°Õh?x3Ò¡ß#xk]O¼6p]ÝO¶æTD_×™'q5‡¬òÍŽúQcwÒñƺ¦S}GfÒâOeþ€xšL\>\/¯wüFDDDÄÅ‘€s—ÕîŸ3,BšÁõZNoZ‰ÖbàʉZ9áÊg]DZ¶å“®åãvÀÇ™¿¤$¢GD,,‚Û«yù¤»U‚³"ºá@Olúç\tU}s€??C H›1?­øAÉwöÐáÖXp?EOºzu#êÌ^`›Ð‚7!=¾x¦3w÷(o´®«{æÉVÜkü’Šîï*‡ë«9TÐíý¡ÄÄgË5•ú&fÒðg"="""""./ è1'_Î/”Úcó)¢ïþð½Q/!Fä¥Ïx±¹q,OÛ#"¯^Tôډηq´ Â–Ö_ÑñÒ>úÖ@gGtă«f£ðx¾³¾â'kÕíØÒUö±­Â›Ov°®ÀóÀ}ôN·xºÂ=Æ/6EëkåPëe=Naµí:XŠt¬qYÞÒ½.´SøOþ —ósȈˆˆˆ¸¬”Q/ """æ'Ù—÷‹¤ö˜¤ŒtÇHxéxË[3ݨ—_•uÛx£ wØüôÔö´Ò!àä-í£oI:ûqm=P`mÞÞº¯W Ç›ÂI{ ÚRàPA÷4â‰5¥ðpC#>mÌÊ>zº ÕÀKýÂúb—Ø ôr‘®)èÓî!}ÔЬŽýŽí?ƒl•ój~_FDDDÄy—€qéÓoó£Ëól÷3Ê Æ1b¯ðÖÉvÔˈˆˆ¯ÊºmQácäoœz¸VonÍSÀ‰9.ZÚ‡5Àãg>ZÄFÍFtä5½âw1ïQ¸·WϼµÈ{„67è©‚Öé9ÁšRô2pC¯p 
³²AÏø¦ÍKMÙÇ{b#èÅ"-·ù¨ØcEú ¡ÜŒyUÒï`ÿ)P/Ø=‹ˆˆˆˆˆ˜'.ûÆ#""ˆß*‚söV›¿Ñu*Ù¦3fhÑcÔ^9Ñ&¢GD,$Öm‹Ä'ÆktfD‡:ó4sFt–Œ‰µ>'¢ëŒˆ.³¦Wüø ÷"^Ž´M°x  §¸»/¯nŠ^ZÞ+ú¬˜[zèÙF|ój¯a汞Ø\̳M)+€ ^"x·§²xéw«ù3Ñc^Qž£GDDDÄy—€qñ\ô>û£Ìþøm?g@ßv''¢Çh½r¢åh"zDÄ¢[OEt‹§íày8>ÇEK‰µ‚ÇÎúHb#p@p{_:~§ˆ{Š8 þ°‚ÝFôg$­é£—À«UxUp}—¹¹‡^(ò-ÀËÍÿÏÞ7ÉUåižžs¯{,’BØ’5!IH’**)’´n+ëª2›w4obl^C›Í˜uMÛŒMOUÛt÷@®ä ™ d&KŠE$BR„äîç7D\éÆÕõMá±?f×îâw9×#îòÇçúžB¿)’þÖò[eJ÷:ôEa/Y:[*X­P×?¢ØëИÎn Á=Á5SÛ”#xŸPg—!Btl¿Wúú”vßß•/9ôh(~^m éùAèO¢w¬§¤øu}c²_ðZˆ.ÅÃ¥}%gd=©ä3×CtëÇ–žOò;¶+•>°âáÂ:©¤;:É++•ÞOŠû-½_úž­_Ž­xË)t6Ù‡%-åû%¿›¬È"D°wñÁ8`?Ú7!øÚT ÙV4«o+"TLûÄîyUˆNwîØf' Ñ`w±îïJ—-?.ëgÕæô½^èÝ.·5×µŸIݵÝŠ‡:k!z’ž(’ÏX:_H/'ëµµý];¾Õqú ö)I·•Éý$ßQ:}`Å Kï'ëY¿,ì“âÉ>Ž3…t[XgJù!Kï$éBbLtl»à}96:`7#‚9n¢ógQÞÊ.#ѱýNÒ;ì.k!zäx¢¢‡ôÝþ@Ò7-GÍ•öwúÕºSÙ/¨¢—¡e)>é 'aé|’~˜¤×,}7ÉH«!º'’ý™C’e‡‘þjʼnd½[XkûõÂ~19þP*=$ëT’nët¡ô¨¥w,ý‡µ}°™Ol5>ì$³¤÷[>ìšõ©\›†­_ßNú«!z!:¶Ù‡+}ºÚÛîf&eÝ?Ÿ|5BO8üÓjó@ñÌ5é¤ÚCôn7éYK¿¬oLö É~C’RÒƒ¥´"ÅgR<îä/$OÖ+Éz5ÉÏ”ò’.í“rœHIgRh±°RaéÈ'q")ÞMŽíx½ ûÂéÑ$}’¤Û-º¢¿méŸ$ý›$¾Ñ…íaÞ`öÐ[mVø~ Á›ç‚7ƒðQëõý«åNm*ƒ ôÑìBRP‰ŽíöáÊ@Ÿ^¥øv‹ŽwåkYñdDü´öÀwú¡“ ]j9¬Û±žk†è’žsè IaëÁRÈ:µ¢ëœ¤¯’ô£d½*ù™B>YH®†èº¯(t¦P,t…•nëÈ'%ßgéÝd¿hÇ/JÅKÉúméôX’>NŽ»$}\(=&é’þ1KÿU„èö>l¦iÃëiñQרk!xs¿Q!x=¿)¯mk.7·u‘¨@/!:v€“+=}´B%:ìvÜÓµ¯Iþ¶B?®¶gé;}锤¯[[ ѽ>Dwòs…õ[IaÇ}¥Â’>±ô˜“¾vèl’~”¤W-}Çò‡…ô`Ç>)é^­V«Ï¯†è>\ÊYºÏ¡¿&ûE+~ž¤—-ý¦tz\ÒÉÂ:.éÃRéqKHÒ?*‚Jtl¹”y€Ù#@ÌÚ¸°|THÞ¶ÿ¨0|T¾ÙÝ¡·ÞÛ‚7ñ¶ã'ž²¢&‘b5@çÃ;l«O®ô1!:ìvÜ3WøjHOÅúýÉ~è3)ηÖíHß³õz}cÈß÷jˆž-Ý[* ­†è+é)Î$ßÑ“}*I÷•öGI:ž’ÏIš/JK‡JéT’ŽZúkJ~QŠŸ%ëåX Ñú °îuèd¡ô¸åßËþ§!:€Ý0 ÓtÞ¹$Btì¢ÀîqÏ\ákv<j†è>#é«–£:éI?¯o´ü}KoHÊ¶Ž¯…èÛzØöå*D·õšBOJ>“¤{ËäS)tWJúÚŠ²kw“½PÚgR設÷‹äHñ³Ò«•è)¥§,½›’NX:YÈOÉþå^ëÎ#»:`#& Íë5+ÇÛÂñYTƒï–Jð[©ïªt«%üñؤûv³Eút’$Ñ;¶ÛÇWúp™1Ñ`׈¸§“R–â‰FˆþDOþBÑ¢]ëo“õ³úFÛÏ[¾¢wR”ªBtùŠg é[¯%é‰$Ÿqı¢ðg–)¾q(•ò¢åÎZˆ~§U…èþY²^–ôËäô]INÖ Iï•áglÿ6IÿÒÿ+Btlyï €™#@ÜŠqÁyµ}Òn×›¡x[˜=¬|/…à#í–ýÆí;ì˜aÓܺu*ÐoEõ äa[ºÖÓGWéAv GÜ3—Ò@Š'z­Ú÷¤sÃBôRzÑ7…èz¾ªDW\Ñ?rÒCÉZ–ãóBzÅÒk’ž(T|¡ˆc¥ÓÉGR—S( k)Iei±ÖûûEŠ—’ô“ÒzÙI¿(œžµü§Âz@Ò{Eø»¶~méŸÑìVT–¦Ñì‚}Ô¶Qݶ7ƒöaç6o.ZŸöØIÚ3ÉõÆ?é¾9fÚçdÝþyõ‹˜žu£;÷æÏØ2Ÿ¬ô’œã-?ì «!úgWùIɯZú‘$…ôØ5é¯sR?¤»GéÅžõ³½Tm´ý¼¿Ðs /Ÿ "}ÖƒIþ0+^ȯ ¤×"â•B~w |W©t¦¯Á].t! 
´˜ìÛz_—öýÈw†ü^NzYY?)B?$ý8eÿÝ ôFáxzz§ˆôl_ùW–þIÒ¿Jú÷Zý¢&0{æ‹«˜=>MLj’ðu’±ÐÛ¶K7¸ Ñ'9n/„à·rÌDû3ú†¢cG8µ²Z…Nˆ»DÄ=sÉŸ_Ä“a¿j¯†è’¾úkWakUt¤ûáŸfÇßW-?/ÇúŽ¥{ ç³ýðR|+Ù§²tª½’­×rø‡…|2·—*Îö5¸«(|1b¾³¢Ÿï(ë)µ|²Hzy°¢¿’¥We¿”#ÿ®°Ÿ„Þ)•ž(ÿ*VCô“ôïDˆ`—à“4À8£ª”GäiÈãæQç×óaíÝ ÇlG>í17]/ó>a£ª=‹as°N­ô¥œçŸ4ìwÏ—þb%Ç“zÕö$)¤‡{òÉRqFº9D/Rü rü4ìzˆþœ¬7"â;–î*î‡ß—ôH²> é“z%Â?‘ãïeLŠ#eø‹¾âΔ|1ç˜/­Ûû¡¯ ùëJq²H~yõãdýH¡Wå⥿-ägrèíBé¹¾ò¯$ýcHÿêÕv<>EŒ2*x5Uaaª­7w.5Õža˳ ²' ´o嘑Aö ¯7î<ëö¡}&ªç—ÛêÔÕ¾B¡‡æ;ÛÝÀ"tg7ù\/ëÑP¼j­†èYz¨ú°t|nùîú1–R'ù¥^èÇ!ý°¶ý¹ß”âiKw– Âï‡ôˆ­O%}R¤xy ø‰Â/Iù#[·•á/úŽ;‹ÂW"»_:ßÞÙ‘¾„ ¼¢VCô½Z¨x9;¿ž¤ç¡·K¥ï ”)éŸ-ý«åù-}"à †™$<¯婱½9ª·U£·]·¾m’¶¶ÍÇw+Çl¤|’ëmôšÓ^Wb ôY©~w ѱ­>½:PHz˜v‡Žv¬s½Ð£’^“ôŠ$…ü`?üa'åϾ§yXÇz¹¢'ëÙ¿7Bt÷²ß“õ¨­ÓYú¤¿â<“ÜAïì%­Ž‹Né¶Œ%–J[KƒÈê‹ÿÀnaéhGþòšâá,¿–Ö*Ñ%Ýד>íhŠ=éY­V¢?%éŽN õ²ß•õX‘t:²>*ä—sø§ÙñƒB>=;Ö…Aè`rôsøbGéÞžò©Rö Ç‚’þ*é‡!½*éGz-¥ô÷9ç_”Ö YñN ¿Ðµþ°%Oö‹ tÌŸ›ê†Âõà»Zo ɋƶæ~©¶_=ˆŸ4@okç­è£îuTx=,äÞÉ!ø¨öLÛ6̆Ê2_PÀô,É–’¬ÒR7Y]…RÒ‡æ‹BK…´”’ºC~ÃBÒy+[ Ø([wte]‹xXÒkÃ3j‡ IDAT²W+ÑC÷^“Nw­%Ýß<¬c½|Mú±¢Q‰.½¡ok5D/zá?IzÒ)>SöG)Åß[úù ü7…t&;u”ô²hu õ ¥Ó‰¾ôIa)r^´}RÖ+–^-¤ VCô—³âç)ô7}ëÍùäÛ·ê9€[E€¦RWQLŒ ͇̇U£×CôúõÚÂô¶¶µÍ›Ë“?îãÎ3nû$ûÌ:wüÐkdïf±’Y2Ï1Vÿ®UˆZÏ[ZLÖ| J…– k!yh(ØûlÝ1/ëjÄC9t½;wIÇ{¡Ó!!zWz¹/½–oT®Ëòsr¼¡'%ÝV:Ü¿#ùÛN:YXúA²~‘C/8|ÖÉ—•uÀÉDœ/¥ƒðÇ…Si©›¥å¸^‰>½–äfůKé;]ûêÖ<[pëЕaUÝõp»­ª¼ÚVŽYoVª7Ãó*š6Ùq#Û`ºpßdfLô=ÌR$¯~ ¥c©›”çÃ>P*JE,$¥ÃeÒ"?}ÀºcNÖJÄC’_Ój÷ìŽ*DOúH¡‡¹´~Ø›Ct;ÞÊ¡G,.ê‡Þ–üTJrÎú I'éYz!…ÏQ&]éåX,-÷¥¯ é””ôåò)9~¸®Ýé‡ Ò/-½¸•Ïö¾éÂ3G€¨k†·õ©m ózH>jÖ­û$èj™7Û;®‚|?†à£öŸäùPL=LJô]ÂRH1°S.¥~7¹ßq¤vq°Pg>e£ëôúß0iý%ø·¸eaÝѵË^ĉœÓOSŠ¿×Zˆ~-ëlWú@Ö·‡¹´^é‡^ÍÒj›ŸNŠ·²ª]„Þé©”¤œõ~²þN¡×³ô|þ"KÑIq¥—µXJ‹élÇz¨~¯TÜr®Bôë•èÒÏïê¸9N;ìHèixèÜ Ð«êñ¶JóR7óRíUèͱқçVß6×þQËÃÎ3É9§Ù¶™!ø¨Ç&½ÆÈýLÈ·ELˆ¾M,eYýBêö kõ¬˜·‹C…ç—Š4¿˜®wnÝxïÜÝÎv`épÇÖ5g…ôSË/iõ=õ]=Éeè}[4+­Ý¢{-D}k­Ýýp¢{°¢¿(éõzÞòy….•IËýÈóI^„>ëØöBïÒ]i`锬ZúyúÁ½ÝôsI„蘩”E:fŽPi °«õfåy[ˆ^χémS3¨Ÿ$8Ÿ¤²|Ø1ömW‡à·°ßuT o%'G Â.¶»%»ƒäÔORî$ ºVž·Šöü¡²(,ÕºNOZ à ĻޥÃÝdõrD8~V…è!ÝÙ¹Ô!ºüt²ÞÊß’´T:Nôä·zºX­D/I/Zþõ@ñL²­ˆ‹§•žò|aß6Èú¬c?ÖWþ‹ÂÇ$÷Cù´ì¿³ôÚR¡ç·ìÉ€ @Œ ˆë¡ù¨±Ïëy©›ƒô¶*ôIô¶vŽ ¹g‚OÔßÊ56-ÇîváÐ ,BôKYŠœœ®V¯#厥ÅdJ*R±¸T*ÕºN¯þî°çY:<—¬k‘U Ñ­£ý:Ö{’m×¢ëiÛÌ‘¶¼Ô‘º&ýÞÒw“å½+Ç ý&¤§ 
[=Å…ŽÓJ?ò\‘|Û` OK§ÇÃù/)|¬'÷,:Þõ’¤ƒ[ô´À† š†U¡×»[¯{^½­ ½m,ôzE{5o»v³MÍå¶õIîmÒíÓœ7…àµÁ±#Úº¯„U(4ÐÑ-e[ƒ$÷JGî: -Ï[åÁRÝC©(“µPX]¯ûB:ܵµ’ånÇ´¢_ ¹k½+鱿qm!º¥ï¤ä?æ[:еé…Þ”õ¬­"B*¤ï+ô›¾ôt)»¯8_8•ƒÐ ùö<ð§§Ç{Ê.¤ã’uMxŽÍaºpÀìñA$@j¬ÛÆ@o ¾›Uæõåzîõ±Ï§í¾}X›'Ù6ÉcÓì3Í~›i+Ú°îsÿ± KƒØeUÔ·Ðuz’ÔÙÖF°G„tx>IWsDH¯[ú­¾¿ãZ¨èX²ôdó¸ÒúQ/ôZH¯TÛþNa½•CJqpmló7%=›V»†ù“¤ïÒo¡o—²³âKY"Ò•µýT×éÉk9Þ»£´:ö3[ödÀ šÚÂôfØÝ Á‡U£·uÝ^4ÎW-K7_·m¹m½íÆÙ áðNhÃÞáíÛ»ÖÂó¶1Doë:}.)Lö‚Õ=X‹·N ÉtÀÒá9ËWC½¬øu²_ÐêûòÛz!w¤wl}»y\Çz¥¢×ÆDp5D×c½ð’ž[ëÎýIÏ'é÷Yz$Ùw…â³RZêGºR¤86ˆø¬“ü­oÍùã-z °…L:fŽP×V…^_®r©½’¼ ÐÚÆNo†çõí1Ö¾I¶oBfl–BŠä™…ÒtÀÞÖÒœ¤•°"â×¾¢îIêJoKzªy\{ˆ®§“õ—éPÇñx¢;I‘õv²¾ëÐï¡G ûžì8Ý‘n»f­á¥; ½×IzvKnf„DMmÁys[³‚¼Ù½{}œô¶ ½-®}ûÕ~¾÷=-ìbÞ¾x¸´¥r¡L ‡ û “%9%•Z}O7¿Ým;CXK]éDOú8K¿µT…èK=éBz+µ„èåÐ]VıŽt¬'û9~ö÷ éD½•ÏØúc?ôP:Ñ—>dÎNÒ}[w×0iü.€=lTè:Iuø¸JõQ!û¸ëŒk÷¸óîF“|AaËîÝ{ã9Ý›/®¾t°³ôÜÁÎâó…ïí$.¥Ä;90Œu¨c=dÅá¿)éª$EèP?ü¤7Û+­W’ôZcó¶ÎZ:¯ÐB™ô”¿–µ”’ô–¥ï”ÖGYQÞ]j°h=º¹7›ƒ]£Œs¼-5†zsÞ¶o¤M;ÉŽ Á7";®M˜†»ÖV>´²º=\ ?šG…èÖkZ×=¶·uVН$Í•Öw-ýJÒ!§xØá?[zêH©÷–zl³ï $9Ó…;fŽ0ÊNû0b+ÝU ¾x“°x±›ü@rtsÄ[ª…èƒÐc!½ÑvT)½’äŸê¦½øb-Dï–ŽgCú½åEqÏbÒ›÷”z@R±Ùw›…ÏÆ`GmCÖ£±ÿ°õñ ¦÷u¾Áó°Wè8=š¬!zH‹ýÐã!ý®í ÒñráX¢[ñ¸ÎÅZˆÞu<)éÍÇ•{;ñ°¤Ã[p?€$)òŽûÒ7öt@S3o{lXX>,h¯ï×¶< á3Æè;?0‹Ý”uD©ˆ·%-K×Cô'‡…è…\…è¹Úf鱤ô¥¥¯%uïH¹{O©;$-mÅÀf"@Ô5CíqÁxÛ¤û·=6ìÚm;€õçRzLVŠÐ»!]–¤z¡§buLó›òËÉú•ê•èÖ£)éÌe¼y[á'-u·è`S ÚŒ«0¯à¹6Ecy\ÐÞv­aí™´ÝÀÞã ÌÊb×~Ü¡áª]Ò\?ôì°½”^,¤×µöž»k]:^hqA~Vt˜ƒíbó@Ì: 2ª*¼š×ƒñfp>.Lª7¯7ª}“Þ6È2„ìM‹ÝBOŠ~(>ô$…Ôí­†èo´TX×MñË;ß•â C'¶´Õ°ÐR{×íÕ¼Ž7×›Aù´áyóÓ¶s£ûaâs€½l¡“ô”Ã9"Nj-D—Ô퇞ŠÐ/ë;'éêá"þp§ôô\ÒýâÝ"v€Äÿý° ÐãÂóQÁù°JôÁÚ4*\5®ú4íÝè~ŽEw Þ²€M17—ô¤­•qÒÒ%i­]úžÃoÌ['¦ü—£)ʹÐ3¶nw£`3•ÛÝÀŽº˜Fm½”{mªÂðÁÚzj¬WÇ ÖÎçÆ~ÅÚ\ž Ž ocÌãÓî‡v¬B¼ Гnåõ€¼-\OZ¨'ÝÔWÕÛ2®­“Þ°k%¿Ã`bim²¤dE’¾)¤ådõK),ÙŠÒÒAK‹’º’žXÿ¶™\ÀþD€ÚCæz¸ójy0b½9ÕÃó6õ@½Ú}ñ„ìIR²Tfe§¸˜¤Ë…|-)"I$Í%««Ð¢¬níPK:´6Æ @Ô5Ãëz ]ÑSc}Tp^Mj¬7Cùf%ºDˆ¾­‚çkÇÈü,ØsUâËEÄ…Bd¥[êØ>dÅëJ’n[ZªÄy×€}$LW ˜=t@eTzµ\[^Í«ÊòfÞ6&z}_ÕŽO=ú$!ú¤ѧ`ž+€‰Uox“¥"Éq!…®ÖÕ$õ’íÑMÖ–*ñY ëÞz_'F>[‰ÐÔV….­¯H¯‚ðæ¼’7kvå^ßo#!ú4Á8!:&bI…®ãýd](³®Øêk]§·V‰[–tÛúw±þÝ/€Ëæ&˜9t@]³õúr½½žWÛT[nëº]}ê ¢ï(ž£ÂægÀ,´U‰;´R&­¤PS%^JºCIwÜ|f2<Ø+ÐãÔô*Ä®‚îzxÞVÞ ÑUÛÖ Þëݹ g¢Àþü­°·¬«—úÉúº -O]%n‰*q: 
iXzœ×CîzØÝ¢kÄ|˜êü©ÑŽÊ¸ã' †ÆI<=;Um˜­½oë‘ín “ªÉ…4m•øÑöwy¼ž{ÿ°0sè€6õp¹Z®‡çÕ¼nÐXVyÞ|¼¾Þ<‡Ö®Õö¡È¨t—}Ìs³c? ÌV\|°£ô~Aˆ` ª¾!ƒÂ>ßV%žìCªW‰‹*qÀÖ"@ Ó¬@5z[%ºjëÕ|T˜Þ×êë!úNÁè;F ~˜©4bǽI~ÏÒ£ÛÝ »“-¥lE:_X+Iq¹°ú)¶;ExNŠɱr±vh!Uâ6,œù£€™#@ŒÓ¬F—nîʽn\%ú0Ux^h}ñBt˜[Êòwz9þ)î3!:€†V‰;R! •Éš逥ÎÚ;(˺}݉¼öÖÊ«oßø^`· @Œ2l<ôúcÍ®ÜÛ*Ë›ÛëëU5z[åy¦Wª½96:!ú&  ÷#H0c)ô§l'‡Þºª8>g¿ké±ín€Ù«%®BúÚ¡•Òî'Å ÙÙ¥­…ZjŒ%Þ^%îu3ØnT `æÐã Ñ«`»9ú ¶__à «Lo®·…èU|Õ¾¶ãêѱ«ñK‰Y+­¥«ÒÛƒÐÓ ¿uÍq¢´ÿTXOnwÛŒg­â)4(­‹)|)9V )’å$—v,(¼äõã‰ßv} q­ÍK€uГ¢WAz½;÷f%ù e{s<ô¦Rë+ӛݺ×Côz÷ò„è0$‰¬%½=žŠð;RÜgùd}{»Ûì7*ñ+],ä^²z…å¡UâV!éˆGÖŸ1ÖÞµQ˜ À´Ð“j Ñ›av=ìeT@ݬ\¯ÎW¿Î°nÜ Ñgˆ.Üw~˜©®#_ÊZJ²lýIÖ·{áwCùØœü¶í§¶»Àn·6ޏŠpNŽ )ùR±’¤($Éî$ÅB’Iq°v袬Åuos¨€VÉæ›B˜9tÀ4& Ñëáù°yÇuß^ª}Lôú˜ëiÈã„è3ûüþw~˜µƒ…:gû>§KEò…œõ® =ÒWz_‘uä· ‹¨©W‰'i9….ÜT%ž´è¬%[GF’tDŠ#ëÿšGcvtÀ´êÁrµ\¯ V}^ïÆ}\°>j[½Â}yßÞùÎc*k0cóÖ¡pü±/+r.“/Ä@ïçBôí"òsNo%ééín+°™¬ëÁx.RÒ¥Z)¤(¬äˆ2Ùs ²u¨v肬…›ªÄ÷õ;ؼOÀÌ nEs,ôjÞV…^©wË>IP^_o¾^Í"DŸÔ>þ(œ}ág™²”ŽqôË^|á”b}¤(â e“¾%ù¯Ws>ÚMé­‚»HU%nIÉZ."V«ÄSD’•"Ü^%®µ*q­¯¯^ ù+ À¾A€؈¶±ÈG…èúe7ïkõ5«þñõFCôi‚ñ}¢Ø«Ž$ßóyÄgù 9YMEœ+r|ÉG¤“ýwI~³Hzv»Û‹ýëz•¸•q¡´V’u¥ÈCEaÍY:(épíÐF•xP%Æ"@ܪzx^ïÒ]º9D¯ºoo†âÒda·‰^}žNˆ¾yöÛýîdü,0s‹I‡%¾”ãBJùl–ÝɾSöYK'%=áO²ãX7¢cvUâýd](³®LT%n¹±¶ö¶ƒ¿°oE¦ wÌ:`#šèõåfˆ^èF^U”·…âõù$ª±×‡}p2Ëö”»;zàü²~ïˆ\”>;ÈQFÒíEöW9­…èò©kÊǺ‘Þ(¬ç¶»ÍØ™ÖÂpRVÄ…ÂZ)BW’Ý+©J­U‰[ZÒ×ÞRÒJºcu•*q°½Ð5*D¯‡éÍÁ§éν¹¦ä»íøÜáSÉqO/묒îî:Ñw€z•¸ƒÂþºÌºb«_("IKdÒú*q¯V‰ÇªÄ»/b˜9tÀ, Ñ­õa·5<lnoÛ¯9Žzóü„èØøEÃf[LzôÛséôŸV+ЍþjŸQÖ±H¾»#}–%%ëî^öy¥8Ö‰øu‘üÂv·}¯©^ +Šˆ¯--ÉËE¨g… ¹kÇœ¥Åµ*ñêKj…µ*ñ:²€I fmTˆ.­~ÐßV1n­V”O¢ÊÚ*Ú µ‡è걄 ÑÇ êØ_î(õÊsé¿¿}5÷J–®féL'âî¾}O:²í8ÖË:oûeýªHú›ínûNf­¾0­¾pÅ °Ï—¡e[ý‘ ©P¨^%®#t™°uЛaXˆ^¯Fo Ñ¥!ú°1Ï›µ­·…èõ®äëíf߇è{ò¦Œt´ãÿTJÿãíå|m0ÐÜ\R &Å}¡t¬£8²’uW/ò9·ôË$ýív·}+YR²”B*”¿JN+Iê%ëZ‘%w Ç\ -†tX^W%~´ýUŒ`€i…3/ ˜9tÀfi Ñëáy5o«$¯…^7,@—n¼¦Uç«Bô¶ëVÇ ¿÷wˆî=xO»”ƒŸ¶ÎÑÂÿîo”o¼¹<øª—•d…’>•÷ö#ÝUJg#l¥tg/âbdï&ïú½O<9)t)Ù— ÅJáê1—ŠXL)–^´ö—ß·¯ ¿kaxð/`×!@l¦IBôzØ=ʨ¢êþ½½:_ý:úq'DÂòžºŸÝÌAa ¶Ö‚㹿],¾úpeð³³-8[.ôi8ßçHw%Ç™”-%ÝÙ_TÎÇKû—¥wNˆîµ ñµ.Ô/Ö+z¥u5EdÛ"<'ÅäX ¹X;´u›·­?ãÚÉHÅØ1]¸` 6Û¤•èª=Þf\÷íUxÞv\}üõ¶Ç 
Ñ[ì™pK’âö‡çÓÿtoÄ_>¼g¾‰èu¤ÏÊÇÅ1+N+ËJq´']‰Çœôza¿¸Ym²W¿!•B9).$ù›Ò±lI…å$—v,X:¬ÐBíÙµéÆ‰$Ék•âüÅÀtÀV¨ËÕr½2|Xõy½{÷qÁú¨mõ wBt˜Âœýøãóz¼~ïÜ >û²¯så;C>^†N+KN>:–ûŠ»ñóÂþÁ$ç¶ÖÆÕ°TJ—-](×’u-åÈ…]&{AÊ‹–©úÿËê˜âG¤8²þŒA6„°Uêèõªô¶*ôJ¿¶©=¢%™;‡éNÛ¯c=zOéGï)cp-âãåÐ7W~p }Þ˜—½ÐÏ*’}'éweŽg‹¤ )|)9V )l•I*’5¡ÃIZ¬]b}•xj~ï  !›¯Î`æÐ[­m,òQ!ú°nÙÛæÕr}Lôúc ѧIqv}Ⳬ`¹èZu-N!I÷®n¯þ¬‡$ݧÕÅÈÍ*q]NØiÐ[©ž×K ¥›Côªûöi+ÏG‰^õLˆ>™ÝÜö=…¬haoÀì ¶Z³½¾Ü Ñ ­Á­öP¼>ŸD5öú°[f¢ïfûḎ°F…èõ0½àÞJwîã¤Æõ'=vÒ}¿„í°¥bø—¢€[F€Ø.ÃBôª ]j̧Ѧ·U±K7ªÒG£‰[‚_Øè€í´Ñ½9Nú¨jtiõu¯¿6Ϻ!z½»Ú ÀF ¶Û¸ÝZv[“Ñmû•ºy\õúù Ñ×Ä.jëžÇOhaºpÀÌ v‚Q!º´j·UŒ[«åÍmmªà½­¢½P{ˆÞ}_…èiw4€™!@ìÃBôz5z[ˆ.Ý¢WÇ·Í›WÚBôzWòõv ³§Bt*ÐwËü,€†pkïaÀ† v’¶½žWó¶JòÁíÃtéÆë`u\¢·]·:~\ø½gBôÝ86:`§™$D¯‡Ý£ŒÊ€«îß«1Ñ«óÕ¯3¬÷}¢ÀŽ•3è˜9tÀN4i%ºj·×}{ž·W½íñý¢ïÔví?æg[°SÕƒåj¹^>¬ú¼Þû¸`}Ô¶z…û> Ñc¶ €ÍC€ØÉêèõªô¶*ôJ¿¶©¢c'Hë‡ Þ'`s vƒ¶±ÈG…èúeo›WËõ1Ñëm4DŸ&ßQ!ºé6°Ï vºzx^ïÒ]º9D¯ºoŸ¶ò|Ô˜èn\cß„è±CÚ­l*Ð0sè€Ý Y^_n†è…Ö‡àV{(^ŸO¢{}Ø4³ Ñõ‚ߨ è€ÝbTˆ^Ó›Aã­tç>Nj\Òc' ÑwDؾí `‹ v“a!zU….µæÓ˜&L¯ªÒG£iׄè“¡ïü,€†ÞCpËлÍFCôæ8飪ѥ¯•Önâëölˆž¥þv]€í@€ØÆ…èͰۚ<ˆnÛ¯ÔÍãª×Ï¿GCôHWsþb.¥;·çú¨P~´0蘽4~v¤úøçõå¼6Õ—kSµÞ_[ï×ëצj½yü ¶^?¦7Û4îÜI?ìÙ–…®F|S¦tt9ç3Ûq}¶è€ÝlX%z½½­Ûu©½{òqݹ7× Ý\‰^ïJ¾Þ®avl%ú•_‘ÜMéØrä3 NǶòú0š©@ÀÌQØíÚ*Ñëèõy[%y5oVŸ†,×+Ð-ש_¿ÙÆq÷0é½n‰s½þé«_JR×éØJÎç¶òú¸.Ü`k ö‚IBôzØ]ÒÛÂò¶îÜ«°½¢7¯3¬÷]¢¯ ò79ÜY‰|V’:)]ºsì]耽bÒJôú4*DoV¥7+Лãª7Ç_oNõ6Ž»x‡ÔP IDAT‡IïuS}ÕïUJKÊ^¨Bô®Ó±«Œ‰¾õ¡ ‘·¶—.ì耽¤­Ûôz Ý Ï›ó¶à|X5zÛñ{*Dÿ¼ßÿJ’ ûPŽ4·²V}^¦tl™° öš¶°zXz½[ö†‡è£ºzߌ}š{Ý4./ŸD,KRG:Ù‹UˆÞ%D°Ýœ©@ÀÌ öª¶±ÈG…èmAù¨îÝ›c¢×Ï·Ñ}š6í£—sþ¨Z/íC X¢3&ú–0¸À– @ìEm!õ°=4ïõ?lnKòbd/^ÉùSézwîŸoyãö‡¨AÒø?ötÀ^6*D¯‡éús×…{ÛØéÍ}Tú,CôMùàè7ß|óv¬¶2ùP ß¾’óiIê¦t÷*Ñ»:`¯¢×CízØÝˆ×+Îëݹ«@o Ñëm©·i’{˜Õ~û¢×»ri0x·í±Â^Pøö+ƒÕ}nuLt*Ñ7£ 71è˜=tÀ~°Ñ}TUz[•z½køÜ2ÕÛ2i¾Õ!º×&¾Öûý° {>éFˆÞ5•è€Ý‹°_Œ Ñ¥ö°»­Ú¼¯õUé£Bôzõùn Ñ×U;¿qùò›qεýÈ•Áê˜ès)»’óglŒ2è˜9tÀ~2*Do†Üõêó¬ö.Û›Az¿åøzûNѯW××ß]^þâ|¿ÿ§QöBétçåUwî÷¢vtÀ~3,D¯W£ Á›èêÏG‰>,D¯·o;Bô›‚óµ)Iò;W–2Á ºùè7ƒ|JZ уîÜ»:`?j Ñëáy}>* oVŸOÒû å:õë7Û8î&½×Qšá¹´œ¯ÍÓ/]úc?òÅ NÔí8ÝùÍ ,IsNÇ.3&ú†¥õ½ê¤¼áá«€› ö«IBôzØ]ÒÛÂò¶îÜë•ëÍnâëóf[êë“Üä÷:N[xnIÅÕœã/ËWÿÛ$'IÒ\W>~ymLôùDˆØÐûÙ¤•èõi\·íͽ¯öîà›ç6úf‡èn̫м 
Ћjþ_Οÿŵˆ¯'¹˜í²”ï&D°YB¦3G€ØïÚºM¯ÚÍð¼9œ·ì£ÆDßê½Ù/øºnÛU Ï%•Ë9Ǿ¹ý׋ÿpy08;éÅ’]tîº8|"Is.îZä/†´€mE€À mc‘ ÑÛ‚òQÝ»·‰>«}š@:ts×íõ1ϯ}¾6Uz§Ÿsúׯ/üŸ1Åõ,u•Ž_ä%©[¤».ç|nÊ6ïwÍÞ˜ÿS`öÐXÕR ÑCÃCôjÞÓðªôúqÛ¢W¬õãžWݶ7ôRRçí+W¾|wyù7S\KaóN÷]äS’4ŸÒËT¢vtnhV ×—ë!zv½­½²7CôæØë›¢WÚôj*µ¢wêÓÿõÕù__Îùë)¯“ºò=ƒ$©kßµœó—±z¯ÁÔŸ7£›€€õ& чuçÞ¬FÕ½{3D¯‡émáùf„èͱϫ½­½#©+iîJÎúß¿8÷oýˆkS\KÉ.\œ¸Ø_½›ÒÑ•œÏ¢vtn6*D¯¦f÷룪Ï{º9Xž·U¤·]Ò{hS¯gnއ^½-@¯¦î'×®-ÿ 1A[šÒB*îýæFˆ~ÇrÎç#bp ç°O%e*Ð0s财׫ÃG…èê҇uïÞ6z}ª·eÒ }ÜãUÅy}¹­÷¶½+©û‹K—N¿uåÊŸÇ\§MšKŽû«Ý¹Ï¥tÇJÄùÑ¿…s0è 7.D—ÚÃî¶jó¾nîÒ}Xˆ^¯>ßì½Îß•{5¿¢ÿË—_½óÇ+WÞ›â:•t ÷_è>–V»s¿q‰°]ÐmTˆÞ ¹ëÕçYí]¶7ƒô~Ëñõ ö­Ñ¥áÕèõJôuéÿù˯Þý`eåÓ)¯£,ù@*î»0œ”¤nJG®J„è ¿ °ÿä©ÿ¯ŒE€Àx£ÆD¢W!x³}Xõù¨1ч…èõömV%zµÞ Ò×uëRù¿8÷ö_–—§ÑCJ]«½YÍžkëm]Ç7«Ò¯éÿé˯>xëÊ•[}!÷_ì>‘¤Ò>|MùjŽ Dl:t¦×Ömz=Ðn†çÍù¨à¼-`¯ß ég¢×¯Ÿw ö/´U¦×ƒôò_n1D—¤Åt£½ã´tM¾ÚX¾•sí¿ °ïXÑÛî6`ï!@àÖ´…ÕêÐëݲ4‹½-øoVžO3†{½ Ò=I*þ „èÖ…èii ˆý¢Sƒ49Ç¥ínötn][H=,D¯w‡ÞD$õ´>˜®ï×ìÆ}•èÍ}ÆßÞ«µ­Ùý|³‹y©Ö¥ûFCôc¢§Yý¼Ctu.Ð0{èlL3€®/×Cô*ì5&z[¥w3¤®W¥7ƒó[ ÑGßÞVqÞkLÍñÚ›]º_ÑÿpùÊ©‘Ïæ.¦âÄ…Ájˆ^8ÈVôrþfÚsØ;ÒµL€€™+·»ì¡Õ ¸WËY«rÖÍýpFœÓys¹MÒð ¼jSuŽúrÕÎjŸ*<¯wÅÞ«­·ÍÛÚVÔ®$é?õÕ‡’ôÌÅûÆÜË:!逋ù“¥"(ìérOq©#šæ\»¸7¹öÑѾ”Nnw;°Çãwv ­‡£v Ó‡=Övæöaû4÷µ†;lŸÔØv}úóòòùCE¡ãÝîá–ëÔ±_ >žOép²»!å,­òÜ´çÚMœ+´˜ˆÑ€šw¿ÿúÿëv7{èÌΰJôª ]_u^U~kļRÖ¶ç–ó «Hož§ÞîêúUzµ3þcIzþàû[Ú7ÒTÜ¡Ÿ?:\¦’¼CË=Å×û¶iϵ{LòÝ `_ùËv7{:³5.Do†ÝêÅÛ´íWj}(ß<ÿ¨½Ù¥{¥œ7Û8ªº}”Ö=¤øþÁŒ9ö&‹)=p±?8µT÷%kaòµˆ¯»{:DP±M€€MA€Àì Ñ¥c¢7YR¿e[›*Œn«h/4}ˆ^¼Ù¶fUü°ªóqn ÑÿËùóŸHáï<8u%úB*î»Ðœ:\÷öü@¡kç»ö‘iϵÓQ{¬—s~w»Û€½‰€Í1,D¯W£·…èÒÍ!zu|Û¼ùxeXˆÞ¬8¯×¢Ï2»µv5Bô¯?)^8xð*Ñ‹û. 
§Å}…<Ÿ­´WCt7„ôÓínö¦b»À×;¼b»±½y̰|Xø¸±Óë×o×¼í<ÍÁ·›mÖ¾qç]7½·²rá@‘âÞnwê.Ø;NK—óàÔ\JK– 9ýœ¿.íÅiϵS=8Wh!Q‡¬9õÄïÿú?ow#°7QÀæj«D¯W '­v‘Þ¬o3*A­º/u#˜n^§­»öfÛÔr\uÞi Ù+7U¢ÿ?ç¿>’þæ*Ñ\Üw©Ÿ?=T¦{“¢ë”–Vr>7ŸÒÑiϵ{2€ý%^Ýî`ï"@`óM¢×ÃóQãžz¼Tû˜èuiÈyg¢·å¼1â±›Bô=ÿõ)éÖBôù”î½ØÏ§—ÊtÜR§,ÒÒJ?Ÿ›/öFˆ`U„þ¿ínö.t¶Fsüñzˆ^‹ÞƺŠ«@·^]ãVBôz…ûF4ƒôÖ½1xéС‡§=ùBJÇ«Jt‡ºEòmË‘Ï.8ݵÁvo+:o®‹œâ¿ow#°w1:[oÜXáõmmc¦×¿•±ÐG]¯¹Ü}XëšAü°ó]]¹z©“Ü¿nîȈk·*í¥+ƒ|z.ù픤¹•ç:ɦ=×NÁèÀª~üä›ý_¶»Ø»¨@`ëµEÞÖ•{¥­[öqáø°ŠñúXëÓV¢Ç¶ š|øîêK}×+ÑÿÛ×>SH/-M_‰>ŸÒñ ƒ|úp‘ŽÛ.»I·_ÉùÌbJǦ=€ÃÒÜî6üÿìÝiŒ¤ù}Ø÷ßÿ©êéžcgf9ÇÜ´Kîâ®6’èB"´- PDX’GW(FRÂÈ”C€‘7AòE F8¶@qtXÑaÓ€_Ù/‚ÄZIJÅ”µËå2²´»\îQ5Ý535;WOõüób¦vž~æ©«»gªÏhtuOý‹»¤ZóßÿÏÁ& ÀÃUçÕ)ïˆû#úpûöYG‡Ï¯ž‰^½ÆƒˆèyÄí&£®å^DÿýnŽˆïÙFD?–Š'¯Êî#­âñ”RûHÄÙ}Ñ Ÿ@DÄÚÚ`ãÍ{l¶p€ù·ú4Û±7Ý÷ºú}£¶·†¦sÜ›L³õü¨õÜ·û7ÖÖ®·Ó¶·säVY^\,Ò‰”RÑJéØí2_\Hé‘Y¯5O¶p€ˆñO?ñÕ·~}Þëà`3óQ߯½z»:‰^¯¦ã¶PŸö¬óªáTù¨ëmw;÷\û>«"*“èÿ×ÝIô?¿íÜSñÄAÙ=Ñ*OÅ‘"=¶ZæÎÑ"=¹Íµ=tÉ:Ä Òÿ8ï5pð™@€ù7Q1~Ò»úÜQSß³L³šD¯?ÚIô¦k5ýîùMÞ³N¢7ý<êuÛDŸ´ŽúDú–Ûo­­]/Rl|Ëââ‡f|¿XHʼnղì)Ò±Q,¤truP¾w¤H'g½ÖÃd€Ã,EüÙ…—ßø…/ú«¥<:ìãÎ.·½z}ëöQ¯-Æ<ö0"ú¨Éöqßë÷¥ˆHo­­]D^{vié̘÷kÔNéø­A¹2Œèí"=rsP¾·¸‡#º€Àa–"þæÙ‹ý?ž÷:8tØ[f™¯>¯¯'MŸOsýQQ¾)¢Gåvªýc~®_¯é÷Ƶ¾½¶~cýÄê ì-éhŠ(ŠôÈ­Aùî‘"šõZó‹­XÐ8”òÿ}áå7ù‹ó^‡†€{O=7Eåê¹éõû›®5ëvîMQ»éMU·¾Þz`o2nÂ~äZß^[¿±‘óÚ³KKJ3n!ßNéx-¢Ÿ¼9¼½X§g¹ÎÃðŒ€Àá4(rñ£g–¯tç½ö¦I}ÜvéÓl_ßþ}T·%ü¬½)žš8Ÿô^|g}7#zqúfY¾³˜öÖ$º€À!õ¿]xåÿuÞ‹àpÐ`ïšf½é¹M¯´û¨Ÿ'Ý7KDo2és{¯-}3çÛÏl3¢ß”WR,¦”Š…”NÝDß3]@àê-l–ío¯\½9ï…p¸è°·Õ'ÃsÜÄ«Ï÷¼Ig¤‹â»ÑGiš¨Ÿvê½>‰~{;“è­”Ž­–¹ÿAD/ŠS76oí•íÜt™œ¢ü‘~õ_÷B8|tØf‰Ú“&Õ·sú¸÷õüYŒÚ–~šõïJDoßèíGRJÅ‘¢8}£¼µ˜æÑ¿u©ˆ£:‡DÎñw>öÊ›¿2ïup8 è°÷M³½ù¸û¦‰âõ3Ñ'=ÚëÏbRD¯¯µñ±wÖ×o¬å|û#ÛŒèk¹ÑSqúÆ üæb‘æÑŸYl è_mÝLÿáß¹re0ï…p8 è°?4ÅãúcÃÛ“"tÓvéã"uÓ{Œ[ߨçLcÔúëkÑß]_¿q³,o=wtéÌv¶sßÑ‹4÷ˆþ¬-Ü8®—ƒüéýÉy/€ÃK@€ýcš3«·'MŠäã&Ô§™†ŸÛ§5é=ÆFôÎúúÍ›eyó£G—În'¢¯ç|­•¢=Œè×›o.Å£3Š]à tÍå}ü«ßøò¼Àá& Àþ2MDŸvZ¼é5ã&ÜÇ=gÖ5LkÔÄ|uã"ú­Dô¥õ•ˆ^ê±I[­7­s'FÆòëë·n–ƒ›=zt{=âZ+¥VŠhÝèßxØ]@à K)ýÚ…?~㿜÷: B@€ýj'}šÞt»˜b-ãîßíˆ^ÿyäéÝõÕmGôHKëeÜh1·ˆ. 
pP¥Hÿós/¿þó_œ÷Bà.ö¯iÏ6÷ü¦=ë$zÓϣç {¿‘ýF9¸ùܶ&Ñci=çë­”ŠaD¿¾9xk©(NÏrízv±- på_¹ðò¿˜&ÿþ€ûÛ4g¢×ïkÚº}Ôk‹1í¥ˆ>r ÷hŠèKKgSJ3O¢o”åÍV‘Ò݈~úF¹ùÅTœŽƒü¬ž5ÀA“âï^xùÍÿ\<`¯Ð`ÿ›eR¼ú¼ú6ϧ¹þ¨(¿Óˆ>êÕf‰è)"âND/·Ñ‹”7ùV‘R*R´Ž¤âÑåà›‹©80¢ è$)â¿¿ðòS<`/Ðà`¨Çãú„yõþúk¦=S|ÔëF½Ï–p=æ}&^+GóÙî³FôÔ]߸u}°ƒˆžËjD?}c0x{±xp]@à È·"ÒO}ìå7þöÅsö(ŽI½)>zmÓcõíß›âxýþ¦kŒzßqªk¯õi&æ×vqccµ?\ÿØö#új+EN)µÅéƒòíÅ"=ˆ. °ïå¸Xäâ>öÊëÿǼ—ãèp°L3‰Þôܦ×NÚÆ}ÔÏ“î{Ð}ÒÖôÃÛÅÊÎ"ú‘ÍœoDôtúF9xëH*N¥]ŽèYlÅ¢€ÀþõeüàÇ_yóµy/&Ðàà©ã÷ñêsÇ=o\ˆ÷½~»éýšîŸ$×n›¬Ÿ4IŸ""­ll¬ö7×?vt{}#Çz‘b³HiáH*½1(ß9R¤Gv3¢?+ °åˆ[EÄ/^xù_:wñêÕy¯¦! ÀÁ5KÔž4©¾³Ðǽߨ÷÷º&ÓFôq[ͧ•Õ+›ƒk;ºt¶˜=¢/l漑†½H§nïIéDJ©˜åZ£èìC¯Q|úÂ˯ÿ“/Î{%0¦Q‘zÚû¦‰âõ3Ñ'=šõ{ݨH^JßID¿½³ˆDô…¢8u³,ßÛ­ˆ. °oä¸EüÂ…—ßøëg/^éÎ{90+®¦s¿ë o7Åé¦íÝ« Ò£ÞcÜúÆ­mÒ:G™f’þAGôwR:¾Óˆ. °¬F¤ÿöv>þãÏ¿ò'_ùâÖ¿è ûF{Þ ˜êöæQ»]FDq÷{Ä¿T7¨¼6U~7]>Nºû¹ò~MÏ©Gñêºëë,"bsÊ÷Ÿdø{P÷"zëk·n]+#¾þW?ôè'Z3†ïVJÇ6s¾‘W[)=Öjý[7eçD+K »´nØ;rÜŽ¿UÄßúè¿~ýõy/vJ@€ƒmÚˆ^âƒmÚmÚ«†!}ÔõdD7ù2ÜgKDÿú­[×b›½ˆtl#絜ój»HGµŠ'o–å{ÇŠâ|!¢pp¼—#ÿ­ÖRñëÏýáë׿½Ø-:|ã"úðû¸`^5*”OÓ‡¡ºµGmþ0&Ñëkû ¢çˆ¯ÿèv&Ñ#-"nGÎ7Û)?Z¾QæÎ‰")"wyÝð° "ÇK9òï 6þÃç_{íÆ¼»M@€ÃaTD¯n­>iê|£·s¯ðvåþ2î7j"½~i#ún¯¸%¢ÿ·n]ÿ‡9¿öœùÐ'RjMxmýBKƒkw"úñ"=ykß;Z¤³­$¢°o "òWrÄï•­Á?øÄ}³;ïÀƒ$ Àá1)¢7ÅîYÎ;¯kÇýçªW¯¿›}š¸Ýô^¹òU]×0¢ººzã]¾òõíEô´8È9"òvJ'Ž¶Ò‡o—åòR§g‰èÓþC€]°ÿ*"ÿóHéŸGÒ¿xîß°E;‡†€‡Ë¸ˆ±5NGÜ à)îß2}ÜvîÉõºV<˜ˆ>Íô£z“û"ú—._yí¯ùÐóÛ‰èeD±‘ó…”N,Åc·Ërùh¤SEJK³\ vA/"z9¢—R\L9¾oä”ß(sùÆÇŸ{ëô¥©x€G@€ÃgTDoŠÓuMçŽOÚνþó¨ˆ>|^u]£Ö]_ç¤!íú”ù4¶Dô?[]½¹Ýˆžr,”‘b=òÍ#)_*ŠÇVËry1¢l§tlÆuÀPŽˆ‹шNäèæˆ~Dî¤"u‹Aêo¶s'åv÷øÒÍþÓ_~wuâ_~ÐK€½M@€Ã©)¢W£ôð{Ó$ù`Äý£zÄý¿s #zÓûVCú´}»»œO ë»Ñ‹ˆ…2§Ø¼»ûp=EäVJÇ·¹~žˆx'"º©‘;©›£ì§”ú)r'•EwP´ú­ëƒËϽþúÚ¼ ‰€‡×4}÷OŒ7°‡Û¿·ã^Œ®¿O}-QùyšˆÖ8ëôyÕ}ý\¾ôÚ9»Íˆ^Äz ®IÅÉ»ýòbD´Et€+EôsD7"ú)¢SæèÞ‹á©[¦²ŸÊvçè‰[Ý©¦Ä€F@€ÃmÚˆ•Ç›Œ›>¸ó;Ǥs‹†ûv3¢ïÄ–ˆþÆêÚˆþ¡3ŸX(Š™~ŸJ‘Ê(b=—È~f­Ì—r”ƒ…Tœl|ÍvçëxP±’":9R7rî§"uÊ\vëSâSoì :P?¼чa½)L'ˇ·G™t&úðý"\DßÉúÐýýÊå¯o'¢9"RZ;}±Hgo—ùr¾ûó.¬€Ù]‹ˆw£6%ž£ìG”ËnEµ\ï¼øêÛýy/x0t bëú¨sÆë†Û²ÏD¯ÛΙèÛèÕH¾“èMÑý¾ˆþ{—.¿öãgÏ¿˜Òˆ÷h”"µr%¢IÅ™õ2_ÑF؈ˆw"¢‘ú¹‘ºõ)ñAÑê·®.?÷úëkó^0°¿èÀP5žW·t¸?¢§=y>ÎðùÕ3Ñ«×x½ÊëQ ~½q¶Dôo®­­þn¯÷ÚOœ;·­ˆ^–q|½¸Ñ‹tz=ç«9çþ‘”åZûT7îL‰oÙ:ýÎ9â©[¦²ŸÊvçè‰[]SâÀƒ& Uõ°\½Lu' 
ÉñhJjŒ’-[¶-+))ñH§¤Ÿ/xƒ‚žïÙC4ÑÛî¾àóùÒò¦/‘äîÔ뛸³fÏ>¡=j^$àôx4¥˜«ËýÁŸJG$“š‚1ÓðsFH·$#f¾OºA©¨i½‚Wƒ¡ï?ˆ€1^öÞýQËãÒJ±ë<†ŽÏ*ñF±Ç<·:´î€d5y³c^€mD›9ü‚t‚ŠÕ99Ãú‘tÇ>Úv$¸hD)¥’‰ÞHVJ%¹sç2ìð‡Á(“nIÄÀ->_àZé«Ü9èÔ‹3ÐC¡“•5ð^€ÎŠgSJ`|gËÖm·Hg¤2ŸÏ—±eë¶»¨”nIb¹Q—Ÿž={vZ“ÊŒO¤®mzqzEEÅq ϳte[/suY0xŠt‡Íj ò|º@–tKò¢ÿÕU{ÊÅoÐ-˜åç÷éú‚Qi÷P¢-ê 2kós—óíПk½BÐzªp2ð3ûФ[ 7lX+]¡”RÉBèJ©¤2wîÜ;wíþtû¯X"ºÑç üL:ÃFn€ÎèùèomÝök=ž=)æj¿?¨ÃÝ8())É<¸\º%LˆFÍãsçÎ ’@btâžÃ9cFåðh”ÿÀ‰qNJUYŽÁ#@àHéՌϹÌUÐÏçýÂÌ÷K7(µ3‘nHs¹m&㯯ääôúx,Õ?õâh{Š ß‘nIBÆ‹¬¥#TÿÕMÌ"¿tGW ¼¶ë£]¿îPJ©d¢ЕRIcîܹvînyLWÛÆ Q¨Ü_qt†;Å®Í= ûýÁü0Þ9©†y~ÅW¥;RI(òvÄ} \ Ý’*8e×®Ö¥H“mð‰èc¹‹|ziiivFVôQr‘”ÂF¦»&ß×=µ&?÷ËÝ ýlÞ_ï|pÔ1•ŽPêSìyP:!Ý¡0;‹zz}¢¬.Èߎ¶•Ì8Sº%1c]A}ýéÕÆÅ<^éŽ.˜ ÏÒz”RªwôCºR*)”––f²«õÁ8[º%•ø·~PW‚äV t€^^^qÕ‰èIAßãóùÆI‡¤‚’’Ï–­Ûîð é–TÃàK|¾À Ò‰ ¹…;€!8ð@—²²ÞÆi‰ JqçùýÁ¹Ò¶XS÷_D¸@†tK xtúŠQé¥ö˜´aÃfë6¹Ò_ÌËÏÕÖ nܘ)Ós`=ꦯˆð’tƒê¿Ú‚±_ø<鎮¸oò†æç¤;”R*Ùè])e½P(ädf¸›À_”nIÄÀ’ò@à\éYFî tø ô`08”¾@f‚’RÑ`çኊŠC¤C’ÝСÃ#.•îHYDוƒ)¿²ŸÈ•[xJK¯ÞïÙ„>_àÇ ¾$=)_ƒÁ±ÒÒjÆË#🠖nIÜñÞH)»°óGé€ðãÚ‚1V ³RMMAÞçŒÇù€#¤[’ëùçI‡ÙÜ,ݱ7zßq¢ß“®PJ©d¤t¥”õ¶n}»„‹¤;ÒH1ý±,œ""†Hn wð~9ÈeÜàøDÕ¤°ñÑ(ßÝJ¸Ï|Š_€x¦tGŠ#2¸«¬lNJ¯äÝ€×ív÷òòàù %8'dG þPRR’¶[ên;v0÷q0†K·¤úWñ:]U¥ìãç!é fgÙÊüüÒ!©¨&?ïë`~Œ}ŽU=@Qg…tƒêŸºü¼™&IwtÅ„¹×½¾MºC)¥’‘ЕRVó*æ2¨\º# p î+-½zˆtˆ±èàý¯@/÷WÁ8?‘9)pß_¡à>ðû+.ó¥;ÒÄÇãþ! Ùt†^¬InáÇáÏ ÐgΙ3’,…~^Š N6숴}×â17È•îHŒG`é ¥öU¸aÃZ›¤;à/EÃÒ©¦¦ ç\ß Kº%ÙðFqcã[ÒªïVN9Iwìã…âúÆ?HG(¥T²ÒBJ)k•ç‚Ù²­ÒJnfVë}iºBLn€Nèv€ O$ðï“ê [º%Ý0ð¿ß_$ÝYYY¢[¸a¯èåþŠ _êI#GddµþD:"‘VMÈÍgâ…Ò)…°±h}ójé ¥ö‡ ë9è6!þ}má}@®ŸêÆ™â8ôô3A éùçÉleÁ‰ÇÃÁ5Ò{!¾qÒúMMÒJ)•Ìôf±RÊ:•••Yž(?àpé0p½ÏWñ9éŽÄ2’ôì’’’Ì®Âï^Nà/J¥:ÎØ²eÛw¥;lWRRâñfº÷t¬tKšÊd8wíûú æÏŸß  ]êú]è»}üFª%Ý0gÖìÙ'Hw$ŽÇ`)€AÒ-)f¹t€RR¸¾ùŸ¶Jw¨ ÎQçéŽdV;vìhã8a?~«¾!6/J7¨¾óÀù ¤;ºh:lgô·ÒJ)•ìt€®”²Nk{tÀS¥;Ô§< ^æóù“I É:†úé͈ٳg΄ßKö¤Â/€MÛ­YgȰ#Bú ‡¸¢aÃŽøtDœˆ­BgæO·pwa‡Hµ¤¡Lk~,‘µyN’îH5ŽC÷H7(u €–îP{)«7fŠtD2Z“3œ=æ#¥[R Û ×¿V/Ý¡ú¦nܘ)Ä4Sº£ &Ьћ7·H‡(¥T²óJ(¥TW~ÅW™9(ÝÑÐÄŒ&>ð! 
Â!rÁÈŒç}Ž‚ãYàÛÒ!‰ÀD;%wöx<‡xÚ£æç† 椋ÁÌÎï\*b£²@à ĸNº£6x†‰_v˜7z½Þ×,X°}Ï_,-½zHvvËfÏñL|3N#ðçD+høQ ¸»ººz³tKŒ}`ß³Èe(”q¶PCúb|Ççó݉D^“N‰—ú‚‚aQn %é”-Ìh$B€íLØM†þsC–x :οÍp€Ñÿ›ï0fÔ®mÜïë(Õ_ÌôæˆFÞc oô6˜]"вéò›ÃÆbdh ÃP€TÛ}È1Žó+çI‡$“g§Mó:ÛÞºÀ‰Ò-©†/ë9ÕÉËókØ´H‘èî¢úÆ¿Kg(¥T*кRÊeeeG1x±tGµ3èi"ó¤ë8¹cáÂæžþ²²9£'zˆÎp.:n8Úñ­ò`ðEUUI§Äí’üüÒ¡ Ǻ>±4ÃàKü~ÿâp8ü´t‹M|>ßa`º6ÝØÛû Zbˆÿ°¸ªªî@ãÒ¥·~à›<¥¥¥ÙYY¿f€+ |ì<Ð0ÍðuéûDêÂD:wîÜ;wµÜ,Õæ2ï÷¤C⥠m¿¤ä9šèMbüÍ8ô¬—Ì Ö5oêíMýω'ã%O!1NãdîX}Ó‡•¢ûcùõ”Šwà¡+¼»?~à¡ÿ»ûm3À5 ¬!` 9NýàÛ¶ôg%bmaá ŽîÊ#vÆ2ñ4€Î8Ùß8·v|ÞYE “ICÞ}ë6ΑîHIŽžž¬êÆço:>CZ€ín«ùo饔J¶ß TJ¥r<w8R:ä ¶‚¹*šé½cÉí·oéËX¼øö×T¨*++;Š<3 \‘ ç “á•••+æÏŸÿ‘tK\vI>ÿí8ô¨áß(î+¸àcj x}F+€ aÆHêX}0B°/æÎïKJJŠ—/_îJ·ØÃ3ÀñÒÝØ¢_ÂDÑp¤ÏÇ.,]º´ÀèxÀÉý5—܈â`.ôù*ΉD>%;r.1褻Zî0J$à?Þ# Ž‰ÖÁp#;´“ ïÜóÙ¡AdÌ :„±`ŒƒÜªýØ"¾rƌʟÜyçüw¥Sb­f|Î$Àú‡á>ð ;¼¬x]óóÔ¹‹R_]¸¨ßô€7<tœ_—Ÿs’!|L_!Ba?›nâè­w]ü&!#“Ð÷r­ž©Ó‡ÒÃ`±#AögêªUík rþDLWÆáËoð4žnq=+NÞ°aûAE/ÕÕí°¦óû`Õ„Ü|Óù`ž   Ö×Lþ) ÷Àšüœ b$ãN l"Âo;o9l¶h3» È9Ù¯ãÐHÏÌ'p,'€áéw±®NBþtGW |rssʽŸVJ))6ßTJ¥¿?8ƒ%Òðˆ~ ‰ô}P³?sçΰsçî«@t-«Ï'ðÂp¸z¶tG<•ß&¦»¥®ÏÄ_…K“ƒ‰¾4Àg8°Ã/|øÞ{õ$WTTbŒ™j âs:èÿMIÌ ‡«î”î°Ay p1=$ݱ&ðbÇ¡VUU½ ø\na IDAT|³Of2·‘Ýç¯9⨩¡P¨_ƒ.[øüçú¼tG‚¹ <ÅL;pW„ÃáõèåS~¿<3 ‡¾ Æ4$óÒD?T/ IgÄZÍøÜçœ!ݱŸt{4Š[¦66¾—È ¯7n‘;„YèÛYº¯74ë.›t¬úÝ-¶;ǾØxFOÚ°a³tG²Z“Ÿw!ÿ1_Š~ äÜï!óäÄuÍâÇ_ÔŒ™ãÌá$ÙÏ!vxú¤uÍ+¤;lV›Ÿ3™‰^BÇqɬ—ÁxŠóRt7¯žºiS¿ jÊÉÉjÉöŒu;p |§8¤_¦ePÉmnníOK,ÔäÜ&;Ž4cÜP¼¾é§ÒR3>× Zºc"ÅØBä\/ˆûj¡P(ä}kë¶_ðXúž™@ß ‡Þ'Ý >ðI¤ËY¤„wúfiuuõ[±ú²³gÏ>¼-j|T"9wÙ:rÄQLJB¡¨tH¬¬ÉÏý",Ý)‚î&¯ûßEu¯½#Yñì´iÞ¡ï½u1¾ËŒ3{ú븪¨¡é¶x¶IÓzjyéØc vCøéoNÔ‡°Üæ8ø]¬Vòvn¿{;€~¿ÿ|†óbñµ踻[+üV:D’ëÒ ¶gxNxÎ!¾¤ºzAB†=C¼kËg ´ Œá‰¸no0ø†’’’‡—/_Þ&Ý1ßÝÅBp³Ç¡[.\ó¸ ,ØàÆÒÒÒyYýD|½ß·0bëÖw¿ à¥Cb…ˆfÝó7„·Ø`ƤõOK§Àô+¢ðÈš 9Ó¦ÐAé·ÝxtTª÷N{óÍÝ5ãsÿà¢^ü²(þh÷ÖIë^{)^m±R¸¶qgÕæç\¢›aÿñlãìÕ9ã'×77H§Ø(; pRÏw‰R´jjÇ‘" ÓùsmUç7®0ú(×x¿N„˘qö¦;ݾ= íö¸×È–á9t“Ï•R*ööûœRJ%Byyð|}Sº£oŸOäð¼«H¤ê^‡ø €ß”¸þÁpÒ–-Û®îˆfézü1èi×ë„ÃU×Åil‡ÃO¼¿ã½b0æÿ9G.9ð5W\q°$DyyÅi .—îØƒ@ÌÊðž[]]ð•’‹ª«ÿ ã~Œ-‰¾vŒ6lxª¼§öp¯qÛdžÃU¿ŒÇ𼫥K—¶, /üŸ “Â]ñ¼Vì™2é‚X©ËÏý|ç±&Ö`àyE§LZßdÅð|_“Ö5¯(ªošÆOѪýþŒSׯߚÀ4¥b‚Ñã-Ü?`àfÃÑœ¢†ÆK’ax¾\¼¾yyÍDž—îé"ÐéՌϙiÍjäž"¼ à»äp|qCã=<ïÎÄu¯o+nhª.ªošF^>L¿¡ËÖñæ¹:Õk'Œ9ŽˆæJw|аñ6s£t†RJ¥" 
+¥Äø|¾ r0Oº£Æõž²¸ªjÿ7î ººzM4Ã{2@v>EJ¸qæÌ™ƒ¥3â™Sy€þ1å‹Â ϽcÁ‚ÅûbË—/w#‘ª`w"ƒ¬¼aß-Æðwú¥3$„B!/9†%ï ¼tÇŽwKæÏŸ/öû2‰lp½t&ñ›pûbðKJJ<Ò1ªô­sN¤ºêÛ»Ì$Ì‚ ¶Gª«J ðhOäµûŠóËÊʬYÍÓÆÁÕÒ {!ZöÁ‘Çœ5qÝë ý>싎AzãIÏè3G(Óý]Jõ—Ûbp ]cvé7ާõÄI Mߟ¼þõ¸¿W—¢º×Þqz6,:xˆq冱cSòsm_­Î}@6Þ«ÙŸV"ü.ºÛÍ-nhš_TW·S:¨;EuÍo¯o¼¶¥•'àû¼Ñ¾›_–îR½ã²çW` îèÄLÈmnNå{XJ)%ÆŠ£J©4å8~c¥3öÑÍðL_¼x¾+¿—Ü~û¯ÓÁX/ÝÒY×HGÄ{Svz#Áœ‰T-À‰¼p$yã˜Gž¢}í¾bàšÒÒÒléŽDÛºu[ìÙvA8\=sùòå®tÈ 6;ÄgˆÙ™Õ1’;lØðK¤#b è/€Ý)ápXôá¡p¸*âŸ`»dGyȓٛí­T;vìhHwt±´¨¾±´s[Ù¤@€)nh^ÒÒfƸ ÀžcƒÚÈÛò°`šR}6uÓ¦ú¿nþR@ظNnñúÆk ׾ݡnêªUíÅ MAý·tËA jñ˜‹¥#lÁ€ãPÆ•né¢UÄfjQ}Ó:~Ùï”ææŠšnÞØÐtb²4«uãÆLó·¥;ºX:i]ó 饔JU:@WJ‰ðù|‡égÒûx˸޳–Ü~»UÛô.\¸ðmÀ= V®|ÄÕ³fͲñüú~ñ¦à t»'…Ãa±óýB¡‰T/ü)—@üœù‘‘5p¦tD"Íœ9s0ƒBÒ“ïïxï*XôÀEuuõf‚¹Àné–®˜ùZXwÐr/1[¹R¨¯¸ì~!‰X±Ítuuõ °û¹Î­M­FÄIÿ@{ÍL0¬Ø‚ÇŠšfÑÐI唿抚®bÇœAÀzM•á¢JOLx¤ëÿ'Âsއ&74&oÜhÕçÐX)®oü=@¿î8 Æ•Ò ¶¨ËÏ­xºtÇA\‡Þ>ò”¢õ¯­“Îé‹Kñ‡„Uï°Ç¹¶ÌSï¶Ï÷¥3”R*•Ùñ‚¯”JCÎuŽ®èb'عÀ–•çûŠD"[Áž¯À¾s¤õz3SnzªmáN {vìxïÂH$bÅÓ틪«ÓH‚§¾* ¥Íû%ofö÷¶aûä0î7mXy¾¯p8¼’‰g¢Á>…e€ý7:ˆ¬=è1¯…«Ê#‘ˆUÛ¦G"‘ †p>ì{/±7ÆGKgôÙ3ˆià:—'ëð¼«Ië^{i`O‚ñ¤ÜûN•^¼Ôþ.@ïsYa}ÓôµvÙCÅ ?#¢…ÒûE8³cÛòôV71ïDv g)oeⳋšžL»«¨ä¶&?ïBfœ)ݱºæä ’a‡)¥”JZisCX)eŸÏ7D•Ò]¨4Y°Zºã@"‘Ûײƒ+`ÓЕ3fT— ß¹cǻ߱m‰,|Šà\ûW¢çmÙ²í|éˆD˜9gÎH0OºÀlœ lyà£;‹ª«ï'à&鎮¶ëçj¤Ä€k#ÕU×òŸÕ{,®ªZ¦ ´H·€ãºø²tD_ÕÏ›ÎÀñÒZ\Û¸ñcéXÉmnn-Þ°¡QºC©þ˜¸îõm_ïqÚó‹Ö7ßA–þ¼ˆ‡ÂúÆJ€»ÛÂÞä!o‰t„4cø6‹ÎvÞŸW2¨}²n[­iå”)Dæ7Ò]ü­p}ã2饔Ju:@WJ%žã½°çCÕáð‡¤;zbQUÕc¾Mºc‡x3£¶Ÿk×+ÆdÙ>Ôíß7rÄQe¶ Ï÷‡ü ŒËÐq˜è*é†Dð¶™$ÝA òE‹X? 
av Fº£‹¯QÒ}E”tæyáp•M7Öº‰,|–€ë¥;"iè†ÌeÒ â리kZ/¡”ú¬âúæßt ÒÓ †¼üM€Þ–néi=@_3>§l÷Ï_fÜ7¨§Ôo¶ò{H¥.oËÇ•åIwtÚi8:+ÀRJ)):@WJ%T 8Ì>éŽ.˜mÊËËÌÌø!ìÚ€eeå¡Ò±âºÖ¯Šîz¹½µef(²zÛÖH¤êQý@ºã@|¶ßï/ÝOee•Ç‚¸\ºÀÉò@S$iw:¶r·e›nã—Žè‡ä 3î9òè¤ÙZzĈ£nøyéŽýaðÙ•••YÒ½Õ±};}Uº@íûÃ/¡”Rû*ª{í8<[ºc?þ«¶0çXé µ……ƒÐÍÒqgÓú¦+r››S೺J&kŠG !æë¤;>Ÿiòú×ÿ%¡”Ré@èJ©„2L?-ÝÑɳæÍ›·[:¤7æÏŸßÚ9´±iÕîÖÖv†o1qÈ!I>@gü˸m__ºt©Í[ô~*R½pî‘î82pfIGÄ“ÇÓþ#ÒêwÀîá†^©®®^æßJw|Êð•%%%錾Iê3Ð×î,ÛXê* C3|"ݲƒÛÛÛO•Žè­Ú‚¼Ó%ÝÁŒëõLX¥”­Š×5=€LGé+ÒLt×u–?Ò=¢eE Me—VRµyÌÀáÒ2œL{>*¥TŠÓºR*a***ŽP&ݱç/®ªzEº£/ª««×€,{BœpµÏçËΈ‘d ·.Z¼xqRm ¹k×@?€ÒûCÀå)ôý½—ŠŠŠã$þ€‚‘Hä=éŽÞÊÊʸÀkÒÂÈ¡C8O:£/Œá¤z˜­‹OÀnI$Iºªªª6ùWÒûÃLÓ¥z‹Àüþã'­oú³t…RJÏõ°ëðN|®tA¢­0æ8"š+ݱô`c}ã ’æAE•:ê&æ-y3õõmÒ!J)•.t€®”J˜¨AäW8v¢míím?‘®èA²~à éŽÿ cÙqì8÷³ŸæÏŸß†d=OŠøšÅUU«¤3zkÙ²›wß`ë‡Á#ÙqRrEJÔåïAþµù/áðÂG„údþüù­²éçÉ é€>²e+üÞảD"¤3úÎÜ`«tEwß@éŒÞH²-Ü·î­Ò±ä8X .!ⓤzŠˆ§Š^ØÞT¿ñÉ¥”ê­Iõ¯Â–cpö Jš‡·ú…ͤöã–I뛞–ŽPé«)''‹ˆnîv{ž+¡”RéJèJ©¸#ƒ2é†=˜ø¦åË—'ÓMúž`ClÓù¥ƒ=™™%ÒýGÉ2@gfçjtœÛž2`«t€A€çéˆÞ0Æ“<+ЙoˆD"»¤3b©ººú0‘îø †èPºW…’—7À‹—XøsS)¥†™í:^qLmaαÒñ´z\Ω œ-ݱ/fÔ j唨J%¯O2è»`Œ–îbºaâºæ×¤;”R*]é])W>Ÿï0.–îèô^{KKJn{ôÁöí°f'1].ÝÉ1Ìa<‰,X-kÕÕÕë@x@º£Ž·-jËkZŸù|¾1ݲ‘@§ÒêsˆD"í ŠHwtJªmÜ'i¶p0‹¥#âȱñŸkLeee–tÄÁ¬)5Àq’ Dü²äõ•Rª¯¼÷aFºc/®5[7Ç…Ç¡_H7t£ÅxpYnss²<È®RP}Á¨£‰ðéŽNkv9òwÒJ)•Ît€®”Š+"Ï•¬ØF–€EK—.m‘åË—» K¶î0-Œ’Žè¶ì&N÷ZÝ çZéˆxqÀ7ÂÆ•õ%Õ`²[äñCö} 7^Õ°a+lÂ…¥¥¥ÙÒ=•4g ÿ>‰´KgÄÃŽï¬ðžtÇ><ííí¹Òã´zÇI7Ø®-•Rª‡&®{}€W¥;ºb¤î6îµù9“m\}ÂÍSÖ5­—ÎPé­½×°áHBÃŽ™3}ÅŠäøŒ¤”R)JèJ©xûŽt@'v]][ÃŘÛÞz[ —ä^¥KI° *aÙ üK:#^ª««×1èéŽÏ`œVZzõ錾òù|®­à¿‡ÃáZÙ†øX¸páÛþ*Ý`pFvö™Ò=å8Žý¯¹ÀGYŽˆ—Î#nþ"ݱ/f'Oºá`˜#Ý` ëöžJ©¤Å¶#Â8U:!^ èé†nlÈ@¦gN«´U›?fˆ‚ÒLZ÷ÚKÒJ)•ît€®”Š›`08–)Ò@À ©¶Uð¾î¸ãŽ üIºc[º¡_ì_Îì:·JGÄ›ׯ-Ë2²²ÚΗŽè+¢Œ/8Z´ÎBÉëÇ/—n‚ó%醞r]×þÕÌwÌŸ?ÿ#錸b<&°/+ÝpPé³ryÈnóoᥔê3"þ³tÃ^…Ò ñ°zìØ‘D¸Hºc_è{õõò;8©ôFž_‚á‘ÎèmÎlÿ©t…RJ) +¥âÈuù2é†.Ròìó}1áné†.ŠgUTˆ¯Èê3²{€NÀS‹-¨—p8ü4ë¶¥eæ ¤úŽ¿%ðáÀYÖ éb‰Œy €üù†§I'ô”›‘aÿÆ–óíã&mý «¶¨'‚õt‚#zþ9€GoÞlË.DJ)ÕkE Í l—îøã°ÚÂ1GJgÄyÜÙ2¥;ºbð“… OJw¨ô¶fBÎ4MºÀ¸zRÍæ¤3”RJé])Wt©tA§6¯×±b5`¼‘ë> ‹Î/õD@ú€`÷îÆà¤…€[¤>ƒpnII‰O§÷NEEÅ! 
þŠhaù¼yóv‹6ÄY$ùÀSÒ L˜={öáÒ=A­voáNÀªH$²Aº#Þ–,Yò1k¤;öÂ?_ü`˜ÍHÑÂNÑë+¥T?ÀÌxYº£+cä爥•SFt@~鎽\ñþP:C¥72dÅ=?Y¼¾ñ饔Rt€®”Š‹@ 0 „|éŽNO,X°Àž§Ùã(‰´ƒÈžÕö}C:¡¯Øîèÿ>昣l8c9!Z[w?À¶'°‡zäTéˆÞr]|À@ɼTòú ø_:€ÓæºÓ¤#z"3Óî-Ü h™tC¢8ÀóÒ û°€&°KôúJ)DdÕy¿ŽëI©º§eÐ% Øõ`%ã…6¬•ÎPé­.?ïr“¥;ìr\Ïl饔Rÿ¡t¥T\ÐÅÒ {0qZ=½IìÚ0´éÀ8Ùçó!ÑÄö®@'àP(dó€?¦–.]ÚÂÀÃÒŸåN—.è-_"œ°¹ººÚª›£ñÒÖ–õñ¡°Ã8Cº¡'vïöXûš À¸鱓M²j €ÃÊÊÊŽ’Ž8dè __)¥úÍ5ƪ÷ˆ N©:1fI7ì…àzþ•t†Jo+§ŒÈdÇ÷!3~^´qãëÒJ)¥þCèJ©ø`Ør>ð'gѦp8ü ÍÒ<ìxe·‹î#›W »Òf%ä§Œ…ÿÌ%Ð¥¥WpŽdtüOê[ºôÖX-ÝÁ 3¥z";;*þ°Á~V-¹ýö-Ò‰Bd^nøŒŒŒc¥bˆäÅɲól•Rª/(;Z+ÝÐ#u¶p¯;v,€Ó¥;ö±|âºæ×¤#Tzóîô=6¼Ï¬ýà¨c~/¡”Rjo:@WJÅܬٳO0Aº£Ó3‘H$ݶµd&Xóп$ÝÐ'lé±~qUUtF¢-ZTõ<ÿ’îØ ãôÊÊÊ,錞ÊÌlý€ ÑCÖ¼6%1¯nP ‡JGLkk«µtf¤Í‘P]]ýÀoJwtEQ)Ýp@ÑŸ dK^_)¥baRÍæl•îØƒ(uèì53a×n%l@7IG¨ôV_0êh~ ÝÀàŸ¾b…µŸ‡”R*]é])sN”¿,ݰ!½nzïáÀüEºáSŒ³JJJ<Ò½Fvná΄'¥„0ÀwKGìc@KKô鈞"¢¯ '¼sÌ1G¾"ÜPÌ´Bº€cŒóy鈃9ì°Ã¬½aä!¶çgj¢Yu)‘åt~8 Æv F”RªhƒtÁPJ П6Í ¦+¥;º"àÉÉõVí8 ÒOÔx`°t˜«Ššþ!¡”Rê³t€®”Škè®ëIËzkkëßì”îè4lÈá“¥#zÍÖè&mèŒmtÀÁiÒ =QYY™Åàó„3þ …ìü}'®Ûú,8‰­?=Úú½ñÁÑG~7´ë¤öâÕg CzwÀûÏqã¤ÏaWJ©~cb‹è<ô•œœC¥+úëðw¶|ࣥ;ö¦«Ï•¬Úü1Ø¡™Ò¶D[ÍõÒJ)¥º§t¥TLù|¾ž.ÝÑ©iñâÛ_—ްtéÒ+¤;ö â/J7ôš+Ðwfg{ÿ.!%‰lÐ$ÝÑ•ƒä ·µ¹ÓÈÞd{Ž–H”%K–|LÀJébž$ݬøG("Ñd×–¯@'ùe2=îqÒ J)Õ_â:0(Ã¥úË%s™tÃÞøÅ¢†Æ´ýL«ìÀäùâ;%ñUS7múPºC)¥T÷t€®”Š)vœ3 î0¥åêóOY´å,’o€nå t~eþüù­Ò¢ˆ¬ZÏÀ©H‚msüá„¶Ý»=-Ü ‚‰^o ¥’•^•nà:\/ݰÃvл¤C㤔Rª¿ ÓÒ ]7¹èµ……ƒˆé"éŽ}Ü& ÒÛšüÜ/C~w6€ñHQ}óCÒJ)¥öOèJ©Øb²eõ9Ø1i=@'fkþù 8mîܹv÷ù|c¥#zàlÉ‹3ðê²e7Ûr¬DB1Œ ç;1sΜ‘ÒÉÈýSºAB–ãl–nèÊö3ÐòtCкR*é9Ä[¥º2 Ã¥úÅÝ}€AÒ]|°ë£–?IG¨ôõ à!‡o”îð‰Aô{ÒJ)¥LèJ©X³e€ÞÞ²óg¥#$…Ãá&›¤;:eîÚÕö_Ò½dÝîÄé¹²«ö––ç`Á ¢+"Ï©Ò âóùŽ :ä'Âs’×—DÆ[#ÝžV·Hº!¹n[ZÐ,X°À'Ò{°å[¸°[¼8)ŽQJ©òò[Ò {qèé„þ`Æ7¥öñÀio¾)þ3S¥¯Ü‚¼™`š(Ý¢ŸM^ÿú¿¤3”RJ˜ЕR1SYYy(“¥;€€ºt]í¸—¤ö00§K7ô’u+Й£âg)K[ºti «Ža`Štð㈮>JÛ:ж€øÑ D¬ôÞû÷âÅ‹·IG²é¦žÕt0}$Æ)ÏNìéÅ% IDAT›æ•ÎPJ©þØ1ìØ·Aö<ÈLŒ¤ ¿>jT6ùmª»0†ï’nP髾 àbüBºÀš÷‡Ô£ ”R* è])3mm¸qÇL/K7Ø€ñ³w÷ ÂÒ ½dÍ›N»"‘È¿¥#l@ÀãÒ ]‘åt‡s…Ú€è  b"‘H;ë¤;@T,„VIHbMôÁÒDxS:Ààaïl9S:B)¥úcúŠQ0½+ÝÑEÒÐ?ä™»¶oß0yC³Þ§QbÚÑöC€ ¸Æq|ÓW¬ˆŠv(¥”ê +¥bˆ?/]°‘nu Düé†O1N@Ò½`Û ôÍX:ÂÆ8ÿ'ÝÐ…%%%éŽý OnX‰D¬Úv?Ѩ“n ¿Ua²a¶bû})D¼Eº¡‹Lé€!b+6`ð%Ò J)Õ_~_ºa&“´tfçkÒ ]1c‰tƒJ_«óGŸÆ5Ò̘7yÝÆ´ßYP)¥’…ЕR1ÃÀ)Ò {C¯H7Øàè£^ 
{Î0V^>{¼tDO1ضúëÒ¶X´hA€÷¤;º8dÈ‘ã¤#º ó ½õ1‘=òH!Z+ ¯´´4[:"™9«¥d‘M¯³Y°ø!È1Ʀ:•””dHGìƒßn4xà·¥;”Rª_I'|ŠÈ¦-Ð{l턱…Ÿ ÝÑÅÊâÆÆ·¤#TzªÍÏ™LL—Kw0Så¸?–îPJ)Õs:@WJÅÄ–-ï(Ý zºÕõ§Ø¢sÐJšºm+Љ­ØÚÌlÑ÷uÇ6îÒ Ýaâ3¤7í·¨c¦ÍÒ À쌖nH"ÑíÛ·[±ªX ‘cÓ´î´¶EaŸuC>q;@€}Ⱥ!Ïø.t}É:‚ˆh0$’.ë†<®ÿìéMåþ_ÀÍCT"òá{d ÞÿÂuPL1Py nó¶ÛLˆˆhÐ8@'¢!›?~% S­;ºu‰tµYG8£ ¸y¨@$5˺!qÄ~ðæ‘ªø¹žˆ¢(²}s~ Õ0Û¾e|ûö÷  d‚qBbˆD»¬¬år±«z—óz$xкá(ªÿ¯yZõ¬3ˆˆ*¨ŸS¦À ôé<ë†|*úë*?k&M:EDn6ÎÈ Â˜o G‘r€Þ#èsÖ N¹æškN³ŽHÐ+*²®6Ðã8ˆuÃñ¤3™½PüʺãgK¾½fÎ/º¨¸Ù@WH¢6ÐW1 dÝÑC_²sçë*/­S¦LTŇ-x¬BF|Ʋˆˆ†Žt":7tÕàæ¨rODsn_[º‰ÔÃæª;Q\=0#‚IÖ =®¾zÉ™.0Îèxæ™gxýÀ{¬ ŽGN°nHQ)ûúÞ½c:­’F€[7K€×¥¼¸B× ©›: ‰ W¥k¦8Ùºãýu•Må>ëÓ#D¯Og2{MˆˆhÈ8@'¢ð3@„ô^tUTx4V755¥¬#’D#žˆˆ ‡t"’n¸aì >,Êåx\pïÀëˆY`ŠuC¸¼¹¤âét…óº_͉h­uƒª¢Ñ׬ú"‚Ï7O«¾Ùºƒˆˆ†‡ .²nè!À–t&ãâ&*-µUï‚è%– *ò©Ù[·î²l "¢ÂሆdïÞÎIðs§âÞ•+W>cá•ÀÑv~‰Ö I¢Ü@z Ë¾}û^a ™fÝÐóˆÄ.>@Ðû!Ç—zøŽCô:ÐVØgÝÑ|¶¹¶êy':Qii­« Æº£G§'²PIZ3gN…@þÑ8ãÁ™™¶eÆ DDT@ ÑФ‚›#‹áhÃÚ£Ù’®Z¹ ɺۗˆˆŽ#{`:\}Î+ SÑÄû_¸@•aB6ˆ,þìLDTRý`EDI$9­¶nÈãj樟 t…L²nH’ŽŽŽ²?J¸/)‘Ö ù‚¨ù• ç8ͺCB´ÓºÁ“(ê8hÝPÐû#„7ÐiäVÚàwKTL©­ùn÷Æ"%œŠÔ[7ä‹BXcÝ@塵®nŒHô×– "¸uv¦­Õ²ˆˆ /e@DÉ&‚In>TáýçÇ!=ª^†U¸8æ:)T•›}9räcûöT89ŠVó tÕ¸FüjDQàÝoGóñûXì®H‚TŠG¸ÓàÍÊlkiIWÿŠ7[·B߆Üþßn˜^õöÛ·[÷QéɤÓ'”Î b ç@å| :[UG äd œ¤FD¢¢£¡ ‘S¡z®uw"…PoéLE#Û¬#¨oÝzë­.~ г­[@UÇ[7 –‰PûG›²ÙìÓÖ ž806;j´ƒÛ¸Þ/"¼û›†$äô¢H\Ð*3rЇZÒUï›™iÿ™u%ˆé“/šªV 5"˜¤*çBu|÷0k|—vž+päLížñzþUPè‘3;ü ›X¾6ÐÛÒ™ OQ£a·¦¦æ ˆ~̲!‚,ž»v×~Ë"" Ñ(ä^N¨Un;‡vvîDªÒ:£ÇØk®¹æ´;ï¼ó9ë$Ø·oèÇ¥;¸ {Ø@T'zxU5jÔóÖ ž<øû®Q£Ï°Î„t¢b˜½¥ý·-µ5÷z©uK?œÈZÒUŸ®Ï´^¼üpODn¬©©9£"…zÔKÐ:Ì0%0¦çÊßC³our.TYªµ8L°Ñ:ÊC*ƧŒµ+Uu›Ú~b÷õ‰ˆh8ñt""5õ‘>iÝàÙwÞ¹@‡uGŠŠŠó¬¨4ü\ß €ù÷µ*&Z78pÛm·¹y½ñ`õêÕ^„9Å:€¨l>x¹?ç1T>·¾¶æ;™tú$ë"²£@ÔR[•n©­^Øœ®úzsmu{*¥O+ôP½E`6€1Ö­tÈššš3G×ôd¬¨ôeÒÎA¤‹ žÏjü—†_Ÿˆˆ†7ЉhÐNp²uGX9@?…âIˆûÇCˆÎød: ]€>êeÑE3Í“Ì={¬R9±qÇã¯OT6ffÚj©­þ:€«­[úK¡ó»´sÚú5óë6´m±î!¢âhž:uB$árÂå­?†ú™VÔü§Jꇸ2WƒàgGJ!|ŸOîKS7e˜ðɹ›7?aøõ‰ˆh˜q€NDƒ&"ç{:ß1•Jí¶nðN»>èrè^<¢!‹€Ç½™ŸÑ-Àë/X8Õûz®êè· Q銣ì's!õN˜/:`ÓCN×´N«úHýæö;­cˆ¨ðVñÔiկΠÞ*Ð?r5‡N`çÀ<‘4ª±NÈ¢è4¬ÖUU ˆÝö¹Êõ›ÛVš}}""* Љh(Üß ë«_ý*ïÓ>!}ÂË¥tÊ:Œ<åh8º¡¡aôÊ•+÷[|ñùóçW8×âk£¢¡¡q¾uõnÉ’%•ÞÒ¨Ö8ú÷¸vfû#ÖTÚ¢Ê裀Z]#Ñ%È-”¤\ÓCDDƒÆ: A4ÞºàÝ 
G4¿ÄÏñRª†|TTñ”ŸÏ‹€\EÅ™Lîe{ÖYç"ëàüFA€UÖÔ§8@'*’=gž÷åqO=~%€‹¬[N¯”ΊW®›Zõ¾Ù[Úk]CD×R[5 }¢ïâ\CSŠ¤Úº ‡¿ç`‘†S&>­K;¯³+[ê7oç) DDeÀþV"J,U[Žðøö~qôë¤Ü@§Âˆ¢ð´uC> Áìw ÁÍë2ùÕÑÑQiÝ@TN.½ÿþ¬ä¢¿€à€uË Mˆ"ùuó´ê›•Ÿ!%ÂúŽkI×\ßR[Ý È:ˆÞ§Ñ0ðrM(ð˜u•¶.íú€“-¾¶Û÷¿¸ÿï,¾6ßüѨù]¿yž±H„ ~¢gZ'PÉxÊ: Ÿ¨Ýk£ðdê‡le%èDEV¿uëV ø¤uǤDðÙÖÚš_´ÔÔxºÆ‰ˆò4§«g¶ÔV#äFì‚êmfZ7Q1ÈÖ=D•t6[¦L9Ðë­¾¾Dºø’;“ú@$ èD4âh€./X$Äë€#ä4ë* Ï>ûì³rÖyN±úÂp¨A'T™Ír€Nd nó¶Ûý¥uÇÐè¥H鯖Úê«ÕÑ…»Då®yzÕ¼–ÚšŸˆ¢ÀûŒ´n¢âØVU5г¬;zh$°n Òu0Ö&Ÿ% ð­™ÛÿÇâk‘ ЉhðÄæ‡ÖÞ(ôyë†$Pu4@W?ß?”l«W¯ÎxÖºãˆÈl€7Ðé„r¹˜t"„ ɾÀ.ë–!:À]­ÓªºnÚD7Ç•£Öôä+Zj«+Aîô ë*¾£¢óáèó]…p†E&®ì¾ŽÂ€ì‰¢ìGm¾6Yqó%‘« bn ÷C?NµN ’âæ a¬ÙrŽÙצ$©° *WéÌŽ'!òvÖ-C&xc$©--骛V±uQ¹P@šk«æ·ÔVoP~àbë&²“Óp¡uÃQ$üÞ:JS—v^ µ:qMãHGwÙ|m""²Â: žn]p˜ð÷þÈűŸ:0¶©©)eA¥Am ‹ˆÝ]ÔÓƒMäT*²Ö Dålf¦í!³î(‘Pù§šiÕ¿hž:u‚u Q©kV5»¥¶ú~¬0ݺ‡P9Ï:!_ àîTp DmŸÆv¡ó:ïODD8@'¢¡ðszàîýÒÙùœuBÙµk·Ð© ä%ë†*bö}-€Ÿ›È­®®¸Óº¨ÜÍÜ´í6½Ûº£`ó$ʵ¶ÔV_mBTŠ2é ç´¤kîP‘‡xuùœiÝGGÄNë*=ëÓÕo‡bŠi„Ê k&M2¼®ˆˆŠt",àæGôEë†$Èf³®6õEdœu•†ØkÝÐCÔîwø{ŠN(•Êr€Nä@%F\ à^ëŽ à®ÖÚšïµÖM>Ë:†¨(µÔV/éÒŠ­P½üŽ¡‘£ÅÁ‹ÕííÉ¿¢„ÜQÅMÖ €ŽKˆ—XWQñðo"”~ðÆÑptס?Û§ž}ík_;ÀÍà$„xŒu•U?t møÅ9@§êè¨póçQ9Kg2;õZ­[ I¡oÓl´¡5=ù ë¢$Û0}J]kmõ¾ŒC¨½œ: +öX'Péi­­¹ ÀYw7ü®ªŠ¯ÇDDe‚t"”1cö:|Ê~ë‚ñôke9h¤"Üë†#l6ÐUù@ õËóÖDÔ·Y›Ù…ø ž¶nÑ›Z¦Uãñã­6#reÃô‰g;ê§Pý u %–£åáîT‚ZŸ Ö)½RœuV6ZgÑðâˆÉ× ¦"p½ßÔÏq÷*žRAÑNë†<#,¾h±«›È)á:‘wu[¶lˆ"y ¿·n"xïè±£þoÃôÉX·YZ—®©Ï…Ô^gÝB‰æf€®ª S¡¼?·Ž8>ý«-S¦¸9‚ˆˆ t"¬ ë€|ž†g®)¢ë†Ã‚¤¬¨4De­òÄ6_Ö׃M䓪p(ê6´mA¯°Ñºe˜ÌÎ…ø¡uӧ̵!²Ð2½ê‘ꯌ·n¡äZuè}‡›‡Ò#O¤²¡ÀéS¹ë¬;ˆˆhøp€NDƒ$¸|FÑþœuCRÔѯ• ©Ô žè&¯Q¸N'$ÐÝÖ DÔ?3ÛÚâŽ×ø?ë–á¡çD!Üß2½ú-Ö%DÅÔR[}5‚üÀXëJ¶ «ªÆÀÑ× p±Ê‹Ê_eÒi7§@Qa¹€QrDÅ µÎ8,—;9X7$ˆŸzq€NiœUxyP£ã‘pôºLN‰p€N” uÛóÀøño}ò¨o@ðëža0ŠïµN«ZR¿¹}™u Ñpk©­^à_áhèY‚Tž†„§Ee·»ìƒày@÷‰Ê~yñð_®¸\¡ó ‹aÔh©YOßJ®®Õ"*†3ºÐq€Ï[‡Qáq€NDƒ$¤Dý¼Q=ºÓÓö©oŠœ›k‚òÏ!*Õuó} ›« ¼½.“S!¼¾¶úŒU›¶}èJO× Aó´ªÅ*ø’uÇP ð˜?Uè/*%û¿éÌþLa,t¥Rð´+7Щ,‰à¦-S¦,›ºuëKÖ-DDT8 ÑàDÃÑ ¦££ÃÑ»FçÜLUn Sa„²âgùÂè_'ƒO±r€N”t³2ÛZÖ¤'½&¥ñw\lÝSh ¼¯:]µ_3í ~~n%ŒÖÚšËz«uÇìUÑÿŒ4º»nSÛ}ü=éKˆºRF‡_õ…èT®N;…E¾`BDD…ãç£f"JQñ3=0bľžõ—øyínQADQäirlò}m¾S²d³YЉJÀÜÌ#eG} · Ÿž•-éªZZÇ]S™Ù0}JBÿ@¥uË~ÕIjÔ9³2íQ¿©í^Ïý qìk1J¹NåK7¶ÖÕ±î "¢Âñõƒ%Fɉúù¬®££ƒƒ£þò4d t*UÝ|ĮȚ|Y 
)O'ƒKwÞyçë"*Œ¹k×v¸±%=ùGÐè›Îµn*$QY°>]Ó‰LÛuÖ-DµnÊ”ór!üÀXë–ØàN ñ­³¶lÙaC'V!¢ÁÑ3TÂ7#TÞÎBöà‡|Î:„ˆˆ ƒt"”H5ëçmJ¥ÜlU{§ÐXœLUÄdÐH¥G$—R/‡+ˆÍº¨§ÇšpÏXGÐ1‚<‡ÜT%*w33Ûï[“žtq*¤¾ ÑK¬{ IU·¤k¶ÏÌ´ý‹u Q­â8_×ä<Ôòœ–E’½mÆÆGw[ÇPÿå²ÙœˆŸçãIÚi D¥Ð¿l­«ûrýúõû¬[ˆˆhè8@'¢A’œ§Ïà³Ù¬§w®EØË¿91ÚÔ¥ÒB”rtºÑ÷µŸ×e øÛÛo_ö+ë"¢r17óÈc÷Í›÷úSŸÚõiý´«‡†Jõ‹­Ójž©ßÜv·u QÔ¤«ÿQ—[wœàT¾P!_Hg2{­shà¢P‘ÕØÕÉú S¹;K³¸Å:„ˆˆ†ÎÏGÍD”(A|½ËåJçCÂa¦âèÕˆèTQ$~ T› ô`´ùÞÕÏë Q™¸ôþû³³6µ5)¢×ØaÝS@¢¢w¶ÖÖ\fBt"Í ÅǬ;NL¿BvÚÌMmŸáð<¹¢Š¬«÷ÓèDôckæœ7Úº‚ˆˆ†Žt",7ƒˆã˜¯gý¤êhÐÈ;Щ@‚?ß×fƒl?¿ŸbåˆÈʬÌÖßdæf ð-ë–Jú Ó«&[‡õ¥µnòY"¸ pr_VïžRè•37µ¿}öæGoCCÓÕ•róó7ÐÉ’—JÎNHShSKºæ•ÖDWÈœfÝÑ‹m9 Õg¶þÒ:„†Ç«vî<›S¨„èTl‚q]ßó÷¿xàëž2,Ê£çŒ:y$·Ð‰ˆŒt",7ƒˆ¢èý¦n~­¢q BUý ЃÕ룫›Ü<¨CDDG¤3™½õ™¶ëáÏ HòQÎPýf&>É:„ÊÛúiÕ¯SÁ‡¬;z±)«©×ÏÉlo·¡á#€Bý|6£êç³*¢ò÷uÚéùÏ—ìÜy‚¯X6ån¡%èD4(¹\ìjð©óô~ó³­¯Ê t* GèFå¢àèuY9@'"rlæÆm?̆T=?·n‚ª.í¼Í:‚Ê—¢Ð[ˆuË16†N7wóæ'¬C¨ÌN¿z‰ätë*+™”TÜrìÙ‘‹—ðòûâÜ1'j°Ž "¢ÁáˆE5ró”3áú¸lEQÎË›J8ñ´nµ îèj •ÈÑ¿""êÍÜÍ›Ÿ¨ÏlûU|@—uÏ ]Õ\[õ6ë*OëÓUïT‘¹ÖÇØ))ýÓÙííI>a‚@¡nÞS+èT,*Æt&ÓyìÿpÑ–-ϸˠ©W*¸‰[èDDÉÄ: Šê~7ƒàg«Ú³¦¦¦€›mýl6åæÍ>%œ§ tÑ_6‚É×íhàk2Q ³6oû‚Fa€?X÷ Fùrk]ŸŸ¨,¬bUù{뎣^ˆBü¦úõí;­S¨ˆÔͦ- r†u•«ßÔö¿}ý’‹n-bÏñœ;fì¨k­#ˆˆhà8@'¢AyñÅ]½-"c­’`÷îݧÀу©TÖÏ›}J4Ÿ t^´øº!?¯ËQ俤 "":±Y·?ÐâYüغe ¸P³?mÝA奦¶úC¦ZwäQþ_Ý–-¬C¨¸DlÞ{ôè4üOw„øãÇûKê·n}‚ÿ(VÒ‰(pÓ£&¸Yf!"¢þáˆeõêÕ€Ÿû«0κ! ººâS­òhau•Ow Ès_wÏž={¨Å×~9=ź€ˆˆæ¢-[ž­Û´í-m‚›?OúK?ºvzõ4ë *zè³´ãoŠNåK37nûOë *>LÞ{ôtv ùX÷1íÇå‹ÑÓOç½0º‚[èDD Ã: …›á§¸Þ©TÖÓý¥•+W&õ¾Mòç$뀑ÑkãêÕ«spò`“ªzz­!"¢~@ë7µÿ-TÞ '¦ôSep›u•‡–Úª·Â×öyKvôÉ7YG áʉâþ™™¶»ûó—ÖmÙ¾À½Ã\ÔÂ-t"¢¤áˆOý иÞY_›úÏXPéPõóaM`÷!–úø-‚œfÝ@DDƒ7ssÛE!¾‚G­[à²Öښˬ#¨ô ð1ë†<]*¸zîÚµ|0¹lé 7q‹hôš9çñ*'.QJe§äˆ8ÚBWœÿ˜ÊÖDDÔ ÑàEŽèÂmÇþˆ5ró뤠S p†uÃa¹Ènˆ-xÞìkçáµDDÉW·eˆX²¯Õ5Ö-ý¥‡ŽŸ'6­µÕòjëŽ#äs³2ÛZ¬+ÈŽÀð½G/FtŒró`3•Á?×mhÛ2¿¥.³ýgªX?\I§Ÿà:Qrp€NDƒ&jsÏoxßn? nè"ðô¤<%Ÿ›ºHÖìµQ\­! 
•„ÝŠÑó ø‘uK?½¦yZõ¬#¨tQ?Ûƒ‚dz£ö~Þ:ƒl)ÔÓç2èBêlë*Ií§ìëú‡þM¨@]m¡??¦òë ""êЉhÐêbÓp3ö,Rñó뤠SaÌŸ?¿ÀÉÖ=R©”Ùk££;y„;Q‰¨_¿~ßž3Ïÿsí×£ÖÒdÝ@¥©µ®nŒ¨\iÝq˜â¯ç®Ýµß:ƒl©¯#Ü!!7ѺJP¤×Mܱãà`þÖìèSþÀ \4h¢ú‰mUU#¬;ˆˆèÄ8@'¢ÁÙmÐCáçþcÏTÕÓÓà< bܸq§ëŽnᬳβüÞ~Âðkç;­ûÁ""*—Þvf¦ýj_³n9!ÑKZkk^kA¥'ä¾ ÀXëŽn›Ú6mKÄC-4Ì$ÞeO“¬¨´¨âÛ37¶ÿÏ`ÿþ¹k×vAäK…l¢ñ{+À-t"¢àˆ//ƒ8Ç: "q3@Wà)ë*•nŽoä馦¦¬ÙWž´úÚÇqãÆñu™ˆ¨„ê7m»À]Ö-'¢Ð…Ö Tz$èUÖ =DåóW9ër '['E…t*¤ç+£®õ22+·ðsЦ·Ð‰ˆ€t"4‘ÈÓ“Îã–,YÂ>O@Ôσ‘£#´(ÙB”=Óºá5~°H=ØTq®uV÷}üÑç·ÖM>Ë:‚JG&=á^gÝ lߺ¹í›ÖäCn̘ÇëŽ<“­¨tˆÊÇÒ™C~H|êÖ­/Að•B4Èû+…û9Ç: šª›MG\.ÇÉN@?èêìIyJ,m Û¾.ªª£×åÀ:Q @+¤²‚Ÿ[·G¥æäýÖT:º4õ&8ù M ˸}N=æ®]ÛàiëŽ<¼ åÁºÍm{`/RKtêŸ7T ÜÈ-t""ß\üðODÉ”K¹ #kúÁÍ]$·ÓºJƒ\`ÝÇt<Äž^—•¯ÉDD%*ÉtV òš­[ú"*×* ÖT"TÞnЭKR¹{¬#ÈO§_xß¼y)ëJ¼¬ IOW˜»yóùF¡þypÁ¾J¹Ú:‚ˆˆúÆ: ZçK£ ¨Fn†Ã555E¼léëèÑ£9@§‚q4@Ûz¥8:Ù!Š.´N "¢á“Îdö;À3Ö-½Q`ZkmÕLëJ¾L:}—[w€?©_¿ý)ërÇÏ{ uú³»ø>€†Fäë³2ÛZ ý͉ÞGWðÉL:]iÝADD½ãˆíž{¾¸ÀóÖ=‚ø¹ßÛ£;wž ¶îèöÜ­·ÞzÀ:‚JƒBü Ðï@_ºtén>~o©òøF"¢7{ó£¿È{d­[z£ˆÞfÝ@É—Ó®×iÝ‘ànëréQë€|Ù,&Y7PÂÝ5ÿØ9·mVà‡ÃñÏ .ìÒNn¡9Å: •Ÿ7jÊã‚'ŽcGCFpûœ HÜl8ˆF¿7NP(v7”÷•…úMm÷Bñ9ëŽÞ”t² :Ϻ¡[âQ?µŽ ØnÝ/ŠPgÝ@Ôù‚uÃÑ„[èDDNq€NDCåf€.ÀëÏT#7Ã,…î°n ’âæá‘`ÿšù ‹p€NDT.öœ}þßxغ£3×Ϩá&$ ‰ýcëPè/ëׯßgÝAþd›uC> :׺¨/33mÿà­;ŽÐWt†ŽÖDDôr Ñ(| jn;žP$n~}ÒfÝ$cÆŒ©°nðjÉ’%#=˺£Ç¾}'Ùн¼.+Î\¼xñIÖDD4ü.½ÿþl¹r9}³u%×–)SNV‘™ÖA¸}N½‹ý,6@$pñ{†¨O*·Z'ä‘­™3‡Ÿû9Ã: ‰ˆºy£&Êúq?÷©øzBÞ»(ŠøFªûs¹ ˆuG·§î¹ç‹¶‚d‡uANÕ)Ö DDT³3m­P]aÝñ2‚K­(¹Dáb)ëPè½Ö äÓ¨áQÁº£‡ j2é4¤%·ê7·}PO‹âƒ/]eADDG㈆&ˆ›:ã8h싨› bÐ@Dø}݇T.çæû^®´~>ˆ¦Y7Qñthê3€ì±î8ŠâbëJ´YÖÁ õ›Ú7[gOÕíí€üÁºã0E܉l½uQ_èëŽ|¢ú)n¡ùÂ: IHÉ#Ö ybZG8æiÐÈúp€~<ÑTë‚ÃÔɹ-Öyj­ˆˆ¨x.Ú²åYHø¼uÇ1Î]7mâ+¬#(™$r2@GÆä"l²nÈ'f[7Ï)û:ï°Ûº#Ï„Šý/ý…uÁ: É O?½@—uGÕØÓØ¦¦¦~.Ø·|ùò]ÖI¢#FŒ´nðJÅφ³ Ú­ºmÐiÊ:Q™9Øež·îÈI|‘u%”º¹ËùAëòMD6X7ƒtrmâŽUq›uG>¸…NDä èD4$«W¯î„º±p€Þ‹;Ÿy€ÖݶPëˆ$Ie³§Z7¸¥p³. Çj®\¹² ŠíÖ *ië""*®‹ÛÛ_ÁW­;Ž¢q§k­« Æº$Hƺœ ºÑ:á(¢s¬ˆN¤2ª\`ŸuÇaЉû_øuÂ: ™Šºy3/!¸ÙFõ$ŽÃtë† ¬·nHšœÈéÖ ~‰› gUñtl¢Á䯯ÆqÖDDT\‘„ptÜ´ˆÎ°n ä ¹Õpó¹Yh³. 
ß4r¶®’^WUu¦uÑñ¤3™çÜiÝ‘OE¸…NDä„“7D”lâç¾]á¶co<s 8{cŸpÙ‹CƒY=Ûº£[ɺù`SÅÇ6<éRõrw(ÉŒÛÿÈÿXwôO©¢‹€ÉÖ Ý£·ZGo•¨ÜG×ëˆd„¼Á:‚èD$¥_€“+кMJ|éjë""∠‚«­Kn—ôFmÝm±NHšH¹Þ›lV=‚+W®ÜoÑCk­zD¯´n ""¾eÐC€ ”ŸЩÊë†nOÔ¯_ïçˆar)Étpó@ï!z©uщԯoß©ŠïZwäSÕß7o^ʺƒˆ¨Üñ $@ÎÓýÜk®¹æ4ëo~6ÐU»¸>@ªn¶¬]‰"GßמNâBÜjÝÐC¼ÿˆ¨ E©ÎÈZwt«\›ž4Þ:‚’E“¬@TwY7PbQÜÌ-t""; Qaˆ£á‘b¶u‚+*Y'ôп³nH¨ *¬#ûìmÖÇ ¢n~¿‰*èDDe(ª~žc@I£.袺ߺÁ¿B:“é„· ðÆÆeÝAt"¨B¿hÝqŒªSŸÞõ~ë"¢rÅ:„æ"?t`ÖUW]Å'Öð2@W ÁÍ@/a*TSiëO¢œº9¾]€–¦¦¦`Ýq, áë†<󬈈¨øê×o €ã§ƒz¹Ò¨_º*žQë†rÖ=ðs1ÄUñµQ\ ª8Óº!±T<½€±£ÆŽ|‹uQäF²J€Ç¬;ò‰*·Ð‰ˆŒp€ND1rdì騰ʑ#GαŽð`þüù±s­;ºµ¯\¹òëˆÄŠOVÈ'òGÖ G8:#ÏÊ•+ŸðˆuG·‰ /žfADDT¶['Hàú¹]ÖålÌi#O·nÈãå}vÑàâþù$’(ÜgÝp,y¯uQÌ]»¶+¨ÞjÝqŒêÓvïzŸuQ9∠â¶ÛnëàæˆÆäëN;í´éN²î~kÝd¢ÂúѼœ¬€ ~^û^Fñë„’Û­ˆˆÈ@th¢èsÖ®ÍZ7”³lNÝœh¦¡üè œcÝT¡"û¿ä¬;Ž¢xSóÌ §ZgõGe4â@öXwäSÑO¯bë"¢rÃ:ŒÂÑ FäUÖ ¨ÈÅÖ =DýÜÇœDðT…n ¨¶îè;zí;–Üá(Ð7Y7‘ÅãÖ "‰úàYàfˆ©ŠuC1 ¤Òº¡\é¡Ï*'Yw$Õ¬–ÏKÐfëŽcŒˆ:Rï´Ž êt&³W–Yw£ºfZ·Ð‰ˆŠŒt"*˜âfˆ¤PÐ@ÅÍ–®ˆþʺ!ɘÝÐШ»;‡‹Hê Ö GèÎåË—ï°®èKÁÓޝillgADDÅ%Ðg­ÑNë‚Ap3@W‘ ë†bŠ#¬zH¹Ù†/†–©S/À÷=C ‘üÒºáX ù€uQi'¾ÁëŽ|*òÞ…NDT\ QÁ¤RòmJxpNcc#ŸZ^mPìZ±bÅ&댄«Ty5Á!o´8Låÿ¬ŽgÙ²e[üÁº£[EVõíÖDDT\AðŒuC·$Þ)î¦Y¤«¬6²=m ‹£a~1H*[cÝt÷ #Âk×M™ržuQÌnoZ‚~ú#Ÿ“Ç=µó=ÖDDå„t"*˜¯~õ«ÏØlÝÑ#¹ÔºÁÒ‚×OàåÇ{­JˆÎ³n°ÖÔÔ)ô2ëŽÃ"usòF_êæ÷_¤r¥uWÝkÝÐ-‰ènèå¶îéÿ¯eµÅ,네»/w?€ýÖGQÄkƒuQ¿…øÁ:ã("¼ ˆ¨ˆ8@'¢‚RGƒR-óûv£TÖÍ–®ˆüº¡¨È<ëkO<ñÄl§[wôÐ\äz‚ÀÍï?.khh8úƒˆˆŠªÃ:Bðuk?¹9ÂÝÓFv1ˆª›‡ Z^ô ¯´NHº‰;v„£Ïfz´ñÑ Êëû™«~ëÖ­€þÀºã(Š)ÕµUÜB'"*Љ¨ "ÈÏ­ò\ÞÐÐàfs ØD£?±nè!ܽyO"þ胼qŒu‡%ÕÈ̓!ž9ÿü³Ö[GœHEÝ ?×k¤Dâ÷ZGQñ¨—ÍïHž·N0ñ³^nLjGQäç‹ åõ󄋬JÄ­zqÖ‹£Rï·Ž ê/•ø‹Ö Çp ˆ¨X8@'¢‚êê:x?ü586DÑk­#,455¥úÇÖÝ6-_¾üqëˆQ9rÌÞ×XG˜Šðfë„ÿhjjòu¤[/–.]ú¤ë¬;z¨bu@źáÝc]0`êæ}r¢ã¬Š©S³ŽŽ¿–s­ Š¥¥¦æ|(ηî(q~dÝйA'.߬ÌÖßî®m›Z®~·uQ9àˆ ê®»îzI7?\Fo±n°ðÄO_àë€à{Ö ¥DTÞaÝ`eÑ¢E x•uGägÖ ý€ïZ7&¨[´hÑ«­3ˆˆ¨8T¢ÑÖ «$o€îêw?WèƒdSnè¡lèˆÁãÛ dÆÆíÐjÝÑ‹tkºÊÓ©bDÇ¥¹ÛBÜB'"*Љ¨à"Owm‰¸9Ƽ˜Â¬zˆ†Z7”˜?kjj*Ë?¿s½ ޶4»¹[üDbÑÿ¶nÈT®±n "¢"Éù8~:'xƺaöYôP•² §*¸9Â]Uγn(•ðGÖ ¥D«­z¥rƒuQÍÜÜö¶ZwäS`Ú”tõ•ÖDD¥®,?€'¢á‚x˜Ö666N²Ž(6ÞnÝÐíÉsÏ=÷Aëˆsή]O]baAOÇ”mºãŽÛvZGô×òåË7ØfÝ‘ç½ ,8Û:‚ˆˆ†Ÿ'Y7@.ĉùs»‡ÜÜÛ.(¯ú¨•n6Ð=·\޼7Ð *·Êº olN×ða J‚Bn±îxÅ_+g;DDÊ/²DTp·ß¾´À£Ö=BÀû¬ŠiáÂ…õÒÖÝ~„;¢“FE½< Q4 “kÝÑCŸZ7 
‚§ëFJ\q½uA$î3yónëˆSq4@×s¬Š©º½½‚œuG·Êæ)SJþ÷L:] à"ëŽR2kó#Û¬³îè…ˆâóÖDýuê¾Î{yÒº#ŸÓÖO«á:Ñ0∆‡ª›A´n(&UqóÀ€ |ߺ¡‰â(“-”*±«7†¢ò_Ö "|Ǻ!Ÿ .îÅ%"¢á£Š Ö v ¸‡:êæÞv”Ý©^PdÝP ?¾ûµ.°îè¶zõêÕÖ¥J",°n(–… ^ Öºãý~b¯&Pý¶uB>ýøâÅ‹¹iDDT¢ˆNµî‰¶X7 ŠªŸºâÜG'LiQLq3@‡bºuÂp‰8@³7?ú{~iÝчڊ/~Ä:‚¨?ê6<¶GwXw¼ ·Ð‰ˆ† _\‰hX¬^½:'Ðÿ°î8Lñî%K–Œ°Înù¹ï]¾eÝPâþ¼¡¡á ëˆbP‰n°nÈ'puÂÆ€ˆ¨¯ß—Š3»rºÄ:ƒˆˆ†ÇºéÕSœfÝ¡Àfë†Aqt:€è…Ñ©ÉÖŤ°nè¡‚zë†áÔõëˆárÝu×.€·R<¬1(ªÙoð¶Ñö¡k¯]ìbK‘ˆˆ cÍœ9Px8ª=Éxݼ<.Eä¬[çX“ät‡uC>U\nÝ0ÖMŸ2@­uG¯¤”>7Õð7ô;L_Z“žt¡uщÌÊlkâçÖǷЉˆ Œ/ªD4¬BˆÜ ÐQø+ë„áÐаø ¦[wt»ÿŽ¥K“yÏd‰ê§æÏŸ[w ‡®œÞ`´uGžGW®\ú눡Z¹rå~~ݺã"‘ÞÕÐÐàéß7 Aêà  ¨ƒÓ‘äAë‚ÁÊ"ëgDf?:aÂHëŒb{ » Žîº• ‹æ–X7ôIKcfnjÏxú¡ãR!õíûæÍ+éÖ¨4Ȭ^FeÆútÕ;¬3ˆˆJ èD4¬n¿ý«m€þÚºã0Å»¦Zgœ„›­S,³N(3Õ§ž~úû¬# mÑ¢EçCõÃÖGQýom *®ý IDAT D.Žü®ªj¬Š\gÝÑíÖC ®è€Ô´ÖV_d]Q,!ŠÖX7ã‚ÖtÕ¬# aD”ý$§Xw—”ÞjLGø:€?XwœÀH Ñ÷×¥«j­Cˆúréý÷gEáñ”úÖi5nADT \}8MD¥+ŽeAè‚ó®]´èÖCÑÐÐP¡P7÷+ðµ¥K—>iÝQÆÆ•[¬#†jáÂÆë¸Øºã(Š]¬3†ƒŠºüžQàâŽÎìà“óDD‰3r„Ü Å™Ö€ÿm0TÄÙPàjë†b9ù`n€N뎣ht“uÂP­IOºpó Mß´ô~­noïðÖýpF¤òóõ3j&Y‡õ©bÔž³ÎxÑ¿á:ÑÐq€NDE±lÙ²Gò]ëŽ|¢rë’%KÆZw –HüQ^žÈÎ…8ùÃÛðîk-z¿uÄ`566NQÀÍC!=Jùj a€G¬;úð†E‹?cADDý·azÕd(¼ɼ?;bßO­#†JÜm ¼'“NŸdQ ÕííP<`Ýq4½´uZ•¯N¨BS_†kN¤$?7­Ê»x̺£Î 9½wÃôªÉÖ!D½©_¿~Ÿ<žÄ8³eZÍÛ¬#ˆˆ’®$$"¯rÞîÛ½ £#÷ÏÖƒÑÐÐ0YOƒ¥ÿ¸séÒvëDeicccâžÒ_²dɈ\omÝrŒ’¾š`åÊ•]ù[ëŽ>©~fáÂÆÄoY•¢\Nîàã ÁÏæ®Ýµß:cÈ$Úað2ŠS²ÚµÄ:£Xø‰uÃˈüµuÂ`µL«y»B3Ø)Å-Ît&Ó©ÉØB€ ¹ ¿æqîäU$ÙÛ´î8–p ˆhÈ8@'¢¢Y±bÅ~iÝqц… &ê¹ùóçLjâ;Œ²né¦ÄÝÖp¾™´û£vf¿è\ëŽ^Ü]êW<÷ÜÓß„b³uG_ø§†E‹o¶î "ê¯mUU‰ú3¸PÖ×Ö|‚yÖ‡ùŽuB!ti¼Éº¡7 ½ñwUU‰=Ík "„[7K75O«y“uÇ@=8uêéý²uÇ@Ü?o^lÝ0²£ÆÞh›uG?©üjÝÔª×[‡kÆÆGw¸Ûº£³Zj«þÌ:‚ˆ(É8@'¢¢ ¢Þ6¾E}cÑ¢Eç[‡ô׸qgÜ…§7Ž«n¿}i‹u¡ÀŹ;‘§.\|‹¬;z‘„/ZG ·Õ«WçàëD‹—SýlâÅÿÚÔÔÄŸ]‰È½}òÍéê/•ËpÖM­z½B›¬;Ž=§èü¾uE!Ìݼù ÏXwôâ´‘#¤,p«ß¼}£Çã®Eô_“ôÀŽш(|Àxë–8ù¥¶Jë†á0wíÚ.Õ(I'=Eòó–Újï©Ü…øÁ:ãXù ·Ð‰ˆBQQݾ|ùÏøuÇ1Î *ÿÞÐÐPar"‹-z5MÖy²¢¿±Ž —Sèû“pôµ‹½E¡Ë­;z£ÐÿZ±bÅ6ëŽbX¹rÙ°Öºã¸T?üøO~§¡¡ÁÛ1ÿDD‡5Ïœp*oÅGFVÊ––Úª”ú—¦O©‹bù>7?K«è¿OܱÃÝqªƒ§ë­ z¥øèºéS<ž Tp.qªöO[GôWkmÍg½ÂºcÀöŒtóÚVh³6·}O_[w @€e-éš»·L™r²u Q™[¶´AàñÁ½YëÓUoµŽ "J*Љ¨øŠs/\¸ø½Ö!46ÎiXØø_ .~—u õBðŽ—ýwªsú‹–ÚꟶÔVÍ2¨-éšWJÿÀ¹Ö-ùøuýæöuÖ%áwÖ 
Ç1]sîÑÿ|éÔýÙŸ²Çºãe14úæ†é϶NéKkzò€¸~Xý¸âØÍìÃaV¦íaÿfÝ1ÕPýmË´ª¯¬™4ÉÕC\‘äv:Úì-í¿ðÖ½˜½>]õëˆÁxøá‡'­}øÁ›×=üàÚë|‘ÞÁªø7©âf·ˆêws64¯}è©uÿîß›×<øçÛ¶mKÌ'DäWI¿Á!"¿rn†Ï§œ?Ö°°ñŸáèÍýu×ýöî<>ÊòÞÿÿûsÏ$¬¢âŽK]"$@PÔZE[íbmk]º×ÓÖ°XmO—ÓÓsÚ_ÓÖs¾ÝNmKÍ2àÒ½{ºÙ»jmEP‹$H”º*‚ì$™¹?¿? !`HfrM&¯çãÁC™Üs_ïh2ÉÜïûº®›ŽÉÄþ'ÉÆ‡Î²³¯Í;ûÙÐ1Ð î_½qú̯äËþÑ7Θñ^‹ôåiy.)E> ·&0‹?#i[èÝpŒË\U5óWùÈÍýjͲ3f\R5cæï¢X‹$½ÝM…`eeÃevÅ!¹B²ÅKÆ•ünùØÒ×õY°X:vô;äþ¹Ž ån·…Žm‰È á\ïX6®ô¶B˜áv0»·ð†Îq'e¼è·ùXÄ--+¹Â=ºWy´ŠÚá2³Âž.©ÈŠ?-é…Ð9z ’ÙMÉÁ‰UËÆ–Ì|à’Kòqe³.-<ûìc––•~998ñt>߃Ãc–§³Ðõ«Yè Ï_¼há} ‹[Ìt«L•’Ýxê12»Þ¥_lݼñ©%‹|vÑ¢EùzÝ @?Ò úÑQñ_$›:K'O+N-ttŸIŸY·îù_UUU»°V]]¬ªšñUsû¡¤Á¡r¼*·ï×ÖÖ® #„ºººçdöåÐ9ºÍtU”H¯¬ªšñÉ|xÍζnøøQ7NŸõ±ªª™±Ûr½iïÇÌýÂÙ¨ÝÛÞ$×W9ÌLö¦Øô×¥cG?¼dìè·ßÓ½ ‚yaÙ„ ÖŽ+­“ùÿJÊÃe»½¹beó¯C§È¶ñ+V?#)ßo\½eÙ¸Ñß_4yr¿-J_MlvGè å>998qߢɣò¦(X2®ä½rûòòµâ0X{Á~MïUÖØ¸ÑMÿ:G/œàf5G¿øÜŠ¥cKÞßXV–·7=,{fé²²ÒÛE™§äúœ¤éLrBè\ÈŽ ­÷™´2tޏO^vNé[BÇx5K–,9jñ¢w)ÖÃ&½Eï®¶IÚ´çOú Çœä²ÿŽ”Y±tÑ‚Ës‘@á£@Nœù¬¤­¡ctŤk×­ÿsUUU°%)§OŸ^žìH/”)ïÞL¹ù§S©ÔŽÐ9px\ºR–xtúô›ú|Ö[UUÕÙk×??_fy·œ|';’IU‡Ò¨ÿ¦d‹Bç8 Ãeöµëžoš>}æû®½öÚ~SDu¥ºº:yã3ß4}ú¬j{Îäß’ilçã\*Ÿ5kÖðtÍLï<¼'øfþ‹ÑãJ×,Wò…¥£GŸœ£hY±|Üè7yzGƒ¤é¡³ŒK_2)#7ìÁÐ ^¿/¹sËCŸSrVè$¹0©±y™¤ÇBç8„×&wýóÏ>&dˆ{¤ÄÒ±¥_2ÙÔgžïeVà{ ï1±±åÇ’Ý:G¯¸ÆÈìûÞþÔ²q£?»hìØ¼Øb¤±¬løÒ±£¯_:®ô~³Ä*w}\Ò{?™ÊÆC™ËýBçèŠe<¯g¡74,¬ôLû“Ý sntùÝnz«¢¢ã'UžwĤÊóFNªÉ2}ú¬»ü6íó¦*<ª¯½4tˆ\«ªšù}™Þ:GޏLw(Î|6•JmÈå@7ß|óˆööô¿»ô¯ÊçYç{™}.UWó_¡c„VUõÑñ²Ì"Iýï‚¡«UòºL¦ã®;î¸ccè8Ýqíµ×sÌ1dÜ®6ézIÇwçyë sæÔþ)ÇñúÄÍ7ß<¨­=½+t޽2 +½£¦¦5tŽÐª¦ÏlWž”‘ù¤ººº%¡sÌS§Ÿ>xóТÔ›ßÝLwýVòFÉ¡¿+_¾|{öö\Ã9c*£8þФËBgyMÍM-®“2¡ƒä²±£?àæß £›¶ÉüÖ" º½¬±±?l ÓmKÇ•N—T:Ç«hÉXüæÉ«ûüçØÒÑ£O¶"ÿ‘».îë±sÅM'6¶, £/<~ÎY§fâÄ2É%KÒrÝo²Yr×ï'<þô¦¾xéèÑ'+_aŠ®rù:Ô{a³T46 ¯²ÌÒ²’ŸÊíúÐ9$I®/W¬lé—Ûªµ”” Ú^lOIÊ‹8öå®+'®lùmè-^üèÍ}ž¤ýnwi«\·:ü[eeeíÝ;ׂ Ûl™ºš8òãX‰*++;²ÀPpK]è_6½ôâ·Žyì‡$ 
¥k~‚ÅúUÕŒ™?ŽÓŸ˜;wîó¹­ªªê4Y¢Æåùº´ÒSœ·³ŽÐm&×Gd‰ë§OŸy›{曩Tjs6˜9sæÑq¬ª¶öô'ÔÍ20¬nßµ#/ïïk©Ôw¿qúÌÿ2é‹¡³6S‰dßH$‹o½qúŒßG²{‹‹“¿ž={ö–ÐÑöª®®ŽÖ¯_?.Žíõnö“_»†î­f~¡¤‚(Ðþn˰ää½¼ñÑ•0é*É®òÌÎKÇ•ü^®ÿ’í÷õåEÿÝQ-/+¹Ò=ú¸âxZ_ŽÝcnŸ+Ôò\’:”øcRiWÿ˜1\n_éPû§––•Öe/ÛencYÙÈQessNoílW»ÿdp±}MÒˆ¾÷0•&¢¬±±[3ls…èÙ³¤âô£¬½èiå᪒îö–‰+›:‡´g¶¸Û#’†îû¸¹Ýõò¶íUÓ¦M;ØþæÝ²ä±G.q‹~©Nï½Ýõß“§œ×?~ŽŠ@^¨š1³F®™¡stÓ.¹~é®ïŸ|ò ¬®®îÑ/tÕÕÕÑÚµ/\ óIºV½{Õs&ýí¤“N¸¤ººz@Ü¥9À ô}m•ëI—ì)½zÔ¨QÏwþÿ^]]\·nݱً5QfçIš*)èž‹=e²_Ô××\:G¾ùÈG>zF”È4H:*t–,{ɤGÜõˆ™=n?EÑššššž.3k³fÍ:!“Éœ éäX‰Rsuó1&+ù Ù ¿­›6n8zÞ¼yý~Æ%z~¢@ïžE“'%wnY/id Ù.i¡™=îîO¸ë Wºyâʧž¶Ã¸9¨¡¤ä8ì¥æ‰©ŠýB™] ù‰9Ì+í–‰&”¯Zµ*t\[ZVòïù8£­—^–lÏ×­×á¼æ*XËÊFvxûSÊïYèÿdÊX¬ºù×+šZ³qÊ=ÿ ®“éãò|]M.K,¾´¢qõ¡cô¥%%#[ƒIg…Î2PÄf“›—…Ì@ž]KÇ•üdŸ£ V4µ¿I~Ñ¢EC#e–J*Ýï¦ú‰“ÎifY¹á}É£NñÈÒþ3ÑcsãÄ)Sÿ˜1.–p%ÿ½­=}¥¤SCgé†Á2½ËLïZ»îùÍUÓgþYî2K4&Þ\SS³¾«'UUU 5³qR4Þ]ÓÖ®þ2××á{hk:a(åùw„LWIºj÷5ø„Ö®{>S5}æ~Ë_¯]÷ü0)*¶þ²è¡mrOß:D>š;÷»OUUͬ’éžÐY²ì—®Ü³¤£ÜMqÆU5}æÉÖÈüeIR¬ÍŠ´ßëž» 3ó½Ky)×(I'¤3žÜ»ŠÉ%Ûû­‘Ó‰îG}ôñå’r9€CK¶mž&Y_•ç’T,é"w¿H’Ì$SRËÆ•îX*­—´UòmrÛnfûlÑâG»,!óQ’N“kˆb“ö¼fõáÂÙåúê@(Ï%)#¿7¡‚+Ðêùמ÷å÷Ý+Ê7.-+ýŽ\Ÿ 1þas%v¯a\:vô|™ß=ó—‰+Ÿl9œÓ,3fŒ"¿Èͯêðö+$÷×—Ãb–7’õ¥©­­[ÊF¿ÓÜç«ÓÌPäFÇ$-Б]‰È¿•‰ífåÉͨû8wù¸ÑošÐÔü!CDÊü:°<Ÿ?hðð[ö-ÏÝÝ–,™Z'ÆÄn§Êüd3;J± sóXÒf7_åž|¨²²ruçq&ž{îc‹/ø˜¹Õï;¼›ÝÞÒÒ2¾´´´-WŸ#€þ@^˜={ö–ªªY‘ùýê_uÜ‘’®–ÙÕ®XéŒT5}æI;%í½`™Ôîe›Žô¶+ýê¥É¦3o@KH::tˆ\q³OÌ©OÖ¾ZI*U;¯jÆÌÚ~´JHo+ù±¯¼>wñZmò¼zý6‹/:Vlù²‚ÉPIgîþ×Ýwñ¸÷j~½†eÁŠ¢¨øÖÐ!úÊäÆÕ­KÇ•.Tal¯Òkfì÷Ó"ßÖ¡ö[Ô_f¡ïe~¤ ÌZZVúœb-‘¬QækÜ´EîæQ¤HGšû—J$•HVæŠûz…ŠX²Í2!W¢Ç~…y§}yˆIÍË–Ž+™.ÙBgLå’øo]@ƯXýÌÒ²Ñ?•{Þ­lË¿ )Xþ裞*ù¿vzx³¬ãÚ²²²W¶2hhxô²%‹ý•”K{®›ÜõÏÅk$™›L5,Zð¨âèß&{î_÷=ñäÉSS ‹¾AÒ5û<\ºåå7Júnv?;…$ß÷0€¤R5ôÐ9²`¨v/c}æž?§iwùØ__sSõõ5? È‘ûçÔÕ|/tˆ|·é¥ —ü¡Ð9p —. 
~yþjÎÎ?èÂñ+·¥Žyv÷ÖœŸh-©î€™›‚Ì&#ê\ê¶O7«]QŒ¦S®i)¡km¯DÕv!è$æ û‚3˜[«¦_¯šÜâß¿ú&üüÎNDDDëèDD$íùÝÝ訠ñ Ó7{ðÿËÛŸJýð‰Ö. €×ž|‰6S3sø›·N ‹guîù¥þògïI=`s;ª¤Û ƒƒÝ¡×ÉÝêýðøYx§Ågÿ¦òæÙ+è—˜}n·–㱎–œœ{- G¢xýø9»xSªò¸?~ã©|—’h,މJ¾L-„#øÎ›ÇqñÞ€Ôq»;Qe)îŠÈæ'vol“:æäµ»xïÒ­¬Ï=>5ï¼yBjcʶöFljªÍúÜDëE‹'»öëMÕÕ[ˆ·x²o½^*óÏEAï¨Ø}šÉh@£K|S®8+­°YË…ÖrzzÕâ ôH4ÆïëDDD´î1NDDÒ4 ¾xdÊM¦ŒŽŸ„ð?}ï\¸‘³ªb*-õ.;vn›Å;ì àï~y2ãê’Ðü"¾óæqfÒÏ®\¦ðùC»¥ÑÅÀf-ÇÎ -RÇ„#QüÕÏÞÇD ”ñyEÁ›ç®âäµ»RÇÙ¹:méýžÕ¦( ®ô âÿ|ýM\¸Û/}|½ËŽßö © +$ïîÈþúçïãÞþ=c^éMé(¸ß%⿾þzFåõÕ¶ <»«4*¥_سMºÍü;®ãØÅ›WÊøøÛ·N" c2ð[vet>¢õª±&»Är¶ xÖ2œ–¬^£Y¢Ê·&!,F"Bk[ÜNhµ…¹ïÑËzJ2èÐ7Î6îDDD´¾•N/R""**vk9¾úÌüý/OJ·€h<Žw/ÝÄ©›÷°·«w¶Âí¨Ê8ž™…%܃N«Åã’lT¯Ü… ¦ç…é÷áÿúÑ/ñÚ“ckkƒp»ß[ƒcø—“祓ï{»6”tÛëgwoÅõþaD$*÷ç–Âøó7ÞÅ‹ûv`ÿ– RÕ^¾Ð,þõƒ Òs(ë\vìëbûöEcq\ÁûW>†oz6£×pUUà›/ÙdÌqtô°‡ÆÜÿ¬¹58›µÝmØÞÖˆf·+ã ÑXWú†pêÆ½Œ6¶è´Z|éÈ^õ¥ñµ¯ÊR†ÏÚüêŒð1 î6èóâóOî†Û.v/‘H&qîv~~æ bqñQðÒÞí°yE?Q±É¶2;Õá"š<.æ2:֨׫Þf>WzÆÄïÕÚj3ë<– íuÕ¸Ò3(´¶O¢%ýz䨴Âb6a!,¶qb`‹ÃÛ7ªQñ*')DDT”66zð¥#ûðƒ÷Ï@ɰòk)ʼn«·qâêmxUØØèA[­n{%Ö«Âx§g09=ƒ1ß4ú&|˜š™Ô9í%›@Ÿ[ ãÜí¾¼ž³½®®ªŠ¼žsY™Éˆ¯Ý¿üÙûR•ƒsKaüÓ;¢¹Æ‰CÛ6¢³Áƒró£Ý"ÑnãÃ=òÊWPÔ»ìxå‰Ç¤+&Ž žÝÝ_œ¹"u\4ǧ.â£[=8°¥[ZVMÅ y8w§W{‡LÊ}h4|þÉÝ«l* ÿô,¼~ôŽyññà˜ÔƇ‡ÕØ*ñÍ—ž‚µ,³N!$.©(Ÿ$З…æñáõ»øðú]T”™Q_mG½ËŽ:§GÊÍ&”›ŒŸêºŽD±Á?=‹A¯ýã~ŒøÒÉݽzhšÝùI:åÊc-¸72)ݦ¾‡ÿú£_bgG3ïlCK­ î‘usKaÜÁ×î~rï cdžfìßÒ!}Ñzg-3Ánµ`z^>9­ÕhÐ\“ŸÊô etlCµ£dºéôO ¯m¯/ÜfÒv‰ä½zsKaT”™UŒ¨ti4Õ8q{x<íZ蟜‚¢(%ÛÅ(‘Lâûï}”×sÚ,exiÿμž“ˆˆˆÔÃ:eegG3ÂÑ(~òáŬ[ÖNg0œù¤õ³N«…Ñð›KU"‘D4ž:¡4 a1Y1¡Zì¦fæðã“çózÎ/=½¯` tà~[ÆÏìÙŽ7Ï]•>vÈÀÐ{A«ÑÀí¨BE™f“±Xóá0&3ˆ'2K¨³Áƒßö Vžçɰw áÕ»[Ì-…qgxw†'ùÿLF´ ÂÑXÆÑVóÌ®-%ÛÙáµC»1a< 7UQ\¾7ˆË÷aÔëávTÁj6Á`Ð#‰bn)ŒÉàLÆ¿ëZ§ _8¼'£c‰hö81Ý+Ÿ@w;ª¤Ç;dJ¶ÍõƒòÑf>É$ú&Äî M=«ó?ÿ|Yµ­•åe˜]\J»VÐ7æÅÉ1QëI£D}1ÁdpµN›ÊQ©#©(¸Ú›Ùf˜LÕØ+™@'""ZC˜@'"¢¬íßÒk™?|ÿlÚ·ŒD2‰%É銢 oÜ‡î¶ÆœÅAê:²³ Áùœ¹Õ“ÑñIEÁD „GSS™1ôøƒç‘å Ìb¡Õjðõáÿ}ã˜Ôü÷‡Í,,af!ýÃKÝmxvwwN_³Xù¦g3nÅ.Jà‰îNüÖþÇÖ}E>ÝËøX™¹Û2^Ü·Gvt©òÚù`4èñãÏònFÕªÀýN#¾@Îb²[Ëñµž„ÉÀ¯ÐD™jªqájï°ôqùjßõN; :]F››³lSŸ/£SAáëO³ÛUðªú¶ºáDh߸ ô66ºqìâ áõ—lˆˆˆ([¥Ñ[ŠˆˆŠ^w[#¾õòÓEÑ2Ovþ2Þ«w¡»µ¡ÐaÀd4àŸ=\ÒsÏWRn6ákÏ‚¥ˆ:3´zªñ¥§ö 
ϱ§Ôªm•ø“WŸÅçžØÅäyžÝ«äʽN‹Wî.éäù²Êò2|ó¥§`1þ¾ÂQaÁ·^yÎ5²±Š¨PšÝ™U2·æ±²[¯Ó¢Î%§¥SÞ;*3ÿ¼ð÷Ämuâ(z9=¥Æj—T‡¢‡GÔ­'L QÎ4¹ø³Ï?Mµ£_°%!­Fƒß{ö`AÛ W”™ñÍŸBk<(TC­Ó†o½ütQ$£:ÜøÆ‹Oå­ëZ¦×éphÛF|ûµçÐR"•okI`vÞé™B‡à~…ô¿rOl];ó¹kl•ø³WŸEµ­p£F<Ž*üñ+G™<'Ê:—züc¨|Wv·xäèÎªŠ¢Ú¨˜JïXiÌ?_¶Afzh.ç“Ö­VƒöZñ #¾),„#*FDDDDT¼˜@'"¢œ²[ËñŸÂ—î+ØC¤å9èTZ´Z ~ûðãxqߎ¼Wж×ÕàúâgÖ|²ÖiÃøÂ hvç¯êƒ4 ^Ü»ôÒÓlƒœ%£^#;»ðŸï¼rà1nF(›£…ÏíÞŠÿô¥—Ð\³ö>ÜUVüéçžE{]þ“8{6µãÏ^{v+“çD¹`Ðé¤ÛAW–›á¬´ªÑÊš2ø,-•{ÈX<ÁÉ)¡µF½ՙτϕj{¥T—³~V¡§ÔQï^›TÜa:­O|rIDDªØÕÑŠîÖFœ»Ý÷/ßÂÜR8oçæôÒvdG:êÝøÑÉóŸšVõ\&£ÏíÚŠCÛ6B«Ym¯m–2|ë·Žàíó×qêÆ=$’ɼœ×n-Ç+wckK}^ηV9*,x|SöljG•¥¬Ðá¬{kq{x½cùX¯ÐÝÞ„çwo…Û^•÷óçS¹Ù„o½ü4Î|Ü‹7Ï]Smvü2›µ¿µ'¶µ7©z¢õ¨©Æ‰_Px}sçŸ/kÊ Þ\"íÛ‡}áùîMngFrM µ®×û†…Ö÷û±³£EÕ˜JÙFÉnq7ûG±«£U¥hˆˆˆˆŠèDD¤£^CÝØÕьӷzpéÞ ¦fæT?¯V£Aˆ­ûJZCµß~íyœ¾ÕƒãW>Æìbnÿ=õ:-vw¶áù=ÝR-k…A§ÃËûwboW;Þ:{·†Æ¡(Š*ç*3qpk'žÚÑŪó 9+,èl¬Ew[#:êÝЬ“Í¥Àã¨Â·^~ãSÓ8§7Gš_Tõœf“;ÚšðÄÖéJÎR¦Ñhp`K¶´4àØÅ¸toqÁ$(“Ñ€ÃÛ6â©]0êùyE¤†f· §oö¯/De·ÝZŽ*K™T+ðR©@—ÙðU óÏ—m¨O ÷Œ‹·¨_ªmh®qaÈ'Ö‰àÖà¦çØ…ˆˆˆÖ> ""Õ•›Mxv×V<»k+†|S¸Ö;ŒÞ1&‚¡œ%íÊM&4¹ØÜTîöÆu™]k´Z uwbÿævœ¿Ûs÷a,ËŠt›¥ {º6`ÿ– |àþ|᯽ð$ü¡Y|xãnôä¤[„@£Û‰Ý­ØÕÙº&çf£eæœw×Ðh4¨¶U ©Ú‰&· ¸ª 7ÿ™ÄÔ¹ìøÜÁ]xåà.Œú‚¸54Š;Ø„rÒåÁnµ ³ÑƒMµØÔ\ƒN—ƒ¨KS•¥ _8¼ÏíîÆ‡×ïàrÏ f³û;t;ªðÄ–ìêhá8"•5Õȵo©-LbºÉíÂþ¡µf“Gité÷ ¯-ÄèŒÕ´JÌíÌÌcvq •åìÔ³š}[Ú…èIEÁ…;xn÷V•£""""*.,_!¢)Šò'þ"“cc‰¢ñÜVÑÚŽD109…‰`þÐ|Ó³ÌÍ!‘TŽÆ>•\7 Ðj4øÿÛ»—ظ® Àÿ™_±Ø‰IÒ¨”î€U6lBìØBw€X@aT„„XÒjU±éŽM% „HEâYZJò ‰Çqâø}Y8‹@kwâ;ã;¶¿OòÎçè_xf®Ï?çœÑN;³ÓS™›>‘ÙéÉœ™™Êcgæ27s°w#öêÂÕYÝìQ³uœ=™é‰ÃS$ßXº“?þóB.\[Èå…[Y¼swÏߟ™Ïü©é|àüÙ<ñØÙèÑÓ§216:ÀDGÃ¥…ÅÜ]ÝýoíAݑΡ١94ïc®ÞÈÚÆÛ?óßÉìôdæ¦ÿ™´µ½7._ïù÷ÇGGòè™Ù&ê͵[·~ÒOÞë9nJ’‰Ñî~‡_)¥œïc€¾³† ¼#:p¬®­gùÞZVÖ׳¾¾³0ÕíS“Ûÿ?sÜ·±µ•ÅÛw³´rogÕûvÉÉ©™™?6wÈÃoùÞjVVײº¾žõƺtÚ휞™ÊHg8 !8¬èÀQwôÎÓŽ±Ñ®’|ÀFÚíÌŸšÎü©é¦£ôdj|Ì5À¾ü9X0„è:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:$Q @:°»Í¦0dJ©3z£_1EìæÎ~–Ôz€`HÕ\ý]îO €ÁQ »Ù÷ƒL©÷ D†TÍõßÛýÊ0( t`7—ö;°ÝR E5×/÷+À (ÐÝü#IµßÁí–·€£¦æÚï«ýÊ0(.à•R–SczGp¤´JI«ÞîïW€AÑp{ùý~vÚ-w¡!#ívÝ)þЃ¤@öò«:ƒ»ÚS V)é´kÕJWb:p(нü¢ÎàN«å(w€#`t¤SwŠŸ•Rª~d$Ͱ«RÊŸ’¼ZgŽîH§î84h´?ë¼/õ# À )Ðwó|Á%ɘàPítúqÒèõ$/÷!ÀÀ)Ðwó|’:”R2Þ©{?¤”’±‘¾­ë¾PJ©µÎ 
pP´YÀžJ)“¼Ø¹F;»Ñ†ÜH»ñîHÚ­¾¬å®%ùA?&8Z,à]UUõ¡$IÒéל[ÛÛÙÜÞÎÖv•ªªú5-ûÐn•´[­tZíôyÔs¥”§û:#À)ОTUõÃ$_ÐÜÙ®’D‘pJ)ƒ<5ôn’ß?éàPP =©ªj6É«IN7€Cá™RʳM‡xî@zRJ¹™ä«MçàPx%É÷›ð°è@ÏJ)/&y®é µ›I>[JYo:ÀÃr„;ðPªªšHò›$i: C§Jò…RÊKMØ;ЇRJYIò™$o6€áó å9p˜)ЇVJ¹œäÓI®4€¡ñl)å{M‡¨CìK)åµ$ŸLòFÓYhT•ä[¥”gšP—;ÐZªª:—ä§I>ÚtÜF’/•R~Üt€~°¨¥”r%É'’|7;ß2àxx=ÉÇ•çÀQ¢@j+¥l–R¾™äóI.6€ÚNò£$+¥¼Òt€~r„;ÐWUUM&ùv’¯$™h8ýõÛ$_+¥ü®é ƒ @¢ªªù$_Oòt’é†ãPϯ“|§”òrÓAI TUUI>—ä‹I>•d¤ÙDôè­$?IòB)å¯M‡8 tàÀTU5•ä©ìéO%y"v§ ƒ­ìæ¯$ùy’_–RþÖl$€ƒ§@UUÕ¹$óIf’tŽpÜ,Ýÿy«”²ÖtŽy”üæIDATx›ÿÉ€»K˾ ¼IEND®B`‚glance-16.0.0/glance/tests/var/privatekey.key0000666000175100017510000000625313245511421021141 0ustar zuulzuul00000000000000-----BEGIN RSA PRIVATE KEY----- MIIJKAIBAAKCAgEAn0QTUd7pWvesMyoaTJGhc7zzptPmWa7o4jRoPvRAwaEaZZqj Z+ksuXmcALF8weae3ke/8cvyc9TDYv6CkG+0dcp+Vo+ZPQZRPED0/3SXTw3S5mZ2 jZe/ic7+steJcfKg2fUmfBp6vyuPcoDnH01KQKO5njP2VeBAKx5J5IxxnREyzyFB 4RMoxtb24LMmEG1bYx3D7tDEZmM4iWuPKsK9T+S8A4+i8lwdcxGce5M91qPRLc1k IyS8ZTxxIChgoOr+dw4dlTZ2recvHCdiVeOdEcH7Qz7lIaz9Dn49yUTSvW+Jfg/L iFRX/Y0hyDThRwEoD0WhfmAanEwMuME3LUarGJ7KSdN3t5I60n/K1QLxdYFmOVGq vNfwkSNp6HGuRHZeh1Trcvys/WAi4GrkrTe39uUktJUsJg51oOntV743QmQfAkkM vV10beby2lxUgvr8/zrkGnqpPD3utd8JDGnDUZJngHGbEIsg/6JexfKGoAZlHEL5 kSRUKe1+7NtMe1TusSUbOFOuAbbFkx6jTRvoc0dQV+jsoIBTsTR0N5rBjBRkLhbd oS7TRT4sRmIgKpN6kkyyzGRHrWMyC2gMJJgggwg1dKdoeu/WhAfRXtfAbD+nSnhi qHB1N/vOHwkefBE1zLNao8w/NcnuJG9j+FRvfFu0dj3ygW2tZGYQ0MQLLC8CAwEA AQKCAgBL4IvvymqUu0CgE6P57LvlvxS522R4P7uV4W/05jtfxJgl5fmJzO5Q4x4u umB8pJn1vms1EHxPMQNxS1364C0ynSl5pepUx4i2UyAmAG8B680ZlaFPrgdD6Ykw vT0vO2/kx0XxhFAMef1aiQ0TvaftidMqCwmGOlN393Mu3rZWJVZ2lhqj15Pqv4lY 3iD5XJBYdVrekTmwqf7KgaLwtVyqDoiAjdMM8lPZeX965FhmxR8oWh0mHR9gf95J etMmdy6Km//+EbeS/HxWRnE0CD/RsQA7NmDFnXvmhsB6/j4EoHn5xB6ssbpGAxIg JwlY4bUrKXpaEgE7i4PYFb1q5asnTDdUZYAGAGXSBbDiUZM2YOe1aaFB/SA3Y3K2 47brnx7UXhAXSPJ16EZHejSeFbzZfWgj2J1t3DLk18Fpi/5AxxIy/N5J38kcP7xZ RIcSV1QEasYUrHI9buhuJ87tikDBDFEIIeLZxlyeIdwmKrQ7Vzny5Ls94Wg+2UtI XFLDak5SEugdp3LmmTJaugF+s/OiglBVhcaosoKRXb4K29M7mQv2huEAerFA14Bd 
dp2KByd8ue+fJrAiSxhAyMDAe/uv0ixnmBBtMH0YYHbfUIgl+kR1Ns/bxrJu7T7F kBQWZV4NRbSRB+RGOG2/Ai5jxu0uLu3gtHMO4XzzElWqzHEDoQKCAQEAzfaSRA/v 0831TDL8dmOCO61TQ9GtAa8Ouj+SdyTwk9f9B7NqQWg7qdkbQESpaDLvWYiftoDw mBFHLZe/8RHBaQpEAfbC/+DO6c7O+g1/0Cls33D5VaZOzFnnbHktT3r5xwkZfVBS aPPWl/IZOU8TtNqujQA+mmSnrJ7IuXSsBVq71xgBQT9JBZpUcjZ4eQducmtC43CP GqcSjq559ZKc/sa3PkAtNlKzSUS1abiMcJ86C9PgQ9gOu7y8SSqQ3ivZkVM99rxm wo8KehCcHOPOcIUQKmx4Bs4V3chm8rvygf3aanUHi83xaMeFtIIuOgAJmE9wGQeo k0UGvKBUDIenfwKCAQEAxfVFVxMBfI4mHrgTj/HOq7GMts8iykJK1PuELU6FZhex XOqXRbQ5dCLsyehrKlVPFqUENhXNHaOQrCOZxiVoRje2PfU/1fSqRaPxI7+W1Fsh Fq4PkdJ66NJZJkK5NHwE8SyQf+wpLdL3YhY5LM3tWdX5U9Rr6N8qelE3sLPssAak 1km4/428+rkp1BlCffr3FyL0KJmOYfMiAr8m6hRZWbhkvm5YqX1monxUrKdFJ218 dxzyniqoS1yU5RClY6783dql1UO4AvxpzpCPYDFIwbEb9zkUo0przhmi4KzyxknB /n/viMWzSnsM9YbakH6KunDTUteme1Dri3Drrq9TUQKCAQAVdvL7YOXPnxFHZbDl 7azu5ztcQAfVuxa/1kw/WnwwDDx0hwA13NUK+HNcmUtGbrh/DjwG2x032+UdHUmF qCIN/mHkCoF8BUPLHiB38tw1J3wPNUjm4jQoG96AcYiFVf2d/pbHdo2AHplosHRs go89M+UpELN1h7Ppy4qDuWMME86rtfa7hArqKJFQbdjUVC/wgLkx1tMzJeJLOGfB bgwqiS8jr7CGjsvcgOqfH/qS6iU0glpG98dhTWQaA/OhE9TSzmgQxMW41Qt0eTKr 2Bn1pAhxQ2im3Odue6ou9eNqJLiUi6nDqizUjKakj0SeCs71LqIyGZg58OGo2tSn kaOlAoIBAQCE/fO4vQcJpAJOLwLNePmM9bqAcoZ/9auKjPNO8OrEHPTGZMB+Tscu k+wa9a9RgICiyPgcUec8m0+tpjlAGo+EZRdlZqedWUMviCWQC74MKrD/KK9DG3IB ipfkEX2VmiBD2tm1Z3Z+17XlSuLci/iCmzNnM1XP3GYQSRIt/6Lq23vQjzTfU1z7 4HwOh23Zb0qjW5NG12sFuS9HQx6kskkY8r2UBlRAggP686Z7W+EkzPSKnYMN6cCo 6KkLf3RtlPlDHwq8TUOJlgSLhykbyeCEaDVOkSWhUnU8wJJheS+dMZ5IGbFWZOPA DQ02woOCAdG30ebXSBQL0uB8DL/52sYRAoIBAHtW3NomlxIMqWX8ZYRJIoGharx4 ikTOR/jeETb9t//n6kV19c4ICiXOQp062lwEqFvHkKzxKECFhJZuwFc09hVxUXxC LJjvDfauHWFHcrDTWWbd25CNeZ4Sq79GKf+HJ+Ov87WYcjuBFlCh8ES+2N4WZGCn B5oBq1g6E4p1k6xA5eE6VRiHPuFH8N9t1x6IlCZvZBhuVWdDrDd4qMSDEUTlcxSY mtcAIXTPaPcdb3CjdE5a38r59x7dZ/Te2K7FKETffjSmku7BrJITz3iXEk+sn8ex o3mdnFgeQ6/hxvMGgdK2qNb5ER/s0teFjnfnwHuTSXngMDIDb3kLL0ecWlQ= -----END RSA PRIVATE KEY----- 
glance-16.0.0/glance/tests/var/testserver-no-disk.ova0000666000175100017510000005000013245511421022510 0ustar zuulzuul00000000000000testserver.ovf0000644€±!Ñ00042560000003130712561117144014345 0ustar jjasekxintelall List of the virtual disks used in the package Logical networks used in the package Logical network used by this appliance. A virtual machine The kind of installed guest operating system Ubuntu_64 Ubuntu_64 Virtual hardware requirements for a virtual machine Virtual Hardware Family 0 testserver virtualbox-2.2 1 virtual CPU Number of virtual CPUs 1 virtual CPU 1 3 1 MegaBytes 512 MB of memory Memory Size 512 MB of memory 2 4 512 0 ideController0 IDE Controller ideController0 3 PIIX4 5 1 ideController1 IDE Controller ideController1 4 PIIX4 5 0 sataController0 SATA Controller sataController0 5 AHCI 20 0 usb USB Controller usb 6 23 3 false sound Sound Card sound 7 ensoniq1371 35 0 true cdrom1 CD-ROM Drive cdrom1 8 4 15 0 disk2 Disk Image disk2 /disk/vmdisk2 9 5 17 true Ethernet adapter on 'NAT' NAT Ethernet adapter on 'NAT' 10 E1000 10 Complete VirtualBox machine configuration in VirtualBox format glance-16.0.0/glance/tests/var/testserver-bad-ovf.ova0000666000175100017510000002400013245511421022463 0ustar zuulzuul00000000000000illegal-xml.ovf0000644000175000017500000000007612662226344012147 0ustar otcotc does not match <> testserver-disk1.vmdk0000644000175000017500000000000412562114301013301 0ustar otcotcABCDglance-16.0.0/glance/tests/var/ca.crt0000666000175100017510000000240513245511421017334 0ustar zuulzuul00000000000000-----BEGIN CERTIFICATE----- MIIDiTCCAnGgAwIBAgIJAMj+Lfpqc9lLMA0GCSqGSIb3DQEBCwUAMFsxCzAJBgNV BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMRIwEAYDVQQKDAlPcGVuU3RhY2sx DzANBgNVBAsMBkdsYW5jZTESMBAGA1UEAwwJR2xhbmNlIENBMB4XDTE1MDEzMTA1 MzAyNloXDTI1MDEyODA1MzAyNlowWzELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNv bWUtU3RhdGUxEjAQBgNVBAoMCU9wZW5TdGFjazEPMA0GA1UECwwGR2xhbmNlMRIw EAYDVQQDDAlHbGFuY2UgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB 
AQDcW4cRtw96/ZYsx3UB1jWWT0pAlsMQ03En7dueh9o4UZYChY2NMqTJ3gVqy1vf 4wyRU1ROb/N5L4KdQiJARH/ARbV+qrWoRvkcWBfg9w/4uZ9ZFhCBbaa2cAtTIGzV ta6HP9UPeyfXrS+jgjqU2QN3bcc0ZCMAiQbtW7Vpw8RNr0NvTJDaSCzmpGQ7TQtB 0jXm1nSG7FZUbojUCYB6TBGd01Cg8GzAai3ngXDq6foVJEwfmaV2Zapb0A4FLquX OzebskY5EL/okQGPofSRCu/ar+HV4HN3+PgIIrfa8RhDDdlv6qE1iEuS6isSH1s+ 7BA2ZKfzT5t8G/8lSjKa/r2pAgMBAAGjUDBOMB0GA1UdDgQWBBT3M/WuigtS7JYZ QD0XJEDD8JSZrTAfBgNVHSMEGDAWgBT3M/WuigtS7JYZQD0XJEDD8JSZrTAMBgNV HRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQCWOhC9kBZAJalQhAeNGIiiJ2bV HpvzSCEXSEAdh3A0XDK1KxoMHy1LhNGYrMmN2a+2O3SoX0FLB4p9zOifq4ACwaMD CjQeB/whsfPt5s0gV3mGMCR+V2b8r5H/30KRbIzQGXmy+/r6Wfe012jcVVXsQawW Omd4d+Bduf5iiL1OCKEMepqjQLu7Yg41ucRpUewBA+A9hoKp7jpwSnzSALX7FWEQ TBJtJ9jEnZl36S81eZJvOXSzeptHyomSAt8eGFCVuPB0dZCXuBNLu4Gsn+dIhfyj NwK4noYZXMndPwGy92KDhjxVnHzd9HwImgr6atmWhPPz5hm50BrA7sv06Nto -----END CERTIFICATE----- glance-16.0.0/glance/tests/var/ca.key0000666000175100017510000000325013245511421017333 0ustar zuulzuul00000000000000-----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDcW4cRtw96/ZYs x3UB1jWWT0pAlsMQ03En7dueh9o4UZYChY2NMqTJ3gVqy1vf4wyRU1ROb/N5L4Kd QiJARH/ARbV+qrWoRvkcWBfg9w/4uZ9ZFhCBbaa2cAtTIGzVta6HP9UPeyfXrS+j gjqU2QN3bcc0ZCMAiQbtW7Vpw8RNr0NvTJDaSCzmpGQ7TQtB0jXm1nSG7FZUbojU CYB6TBGd01Cg8GzAai3ngXDq6foVJEwfmaV2Zapb0A4FLquXOzebskY5EL/okQGP ofSRCu/ar+HV4HN3+PgIIrfa8RhDDdlv6qE1iEuS6isSH1s+7BA2ZKfzT5t8G/8l SjKa/r2pAgMBAAECggEABeoS+v+906BAypzj4BO+xnUEWi1xuN7j951juqKM0dwm uZSaEwMb9ysVXCNvKNgwOypQZfaNQ2BqEgx3XOA5yZBVabvtOkIFZ6RZp7kZ3aQl yb9U3BR0WAsz0pxZL3c74vdsoYi9rgVA9ROGvP4CIM96fEZ/xgDnhbFjch5GA4u2 8XQ/kJUwLl0Uzxyo10sqGu3hgMwpM8lpaRW6d5EQ628rJEtA/Wmy5GpyCUhTD/5B jE1IzhjT4T5LqiPjA/Dsmz4Sa0+MyKRmA+zfSH6uS4szSaj53GVMHh4K+Xg2/EeD 6I3hGOtzZuYp5HBHE6J8VgeuErBQf32CCglHqN/dLQKBgQD4XaXa+AZtB10cRUV4 LZDB1AePJLloBhKikeTboZyhZEwbNuvw3JSQBAfUdpx3+8Na3Po1Tfy3DlZaVCU2 0PWh2UYrtwA3dymp8GCuSvnsLz1kNGv0Q7WEYaepyKRO8qHCjrTDUFuGVztU+H6O OWPHRd4DnyF3pKN7K4j6pU76HwKBgQDjIXylwPb6TD9ln13ijJ06t9l1E13dSS0B 
+9QU3f4abjMmW0K7icrNdmsjHafWLGXP2dxB0k4sx448buH+L8uLjC8G80wLQMSJ NAKpxIsmkOMpPUl80ks8bmzsqztmtql6kAgSwSW84vftJyNrFnp2kC2O4ZYGwz1+ 8rj3nBrfNwKBgQDrCJxCyoIyPUy0yy0BnIUnmAILSSKXuV97LvtXiOnTpTmMa339 8pA4dUf/nLtXpA3r98BkH0gu50d6tbR92mMI5bdM+SIgWwk3g33KkrNN+iproFwk zMqC23Mx7ejnuR6xIiEXz/y89eH0+C+zYcX1tz1xSe7+7PO0RK+dGkDR2wKBgHGR L+MtPhDfCSAF9IqvpnpSrR+2BEv+J8wDIAMjEMgka9z06sQc3NOpL17KmD4lyu6H z3L19fK8ASnEg6l2On9XI7iE9HP3+Y1k/SPny3AIKB1ZsKICAG6CBGK+J6BvGwTW ecLu4rC0iCUDWdlUzvzzkGQN9dcBzoDoWoYsft83AoGAAh4MyrM32gwlUgQD8/jX 8rsJlKnme0qMjX4A66caBomjztsH2Qt6cH7DIHx+hU75pnDAuEmR9xqnX7wFTR9Y 0j/XqTVsTjDINRLgMkrg7wIqKtWdicibBx1ER9LzwfNwht/ZFeMLdeUUUYMNv3cg cMSLxlxgFaUggYj/dsF6ypQ= -----END PRIVATE KEY----- glance-16.0.0/glance/tests/var/testserver.ova0000666000175100017510000005000013245511421021146 0ustar zuulzuul00000000000000testserver.ovf0000644€±!Ñ00042560000003210712562113043014337 0ustar jjasekxintelall List of the virtual disks used in the package Logical networks used in the package Logical network used by this appliance. 
A virtual machine The kind of installed guest operating system Ubuntu_64 Ubuntu_64 Virtual hardware requirements for a virtual machine Virtual Hardware Family 0 testserver virtualbox-2.2 1 virtual CPU Number of virtual CPUs 1 virtual CPU 1 3 1 MegaBytes 512 MB of memory Memory Size 512 MB of memory 2 4 512 0 ideController0 IDE Controller ideController0 3 PIIX4 5 1 ideController1 IDE Controller ideController1 4 PIIX4 5 0 sataController0 SATA Controller sataController0 5 AHCI 20 0 usb USB Controller usb 6 23 3 false sound Sound Card sound 7 ensoniq1371 35 0 true cdrom1 CD-ROM Drive cdrom1 8 4 15 0 disk2 Disk Image disk2 /disk/vmdisk2 9 5 17 true Ethernet adapter on 'NAT' NAT Ethernet adapter on 'NAT' 10 E1000 10 DMTF:x86:64 DMTF:x86:VT-d Complete VirtualBox machine configuration in VirtualBox format testserver-disk1.vmdk0000644€±!Ñ00042560000000000412562114301015504 0ustar jjasekxintelallABCDglance-16.0.0/glance/tests/var/certificate.crt0000666000175100017510000001261613245511421021240 0ustar zuulzuul00000000000000# > openssl x509 -in glance/tests/var/certificate.crt -noout -text # Certificate: # Data: # Version: 1 (0x0) # Serial Number: 1 (0x1) # Signature Algorithm: sha1WithRSAEncryption # Issuer: C=AU, ST=Some-State, O=OpenStack, OU=Glance, CN=Glance CA # Validity # Not Before: Feb 2 20:22:13 2015 GMT # Not After : Jan 31 20:22:13 2024 GMT # Subject: C=AU, ST=Some-State, O=OpenStack, OU=Glance, CN=127.0.0.1 # Subject Public Key Info: # Public Key Algorithm: rsaEncryption # RSA Public Key: (4096 bit) # Modulus (4096 bit): # 00:9f:44:13:51:de:e9:5a:f7:ac:33:2a:1a:4c:91: # a1:73:bc:f3:a6:d3:e6:59:ae:e8:e2:34:68:3e:f4: # 40:c1:a1:1a:65:9a:a3:67:e9:2c:b9:79:9c:00:b1: # 7c:c1:e6:9e:de:47:bf:f1:cb:f2:73:d4:c3:62:fe: # 82:90:6f:b4:75:ca:7e:56:8f:99:3d:06:51:3c:40: # f4:ff:74:97:4f:0d:d2:e6:66:76:8d:97:bf:89:ce: # fe:b2:d7:89:71:f2:a0:d9:f5:26:7c:1a:7a:bf:2b: # 8f:72:80:e7:1f:4d:4a:40:a3:b9:9e:33:f6:55:e0: # 40:2b:1e:49:e4:8c:71:9d:11:32:cf:21:41:e1:13: # 
28:c6:d6:f6:e0:b3:26:10:6d:5b:63:1d:c3:ee:d0: # c4:66:63:38:89:6b:8f:2a:c2:bd:4f:e4:bc:03:8f: # a2:f2:5c:1d:73:11:9c:7b:93:3d:d6:a3:d1:2d:cd: # 64:23:24:bc:65:3c:71:20:28:60:a0:ea:fe:77:0e: # 1d:95:36:76:ad:e7:2f:1c:27:62:55:e3:9d:11:c1: # fb:43:3e:e5:21:ac:fd:0e:7e:3d:c9:44:d2:bd:6f: # 89:7e:0f:cb:88:54:57:fd:8d:21:c8:34:e1:47:01: # 28:0f:45:a1:7e:60:1a:9c:4c:0c:b8:c1:37:2d:46: # ab:18:9e:ca:49:d3:77:b7:92:3a:d2:7f:ca:d5:02: # f1:75:81:66:39:51:aa:bc:d7:f0:91:23:69:e8:71: # ae:44:76:5e:87:54:eb:72:fc:ac:fd:60:22:e0:6a: # e4:ad:37:b7:f6:e5:24:b4:95:2c:26:0e:75:a0:e9: # ed:57:be:37:42:64:1f:02:49:0c:bd:5d:74:6d:e6: # f2:da:5c:54:82:fa:fc:ff:3a:e4:1a:7a:a9:3c:3d: # ee:b5:df:09:0c:69:c3:51:92:67:80:71:9b:10:8b: # 20:ff:a2:5e:c5:f2:86:a0:06:65:1c:42:f9:91:24: # 54:29:ed:7e:ec:db:4c:7b:54:ee:b1:25:1b:38:53: # ae:01:b6:c5:93:1e:a3:4d:1b:e8:73:47:50:57:e8: # ec:a0:80:53:b1:34:74:37:9a:c1:8c:14:64:2e:16: # dd:a1:2e:d3:45:3e:2c:46:62:20:2a:93:7a:92:4c: # b2:cc:64:47:ad:63:32:0b:68:0c:24:98:20:83:08: # 35:74:a7:68:7a:ef:d6:84:07:d1:5e:d7:c0:6c:3f: # a7:4a:78:62:a8:70:75:37:fb:ce:1f:09:1e:7c:11: # 35:cc:b3:5a:a3:cc:3f:35:c9:ee:24:6f:63:f8:54: # 6f:7c:5b:b4:76:3d:f2:81:6d:ad:64:66:10:d0:c4: # 0b:2c:2f # Exponent: 65537 (0x10001) # Signature Algorithm: sha1WithRSAEncryption # 5f:e8:a8:93:20:6c:0f:12:90:a6:e2:64:21:ed:63:0e:8c:e0: # 0f:d5:04:13:4d:2a:e9:a5:91:b7:e4:51:94:bd:0a:70:4b:94: # c7:1c:94:ed:d7:64:95:07:6b:a1:4a:bc:0b:53:b5:1a:7e:f1: # 9c:12:59:24:5f:36:72:34:ca:33:ee:28:46:fd:21:e6:52:19: # 0c:3d:94:6b:bd:cb:76:a1:45:7f:30:7b:71:f1:84:b6:3c:e0: # ac:af:13:81:9c:0e:6e:3c:9b:89:19:95:de:8e:9c:ef:70:ac: # 07:ae:74:42:47:35:50:88:36:ec:32:1a:55:24:08:f2:44:57: # 67:fe:0a:bb:6b:a7:bd:bc:af:bf:2a:e4:dd:53:84:6b:de:1d: # 2a:28:21:38:06:7a:5b:d8:83:15:65:31:6d:61:67:00:9e:1a: # 61:85:15:a2:4c:9a:eb:6d:59:8e:34:ac:2c:d5:24:4e:00:ff: # 30:4d:a3:d5:80:63:17:52:65:ac:7f:f4:0a:8e:56:a4:97:51: # 39:81:ae:e8:cb:52:09:b3:47:b4:fd:1b:e2:04:f9:f2:76:e3: # 
63:ef:90:aa:54:98:96:05:05:a9:91:76:18:ed:5d:9e:6e:88: # 50:9a:f7:2c:ce:5e:54:ba:15:ec:62:ff:5d:be:af:35:03:b1: # 3f:32:3e:0e -----BEGIN CERTIFICATE----- MIIEKjCCAxICAQEwDQYJKoZIhvcNAQEFBQAwWzELMAkGA1UEBhMCQVUxEzARBgNV BAgMClNvbWUtU3RhdGUxEjAQBgNVBAoMCU9wZW5TdGFjazEPMA0GA1UECwwGR2xh bmNlMRIwEAYDVQQDDAlHbGFuY2UgQ0EwHhcNMTUwMjAyMjAyMjEzWhcNMjQwMTMx MjAyMjEzWjBbMQswCQYDVQQGEwJBVTETMBEGA1UECBMKU29tZS1TdGF0ZTESMBAG A1UEChMJT3BlblN0YWNrMQ8wDQYDVQQLEwZHbGFuY2UxEjAQBgNVBAMTCTEyNy4w LjAuMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAJ9EE1He6Vr3rDMq GkyRoXO886bT5lmu6OI0aD70QMGhGmWao2fpLLl5nACxfMHmnt5Hv/HL8nPUw2L+ gpBvtHXKflaPmT0GUTxA9P90l08N0uZmdo2Xv4nO/rLXiXHyoNn1Jnwaer8rj3KA 5x9NSkCjuZ4z9lXgQCseSeSMcZ0RMs8hQeETKMbW9uCzJhBtW2Mdw+7QxGZjOIlr jyrCvU/kvAOPovJcHXMRnHuTPdaj0S3NZCMkvGU8cSAoYKDq/ncOHZU2dq3nLxwn YlXjnRHB+0M+5SGs/Q5+PclE0r1viX4Py4hUV/2NIcg04UcBKA9FoX5gGpxMDLjB Ny1GqxieyknTd7eSOtJ/ytUC8XWBZjlRqrzX8JEjaehxrkR2XodU63L8rP1gIuBq 5K03t/blJLSVLCYOdaDp7Ve+N0JkHwJJDL1ddG3m8tpcVIL6/P865Bp6qTw97rXf CQxpw1GSZ4BxmxCLIP+iXsXyhqAGZRxC+ZEkVCntfuzbTHtU7rElGzhTrgG2xZMe o00b6HNHUFfo7KCAU7E0dDeawYwUZC4W3aEu00U+LEZiICqTepJMssxkR61jMgto DCSYIIMINXSnaHrv1oQH0V7XwGw/p0p4YqhwdTf7zh8JHnwRNcyzWqPMPzXJ7iRv Y/hUb3xbtHY98oFtrWRmENDECywvAgMBAAEwDQYJKoZIhvcNAQEFBQADggEBAF/o qJMgbA8SkKbiZCHtYw6M4A/VBBNNKumlkbfkUZS9CnBLlMcclO3XZJUHa6FKvAtT tRp+8ZwSWSRfNnI0yjPuKEb9IeZSGQw9lGu9y3ahRX8we3HxhLY84KyvE4GcDm48 m4kZld6OnO9wrAeudEJHNVCINuwyGlUkCPJEV2f+Crtrp728r78q5N1ThGveHSoo ITgGelvYgxVlMW1hZwCeGmGFFaJMmuttWY40rCzVJE4A/zBNo9WAYxdSZax/9AqO VqSXUTmBrujLUgmzR7T9G+IE+fJ242PvkKpUmJYFBamRdhjtXZ5uiFCa9yzOXlS6 Fexi/12+rzUDsT8yPg4= -----END CERTIFICATE----- glance-16.0.0/glance/tests/var/testserver-no-ovf.ova0000666000175100017510000002400013245511421022351 0ustar zuulzuul00000000000000testserver-disk1.vmdk0000644€±!Ñ00042560000000000512561140034015506 0ustar jjasekxintelallABCD glance-16.0.0/glance/tests/integration/0000775000175100017510000000000013245511661017775 5ustar 
zuulzuul00000000000000glance-16.0.0/glance/tests/integration/v2/0000775000175100017510000000000013245511661020324 5ustar zuulzuul00000000000000glance-16.0.0/glance/tests/integration/v2/base.py0000666000175100017510000001573613245511421021610 0ustar zuulzuul00000000000000# Copyright 2013 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import atexit
import os.path
import tempfile

import fixtures
import glance_store
from oslo_config import cfg
from oslo_db import options

import glance.common.client
from glance.common import config
import glance.db.sqlalchemy.api
import glance.registry.client.v1.client
from glance import tests as glance_tests
from glance.tests import utils as test_utils


TESTING_API_PASTE_CONF = """
[pipeline:glance-api]
pipeline = versionnegotiation gzip unauthenticated-context rootapp

[pipeline:glance-api-caching]
pipeline = versionnegotiation gzip unauthenticated-context cache rootapp

[pipeline:glance-api-cachemanagement]
pipeline = versionnegotiation gzip unauthenticated-context cache cache_manage rootapp

[pipeline:glance-api-fakeauth]
pipeline = versionnegotiation gzip fakeauth context rootapp

[pipeline:glance-api-noauth]
pipeline = versionnegotiation gzip context rootapp

[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/: apiversions
/v1: apiv1app
/v2: apiv2app

[app:apiversions]
paste.app_factory = glance.api.versions:create_resource

[app:apiv1app]
paste.app_factory = glance.api.v1.router:API.factory

[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory

[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory

[filter:gzip]
paste.filter_factory = glance.api.middleware.gzip:GzipMiddleware.factory

[filter:cache]
paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory

[filter:cache_manage]
paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory

[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory

[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory

[filter:fakeauth]
paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory
"""

TESTING_REGISTRY_PASTE_CONF = """
[pipeline:glance-registry]
pipeline = unauthenticated-context registryapp

[pipeline:glance-registry-fakeauth]
pipeline = fakeauth context registryapp

[app:registryapp]
paste.app_factory = glance.registry.api.v1:API.factory

[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory

[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory

[filter:fakeauth]
paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory
"""

CONF = cfg.CONF


class ApiTest(test_utils.BaseTestCase):

    def setUp(self):
        super(ApiTest, self).setUp()
        self.test_dir = self.useFixture(fixtures.TempDir()).path
        self._configure_logging()
        self._setup_database()
        self._setup_stores()
        self._setup_property_protection()
        self.glance_registry_app = self._load_paste_app(
            'glance-registry',
            flavor=getattr(self, 'registry_flavor', ''),
            conf=getattr(self, 'registry_paste_conf',
                         TESTING_REGISTRY_PASTE_CONF),
        )
        self._connect_registry_client()
        self.glance_api_app = self._load_paste_app(
            'glance-api',
            flavor=getattr(self, 'api_flavor', ''),
            conf=getattr(self, 'api_paste_conf', TESTING_API_PASTE_CONF),
        )
        self.http = test_utils.Httplib2WsgiAdapter(self.glance_api_app)

    def _setup_property_protection(self):
        self._copy_data_file('property-protections.conf', self.test_dir)
        self.property_file = os.path.join(self.test_dir,
                                          'property-protections.conf')

    def _configure_logging(self):
        self.config(default_log_levels=[
            'amqplib=WARN',
            'sqlalchemy=WARN',
            'boto=WARN',
            'suds=INFO',
            'keystone=INFO',
            'eventlet.wsgi.server=DEBUG'
        ])

    def _setup_database(self):
        sql_connection = 'sqlite:////%s/tests.sqlite' % self.test_dir
        options.set_defaults(CONF, connection=sql_connection)
        glance.db.sqlalchemy.api.clear_db_env()
        glance_db_env = 'GLANCE_DB_TEST_SQLITE_FILE'
        if glance_db_env in os.environ:
            # use the empty db created and cached as a tempfile
            # instead of spending the time creating a new one
            db_location = os.environ[glance_db_env]
            test_utils.execute('cp %s %s/tests.sqlite' %
                               (db_location, self.test_dir))
        else:
            test_utils.db_sync()

            # copy the clean db to a temp location so that it
            # can be reused for future tests
            (osf, db_location) = tempfile.mkstemp()
            os.close(osf)
            test_utils.execute('cp %s/tests.sqlite %s' %
                               (self.test_dir, db_location))
            os.environ[glance_db_env] = db_location

            # cleanup the temp file when the test suite is
            # complete
            def _delete_cached_db():
                try:
                    os.remove(os.environ[glance_db_env])
                except Exception:
                    glance_tests.logger.exception(
                        "Error cleaning up the file %s" %
                        os.environ[glance_db_env])

            atexit.register(_delete_cached_db)

    def _setup_stores(self):
        glance_store.register_opts(CONF)
        image_dir = os.path.join(self.test_dir, "images")
        self.config(group='glance_store',
                    filesystem_store_datadir=image_dir)
        glance_store.create_stores()

    def _load_paste_app(self, name, flavor, conf):
        conf_file_path = os.path.join(self.test_dir, '%s-paste.ini' % name)
        with open(conf_file_path, 'w') as conf_file:
            conf_file.write(conf)
            conf_file.flush()
        return config.load_paste_app(name, flavor=flavor,
                                     conf_file=conf_file_path)

    def _connect_registry_client(self):
        def get_connection_type(self2):
            def wrapped(*args, **kwargs):
                return test_utils.HttplibWsgiAdapter(self.glance_registry_app)
            return wrapped

        self.stubs.Set(glance.common.client.BaseClient,
                       'get_connection_type', get_connection_type)

    def tearDown(self):
        glance.db.sqlalchemy.api.clear_db_env()
        super(ApiTest, self).tearDown()
glance-16.0.0/glance/tests/integration/v2/test_property_quota_violations.py0000666000175100017510000001267113245511421027304 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_serialization import jsonutils
from six.moves import http_client
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range

from glance.tests.integration.v2 import base

CONF = cfg.CONF


class TestPropertyQuotaViolations(base.ApiTest):
    def __init__(self, *args, **kwargs):
        super(TestPropertyQuotaViolations, self).__init__(*args, **kwargs)
        self.api_flavor = 'noauth'
        self.registry_flavor = 'fakeauth'

    def _headers(self, custom_headers=None):
        base_headers = {
            'X-Identity-Status': 'Confirmed',
            'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
            'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
            'X-Tenant-Id': "foo",
            'X-Roles': 'member',
        }
        base_headers.update(custom_headers or {})
        return base_headers

    def _get(self, image_id=""):
        path = ('/v2/images/%s' % image_id).rstrip('/')
        rsp, content = self.http.request(path, 'GET', headers=self._headers())
        self.assertEqual(http_client.OK, rsp.status)
        content = jsonutils.loads(content)
        return content

    def _create_image(self, body):
        path = '/v2/images'
        headers = self._headers({'content-type': 'application/json'})
        rsp, content = self.http.request(path, 'POST', headers=headers,
                                         body=jsonutils.dumps(body))
        self.assertEqual(http_client.CREATED, rsp.status)
        return jsonutils.loads(content)

    def _patch(self, image_id, body, expected_status):
        path = '/v2/images/%s' % image_id
        media_type = 'application/openstack-images-v2.1-json-patch'
        headers = self._headers({'content-type': media_type})
        rsp, content = self.http.request(path, 'PATCH', headers=headers,
                                         body=jsonutils.dumps(body))
        self.assertEqual(expected_status, rsp.status, content)
        return content

    def test_property_ops_when_quota_violated(self):
        # Image list must be empty to begin with
        image_list = self._get()['images']
        self.assertEqual(0, len(image_list))

        orig_property_quota = 10
        CONF.set_override('image_property_quota', orig_property_quota)

        # Create an image (with deployer-defined properties)
        req_body = {'name': 'testimg',
                    'disk_format': 'aki',
                    'container_format': 'aki'}
        for i in range(orig_property_quota):
            req_body['k_%d' % i] = 'v_%d' % i
        image = self._create_image(req_body)
        image_id = image['id']
        for i in range(orig_property_quota):
            self.assertEqual('v_%d' % i, image['k_%d' % i])

        # Now reduce property quota. We should be allowed to modify/delete
        # existing properties (even if the result still exceeds property quota)
        # but not add new properties nor replace existing properties with new
        # properties (as long as we're over the quota)
        self.config(image_property_quota=2)

        patch_body = [{'op': 'replace', 'path': '/k_4', 'value': 'v_4.new'}]
        image = jsonutils.loads(self._patch(image_id, patch_body,
                                            http_client.OK))
        self.assertEqual('v_4.new', image['k_4'])

        patch_body = [{'op': 'remove', 'path': '/k_7'}]
        image = jsonutils.loads(self._patch(image_id, patch_body,
                                            http_client.OK))
        self.assertNotIn('k_7', image)

        patch_body = [{'op': 'add', 'path': '/k_100', 'value': 'v_100'}]
        self._patch(image_id, patch_body,
                    http_client.REQUEST_ENTITY_TOO_LARGE)
        image = self._get(image_id)
        self.assertNotIn('k_100', image)

        patch_body = [
            {'op': 'remove', 'path': '/k_5'},
            {'op': 'add', 'path': '/k_100', 'value': 'v_100'},
        ]
        self._patch(image_id, patch_body,
                    http_client.REQUEST_ENTITY_TOO_LARGE)
        image = self._get(image_id)
        self.assertNotIn('k_100', image)
        self.assertIn('k_5', image)

        # temporary violations to property quota should be allowed as long as
        # it's within one PATCH request and the end result does not violate
        # quotas.
        patch_body = [{'op': 'add', 'path': '/k_100', 'value': 'v_100'},
                      {'op': 'add', 'path': '/k_99', 'value': 'v_99'}]
        to_rm = ['k_%d' % i for i in range(orig_property_quota) if i != 7]
        patch_body.extend([{'op': 'remove', 'path': '/%s' % k}
                           for k in to_rm])
        image = jsonutils.loads(self._patch(image_id, patch_body,
                                            http_client.OK))
        self.assertEqual('v_99', image['k_99'])
        self.assertEqual('v_100', image['k_100'])
        for k in to_rm:
            self.assertNotIn(k, image)
glance-16.0.0/glance/tests/integration/v2/__init__.py0000666000175100017510000000000013245511421022417 0ustar zuulzuul00000000000000glance-16.0.0/glance/tests/integration/v2/test_tasks_api.py0000666000175100017510000005057013245511421023716 0ustar zuulzuul00000000000000# Copyright 2013 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet from oslo_serialization import jsonutils as json from six.moves import http_client from glance.api.v2 import tasks from glance.common import timeutils from glance.tests.integration.v2 import base TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df' TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81' TENANT3 = '5a3e60e8-cfa9-4a9e-a90a-62b42cea92b8' TENANT4 = 'c6c87f25-8a94-47ed-8c83-053c25f42df4' def minimal_task_headers(owner='tenant1'): headers = { 'X-Auth-Token': 'user1:%s:admin' % owner, 'Content-Type': 'application/json', } return headers def _new_task_fixture(**kwargs): task_data = { "type": "import", "input": { "import_from": "http://example.com", "import_from_format": "qcow2", "image_properties": { 'disk_format': 'vhd', 'container_format': 'ovf' } } } task_data.update(kwargs) return task_data class TestTasksApi(base.ApiTest): def __init__(self, *args, **kwargs): super(TestTasksApi, self).__init__(*args, **kwargs) self.api_flavor = 'fakeauth' self.registry_flavor = 'fakeauth' def _wait_on_task_execution(self, max_wait=5): """Wait until all the tasks have finished execution and are in state of success or failure. """ start = timeutils.utcnow() # wait for maximum of seconds defined by max_wait while timeutils.delta_seconds(start, timeutils.utcnow()) < max_wait: wait = False # Verify that no task is in status of pending or processing path = "/v2/tasks" res, content = self.http.request(path, 'GET', headers=minimal_task_headers()) content_dict = json.loads(content) self.assertEqual(http_client.OK, res.status) res_tasks = content_dict['tasks'] if len(res_tasks) != 0: for task in res_tasks: if task['status'] in ('pending', 'processing'): wait = True break if wait: # Bug #1541487: we must give time to the server to execute the # task, but the server is run in the same process than the # test. Use eventlet to give the control to the pending server # task. 
eventlet.sleep(0.05) continue else: break def _post_new_task(self, **kwargs): task_owner = kwargs.get('owner') headers = minimal_task_headers(task_owner) task_data = _new_task_fixture() task_data['input']['import_from'] = "http://example.com" body_content = json.dumps(task_data) path = "/v2/tasks" response, content = self.http.request(path, 'POST', headers=headers, body=body_content) self.assertEqual(http_client.CREATED, response.status) task = json.loads(content) task_id = task['id'] self.assertIsNotNone(task_id) self.assertEqual(task_owner, task['owner']) self.assertEqual(task_data['type'], task['type']) self.assertEqual(task_data['input'], task['input']) self.assertEqual("http://localhost" + path + "/" + task_id, response.webob_resp.headers['Location']) return task, task_data def test_all_task_api(self): # 0. GET /tasks # Verify no tasks path = "/v2/tasks" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) content_dict = json.loads(content) self.assertEqual(http_client.OK, response.status) self.assertFalse(content_dict['tasks']) # 1. GET /tasks/{task_id} # Verify non-existent task task_id = 'NON_EXISTENT_TASK' path = "/v2/tasks/%s" % task_id response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.NOT_FOUND, response.status) # 2. POST /tasks # Create a new task task_owner = 'tenant1' data, req_input = self._post_new_task(owner=task_owner) # 3. GET /tasks/{task_id} # Get an existing task task_id = data['id'] path = "/v2/tasks/%s" % task_id response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) # NOTE(sabari): wait for all task executions to finish before checking # task status. self._wait_on_task_execution(max_wait=10) # 4. 
GET /tasks # Get all tasks (not deleted) path = "/v2/tasks" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) self.assertIsNotNone(content) data = json.loads(content) self.assertIsNotNone(data) self.assertEqual(1, len(data['tasks'])) # NOTE(venkatesh) find a way to get expected_keys from tasks controller expected_keys = set(['id', 'expires_at', 'type', 'owner', 'status', 'created_at', 'updated_at', 'self', 'schema']) task = data['tasks'][0] self.assertEqual(expected_keys, set(task.keys())) self.assertEqual(req_input['type'], task['type']) self.assertEqual(task_owner, task['owner']) self.assertEqual('success', task['status']) self.assertIsNotNone(task['created_at']) self.assertIsNotNone(task['updated_at']) def test_task_schema_api(self): # 0. GET /schemas/task # Verify schema for task path = "/v2/schemas/task" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) schema = tasks.get_task_schema() expected_schema = schema.minimal() data = json.loads(content) self.assertIsNotNone(data) self.assertEqual(expected_schema, data) # 1. GET /schemas/tasks # Verify schema for tasks path = "/v2/schemas/tasks" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) schema = tasks.get_collection_schema() expected_schema = schema.minimal() data = json.loads(content) self.assertIsNotNone(data) self.assertEqual(expected_schema, data) # NOTE(nikhil): wait for all task executions to finish before exiting # else there is a risk of running into deadlock self._wait_on_task_execution() def test_create_new_task(self): # 0. 
POST /tasks # Create a new task with valid input and type task_data = _new_task_fixture() task_owner = 'tenant1' body_content = json.dumps(task_data) path = "/v2/tasks" response, content = self.http.request( path, 'POST', headers=minimal_task_headers(task_owner), body=body_content) self.assertEqual(http_client.CREATED, response.status) data = json.loads(content) task_id = data['id'] self.assertIsNotNone(task_id) self.assertEqual(task_owner, data['owner']) self.assertEqual(task_data['type'], data['type']) self.assertEqual(task_data['input'], data['input']) # 1. POST /tasks # Create a new task with invalid type # Expect BadRequest(400) Error as response task_data = _new_task_fixture(type='invalid') task_owner = 'tenant1' body_content = json.dumps(task_data) path = "/v2/tasks" response, content = self.http.request( path, 'POST', headers=minimal_task_headers(task_owner), body=body_content) self.assertEqual(http_client.BAD_REQUEST, response.status) # 1. POST /tasks # Create a new task with invalid input for type 'import' # Expect BadRequest(400) Error as response task_data = _new_task_fixture(task_input='{something: invalid}') task_owner = 'tenant1' body_content = json.dumps(task_data) path = "/v2/tasks" response, content = self.http.request( path, 'POST', headers=minimal_task_headers(task_owner), body=body_content) self.assertEqual(http_client.BAD_REQUEST, response.status) # NOTE(nikhil): wait for all task executions to finish before exiting # else there is a risk of running into deadlock self._wait_on_task_execution() def test_tasks_with_filter(self): # 0. GET /v2/tasks # Verify no tasks path = "/v2/tasks" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) content_dict = json.loads(content) self.assertFalse(content_dict['tasks']) task_ids = [] # 1. 
Make 2 POST requests on /tasks with various attributes task_owner = TENANT1 data, req_input1 = self._post_new_task(owner=task_owner) task_ids.append(data['id']) task_owner = TENANT2 data, req_input2 = self._post_new_task(owner=task_owner) task_ids.append(data['id']) # 2. GET /tasks # Verify two import tasks path = "/v2/tasks" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) content_dict = json.loads(content) self.assertEqual(2, len(content_dict['tasks'])) # 3. GET /tasks with owner filter # Verify correct task returned with owner params = "owner=%s" % TENANT1 path = "/v2/tasks?%s" % params response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) content_dict = json.loads(content) self.assertEqual(1, len(content_dict['tasks'])) self.assertEqual(TENANT1, content_dict['tasks'][0]['owner']) # Check the same for different owner. params = "owner=%s" % TENANT2 path = "/v2/tasks?%s" % params response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) content_dict = json.loads(content) self.assertEqual(1, len(content_dict['tasks'])) self.assertEqual(TENANT2, content_dict['tasks'][0]['owner']) # 4. 
GET /tasks with type filter # Verify correct task returned with type params = "type=import" path = "/v2/tasks?%s" % params response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) content_dict = json.loads(content) self.assertEqual(2, len(content_dict['tasks'])) actual_task_ids = [task['id'] for task in content_dict['tasks']] self.assertEqual(set(task_ids), set(actual_task_ids)) # NOTE(nikhil): wait for all task executions to finish before exiting # else there is a risk of running into deadlock self._wait_on_task_execution() def test_limited_tasks(self): """ Ensure marker and limit query params work """ # 0. GET /tasks # Verify no tasks path = "/v2/tasks" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) tasks = json.loads(content) self.assertFalse(tasks['tasks']) task_ids = [] # 1. POST /tasks with three tasks with various attributes task, _ = self._post_new_task(owner=TENANT1) task_ids.append(task['id']) task, _ = self._post_new_task(owner=TENANT2) task_ids.append(task['id']) task, _ = self._post_new_task(owner=TENANT3) task_ids.append(task['id']) # 2. GET /tasks # Verify 3 tasks are returned path = "/v2/tasks" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) tasks = json.loads(content)['tasks'] self.assertEqual(3, len(tasks)) # 3. GET /tasks with limit of 2 # Verify only two tasks were returned params = "limit=2" path = "/v2/tasks?%s" % params response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) actual_tasks = json.loads(content)['tasks'] self.assertEqual(2, len(actual_tasks)) self.assertEqual(tasks[0]['id'], actual_tasks[0]['id']) self.assertEqual(tasks[1]['id'], actual_tasks[1]['id']) # 4. 
GET /tasks with marker # Verify only two tasks were returned params = "marker=%s" % tasks[0]['id'] path = "/v2/tasks?%s" % params response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) actual_tasks = json.loads(content)['tasks'] self.assertEqual(2, len(actual_tasks)) self.assertEqual(tasks[1]['id'], actual_tasks[0]['id']) self.assertEqual(tasks[2]['id'], actual_tasks[1]['id']) # 5. GET /tasks with marker and limit # Verify only one task was returned with the correct id params = "limit=1&marker=%s" % tasks[1]['id'] path = "/v2/tasks?%s" % params response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) actual_tasks = json.loads(content)['tasks'] self.assertEqual(1, len(actual_tasks)) self.assertEqual(tasks[2]['id'], actual_tasks[0]['id']) # NOTE(nikhil): wait for all task executions to finish before exiting # else there is a risk of running into deadlock self._wait_on_task_execution() def test_ordered_tasks(self): # 0. GET /tasks # Verify no tasks path = "/v2/tasks" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) tasks = json.loads(content) self.assertFalse(tasks['tasks']) task_ids = [] # 1. POST /tasks with three tasks with various attributes task, _ = self._post_new_task(owner=TENANT1) task_ids.append(task['id']) task, _ = self._post_new_task(owner=TENANT2) task_ids.append(task['id']) task, _ = self._post_new_task(owner=TENANT3) task_ids.append(task['id']) # 2. GET /tasks with no query params # Verify three tasks sorted by created_at desc # 2. 
GET /tasks # Verify 3 tasks are returned path = "/v2/tasks" response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) actual_tasks = json.loads(content)['tasks'] self.assertEqual(3, len(actual_tasks)) self.assertEqual(task_ids[2], actual_tasks[0]['id']) self.assertEqual(task_ids[1], actual_tasks[1]['id']) self.assertEqual(task_ids[0], actual_tasks[2]['id']) # 3. GET /tasks sorted by owner asc params = 'sort_key=owner&sort_dir=asc' path = '/v2/tasks?%s' % params response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) expected_task_owners = [TENANT1, TENANT2, TENANT3] expected_task_owners.sort() actual_tasks = json.loads(content)['tasks'] self.assertEqual(3, len(actual_tasks)) self.assertEqual(expected_task_owners, [t['owner'] for t in actual_tasks]) # 4. GET /tasks sorted by owner desc with a marker params = 'sort_key=owner&sort_dir=desc&marker=%s' % task_ids[0] path = '/v2/tasks?%s' % params response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) actual_tasks = json.loads(content)['tasks'] self.assertEqual(2, len(actual_tasks)) self.assertEqual(task_ids[2], actual_tasks[0]['id']) self.assertEqual(task_ids[1], actual_tasks[1]['id']) self.assertEqual(TENANT3, actual_tasks[0]['owner']) self.assertEqual(TENANT2, actual_tasks[1]['owner']) # 5. 
GET /tasks sorted by owner asc with a marker params = 'sort_key=owner&sort_dir=asc&marker=%s' % task_ids[0] path = '/v2/tasks?%s' % params response, content = self.http.request(path, 'GET', headers=minimal_task_headers()) self.assertEqual(http_client.OK, response.status) actual_tasks = json.loads(content)['tasks'] self.assertEqual(0, len(actual_tasks)) # NOTE(nikhil): wait for all task executions to finish before exiting # else there is a risk of running into deadlock self._wait_on_task_execution() def test_delete_task(self): # 0. POST /tasks # Create a new task with valid input and type task_data = _new_task_fixture() task_owner = 'tenant1' body_content = json.dumps(task_data) path = "/v2/tasks" response, content = self.http.request( path, 'POST', headers=minimal_task_headers(task_owner), body=body_content) self.assertEqual(http_client.CREATED, response.status) data = json.loads(content) task_id = data['id'] # 1. DELETE on /tasks/{task_id} # Attempt to delete a task path = "/v2/tasks/%s" % task_id response, content = self.http.request(path, 'DELETE', headers=minimal_task_headers()) self.assertEqual(http_client.METHOD_NOT_ALLOWED, response.status) self.assertEqual('GET', response.webob_resp.headers.get('Allow')) self.assertEqual(('GET',), response.webob_resp.allow) self.assertEqual(('GET',), response.allow) # 2. 
GET /tasks/{task_id}
        # Ensure that methods mentioned in the Allow header work
        path = "/v2/tasks/%s" % task_id
        response, content = self.http.request(path, 'GET',
                                              headers=minimal_task_headers())
        self.assertEqual(http_client.OK, response.status)
        self.assertIsNotNone(content)

        # NOTE(nikhil): wait for all task executions to finish before exiting
        # else there is a risk of running into deadlock
        self._wait_on_task_execution()

glance-16.0.0/glance/tests/integration/__init__.py
glance-16.0.0/glance/tests/integration/legacy_functional/
glance-16.0.0/glance/tests/integration/legacy_functional/test_v1_api.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime
import hashlib
import os
import tempfile

from oslo_serialization import jsonutils
from oslo_utils import units
from six.moves import http_client
import testtools

from glance.common import timeutils
from glance.tests.integration.legacy_functional import base
from glance.tests.utils import minimal_headers

FIVE_KB = 5 * units.Ki
FIVE_GB = 5 * units.Gi


class TestApi(base.ApiTest):
    def test_get_head_simple_post(self):
        # 0. GET /images
        # Verify no public images
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('{"images": []}', content)

        # 1. GET /images/detail
        # Verify no public images
        path = "/v1/images/detail"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('{"images": []}', content)

        # 2. POST /images with public image named Image1
        # attribute and no custom properties.
        # Verify a 201 Created is returned
        image_data = b"*" * FIVE_KB
        headers = minimal_headers('Image1')
        path = "/v1/images"
        response, content = self.http.request(path, 'POST', headers=headers,
                                              body=image_data)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['image']['id']
        self.assertEqual(hashlib.md5(image_data).hexdigest(),
                         data['image']['checksum'])
        self.assertEqual(FIVE_KB, data['image']['size'])
        self.assertEqual("Image1", data['image']['name'])
        self.assertTrue(data['image']['is_public'])

        # 3. HEAD image
        # Verify image found now
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'HEAD')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual("Image1", response['x-image-meta-name'])

        # 4. GET image
        # Verify all information on image we just added is correct
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)

        expected_image_headers = {
            'x-image-meta-id': image_id,
            'x-image-meta-name': 'Image1',
            'x-image-meta-is_public': 'True',
            'x-image-meta-status': 'active',
            'x-image-meta-disk_format': 'raw',
            'x-image-meta-container_format': 'ovf',
            'x-image-meta-size': str(FIVE_KB)}

        expected_std_headers = {
            'content-length': str(FIVE_KB),
            'content-type': 'application/octet-stream'}

        for expected_key, expected_value in expected_image_headers.items():
            self.assertEqual(expected_value, response[expected_key],
                             "For key '%s' expected header value '%s'. "
                             "Got '%s'" % (expected_key,
                                           expected_value,
                                           response[expected_key]))

        for expected_key, expected_value in expected_std_headers.items():
            self.assertEqual(expected_value, response[expected_key],
                             "For key '%s' expected header value '%s'. "
                             "Got '%s'" % (expected_key,
                                           expected_value,
                                           response[expected_key]))

        content = content.encode('utf-8')
        self.assertEqual(image_data, content)
        self.assertEqual(hashlib.md5(image_data).hexdigest(),
                         hashlib.md5(content).hexdigest())

        # 5. GET /images
        # Verify one public image
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)

        expected_result = {"images": [
            {"container_format": "ovf",
             "disk_format": "raw",
             "id": image_id,
             "name": "Image1",
             "checksum": "c2e5db72bd7fd153f53ede5da5a06de3",
             "size": 5120}]}
        self.assertEqual(expected_result, jsonutils.loads(content))

        # 6. GET /images/detail
        # Verify image and all its metadata
        path = "/v1/images/detail"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)

        expected_image = {
            "status": "active",
            "name": "Image1",
            "deleted": False,
            "container_format": "ovf",
            "disk_format": "raw",
            "id": image_id,
            "is_public": True,
            "deleted_at": None,
            "properties": {},
            "size": 5120}

        image = jsonutils.loads(content)

        for expected_key, expected_value in expected_image.items():
            self.assertEqual(expected_value, image['images'][0][expected_key],
                             "For key '%s' expected header value '%s'. "
                             "Got '%s'" % (expected_key,
                                           expected_value,
                                           image['images'][0][expected_key]))

        # 7. PUT image with custom properties of "distro" and "arch"
        # Verify 200 returned
        headers = {'X-Image-Meta-Property-Distro': 'Ubuntu',
                   'X-Image-Meta-Property-Arch': 'x86_64'}
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT', headers=headers)
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual("x86_64", data['image']['properties']['arch'])
        self.assertEqual("Ubuntu", data['image']['properties']['distro'])

        # 8. GET /images/detail
        # Verify image and all its metadata
        path = "/v1/images/detail"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)

        expected_image = {
            "status": "active",
            "name": "Image1",
            "deleted": False,
            "container_format": "ovf",
            "disk_format": "raw",
            "id": image_id,
            "is_public": True,
            "deleted_at": None,
            "properties": {'distro': 'Ubuntu', 'arch': 'x86_64'},
            "size": 5120}

        image = jsonutils.loads(content)

        for expected_key, expected_value in expected_image.items():
            self.assertEqual(expected_value, image['images'][0][expected_key],
                             "For key '%s' expected header value '%s'. "
                             "Got '%s'" % (expected_key,
                                           expected_value,
                                           image['images'][0][expected_key]))

        # 9. PUT image and remove a previously existing property.
        headers = {'X-Image-Meta-Property-Arch': 'x86_64'}
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT', headers=headers)
        self.assertEqual(http_client.OK, response.status)

        path = "/v1/images/detail"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)['images'][0]
        self.assertEqual(1, len(data['properties']))
        self.assertEqual("x86_64", data['properties']['arch'])

        # 10. PUT image and add a previously deleted property.
        headers = {'X-Image-Meta-Property-Distro': 'Ubuntu',
                   'X-Image-Meta-Property-Arch': 'x86_64'}
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT', headers=headers)
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)

        path = "/v1/images/detail"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)['images'][0]
        self.assertEqual(2, len(data['properties']))
        self.assertEqual("x86_64", data['properties']['arch'])
        self.assertEqual("Ubuntu", data['properties']['distro'])
        self.assertNotEqual(data['created_at'], data['updated_at'])

        # DELETE image
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'DELETE')
        self.assertEqual(http_client.OK, response.status)

    def test_queued_process_flow(self):
        """
        We test the process flow where a user registers an image
        with Glance but does not immediately upload an image file.
        Later, the user uploads an image file using a PUT operation.
        We track the changing of image status throughout this process.

        0. GET /images
        - Verify no public images
        1. POST /images with public image named Image1 with no location
        attribute and no image data.
        - Verify 201 returned
        2. GET /images
        - Verify one public image
        3. HEAD image
        - Verify image now in queued status
        4. PUT image with image data
        - Verify 200 returned
        5. HEAD images
        - Verify image now in active status
        6. GET /images
        - Verify one public image
        """
        # 0. GET /images
        # Verify no public images
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('{"images": []}', content)

        # 1. POST /images with public image named Image1
        # with no location or image data
        headers = minimal_headers('Image1')
        path = "/v1/images"
        response, content = self.http.request(path, 'POST', headers=headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        self.assertIsNone(data['image']['checksum'])
        self.assertEqual(0, data['image']['size'])
        self.assertEqual('ovf', data['image']['container_format'])
        self.assertEqual('raw', data['image']['disk_format'])
        self.assertEqual("Image1", data['image']['name'])
        self.assertTrue(data['image']['is_public'])

        image_id = data['image']['id']

        # 2. GET /images
        # Verify 1 public image
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(image_id, data['images'][0]['id'])
        self.assertIsNone(data['images'][0]['checksum'])
        self.assertEqual(0, data['images'][0]['size'])
        self.assertEqual('ovf', data['images'][0]['container_format'])
        self.assertEqual('raw', data['images'][0]['disk_format'])
        self.assertEqual("Image1", data['images'][0]['name'])

        # 3. HEAD /images
        # Verify status is in queued
        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'HEAD')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual("Image1", response['x-image-meta-name'])
        self.assertEqual("queued", response['x-image-meta-status'])
        self.assertEqual('0', response['x-image-meta-size'])
        self.assertEqual(image_id, response['x-image-meta-id'])

        # 4. PUT image with image data, verify 200 returned
        image_data = b"*" * FIVE_KB
        headers = {'Content-Type': 'application/octet-stream'}
        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'PUT', headers=headers,
                                              body=image_data)
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(hashlib.md5(image_data).hexdigest(),
                         data['image']['checksum'])
        self.assertEqual(FIVE_KB, data['image']['size'])
        self.assertEqual("Image1", data['image']['name'])
        self.assertTrue(data['image']['is_public'])

        # 5. HEAD /images
        # Verify status is in active
        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'HEAD')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual("Image1", response['x-image-meta-name'])
        self.assertEqual("active", response['x-image-meta-status'])

        # 6. GET /images
        # Verify 1 public image still...
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(hashlib.md5(image_data).hexdigest(),
                         data['images'][0]['checksum'])
        self.assertEqual(image_id, data['images'][0]['id'])
        self.assertEqual(FIVE_KB, data['images'][0]['size'])
        self.assertEqual('ovf', data['images'][0]['container_format'])
        self.assertEqual('raw', data['images'][0]['disk_format'])
        self.assertEqual("Image1", data['images'][0]['name'])

        # DELETE image
        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'DELETE')
        self.assertEqual(http_client.OK, response.status)

    def test_v1_not_enabled(self):
        self.config(enable_v1_api=False)
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.MULTIPLE_CHOICES, response.status)

    def test_v1_enabled(self):
        self.config(enable_v1_api=True)
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)

    def test_zero_initial_size(self):
        """
        A test to ensure that an image with size explicitly set to zero
        has status that immediately transitions to active.
        """
        # 1. POST /images with public image named Image1
        # attribute and a size of zero.
        # Verify a 201 Created is returned
        headers = {'Content-Type': 'application/octet-stream',
                   'X-Image-Meta-Size': '0',
                   'X-Image-Meta-Name': 'Image1',
                   'X-Image-Meta-disk_format': 'raw',
                   'X-image-Meta-container_format': 'ovf',
                   'X-Image-Meta-Is-Public': 'True'}
        path = "/v1/images"
        response, content = self.http.request(path, 'POST', headers=headers)
        self.assertEqual(http_client.CREATED, response.status)
        image = jsonutils.loads(content)['image']
        self.assertEqual('active', image['status'])

        # 2. HEAD image-location
        # Verify image size is zero and the status is active
        path = response.get('location')
        response, content = self.http.request(path, 'HEAD')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('0', response['x-image-meta-size'])
        self.assertEqual('active', response['x-image-meta-status'])

        # 3. GET image-location
        # Verify image content is empty
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual(0, len(content))

    def test_traceback_not_consumed(self):
        """
        A test that errors coming from the POST API do not
        get consumed and print the actual error message, and
        not something like <traceback object at 0x1918d40>

        :see https://bugs.launchpad.net/glance/+bug/755912
        """
        # POST /images with binary data, but not setting
        # Content-Type to application/octet-stream, verify a
        # 400 returned and that the error is readable.
        with tempfile.NamedTemporaryFile() as test_data_file:
            test_data_file.write(b"XXX")
            test_data_file.flush()
            path = "/v1/images"
            headers = minimal_headers('Image1')
            headers['Content-Type'] = 'not octet-stream'
            response, content = self.http.request(path, 'POST',
                                                  body=test_data_file.name,
                                                  headers=headers)
            self.assertEqual(http_client.BAD_REQUEST, response.status)
            expected = "Content-Type must be application/octet-stream"
            self.assertIn(expected, content,
                          "Could not find '%s' in '%s'" % (expected, content))

    def test_filtered_images(self):
        """
        Set up four test images and ensure each query param filter works
        """
        # 0. GET /images
        # Verify no public images
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('{"images": []}', content)

        image_ids = []

        # 1. POST /images with three public images, and one private image
        # with various attributes
        headers = {'Content-Type': 'application/octet-stream',
                   'X-Image-Meta-Name': 'Image1',
                   'X-Image-Meta-Status': 'active',
                   'X-Image-Meta-Container-Format': 'ovf',
                   'X-Image-Meta-Disk-Format': 'vdi',
                   'X-Image-Meta-Size': '19',
                   'X-Image-Meta-Is-Public': 'True',
                   'X-Image-Meta-Protected': 'True',
                   'X-Image-Meta-Property-pants': 'are on'}
        path = "/v1/images"
        response, content = self.http.request(path, 'POST', headers=headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        self.assertEqual("are on", data['image']['properties']['pants'])
        self.assertTrue(data['image']['is_public'])
        image_ids.append(data['image']['id'])

        headers = {'Content-Type': 'application/octet-stream',
                   'X-Image-Meta-Name': 'My Image!',
                   'X-Image-Meta-Status': 'active',
                   'X-Image-Meta-Container-Format': 'ovf',
                   'X-Image-Meta-Disk-Format': 'vhd',
                   'X-Image-Meta-Size': '20',
                   'X-Image-Meta-Is-Public': 'True',
                   'X-Image-Meta-Protected': 'False',
                   'X-Image-Meta-Property-pants': 'are on'}
        path = "/v1/images"
        response, content = self.http.request(path, 'POST', headers=headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        self.assertEqual("are on", data['image']['properties']['pants'])
        self.assertTrue(data['image']['is_public'])
        image_ids.append(data['image']['id'])

        headers = {'Content-Type': 'application/octet-stream',
                   'X-Image-Meta-Name': 'My Image!',
                   'X-Image-Meta-Status': 'saving',
                   'X-Image-Meta-Container-Format': 'ami',
                   'X-Image-Meta-Disk-Format': 'ami',
                   'X-Image-Meta-Size': '21',
                   'X-Image-Meta-Is-Public': 'True',
                   'X-Image-Meta-Protected': 'False',
                   'X-Image-Meta-Property-pants': 'are off'}
        path = "/v1/images"
        response, content = self.http.request(path, 'POST', headers=headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        self.assertEqual("are off", data['image']['properties']['pants'])
        self.assertTrue(data['image']['is_public'])
        image_ids.append(data['image']['id'])

        headers = {'Content-Type': 'application/octet-stream',
                   'X-Image-Meta-Name': 'My Private Image',
                   'X-Image-Meta-Status': 'active',
                   'X-Image-Meta-Container-Format': 'ami',
                   'X-Image-Meta-Disk-Format': 'ami',
                   'X-Image-Meta-Size': '22',
                   'X-Image-Meta-Is-Public': 'False',
                   'X-Image-Meta-Protected': 'False'}
        path = "/v1/images"
        response, content = self.http.request(path, 'POST', headers=headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        self.assertFalse(data['image']['is_public'])
        image_ids.append(data['image']['id'])

        # 2. GET /images
        # Verify three public images
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(3, len(data['images']))

        # 3. GET /images with name filter
        # Verify correct images returned with name
        params = "name=My%20Image!"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(2, len(data['images']))
        for image in data['images']:
            self.assertEqual("My Image!", image['name'])

        # 4. GET /images with status filter
        # Verify correct images returned with status
        params = "status=queued"
        path = "/v1/images/detail?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(3, len(data['images']))
        for image in data['images']:
            self.assertEqual("queued", image['status'])

        params = "status=active"
        path = "/v1/images/detail?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(0, len(data['images']))

        # 5. GET /images with container_format filter
        # Verify correct images returned with container_format
        params = "container_format=ovf"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(2, len(data['images']))
        for image in data['images']:
            self.assertEqual("ovf", image['container_format'])

        # 6. GET /images with disk_format filter
        # Verify correct images returned with disk_format
        params = "disk_format=vdi"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(1, len(data['images']))
        for image in data['images']:
            self.assertEqual("vdi", image['disk_format'])

        # 7. GET /images with size_max filter
        # Verify correct images returned with size <= expected
        params = "size_max=20"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(2, len(data['images']))
        for image in data['images']:
            self.assertLessEqual(image['size'], 20)

        # 8. GET /images with size_min filter
        # Verify correct images returned with size >= expected
        params = "size_min=20"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(2, len(data['images']))
        for image in data['images']:
            self.assertGreaterEqual(image['size'], 20)

        # 9. Get /images with is_public=None filter
        # Verify correct images returned with property
        # Bug lp:803656 Support is_public in filtering
        params = "is_public=None"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(4, len(data['images']))

        # 10. Get /images with is_public=False filter
        # Verify correct images returned with property
        # Bug lp:803656 Support is_public in filtering
        params = "is_public=False"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(1, len(data['images']))
        for image in data['images']:
            self.assertEqual("My Private Image", image['name'])

        # 11. Get /images with is_public=True filter
        # Verify correct images returned with property
        # Bug lp:803656 Support is_public in filtering
        params = "is_public=True"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(3, len(data['images']))
        for image in data['images']:
            self.assertNotEqual(image['name'], "My Private Image")

        # 12. Get /images with protected=False filter
        # Verify correct images returned with property
        params = "protected=False"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(2, len(data['images']))
        for image in data['images']:
            self.assertNotEqual(image['name'], "Image1")

        # 13. Get /images with protected=True filter
        # Verify correct images returned with property
        params = "protected=True"
        path = "/v1/images?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(1, len(data['images']))
        for image in data['images']:
            self.assertEqual("Image1", image['name'])

        # 14. GET /images with property filter
        # Verify correct images returned with property
        params = "property-pants=are%20on"
        path = "/v1/images/detail?%s" % (params)
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        data = jsonutils.loads(content)
        self.assertEqual(2, len(data['images']))
        for image in data['images']:
            self.assertEqual("are on", image['properties']['pants'])

        # 15. GET /images with property filter and name filter
        # Verify correct images returned with property and name
        # Make sure you quote the url when using more than one param!
params = "name=My%20Image!&property-pants=are%20on" path = "/v1/images/detail?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(1, len(data['images'])) for image in data['images']: self.assertEqual("are on", image['properties']['pants']) self.assertEqual("My Image!", image['name']) # 16. GET /images with past changes-since filter yesterday = timeutils.isotime(timeutils.utcnow() - datetime.timedelta(1)) params = "changes-since=%s" % yesterday path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(3, len(data['images'])) # one timezone west of Greenwich equates to an hour ago # taking care to pre-urlencode '+' as '%2B', otherwise the timezone # '+' is wrongly decoded as a space # TODO(eglynn): investigate '+' --> decoding, an artifact # of WSGI/webob dispatch? now = timeutils.utcnow() hour_ago = now.strftime('%Y-%m-%dT%H:%M:%S%%2B01:00') params = "changes-since=%s" % hour_ago path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(3, len(data['images'])) # 17. 
GET /images with future changes-since filter tomorrow = timeutils.isotime(timeutils.utcnow() + datetime.timedelta(1)) params = "changes-since=%s" % tomorrow path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(0, len(data['images'])) # one timezone east of Greenwich equates to an hour from now now = timeutils.utcnow() hour_hence = now.strftime('%Y-%m-%dT%H:%M:%S-01:00') params = "changes-since=%s" % hour_hence path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(0, len(data['images'])) # 18. GET /images with size_min filter # Verify correct images returned with size >= expected params = "size_min=-1" path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.BAD_REQUEST, response.status) self.assertIn("filter size_min got -1", content) # 19. GET /images with size_min filter # Verify correct images returned with size >= expected params = "size_max=-1" path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.BAD_REQUEST, response.status) self.assertIn("filter size_max got -1", content) # 20. GET /images with size_min filter # Verify correct images returned with size >= expected params = "min_ram=-1" path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.BAD_REQUEST, response.status) self.assertIn("Bad value passed to filter min_ram got -1", content) # 21. 
GET /images with size_min filter # Verify correct images returned with size >= expected params = "protected=imalittleteapot" path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.BAD_REQUEST, response.status) self.assertIn("protected got imalittleteapot", content) # 22. GET /images with size_min filter # Verify correct images returned with size >= expected params = "is_public=imalittleteapot" path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.BAD_REQUEST, response.status) self.assertIn("is_public got imalittleteapot", content) def test_limited_images(self): """ Ensure marker and limit query params work """ # 0. GET /images # Verify no public images path = "/v1/images" response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual('{"images": []}', content) image_ids = [] # 1. POST /images with three public images with various attributes headers = minimal_headers('Image1') path = "/v1/images" response, content = self.http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status) image_ids.append(jsonutils.loads(content)['image']['id']) headers = minimal_headers('Image2') path = "/v1/images" response, content = self.http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status) image_ids.append(jsonutils.loads(content)['image']['id']) headers = minimal_headers('Image3') path = "/v1/images" response, content = self.http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status) image_ids.append(jsonutils.loads(content)['image']['id']) # 2. GET /images with all images path = "/v1/images" response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) images = jsonutils.loads(content)['images'] self.assertEqual(3, len(images)) # 3. 
GET /images with limit of 2 # Verify only two images were returned params = "limit=2" path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content)['images'] self.assertEqual(2, len(data)) self.assertEqual(images[0]['id'], data[0]['id']) self.assertEqual(images[1]['id'], data[1]['id']) # 4. GET /images with marker # Verify only two images were returned params = "marker=%s" % images[0]['id'] path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content)['images'] self.assertEqual(2, len(data)) self.assertEqual(images[1]['id'], data[0]['id']) self.assertEqual(images[2]['id'], data[1]['id']) # 5. GET /images with marker and limit # Verify only one image was returned with the correct id params = "limit=1&marker=%s" % images[1]['id'] path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content)['images'] self.assertEqual(1, len(data)) self.assertEqual(images[2]['id'], data[0]['id']) # 6. GET /images/detail with marker and limit # Verify only one image was returned with the correct id params = "limit=1&marker=%s" % images[1]['id'] path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content)['images'] self.assertEqual(1, len(data)) self.assertEqual(images[2]['id'], data[0]['id']) # DELETE images for image_id in image_ids: path = "/v1/images/%s" % (image_id) response, content = self.http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) def test_ordered_images(self): """ Set up three test images and ensure each query param filter works """ # 0. 
GET /images # Verify no public images path = "/v1/images" response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual('{"images": []}', content) # 1. POST /images with three public images with various attributes image_ids = [] headers = {'Content-Type': 'application/octet-stream', 'X-Image-Meta-Name': 'Image1', 'X-Image-Meta-Status': 'active', 'X-Image-Meta-Container-Format': 'ovf', 'X-Image-Meta-Disk-Format': 'vdi', 'X-Image-Meta-Size': '19', 'X-Image-Meta-Is-Public': 'True'} path = "/v1/images" response, content = self.http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status) image_ids.append(jsonutils.loads(content)['image']['id']) headers = {'Content-Type': 'application/octet-stream', 'X-Image-Meta-Name': 'ASDF', 'X-Image-Meta-Status': 'active', 'X-Image-Meta-Container-Format': 'bare', 'X-Image-Meta-Disk-Format': 'iso', 'X-Image-Meta-Size': '2', 'X-Image-Meta-Is-Public': 'True'} path = "/v1/images" response, content = self.http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status) image_ids.append(jsonutils.loads(content)['image']['id']) headers = {'Content-Type': 'application/octet-stream', 'X-Image-Meta-Name': 'XYZ', 'X-Image-Meta-Status': 'saving', 'X-Image-Meta-Container-Format': 'ami', 'X-Image-Meta-Disk-Format': 'ami', 'X-Image-Meta-Size': '5', 'X-Image-Meta-Is-Public': 'True'} path = "/v1/images" response, content = self.http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status) image_ids.append(jsonutils.loads(content)['image']['id']) # 2. 
GET /images with no query params # Verify three public images sorted by created_at desc path = "/v1/images" response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(3, len(data['images'])) self.assertEqual(image_ids[2], data['images'][0]['id']) self.assertEqual(image_ids[1], data['images'][1]['id']) self.assertEqual(image_ids[0], data['images'][2]['id']) # 3. GET /images sorted by name asc params = 'sort_key=name&sort_dir=asc' path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(3, len(data['images'])) self.assertEqual(image_ids[1], data['images'][0]['id']) self.assertEqual(image_ids[0], data['images'][1]['id']) self.assertEqual(image_ids[2], data['images'][2]['id']) # 4. GET /images sorted by size desc params = 'sort_key=size&sort_dir=desc' path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(3, len(data['images'])) self.assertEqual(image_ids[0], data['images'][0]['id']) self.assertEqual(image_ids[2], data['images'][1]['id']) self.assertEqual(image_ids[1], data['images'][2]['id']) # 5. GET /images sorted by size desc with a marker params = 'sort_key=size&sort_dir=desc&marker=%s' % image_ids[0] path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(2, len(data['images'])) self.assertEqual(image_ids[2], data['images'][0]['id']) self.assertEqual(image_ids[1], data['images'][1]['id']) # 6. 
GET /images sorted by name asc with a marker params = 'sort_key=name&sort_dir=asc&marker=%s' % image_ids[2] path = "/v1/images?%s" % (params) response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) data = jsonutils.loads(content) self.assertEqual(0, len(data['images'])) # DELETE images for image_id in image_ids: path = "/v1/images/%s" % (image_id) response, content = self.http.request(path, 'DELETE') self.assertEqual(http_client.OK, response.status) def test_duplicate_image_upload(self): """ Upload initial image, then attempt to upload duplicate image """ # 0. GET /images # Verify no public images path = "/v1/images" response, content = self.http.request(path, 'GET') self.assertEqual(http_client.OK, response.status) self.assertEqual('{"images": []}', content) # 1. POST /images with public image named Image1 headers = {'Content-Type': 'application/octet-stream', 'X-Image-Meta-Name': 'Image1', 'X-Image-Meta-Status': 'active', 'X-Image-Meta-Container-Format': 'ovf', 'X-Image-Meta-Disk-Format': 'vdi', 'X-Image-Meta-Size': '19', 'X-Image-Meta-Is-Public': 'True'} path = "/v1/images" response, content = self.http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CREATED, response.status) image = jsonutils.loads(content)['image'] # 2. POST /images with public image named Image1, and ID: 1 headers = {'Content-Type': 'application/octet-stream', 'X-Image-Meta-Name': 'Image1 Update', 'X-Image-Meta-Status': 'active', 'X-Image-Meta-Container-Format': 'ovf', 'X-Image-Meta-Disk-Format': 'vdi', 'X-Image-Meta-Size': '19', 'X-Image-Meta-Id': image['id'], 'X-Image-Meta-Is-Public': 'True'} path = "/v1/images" response, content = self.http.request(path, 'POST', headers=headers) self.assertEqual(http_client.CONFLICT, response.status) def test_delete_not_existing(self): """ We test the following: 0. GET /images/1 - Verify 404 1. DELETE /images/1 - Verify 404 """ # 0. 
        # GET /images
        # Verify no public images
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('{"images": []}', content)

        # 1. DELETE /images/1
        # Verify 404 returned
        path = "/v1/images/1"
        response, content = self.http.request(path, 'DELETE')
        self.assertEqual(http_client.NOT_FOUND, response.status)

    def _do_test_post_image_content_bad_format(self, format):
        """
        We test that missing container/disk format fails with
        400 "Bad Request"

        :see https://bugs.launchpad.net/glance/+bug/933702
        """
        # Verify no public images
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        images = jsonutils.loads(content)['images']
        self.assertEqual(0, len(images))

        path = "/v1/images"

        # POST /images without given format being specified
        headers = minimal_headers('Image1')
        headers['X-Image-Meta-' + format] = 'bad_value'
        with tempfile.NamedTemporaryFile() as test_data_file:
            test_data_file.write(b"XXX")
            test_data_file.flush()
            response, content = self.http.request(path, 'POST',
                                                  headers=headers,
                                                  body=test_data_file.name)
        self.assertEqual(http_client.BAD_REQUEST, response.status)
        type = format.replace('_format', '')
        expected = "Invalid %s format 'bad_value' for image" % type
        self.assertIn(expected, content,
                      "Could not find '%s' in '%s'" % (expected, content))

        # make sure the image was not created
        # Verify no public images
        path = "/v1/images"
        response, content = self.http.request(path, 'GET')
        self.assertEqual(http_client.OK, response.status)
        images = jsonutils.loads(content)['images']
        self.assertEqual(0, len(images))

    def test_post_image_content_bad_container_format(self):
        self._do_test_post_image_content_bad_format('container_format')

    def test_post_image_content_bad_disk_format(self):
        self._do_test_post_image_content_bad_format('disk_format')

    def _do_test_put_image_content_missing_format(self, format):
        """
        We test that missing container/disk format only
        fails with 400 "Bad Request" when the image content is PUT
        (i.e. not on the original POST of a queued image).

        :see https://bugs.launchpad.net/glance/+bug/937216
        """
        # POST queued image
        path = "/v1/images"
        headers = {
            'X-Image-Meta-Name': 'Image1',
            'X-Image-Meta-Is-Public': 'True',
        }
        response, content = self.http.request(path, 'POST', headers=headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['image']['id']
        self.addDetail('image_data', testtools.content.json_content(data))

        # PUT image content without the given format being specified
        path = "/v1/images/%s" % (image_id)
        headers = minimal_headers('Image1')
        del headers['X-Image-Meta-' + format]
        with tempfile.NamedTemporaryFile() as test_data_file:
            test_data_file.write(b"XXX")
            test_data_file.flush()
            response, content = self.http.request(path, 'PUT',
                                                  headers=headers,
                                                  body=test_data_file.name)
        self.assertEqual(http_client.BAD_REQUEST, response.status)
        type = format.replace('_format', '').capitalize()
        expected = "%s format is not specified" % type
        self.assertIn(expected, content,
                      "Could not find '%s' in '%s'" % (expected, content))

    def test_put_image_content_bad_container_format(self):
        self._do_test_put_image_content_missing_format('container_format')

    def test_put_image_content_bad_disk_format(self):
        self._do_test_put_image_content_missing_format('disk_format')

    def _do_test_mismatched_attribute(self, attribute, value):
        """
        Test mismatched attribute.
        """
        image_data = "*" * FIVE_KB
        headers = minimal_headers('Image1')
        headers[attribute] = value
        path = "/v1/images"
        response, content = self.http.request(path, 'POST', headers=headers,
                                              body=image_data)
        self.assertEqual(http_client.BAD_REQUEST, response.status)

        images_dir = os.path.join(self.test_dir, 'images')
        image_count = len([name for name in os.listdir(images_dir)
                           if os.path.isfile(os.path.join(images_dir,
                                                          name))])
        self.assertEqual(0, image_count)

    def test_mismatched_size(self):
        """
        Test mismatched size.
        """
        self._do_test_mismatched_attribute('x-image-meta-size',
                                           str(FIVE_KB + 1))

    def test_mismatched_checksum(self):
        """
        Test mismatched checksum.
        """
        self._do_test_mismatched_attribute('x-image-meta-checksum',
                                           'foobar')


class TestApiWithFakeAuth(base.ApiTest):
    def __init__(self, *args, **kwargs):
        super(TestApiWithFakeAuth, self).__init__(*args, **kwargs)
        self.api_flavor = 'fakeauth'
        self.registry_flavor = 'fakeauth'

    def test_ownership(self):
        # Add an image with admin privileges and ensure the owner
        # can be set to something other than what was used to authenticate
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        create_headers = {
            'X-Image-Meta-Name': 'MyImage',
            'X-Image-Meta-disk_format': 'raw',
            'X-Image-Meta-container_format': 'ovf',
            'X-Image-Meta-Is-Public': 'True',
            'X-Image-Meta-Owner': 'tenant2',
        }
        create_headers.update(auth_headers)
        path = "/v1/images"
        response, content = self.http.request(path, 'POST',
                                              headers=create_headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['image']['id']

        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'HEAD',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('tenant2', response['x-image-meta-owner'])

        # Now add an image without admin privileges and ensure the owner
        # cannot be set to something other than what was used to authenticate
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:role1',
        }
        create_headers.update(auth_headers)
        path = "/v1/images"
        response, content = self.http.request(path, 'POST',
                                              headers=create_headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['image']['id']

        # We have to be admin to see the owner
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        create_headers.update(auth_headers)
        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'HEAD',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('tenant1', response['x-image-meta-owner'])

        # Make sure the non-privileged user can't update their owner either
        update_headers = {
            'X-Image-Meta-Name': 'MyImage2',
            'X-Image-Meta-Owner': 'tenant2',
            'X-Auth-Token': 'user1:tenant1:role1',
        }
        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'PUT',
                                              headers=update_headers)
        self.assertEqual(http_client.OK, response.status)

        # We have to be admin to see the owner
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'HEAD',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('tenant1', response['x-image-meta-owner'])

        # An admin user should be able to update the owner
        auth_headers = {
            'X-Auth-Token': 'user1:tenant3:admin',
        }
        update_headers = {
            'X-Image-Meta-Name': 'MyImage2',
            'X-Image-Meta-Owner': 'tenant2',
        }
        update_headers.update(auth_headers)
        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'PUT',
                                              headers=update_headers)
        self.assertEqual(http_client.OK, response.status)

        path = "/v1/images/%s" % (image_id)
        response, content = self.http.request(path, 'HEAD',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('tenant2', response['x-image-meta-owner'])

    def test_image_visibility_to_different_users(self):
        owners = ['admin', 'tenant1', 'tenant2', 'none']
        visibilities = {'public': 'True', 'private': 'False'}
        image_ids = {}

        for owner in owners:
            for visibility, is_public in visibilities.items():
                name = '%s-%s' % (owner, visibility)
                headers = {
                    'Content-Type': 'application/octet-stream',
                    'X-Image-Meta-Name': name,
                    'X-Image-Meta-Status': 'active',
                    'X-Image-Meta-Is-Public': is_public,
                    'X-Image-Meta-Owner': owner,
                    'X-Auth-Token': 'createuser:createtenant:admin',
                }
                path = "/v1/images"
                response, content = self.http.request(path, 'POST',
                                                      headers=headers)
                self.assertEqual(http_client.CREATED, response.status)
                data = jsonutils.loads(content)
                image_ids[name] = data['image']['id']

        def list_images(tenant, role='', is_public=None):
            auth_token = 'user:%s:%s' % (tenant, role)
            headers = {'X-Auth-Token': auth_token}
            path = "/v1/images/detail"
            if is_public is not None:
                path += '?is_public=%s' % is_public
            response, content = self.http.request(path, 'GET',
                                                  headers=headers)
            self.assertEqual(http_client.OK, response.status)
            return jsonutils.loads(content)['images']

        # 1. Known user sees public and their own images
        images = list_images('tenant1')
        self.assertEqual(5, len(images))
        for image in images:
            self.assertTrue(image['is_public'] or image['owner'] == 'tenant1')

        # 2. Unknown user sees only public images
        images = list_images('none')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertTrue(image['is_public'])

        # 3. Unknown admin sees only public images
        images = list_images('none', role='admin')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertTrue(image['is_public'])

        # 4. Unknown admin, is_public=none, shows all images
        images = list_images('none', role='admin', is_public='none')
        self.assertEqual(8, len(images))

        # 5. Unknown admin, is_public=true, shows only public images
        images = list_images('none', role='admin', is_public='true')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertTrue(image['is_public'])

        # 6. Unknown admin, is_public=false, sees only private images
        images = list_images('none', role='admin', is_public='false')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertFalse(image['is_public'])

        # 7. Known admin sees public and their own images
        images = list_images('admin', role='admin')
        self.assertEqual(5, len(images))
        for image in images:
            self.assertTrue(image['is_public'] or image['owner'] == 'admin')

        # 8. Known admin, is_public=none, shows all images
        images = list_images('admin', role='admin', is_public='none')
        self.assertEqual(8, len(images))

        # 9.
        # Known admin, is_public=true, sees all public and their images
        images = list_images('admin', role='admin', is_public='true')
        self.assertEqual(5, len(images))
        for image in images:
            self.assertTrue(image['is_public'] or image['owner'] == 'admin')

        # 10. Known admin, is_public=false, sees all private images
        images = list_images('admin', role='admin', is_public='false')
        self.assertEqual(4, len(images))
        for image in images:
            self.assertFalse(image['is_public'])

    def test_property_protections(self):
        # Enable property protection
        self.config(property_protection_file=self.property_file)
        self.init()

        CREATE_HEADERS = {
            'X-Image-Meta-Name': 'MyImage',
            'X-Image-Meta-disk_format': 'raw',
            'X-Image-Meta-container_format': 'ovf',
            'X-Image-Meta-Is-Public': 'True',
            'X-Image-Meta-Owner': 'tenant2',
        }

        # Create an image for role member with extra properties
        # Raises 403 since user is not allowed to create 'foo'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:member',
        }
        custom_props = {
            'x-image-meta-property-foo': 'bar'
        }
        auth_headers.update(custom_props)
        auth_headers.update(CREATE_HEADERS)
        path = "/v1/images"
        response, content = self.http.request(path, 'POST',
                                              headers=auth_headers)
        self.assertEqual(http_client.FORBIDDEN, response.status)

        # Create an image for role member without 'foo'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:member',
        }
        custom_props = {
            'x-image-meta-property-x_owner_foo': 'o_s_bar',
        }
        auth_headers.update(custom_props)
        auth_headers.update(CREATE_HEADERS)
        path = "/v1/images"
        response, content = self.http.request(path, 'POST',
                                              headers=auth_headers)
        self.assertEqual(http_client.CREATED, response.status)

        # Returned image entity should have 'x_owner_foo'
        data = jsonutils.loads(content)
        self.assertEqual('o_s_bar',
                         data['image']['properties']['x_owner_foo'])

        # Create an image for role spl_role with extra properties
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:spl_role',
        }
        custom_props = {
            'X-Image-Meta-Property-spl_create_prop': 'create_bar',
            'X-Image-Meta-Property-spl_read_prop': 'read_bar',
            'X-Image-Meta-Property-spl_update_prop': 'update_bar',
            'X-Image-Meta-Property-spl_delete_prop': 'delete_bar'
        }
        auth_headers.update(custom_props)
        auth_headers.update(CREATE_HEADERS)
        path = "/v1/images"
        response, content = self.http.request(path, 'POST',
                                              headers=auth_headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['image']['id']

        # Attempt to update two properties, one protected(spl_read_prop), the
        # other not(spl_update_prop).  Request should be forbidden.
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:spl_role',
        }
        custom_props = {
            'X-Image-Meta-Property-spl_read_prop': 'r',
            'X-Image-Meta-Property-spl_update_prop': 'u',
            'X-Glance-Registry-Purge-Props': 'False'
        }
        auth_headers.update(auth_headers)
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.FORBIDDEN, response.status)

        # Attempt to create properties which are forbidden
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:spl_role',
        }
        custom_props = {
            'X-Image-Meta-Property-spl_new_prop': 'new',
            'X-Glance-Registry-Purge-Props': 'True'
        }
        auth_headers.update(auth_headers)
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.FORBIDDEN, response.status)

        # Attempt to update, create and delete properties
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:spl_role',
        }
        custom_props = {
            'X-Image-Meta-Property-spl_create_prop': 'create_bar',
            'X-Image-Meta-Property-spl_read_prop': 'read_bar',
            'X-Image-Meta-Property-spl_update_prop': 'u',
            'X-Glance-Registry-Purge-Props': 'True'
        }
        auth_headers.update(auth_headers)
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK,
                         response.status)

        # Returned image entity should reflect the changes
        image = jsonutils.loads(content)

        # 'spl_update_prop' has update permission for spl_role
        # hence the value has changed
        self.assertEqual('u', image['image']['properties']['spl_update_prop'])

        # 'spl_delete_prop' has delete permission for spl_role
        # hence the property has been deleted
        self.assertNotIn('spl_delete_prop', image['image']['properties'])

        # 'spl_create_prop' has create permission for spl_role
        # hence the property has been created
        self.assertEqual('create_bar',
                         image['image']['properties']['spl_create_prop'])

        # Image Deletion should work
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:spl_role',
        }
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'DELETE',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)

        # This image should no longer be directly accessible
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:spl_role',
        }
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'HEAD',
                                              headers=auth_headers)
        self.assertEqual(http_client.NOT_FOUND, response.status)

    def test_property_protections_special_chars(self):
        # Enable property protection
        self.config(property_protection_file=self.property_file)
        self.init()

        CREATE_HEADERS = {
            'X-Image-Meta-Name': 'MyImage',
            'X-Image-Meta-disk_format': 'raw',
            'X-Image-Meta-container_format': 'ovf',
            'X-Image-Meta-Is-Public': 'True',
            'X-Image-Meta-Owner': 'tenant2',
            'X-Image-Meta-Size': '0',
        }

        # Create an image
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:member',
        }
        auth_headers.update(CREATE_HEADERS)
        path = "/v1/images"
        response, content = self.http.request(path, 'POST',
                                              headers=auth_headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['image']['id']

        # Verify both admin and unknown role can create properties marked with
        # '@'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        custom_props = {
            'X-Image-Meta-Property-x_all_permitted_admin': '1'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        image = jsonutils.loads(content)
        self.assertEqual(
            '1', image['image']['properties']['x_all_permitted_admin'])

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:joe_soap',
        }
        custom_props = {
            'X-Image-Meta-Property-x_all_permitted_joe_soap': '1',
            'X-Glance-Registry-Purge-Props': 'False'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        image = jsonutils.loads(content)
        self.assertEqual(
            '1', image['image']['properties']['x_all_permitted_joe_soap'])

        # Verify both admin and unknown role can read properties marked with
        # '@'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'HEAD',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('1', response.get(
            'x-image-meta-property-x_all_permitted_admin'))
        self.assertEqual('1', response.get(
            'x-image-meta-property-x_all_permitted_joe_soap'))

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:joe_soap',
        }
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'HEAD',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        self.assertEqual('1', response.get(
            'x-image-meta-property-x_all_permitted_admin'))
        self.assertEqual('1', response.get(
            'x-image-meta-property-x_all_permitted_joe_soap'))

        # Verify both admin and unknown role can update properties marked with
        # '@'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        custom_props = {
            'X-Image-Meta-Property-x_all_permitted_admin': '2',
            'X-Glance-Registry-Purge-Props': 'False'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        image = jsonutils.loads(content)
        self.assertEqual(
            '2', image['image']['properties']['x_all_permitted_admin'])

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:joe_soap',
        }
        custom_props = {
            'X-Image-Meta-Property-x_all_permitted_joe_soap': '2',
            'X-Glance-Registry-Purge-Props': 'False'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        image = jsonutils.loads(content)
        self.assertEqual(
            '2', image['image']['properties']['x_all_permitted_joe_soap'])

        # Verify both admin and unknown role can delete properties marked with
        # '@'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        custom_props = {
            'X-Image-Meta-Property-x_all_permitted_joe_soap': '2',
            'X-Glance-Registry-Purge-Props': 'True'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        image = jsonutils.loads(content)
        self.assertNotIn('x_all_permitted_admin', image['image']['properties'])

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:joe_soap',
        }
        custom_props = {
            'X-Glance-Registry-Purge-Props': 'True'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        image = jsonutils.loads(content)
        self.assertNotIn('x_all_permitted_joe_soap',
                         image['image']['properties'])

        # Verify neither admin nor unknown role can create a property
        # protected with '!'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        custom_props = {
            'X-Image-Meta-Property-x_none_permitted_admin': '1'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.FORBIDDEN, response.status)

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:joe_soap',
        }
        custom_props = {
            'X-Image-Meta-Property-x_none_permitted_joe_soap': '1'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.FORBIDDEN, response.status)

        # Verify neither admin nor unknown role can read properties marked
        # with '!'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        custom_props = {
            'X-Image-Meta-Property-x_none_read': '1'
        }
        auth_headers.update(custom_props)
        auth_headers.update(CREATE_HEADERS)
        path = "/v1/images"
        response, content = self.http.request(path, 'POST',
                                              headers=auth_headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['image']['id']

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'HEAD',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        self.assertRaises(KeyError, response.get,
                          'X-Image-Meta-Property-x_none_read')

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:joe_soap',
        }
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'HEAD',
                                              headers=auth_headers)
        self.assertEqual(http_client.OK, response.status)
        self.assertRaises(KeyError, response.get,
                          'X-Image-Meta-Property-x_none_read')

        # Verify neither admin nor unknown role can update properties marked
        # with '!'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        custom_props = {
            'X-Image-Meta-Property-x_none_update': '1'
        }
        auth_headers.update(custom_props)
        auth_headers.update(CREATE_HEADERS)
        path = "/v1/images"
        response, content = self.http.request(path, 'POST',
                                              headers=auth_headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['image']['id']

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        custom_props = {
            'X-Image-Meta-Property-x_none_update': '2'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.FORBIDDEN, response.status)

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:joe_soap',
        }
        custom_props = {
            'X-Image-Meta-Property-x_none_update': '2'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.FORBIDDEN, response.status)

        # Verify neither admin nor unknown role can delete properties marked
        # with '!'
        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        custom_props = {
            'X-Image-Meta-Property-x_none_delete': '1'
        }
        auth_headers.update(custom_props)
        auth_headers.update(CREATE_HEADERS)
        path = "/v1/images"
        response, content = self.http.request(path, 'POST',
                                              headers=auth_headers)
        self.assertEqual(http_client.CREATED, response.status)
        data = jsonutils.loads(content)
        image_id = data['image']['id']

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:admin',
        }
        custom_props = {
            'X-Glance-Registry-Purge-Props': 'True'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.FORBIDDEN, response.status)

        auth_headers = {
            'X-Auth-Token': 'user1:tenant1:joe_soap',
        }
        custom_props = {
            'X-Glance-Registry-Purge-Props': 'True'
        }
        auth_headers.update(custom_props)
        path = "/v1/images/%s" % image_id
        response, content = self.http.request(path, 'PUT',
                                              headers=auth_headers)
        self.assertEqual(http_client.FORBIDDEN, response.status)

glance-16.0.0/glance/tests/integration/legacy_functional/base.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import atexit
import os.path
import tempfile

import fixtures
import glance_store
from oslo_config import cfg
from oslo_db import options

import glance.common.client
from glance.common import config
import glance.db.sqlalchemy.api
import glance.registry.client.v1.client
from glance import tests as glance_tests
from glance.tests import utils as test_utils


TESTING_API_PASTE_CONF = """
[pipeline:glance-api]
pipeline = versionnegotiation gzip unauthenticated-context rootapp

[pipeline:glance-api-caching]
pipeline = versionnegotiation gzip unauthenticated-context cache rootapp

[pipeline:glance-api-cachemanagement]
pipeline = versionnegotiation gzip unauthenticated-context cache cache_manage rootapp

[pipeline:glance-api-fakeauth]
pipeline = versionnegotiation gzip fakeauth context rootapp

[pipeline:glance-api-noauth]
pipeline = versionnegotiation gzip context rootapp

[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/: apiversions
/v1: apiv1app
/v2: apiv2app

[app:apiversions]
paste.app_factory = glance.api.versions:create_resource

[app:apiv1app]
paste.app_factory = glance.api.v1.router:API.factory

[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory

[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory

[filter:gzip]
paste.filter_factory = glance.api.middleware.gzip:GzipMiddleware.factory

[filter:cache]
paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory

[filter:cache_manage]
paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory

[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory

[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory

[filter:fakeauth]
paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory
"""

TESTING_REGISTRY_PASTE_CONF = """
[pipeline:glance-registry]
pipeline =
    unauthenticated-context registryapp

[pipeline:glance-registry-fakeauth]
pipeline = fakeauth context registryapp

[app:registryapp]
paste.app_factory = glance.registry.api.v1:API.factory

[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory

[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory

[filter:fakeauth]
paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory
"""

CONF = cfg.CONF


class ApiTest(test_utils.BaseTestCase):

    def setUp(self):
        super(ApiTest, self).setUp()
        self.init()

    def init(self):
        self.test_dir = self.useFixture(fixtures.TempDir()).path
        self._configure_logging()
        self._configure_policy()
        self._setup_database()
        self._setup_stores()
        self._setup_property_protection()
        self.glance_registry_app = self._load_paste_app(
            'glance-registry',
            flavor=getattr(self, 'registry_flavor', ''),
            conf=getattr(self, 'registry_paste_conf',
                         TESTING_REGISTRY_PASTE_CONF),
        )
        self._connect_registry_client()
        self.glance_api_app = self._load_paste_app(
            'glance-api',
            flavor=getattr(self, 'api_flavor', ''),
            conf=getattr(self, 'api_paste_conf', TESTING_API_PASTE_CONF),
        )
        self.http = test_utils.Httplib2WsgiAdapter(self.glance_api_app)

    def _setup_property_protection(self):
        self._copy_data_file('property-protections.conf', self.test_dir)
        self.property_file = os.path.join(self.test_dir,
                                          'property-protections.conf')

    def _configure_policy(self):
        policy_file = self._copy_data_file('policy.json', self.test_dir)
        self.config(policy_file=policy_file, group='oslo_policy')

    def _configure_logging(self):
        self.config(default_log_levels=[
            'amqplib=WARN',
            'sqlalchemy=WARN',
            'boto=WARN',
            'suds=INFO',
            'keystone=INFO',
            'eventlet.wsgi.server=DEBUG'
        ])

    def _setup_database(self):
        sql_connection = 'sqlite:////%s/tests.sqlite' % self.test_dir
        options.set_defaults(CONF, connection=sql_connection)
        glance.db.sqlalchemy.api.clear_db_env()
        glance_db_env = 'GLANCE_DB_TEST_SQLITE_FILE'
        if glance_db_env in os.environ:
            # use the empty db created and cached as a tempfile
            # instead of spending the time creating a new one
            db_location = os.environ[glance_db_env]
            test_utils.execute('cp %s %s/tests.sqlite'
                               % (db_location, self.test_dir))
        else:
            test_utils.db_sync()

            # copy the clean db to a temp location so that it
            # can be reused for future tests
            (osf, db_location) = tempfile.mkstemp()
            os.close(osf)
            test_utils.execute('cp %s/tests.sqlite %s'
                               % (self.test_dir, db_location))
            os.environ[glance_db_env] = db_location

            # cleanup the temp file when the test suite is
            # complete
            def _delete_cached_db():
                try:
                    os.remove(os.environ[glance_db_env])
                except Exception:
                    glance_tests.logger.exception(
                        "Error cleaning up the file %s"
                        % os.environ[glance_db_env])

            atexit.register(_delete_cached_db)

    def _setup_stores(self):
        glance_store.register_opts(CONF)
        image_dir = os.path.join(self.test_dir, "images")
        self.config(group='glance_store',
                    filesystem_store_datadir=image_dir)
        glance_store.create_stores()

    def _load_paste_app(self, name, flavor, conf):
        conf_file_path = os.path.join(self.test_dir, '%s-paste.ini' % name)
        with open(conf_file_path, 'w') as conf_file:
            conf_file.write(conf)
            conf_file.flush()
        return config.load_paste_app(name, flavor=flavor,
                                     conf_file=conf_file_path)

    def _connect_registry_client(self):
        def get_connection_type(self2):
            def wrapped(*args, **kwargs):
                return test_utils.HttplibWsgiAdapter(self.glance_registry_app)
            return wrapped

        self.stubs.Set(glance.common.client.BaseClient,
                       'get_connection_type', get_connection_type)

    def tearDown(self):
        glance.db.sqlalchemy.api.clear_db_env()
        super(ApiTest, self).tearDown()

glance-16.0.0/glance/tests/integration/legacy_functional/__init__.py

glance-16.0.0/glance/tests/__init__.py

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# See http://code.google.com/p/python-nose/issues/detail?id=373
# The code below enables tests to work with i18n _() blocks
import six.moves.builtins as __builtin__
setattr(__builtin__, '_', lambda x: x)

# Set up logging to output debugging
import logging
logger = logging.getLogger()
hdlr = logging.FileHandler('run_tests.log', 'w')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)

import eventlet
# NOTE(jokke): As per the eventlet commit
# b756447bab51046dfc6f1e0e299cc997ab343701 there's circular import happening
# which can be solved making sure the hubs are properly and fully imported
# before calling monkey_patch(). This is solved in eventlet 0.22.0 but we
# need to address it before that is widely used around.
eventlet.hubs.get_hub()
eventlet.patcher.monkey_patch()

glance-16.0.0/glance/tests/etc/policy.json

{
    "context_is_admin": "role:admin",
    "default": "",
    "glance_creator": "role:admin or role:spl_role",

    "add_image": "",
    "delete_image": "",
    "get_image": "",
    "get_images": "",
    "modify_image": "",
    "publicize_image": "",
    "communitize_image": "",
    "copy_from": "",

    "download_image": "",
    "upload_image": "",

    "delete_image_location": "",
    "get_image_location": "",
    "set_image_location": "",

    "add_member": "",
    "delete_member": "",
    "get_member": "",
    "get_members": "",
    "modify_member": "",

    "manage_image_cache": "",

    "get_task": "role:admin",
    "get_tasks": "role:admin",
    "add_task": "role:admin",
    "modify_task": "role:admin",

    "get_metadef_namespace": "",
    "get_metadef_namespaces": "",
    "modify_metadef_namespace": "",
    "add_metadef_namespace": "",

    "get_metadef_object": "",
    "get_metadef_objects": "",
    "modify_metadef_object": "",
    "add_metadef_object": "",

    "list_metadef_resource_types": "",
    "get_metadef_resource_type": "",
    "add_metadef_resource_type_association": "",

    "get_metadef_property": "",
    "get_metadef_properties": "",
    "modify_metadef_property": "",
    "add_metadef_property": "",

    "get_metadef_tag": "",
    "get_metadef_tags": "",
    "modify_metadef_tag": "",
    "add_metadef_tag": "",
    "add_metadef_tags": "",

    "deactivate": "",
    "reactivate": ""
}

glance-16.0.0/glance/tests/etc/schema-image.json

{}

glance-16.0.0/glance/tests/etc/property-protections-policies.conf

[spl_creator_policy]
create = glance_creator
read = glance_creator
update = context_is_admin
delete = context_is_admin

[spl_default_policy]
create = context_is_admin
read = default
update = context_is_admin
delete =
context_is_admin [^x_all_permitted.*] create = @ read = @ update = @ delete = @ [^x_none_permitted.*] create = ! read = ! update = ! delete = ! [x_none_read] create = context_is_admin read = ! update = ! delete = ! [x_none_update] create = context_is_admin read = context_is_admin update = ! delete = context_is_admin [x_none_delete] create = context_is_admin read = context_is_admin update = context_is_admin delete = ! [x_foo_matcher] create = context_is_admin read = context_is_admin update = context_is_admin delete = context_is_admin [x_foo_*] create = @ read = @ update = @ delete = @ [.*] create = context_is_admin read = context_is_admin update = context_is_admin delete = context_is_admin glance-16.0.0/glance/tests/etc/glance-swift.conf0000666000175100017510000000100413245511421021446 0ustar zuulzuul00000000000000[ref1] user = tenant:user1 key = key1 auth_address = example.com [ref2] user = user2 key = key2 auth_address = http://example.com [store_2] user = tenant:user1 key = key1 auth_address= https://localhost:8080 [store_3] user= tenant:user2 key= key2 auth_address= https://localhost:8080 [store_4] user = tenant:user1 key = key1 auth_address = http://localhost:80 [store_5] user = tenant:user1 key = key1 auth_address = http://localhost [store_6] user = tenant:user1 key = key1 auth_address = https://localhost/v1 glance-16.0.0/glance/tests/etc/property-protections.conf0000666000175100017510000000267313245511421023333 0ustar zuulzuul00000000000000[^x_owner_.*] create = admin,member read = admin,member update = admin,member delete = admin,member [spl_create_prop] create = admin,spl_role read = admin,spl_role update = admin delete = admin [spl_read_prop] create = admin,spl_role read = admin,spl_role update = admin delete = admin [spl_read_only_prop] create = admin read = admin,spl_role update = admin delete = admin [spl_update_prop] create = admin,spl_role read = admin,spl_role update = admin,spl_role delete = admin [spl_update_only_prop] create = admin read = admin 
update = admin,spl_role delete = admin [spl_delete_prop] create = admin,spl_role read = admin,spl_role update = admin delete = admin,spl_role [spl_delete_empty_prop] create = admin,spl_role read = admin,spl_role update = admin delete = admin,spl_role [^x_all_permitted.*] create = @ read = @ update = @ delete = @ [^x_none_permitted.*] create = ! read = ! update = ! delete = ! [x_none_read] create = admin,member read = ! update = ! delete = ! [x_none_update] create = admin,member read = admin,member update = ! delete = admin,member [x_none_delete] create = admin,member read = admin,member update = admin,member delete = ! [x_case_insensitive] create = admin,Member read = admin,Member update = admin,Member delete = admin,Member [x_foo_matcher] create = admin read = admin update = admin delete = admin [x_foo_*] create = @ read = @ update = @ delete = @ [.*] create = admin read = admin update = admin delete = admin glance-16.0.0/glance/tests/utils.py0000666000175100017510000005522113245511421017165 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
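The property-protections fixtures above (`property-protections.conf`, `property-protections-policies.conf`, `glance-swift.conf`) are plain INI files, so they can be read with the standard-library `configparser`. A minimal sketch, using a fragment copied from the `spl_read_prop` section above (the variable names here are illustrative, not part of Glance):

```python
import configparser

# Fragment mirroring the [spl_read_prop] section from the fixture above.
conf_text = """
[spl_read_prop]
create = admin,spl_role
read = admin,spl_role
update = admin
delete = admin
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)

# Each operation maps to a comma-separated role list.
roles = parser.get('spl_read_prop', 'read').split(',')
# roles is now ['admin', 'spl_role']
```

Glance's own parsing (in `glance.common.property_utils`, seen used in `unset_property_protections` below) also treats section names as regular expressions matched against property names, e.g. `[^x_owner_.*]`.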
"""Common utilities used in testing""" import errno import functools import os import shlex import shutil import socket import subprocess from alembic import command as alembic_command import fixtures from oslo_config import cfg from oslo_config import fixture as cfg_fixture from oslo_log import log from oslo_serialization import jsonutils from oslotest import moxstubout import six from six.moves import BaseHTTPServer from six.moves import http_client as http import testtools import webob from glance.common import config from glance.common import exception from glance.common import property_utils from glance.common import timeutils from glance.common import utils from glance.common import wsgi from glance import context from glance.db.sqlalchemy import alembic_migrations from glance.db.sqlalchemy import api as db_api from glance.db.sqlalchemy import models as db_models from glance.tests.unit import fixtures as glance_fixtures CONF = cfg.CONF try: CONF.debug except cfg.NoSuchOptError: # NOTE(sigmavirus24): If we run the entire test suite, the logging options # will be registered appropriately and we do not need to re-register them. # However, when we run a test in isolation (or use --debug), those options # will not be registered for us. In order for a test in a class that # inherits from BaseTestCase to even run, we will need to register them # ourselves. BaseTestCase.config will set the debug level if something # calls self.config(debug=True) so we need these options registered # appropriately. # See bug 1433785 for more details. 
log.register_options(CONF) class BaseTestCase(testtools.TestCase): def setUp(self): super(BaseTestCase, self).setUp() self._config_fixture = self.useFixture(cfg_fixture.Config()) # NOTE(bcwaldon): parse_args has to be called to register certain # command-line options - specifically we need config_dir for # the following policy tests config.parse_args(args=[]) self.addCleanup(CONF.reset) mox_fixture = self.useFixture(moxstubout.MoxStubout()) self.stubs = mox_fixture.stubs self.stubs.Set(exception, '_FATAL_EXCEPTION_FORMAT_ERRORS', True) self.test_dir = self.useFixture(fixtures.TempDir()).path self.conf_dir = os.path.join(self.test_dir, 'etc') utils.safe_mkdirs(self.conf_dir) self.set_policy() # Limit the amount of DeprecationWarning messages in the unit test logs self.useFixture(glance_fixtures.WarningsFixture()) def set_policy(self): conf_file = "policy.json" self.policy_file = self._copy_data_file(conf_file, self.conf_dir) self.config(policy_file=self.policy_file, group='oslo_policy') def set_property_protections(self, use_policies=False): self.unset_property_protections() conf_file = "property-protections.conf" if use_policies: conf_file = "property-protections-policies.conf" self.config(property_protection_rule_format="policies") self.property_file = self._copy_data_file(conf_file, self.test_dir) self.config(property_protection_file=self.property_file) def unset_property_protections(self): for section in property_utils.CONFIG.sections(): property_utils.CONFIG.remove_section(section) def _copy_data_file(self, file_name, dst_dir): src_file_name = os.path.join('glance/tests/etc', file_name) shutil.copy(src_file_name, dst_dir) dst_file_name = os.path.join(dst_dir, file_name) return dst_file_name def set_property_protection_rules(self, rules): with open(self.property_file, 'w') as f: for rule_key in rules.keys(): f.write('[%s]\n' % rule_key) for operation in rules[rule_key].keys(): roles_str = ','.join(rules[rule_key][operation]) f.write('%s = %s\n' % (operation, 
roles_str)) def config(self, **kw): """ Override some configuration values. The keyword arguments are the names of configuration options to override and their values. If a group argument is supplied, the overrides are applied to the specified configuration option group. All overrides are automatically cleared at the end of the current test by the fixtures cleanup process. """ self._config_fixture.config(**kw) class requires(object): """Decorator that initiates additional test setup/teardown.""" def __init__(self, setup=None, teardown=None): self.setup = setup self.teardown = teardown def __call__(self, func): def _runner(*args, **kw): if self.setup: self.setup(args[0]) func(*args, **kw) if self.teardown: self.teardown(args[0]) _runner.__name__ = func.__name__ _runner.__doc__ = func.__doc__ return _runner class depends_on_exe(object): """Decorator to skip test if an executable is unavailable""" def __init__(self, exe): self.exe = exe def __call__(self, func): def _runner(*args, **kw): cmd = 'which %s' % self.exe exitcode, out, err = execute(cmd, raise_error=False) if exitcode != 0: args[0].disabled_message = 'test requires exe: %s' % self.exe args[0].disabled = True func(*args, **kw) _runner.__name__ = func.__name__ _runner.__doc__ = func.__doc__ return _runner def skip_if_disabled(func): """Decorator that skips a test if test case is disabled.""" @functools.wraps(func) def wrapped(*a, **kwargs): func.__test__ = False test_obj = a[0] message = getattr(test_obj, 'disabled_message', 'Test disabled') if getattr(test_obj, 'disabled', False): test_obj.skipTest(message) func(*a, **kwargs) return wrapped def fork_exec(cmd, exec_env=None, logfile=None, pass_fds=None): """ Execute a command using fork/exec. This is needed for programs system executions that need path searching but cannot have a shell as their parent process, for example: glance-api. When glance-api starts it sets itself as the parent process for its own process group. 
Thus the pid that a Popen process would have is not the right pid to use for killing the process group. This patch gives the test env direct access to the actual pid. :param cmd: Command to execute as an array of arguments. :param exec_env: A dictionary representing the environment with which to run the command. :param logfile: A path to a file which will hold the stdout/err of the child process. :param pass_fds: Sequence of file descriptors passed to the child. """ env = os.environ.copy() if exec_env is not None: for env_name, env_val in exec_env.items(): if callable(env_val): env[env_name] = env_val(env.get(env_name)) else: env[env_name] = env_val pid = os.fork() if pid == 0: if logfile: fds = [1, 2] with open(logfile, 'r+b') as fptr: for desc in fds: # close fds try: os.dup2(fptr.fileno(), desc) except OSError: pass if pass_fds and hasattr(os, 'set_inheritable'): # os.set_inheritable() is only available and needed # since Python 3.4. On Python 3.3 and older, file descriptors are # inheritable by default. for fd in pass_fds: os.set_inheritable(fd, True) args = shlex.split(cmd) os.execvpe(args[0], args, env) else: return pid def wait_for_fork(pid, raise_error=True, expected_exitcode=0): """ Wait for a process to complete This function will wait for the given pid to complete. If the exit code does not match that of the expected_exitcode an error is raised. """ rc = 0 try: (pid, rc) = os.waitpid(pid, 0) rc = os.WEXITSTATUS(rc) if rc != expected_exitcode: raise RuntimeError('The exit code %d is not %d' % (rc, expected_exitcode)) except Exception: if raise_error: raise return rc def execute(cmd, raise_error=True, no_venv=False, exec_env=None, expect_exit=True, expected_exitcode=0, context=None): """ Executes a command in a subprocess. Returns a tuple of (exitcode, out, err), where out is the string output from stdout and err is the string output from stderr when executing the command. 
:param cmd: Command string to execute :param raise_error: If returncode is not 0 (success), then raise a RuntimeError? Default: True) :param no_venv: Disable the virtual environment :param exec_env: Optional dictionary of additional environment variables; values may be callables, which will be passed the current value of the named environment variable :param expect_exit: Optional flag true iff timely exit is expected :param expected_exitcode: expected exitcode from the launcher :param context: additional context for error message """ env = os.environ.copy() if exec_env is not None: for env_name, env_val in exec_env.items(): if callable(env_val): env[env_name] = env_val(env.get(env_name)) else: env[env_name] = env_val # If we're asked to omit the virtualenv, and if one is set up, # restore the various environment variables if no_venv and 'VIRTUAL_ENV' in env: # Clip off the first element of PATH env['PATH'] = env['PATH'].split(os.pathsep, 1)[-1] del env['VIRTUAL_ENV'] # Make sure that we use the programs in the # current source directory's bin/ directory. path_ext = [os.path.join(os.getcwd(), 'bin')] # Also jack in the path cmd comes from, if it's absolute args = shlex.split(cmd) executable = args[0] if os.path.isabs(executable): path_ext.append(os.path.dirname(executable)) env['PATH'] = ':'.join(path_ext) + ':' + env['PATH'] process = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env) if expect_exit: result = process.communicate() (out, err) = result exitcode = process.returncode else: out = '' err = '' exitcode = 0 if exitcode != expected_exitcode and raise_error: msg = ("Command %(cmd)s did not succeed. Returned an exit " "code of %(exitcode)d." 
"\n\nSTDOUT: %(out)s" "\n\nSTDERR: %(err)s" % {'cmd': cmd, 'exitcode': exitcode, 'out': out, 'err': err}) if context: msg += "\n\nCONTEXT: %s" % context raise RuntimeError(msg) return exitcode, out, err def find_executable(cmdname): """ Searches the path for a given cmdname. Returns an absolute filename if an executable with the given name exists in the path, or None if one does not. :param cmdname: The bare name of the executable to search for """ # Keep an eye out for the possibility of an absolute pathname if os.path.isabs(cmdname): return cmdname # Get a list of the directories to search path = ([os.path.join(os.getcwd(), 'bin')] + os.environ['PATH'].split(os.pathsep)) # Search through each in turn for elem in path: full_path = os.path.join(elem, cmdname) if os.access(full_path, os.X_OK): return full_path # No dice... return None def get_unused_port(): """ Returns an unused port on localhost. """ port, s = get_unused_port_and_socket() s.close() return port def get_unused_port_and_socket(): """ Returns an unused port on localhost and the open socket from which it was created. """ s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(('localhost', 0)) addr, port = s.getsockname() return (port, s) def get_unused_port_ipv6(): """ Returns an unused port on localhost on IPv6 (uses ::1). """ port, s = get_unused_port_and_socket_ipv6() s.close() return port def get_unused_port_and_socket_ipv6(): """ Returns an unused port on localhost and the open socket from which it was created, but uses IPv6 (::1). """ s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) s.bind(('::1', 0)) # Ignoring flowinfo and scopeid... addr, port, flowinfo, scopeid = s.getsockname() return (port, s) def xattr_writes_supported(path): """ Returns True if the we can write a file to the supplied path and subsequently write a xattr to that file. 
""" try: import xattr except ImportError: return False def set_xattr(path, key, value): xattr.setxattr(path, "user.%s" % key, value) # We do a quick attempt to write a user xattr to a temporary file # to check that the filesystem is even enabled to support xattrs fake_filepath = os.path.join(path, 'testing-checkme') result = True with open(fake_filepath, 'wb') as fake_file: fake_file.write(b"XXX") fake_file.flush() try: set_xattr(fake_filepath, 'hits', b'1') except IOError as e: if e.errno == errno.EOPNOTSUPP: result = False else: # Cleanup after ourselves... if os.path.exists(fake_filepath): os.unlink(fake_filepath) return result def minimal_headers(name, public=True): headers = { 'Content-Type': 'application/octet-stream', 'X-Image-Meta-Name': name, 'X-Image-Meta-disk_format': 'raw', 'X-Image-Meta-container_format': 'ovf', } if public: headers['X-Image-Meta-Is-Public'] = 'True' return headers def minimal_add_command(port, name, suffix='', public=True): visibility = 'is_public=True' if public else '' return ("bin/glance --port=%d add %s" " disk_format=raw container_format=ovf" " name=%s %s" % (port, visibility, name, suffix)) def start_http_server(image_id, image_data): def _get_http_handler_class(fixture): class StaticHTTPRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): def do_GET(self): self.send_response(http.OK) self.send_header('Content-Length', str(len(fixture))) self.end_headers() self.wfile.write(fixture) return def do_HEAD(self): # reserve non_existing_image_path for the cases where we expect # 404 from the server if 'non_existing_image_path' in self.path: self.send_response(http.NOT_FOUND) else: self.send_response(http.OK) self.send_header('Content-Length', str(len(fixture))) self.end_headers() return def log_message(self, *args, **kwargs): # Override this method to prevent debug output from going # to stderr during testing return return StaticHTTPRequestHandler server_address = ('127.0.0.1', 0) handler_class = _get_http_handler_class(image_data) 
httpd = BaseHTTPServer.HTTPServer(server_address, handler_class) port = httpd.socket.getsockname()[1] pid = os.fork() if pid == 0: httpd.serve_forever() else: return pid, port class RegistryAPIMixIn(object): def create_fixtures(self): for fixture in self.FIXTURES: db_api.image_create(self.context, fixture) with open(os.path.join(self.test_dir, fixture['id']), 'wb') as image: image.write(b"chunk00000remainder") def destroy_fixtures(self): db_models.unregister_models(db_api.get_engine()) db_models.register_models(db_api.get_engine()) def get_fixture(self, **kwargs): fixture = {'name': 'fake public image', 'status': 'active', 'disk_format': 'vhd', 'container_format': 'ovf', 'visibility': 'public', 'size': 20, 'checksum': None} if 'is_public' in kwargs: fixture.pop('visibility') fixture.update(kwargs) return fixture def get_minimal_fixture(self, **kwargs): fixture = {'name': 'fake public image', 'visibility': 'public', 'disk_format': 'vhd', 'container_format': 'ovf'} if 'is_public' in kwargs: fixture.pop('visibility') fixture.update(kwargs) return fixture def get_extra_fixture(self, id, name, **kwargs): created_at = kwargs.pop('created_at', timeutils.utcnow()) updated_at = kwargs.pop('updated_at', created_at) return self.get_fixture( id=id, name=name, deleted=False, deleted_at=None, created_at=created_at, updated_at=updated_at, **kwargs) def get_api_response_ext(self, http_resp, url='/images', headers=None, body=None, method=None, api=None, content_type=None): if api is None: api = self.api if headers is None: headers = {} req = webob.Request.blank(url) for k, v in six.iteritems(headers): req.headers[k] = v if method: req.method = method if body: req.body = body if content_type == 'json': req.content_type = 'application/json' elif content_type == 'octet': req.content_type = 'application/octet-stream' res = req.get_response(api) self.assertEqual(res.status_int, http_resp) return res def assertEqualImages(self, res, uuids, key='images', unjsonify=True): images = 
jsonutils.loads(res.body)[key] if unjsonify else res self.assertEqual(len(images), len(uuids)) for i, value in enumerate(uuids): self.assertEqual(images[i]['id'], value) class FakeAuthMiddleware(wsgi.Middleware): def __init__(self, app, is_admin=False): super(FakeAuthMiddleware, self).__init__(app) self.is_admin = is_admin def process_request(self, req): auth_token = req.headers.get('X-Auth-Token') user = None tenant = None roles = [] if auth_token: user, tenant, role = auth_token.split(':') if tenant.lower() == 'none': tenant = None roles = [role] req.headers['X-User-Id'] = user req.headers['X-Tenant-Id'] = tenant req.headers['X-Roles'] = role req.headers['X-Identity-Status'] = 'Confirmed' kwargs = { 'user': user, 'tenant': tenant, 'roles': roles, 'is_admin': self.is_admin, 'auth_token': auth_token, } req.context = context.RequestContext(**kwargs) class FakeHTTPResponse(object): def __init__(self, status=http.OK, headers=None, data=None, *args, **kwargs): data = data or b'I am a teapot, short and stout\n' self.data = six.BytesIO(data) self.read = self.data.read self.status = status self.headers = headers or {'content-length': len(data)} def getheader(self, name, default=None): return self.headers.get(name.lower(), default) def getheaders(self): return self.headers or {} def read(self, amt): self.data.read(amt) class Httplib2WsgiAdapter(object): def __init__(self, app): self.app = app def request(self, uri, method="GET", body=None, headers=None): req = webob.Request.blank(uri, method=method, headers=headers) if isinstance(body, str): req.body = body.encode('utf-8') else: req.body = body resp = req.get_response(self.app) return Httplib2WebobResponse(resp), resp.body.decode('utf-8') class Httplib2WebobResponse(object): def __init__(self, webob_resp): self.webob_resp = webob_resp @property def status(self): return self.webob_resp.status_code def __getitem__(self, key): return self.webob_resp.headers[key] def get(self, key): return self.webob_resp.headers[key] 
@property def allow(self): return self.webob_resp.allow @allow.setter def allow(self, allowed): if type(allowed) is not str: raise TypeError('Allow header should be a str') self.webob_resp.allow = allowed class HttplibWsgiAdapter(object): def __init__(self, app): self.app = app self.req = None def request(self, method, url, body=None, headers=None): if headers is None: headers = {} self.req = webob.Request.blank(url, method=method, headers=headers) self.req.body = body def getresponse(self): response = self.req.get_response(self.app) return FakeHTTPResponse(response.status_code, response.headers, response.body) def db_sync(version='heads', engine=None): """Migrate the database to `version` or the most recent version.""" if engine is None: engine = db_api.get_engine() alembic_config = alembic_migrations.get_alembic_config(engine=engine) alembic_command.upgrade(alembic_config, version) def is_sqlite_version_prior_to(major, minor): import sqlite3 tup = sqlite3.sqlite_version_info return tup[0] < major or (tup[0] == major and tup[1] < minor) glance-16.0.0/glance/tests/test_hacking.py0000666000175100017510000001337613245511421020475 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
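The `is_sqlite_version_prior_to` helper defined just above compares only the major and minor components of `sqlite3.sqlite_version_info` (the micro version is ignored). A self-contained sketch of the same comparison:

```python
import sqlite3

def is_sqlite_version_prior_to(major, minor):
    # Mirrors the helper above: True when the linked SQLite library
    # is older than major.minor; the micro version is ignored.
    tup = sqlite3.sqlite_version_info
    return tup[0] < major or (tup[0] == major and tup[1] < minor)

# Any real SQLite 3.x build is prior to a hypothetical 99.0 ...
assert is_sqlite_version_prior_to(99, 0)
# ... and none is prior to 0.0.
assert not is_sqlite_version_prior_to(0, 0)
```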
from glance.hacking import checks from glance.tests import utils class HackingTestCase(utils.BaseTestCase): def test_assert_true_instance(self): self.assertEqual(1, len(list(checks.assert_true_instance( "self.assertTrue(isinstance(e, " "exception.BuildAbortException))")))) self.assertEqual( 0, len(list(checks.assert_true_instance("self.assertTrue()")))) def test_assert_equal_type(self): self.assertEqual(1, len(list(checks.assert_equal_type( "self.assertEqual(type(als['QuicAssist']), list)")))) self.assertEqual( 0, len(list(checks.assert_equal_type("self.assertTrue()")))) def test_assert_equal_none(self): self.assertEqual(1, len(list(checks.assert_equal_none( "self.assertEqual(A, None)")))) self.assertEqual(1, len(list(checks.assert_equal_none( "self.assertEqual(None, A)")))) self.assertEqual( 0, len(list(checks.assert_equal_none("self.assertIsNone()")))) def test_no_translate_debug_logs(self): self.assertEqual(1, len(list(checks.no_translate_debug_logs( "LOG.debug(_('foo'))", "glance/store/foo.py")))) self.assertEqual(0, len(list(checks.no_translate_debug_logs( "LOG.debug('foo')", "glance/store/foo.py")))) self.assertEqual(0, len(list(checks.no_translate_debug_logs( "LOG.info(_('foo'))", "glance/store/foo.py")))) def test_no_direct_use_of_unicode_function(self): self.assertEqual(1, len(list(checks.no_direct_use_of_unicode_function( "unicode('the party dont start til the unicode walks in')")))) self.assertEqual(1, len(list(checks.no_direct_use_of_unicode_function( """unicode('something ' 'something else""")))) self.assertEqual(0, len(list(checks.no_direct_use_of_unicode_function( "six.text_type('party over')")))) self.assertEqual(0, len(list(checks.no_direct_use_of_unicode_function( "not_actually_unicode('something completely different')")))) def test_no_contextlib_nested(self): self.assertEqual(1, len(list(checks.check_no_contextlib_nested( "with contextlib.nested(")))) self.assertEqual(1, len(list(checks.check_no_contextlib_nested( "with nested(")))) 
self.assertEqual(0, len(list(checks.check_no_contextlib_nested( "with foo as bar")))) def test_dict_constructor_with_list_copy(self): self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([(i, connect_info[i])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " attrs = dict([(k, _from_json(v))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " type_names = dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( "foo(param=dict((k, v) for k, v in bar.items()))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([[i,i] for i in range(3)])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dd = dict([i,i] for i in range(3))")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " create_kwargs = dict(snapshot=snapshot,")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " self._render_dict(xml, data_el, data.__dict__)")))) def test_check_python3_xrange(self): func = checks.check_python3_xrange self.assertEqual(1, len(list(func('for i in xrange(10)')))) self.assertEqual(1, len(list(func('for i in xrange (10)')))) self.assertEqual(0, len(list(func('for i in range(10)')))) self.assertEqual(0, len(list(func('for i in six.moves.range(10)')))) self.assertEqual(0, len(list(func('testxrange(10)')))) def test_dict_iteritems(self): self.assertEqual(1, len(list(checks.check_python3_no_iteritems( "obj.iteritems()")))) self.assertEqual(0, len(list(checks.check_python3_no_iteritems( "six.iteritems(obj)")))) self.assertEqual(0, len(list(checks.check_python3_no_iteritems( "obj.items()")))) def test_dict_iterkeys(self): self.assertEqual(1, len(list(checks.check_python3_no_iterkeys( "obj.iterkeys()")))) self.assertEqual(0, 
len(list(checks.check_python3_no_iterkeys( "six.iterkeys(obj)")))) self.assertEqual(0, len(list(checks.check_python3_no_iterkeys( "obj.keys()")))) def test_dict_itervalues(self): self.assertEqual(1, len(list(checks.check_python3_no_itervalues( "obj.itervalues()")))) self.assertEqual(0, len(list(checks.check_python3_no_itervalues( "six.itervalues(ob)")))) self.assertEqual(0, len(list(checks.check_python3_no_itervalues( "obj.values()")))) glance-16.0.0/glance/domain/0000775000175100017510000000000013245511661015557 5ustar zuulzuul00000000000000glance-16.0.0/glance/domain/proxy.py0000666000175100017510000004573113245511421017320 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
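The `glance/domain/proxy.py` module that follows is built around a `_proxy` property factory: each proxy class delegates attribute get/set/delete to a wrapped object named by `target`. A minimal standalone sketch of that technique (the `Base`/`Proxy` classes here are illustrative, not Glance classes):

```python
def _proxy(target, attr):
    # Build a property that forwards get/set/del of `attr` to the
    # object stored at getattr(self, target).
    def get_attr(self):
        return getattr(getattr(self, target), attr)

    def set_attr(self, value):
        return setattr(getattr(self, target), attr, value)

    def del_attr(self):
        return delattr(getattr(self, target), attr)

    return property(get_attr, set_attr, del_attr)


class Base(object):
    def __init__(self):
        self.name = 'original'


class Proxy(object):
    def __init__(self, base):
        self.base = base

    # Reads and writes of Proxy.name pass through to self.base.name.
    name = _proxy('base', 'name')


p = Proxy(Base())
p.name = 'changed'
# p.base.name is now 'changed' as well
```

This is why classes such as `Image`, `Task`, and `MetadefNamespace` below can expose the full attribute surface of the objects they wrap with one declaration per attribute.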
def _proxy(target, attr): def get_attr(self): return getattr(getattr(self, target), attr) def set_attr(self, value): return setattr(getattr(self, target), attr, value) def del_attr(self): return delattr(getattr(self, target), attr) return property(get_attr, set_attr, del_attr) class Helper(object): def __init__(self, proxy_class=None, proxy_kwargs=None): self.proxy_class = proxy_class self.proxy_kwargs = proxy_kwargs or {} def proxy(self, obj): if obj is None or self.proxy_class is None: return obj return self.proxy_class(obj, **self.proxy_kwargs) def unproxy(self, obj): if obj is None or self.proxy_class is None: return obj return obj.base class TaskRepo(object): def __init__(self, base, task_proxy_class=None, task_proxy_kwargs=None): self.base = base self.task_proxy_helper = Helper(task_proxy_class, task_proxy_kwargs) def get(self, task_id): task = self.base.get(task_id) return self.task_proxy_helper.proxy(task) def add(self, task): self.base.add(self.task_proxy_helper.unproxy(task)) def save(self, task): self.base.save(self.task_proxy_helper.unproxy(task)) def remove(self, task): base_task = self.task_proxy_helper.unproxy(task) self.base.remove(base_task) class TaskStubRepo(object): def __init__(self, base, task_stub_proxy_class=None, task_stub_proxy_kwargs=None): self.base = base self.task_stub_proxy_helper = Helper(task_stub_proxy_class, task_stub_proxy_kwargs) def list(self, *args, **kwargs): tasks = self.base.list(*args, **kwargs) return [self.task_stub_proxy_helper.proxy(task) for task in tasks] class Repo(object): def __init__(self, base, item_proxy_class=None, item_proxy_kwargs=None): self.base = base self.helper = Helper(item_proxy_class, item_proxy_kwargs) def get(self, item_id): return self.helper.proxy(self.base.get(item_id)) def list(self, *args, **kwargs): items = self.base.list(*args, **kwargs) return [self.helper.proxy(item) for item in items] def add(self, item): base_item = self.helper.unproxy(item) result = self.base.add(base_item) return 
self.helper.proxy(result) def save(self, item, from_state=None): base_item = self.helper.unproxy(item) result = self.base.save(base_item, from_state=from_state) return self.helper.proxy(result) def remove(self, item): base_item = self.helper.unproxy(item) result = self.base.remove(base_item) return self.helper.proxy(result) class MemberRepo(object): def __init__(self, image, base, member_proxy_class=None, member_proxy_kwargs=None): self.image = image self.base = base self.member_proxy_helper = Helper(member_proxy_class, member_proxy_kwargs) def get(self, member_id): member = self.base.get(member_id) return self.member_proxy_helper.proxy(member) def add(self, member): self.base.add(self.member_proxy_helper.unproxy(member)) def list(self, *args, **kwargs): members = self.base.list(*args, **kwargs) return [self.member_proxy_helper.proxy(member) for member in members] def remove(self, member): base_item = self.member_proxy_helper.unproxy(member) result = self.base.remove(base_item) return self.member_proxy_helper.proxy(result) def save(self, member, from_state=None): base_item = self.member_proxy_helper.unproxy(member) result = self.base.save(base_item, from_state=from_state) return self.member_proxy_helper.proxy(result) class ImageFactory(object): def __init__(self, base, proxy_class=None, proxy_kwargs=None): self.helper = Helper(proxy_class, proxy_kwargs) self.base = base def new_image(self, **kwargs): return self.helper.proxy(self.base.new_image(**kwargs)) class ImageMembershipFactory(object): def __init__(self, base, proxy_class=None, proxy_kwargs=None): self.helper = Helper(proxy_class, proxy_kwargs) self.base = base def new_image_member(self, image, member, **kwargs): return self.helper.proxy(self.base.new_image_member(image, member, **kwargs)) class Image(object): def __init__(self, base, member_repo_proxy_class=None, member_repo_proxy_kwargs=None): self.base = base self.helper = Helper(member_repo_proxy_class, member_repo_proxy_kwargs) name = _proxy('base', 
'name')
    image_id = _proxy('base', 'image_id')
    status = _proxy('base', 'status')
    created_at = _proxy('base', 'created_at')
    updated_at = _proxy('base', 'updated_at')
    visibility = _proxy('base', 'visibility')
    min_disk = _proxy('base', 'min_disk')
    min_ram = _proxy('base', 'min_ram')
    protected = _proxy('base', 'protected')
    locations = _proxy('base', 'locations')
    checksum = _proxy('base', 'checksum')
    owner = _proxy('base', 'owner')
    disk_format = _proxy('base', 'disk_format')
    container_format = _proxy('base', 'container_format')
    size = _proxy('base', 'size')
    virtual_size = _proxy('base', 'virtual_size')
    extra_properties = _proxy('base', 'extra_properties')
    tags = _proxy('base', 'tags')

    def delete(self):
        self.base.delete()

    def deactivate(self):
        self.base.deactivate()

    def reactivate(self):
        self.base.reactivate()

    def set_data(self, data, size=None):
        self.base.set_data(data, size)

    def get_data(self, *args, **kwargs):
        return self.base.get_data(*args, **kwargs)


class ImageMember(object):
    def __init__(self, base):
        self.base = base

    id = _proxy('base', 'id')
    image_id = _proxy('base', 'image_id')
    member_id = _proxy('base', 'member_id')
    status = _proxy('base', 'status')
    created_at = _proxy('base', 'created_at')
    updated_at = _proxy('base', 'updated_at')


class Task(object):
    def __init__(self, base):
        self.base = base

    task_id = _proxy('base', 'task_id')
    type = _proxy('base', 'type')
    status = _proxy('base', 'status')
    owner = _proxy('base', 'owner')
    expires_at = _proxy('base', 'expires_at')
    created_at = _proxy('base', 'created_at')
    updated_at = _proxy('base', 'updated_at')
    task_input = _proxy('base', 'task_input')
    result = _proxy('base', 'result')
    message = _proxy('base', 'message')

    def begin_processing(self):
        self.base.begin_processing()

    def succeed(self, result):
        self.base.succeed(result)

    def fail(self, message):
        self.base.fail(message)

    def run(self, executor):
        self.base.run(executor)


class TaskStub(object):
    def __init__(self, base):
        self.base = base

    task_id = _proxy('base', 'task_id')
    type = _proxy('base', 'type')
    status = _proxy('base', 'status')
    owner = _proxy('base', 'owner')
    expires_at = _proxy('base', 'expires_at')
    created_at = _proxy('base', 'created_at')
    updated_at = _proxy('base', 'updated_at')


class TaskFactory(object):
    def __init__(self, base, task_proxy_class=None, task_proxy_kwargs=None):
        self.task_helper = Helper(task_proxy_class, task_proxy_kwargs)
        self.base = base

    def new_task(self, **kwargs):
        t = self.base.new_task(**kwargs)
        return self.task_helper.proxy(t)


# Metadef Namespace classes
class MetadefNamespaceRepo(object):
    def __init__(self, base, namespace_proxy_class=None,
                 namespace_proxy_kwargs=None):
        self.base = base
        self.namespace_proxy_helper = Helper(namespace_proxy_class,
                                             namespace_proxy_kwargs)

    def get(self, namespace):
        namespace_obj = self.base.get(namespace)
        return self.namespace_proxy_helper.proxy(namespace_obj)

    def add(self, namespace):
        self.base.add(self.namespace_proxy_helper.unproxy(namespace))

    def list(self, *args, **kwargs):
        namespaces = self.base.list(*args, **kwargs)
        return [self.namespace_proxy_helper.proxy(namespace)
                for namespace in namespaces]

    def remove(self, item):
        base_item = self.namespace_proxy_helper.unproxy(item)
        result = self.base.remove(base_item)
        return self.namespace_proxy_helper.proxy(result)

    def remove_objects(self, item):
        base_item = self.namespace_proxy_helper.unproxy(item)
        result = self.base.remove_objects(base_item)
        return self.namespace_proxy_helper.proxy(result)

    def remove_properties(self, item):
        base_item = self.namespace_proxy_helper.unproxy(item)
        result = self.base.remove_properties(base_item)
        return self.namespace_proxy_helper.proxy(result)

    def remove_tags(self, item):
        base_item = self.namespace_proxy_helper.unproxy(item)
        result = self.base.remove_tags(base_item)
        return self.namespace_proxy_helper.proxy(result)

    def save(self, item):
        base_item = self.namespace_proxy_helper.unproxy(item)
        result = self.base.save(base_item)
        return self.namespace_proxy_helper.proxy(result)


class MetadefNamespace(object):
    def __init__(self, base):
        self.base = base

    namespace_id = _proxy('base', 'namespace_id')
    namespace = _proxy('base', 'namespace')
    display_name = _proxy('base', 'display_name')
    description = _proxy('base', 'description')
    owner = _proxy('base', 'owner')
    visibility = _proxy('base', 'visibility')
    protected = _proxy('base', 'protected')
    created_at = _proxy('base', 'created_at')
    updated_at = _proxy('base', 'updated_at')

    def delete(self):
        self.base.delete()


class MetadefNamespaceFactory(object):
    def __init__(self, base, meta_namespace_proxy_class=None,
                 meta_namespace_proxy_kwargs=None):
        self.meta_namespace_helper = Helper(meta_namespace_proxy_class,
                                            meta_namespace_proxy_kwargs)
        self.base = base

    def new_namespace(self, **kwargs):
        t = self.base.new_namespace(**kwargs)
        return self.meta_namespace_helper.proxy(t)


# Metadef object classes
class MetadefObjectRepo(object):
    def __init__(self, base, object_proxy_class=None,
                 object_proxy_kwargs=None):
        self.base = base
        self.object_proxy_helper = Helper(object_proxy_class,
                                          object_proxy_kwargs)

    def get(self, namespace, object_name):
        meta_object = self.base.get(namespace, object_name)
        return self.object_proxy_helper.proxy(meta_object)

    def add(self, meta_object):
        self.base.add(self.object_proxy_helper.unproxy(meta_object))

    def list(self, *args, **kwargs):
        objects = self.base.list(*args, **kwargs)
        return [self.object_proxy_helper.proxy(meta_object)
                for meta_object in objects]

    def remove(self, item):
        base_item = self.object_proxy_helper.unproxy(item)
        result = self.base.remove(base_item)
        return self.object_proxy_helper.proxy(result)

    def save(self, item):
        base_item = self.object_proxy_helper.unproxy(item)
        result = self.base.save(base_item)
        return self.object_proxy_helper.proxy(result)


class MetadefObject(object):
    def __init__(self, base):
        self.base = base

    namespace = _proxy('base', 'namespace')
    object_id = _proxy('base', 'object_id')
    name = _proxy('base', 'name')
    required = _proxy('base', 'required')
    description = _proxy('base', 'description')
    properties = _proxy('base', 'properties')
    created_at = _proxy('base', 'created_at')
    updated_at = _proxy('base', 'updated_at')

    def delete(self):
        self.base.delete()


class MetadefObjectFactory(object):
    def __init__(self, base, meta_object_proxy_class=None,
                 meta_object_proxy_kwargs=None):
        self.meta_object_helper = Helper(meta_object_proxy_class,
                                         meta_object_proxy_kwargs)
        self.base = base

    def new_object(self, **kwargs):
        t = self.base.new_object(**kwargs)
        return self.meta_object_helper.proxy(t)


# Metadef ResourceType classes
class MetadefResourceTypeRepo(object):
    def __init__(self, base, resource_type_proxy_class=None,
                 resource_type_proxy_kwargs=None):
        self.base = base
        self.resource_type_proxy_helper = Helper(resource_type_proxy_class,
                                                 resource_type_proxy_kwargs)

    def add(self, meta_resource_type):
        self.base.add(self.resource_type_proxy_helper.unproxy(
            meta_resource_type))

    def get(self, *args, **kwargs):
        resource_type = self.base.get(*args, **kwargs)
        return self.resource_type_proxy_helper.proxy(resource_type)

    def list(self, *args, **kwargs):
        resource_types = self.base.list(*args, **kwargs)
        return [self.resource_type_proxy_helper.proxy(resource_type)
                for resource_type in resource_types]

    def remove(self, item):
        base_item = self.resource_type_proxy_helper.unproxy(item)
        result = self.base.remove(base_item)
        return self.resource_type_proxy_helper.proxy(result)


class MetadefResourceType(object):
    def __init__(self, base):
        self.base = base

    namespace = _proxy('base', 'namespace')
    name = _proxy('base', 'name')
    prefix = _proxy('base', 'prefix')
    properties_target = _proxy('base', 'properties_target')
    created_at = _proxy('base', 'created_at')
    updated_at = _proxy('base', 'updated_at')

    def delete(self):
        self.base.delete()


class MetadefResourceTypeFactory(object):
    def __init__(self, base, resource_type_proxy_class=None,
                 resource_type_proxy_kwargs=None):
        self.resource_type_helper = Helper(resource_type_proxy_class,
                                           resource_type_proxy_kwargs)
        self.base = base

    def new_resource_type(self, **kwargs):
        t = self.base.new_resource_type(**kwargs)
        return self.resource_type_helper.proxy(t)


# Metadef namespace property classes
class MetadefPropertyRepo(object):
    def __init__(self, base, property_proxy_class=None,
                 property_proxy_kwargs=None):
        self.base = base
        self.property_proxy_helper = Helper(property_proxy_class,
                                            property_proxy_kwargs)

    def get(self, namespace, property_name):
        property = self.base.get(namespace, property_name)
        return self.property_proxy_helper.proxy(property)

    def add(self, property):
        self.base.add(self.property_proxy_helper.unproxy(property))

    def list(self, *args, **kwargs):
        properties = self.base.list(*args, **kwargs)
        return [self.property_proxy_helper.proxy(property)
                for property in properties]

    def remove(self, item):
        base_item = self.property_proxy_helper.unproxy(item)
        result = self.base.remove(base_item)
        return self.property_proxy_helper.proxy(result)

    def save(self, item):
        base_item = self.property_proxy_helper.unproxy(item)
        result = self.base.save(base_item)
        return self.property_proxy_helper.proxy(result)


class MetadefProperty(object):
    def __init__(self, base):
        self.base = base

    namespace = _proxy('base', 'namespace')
    property_id = _proxy('base', 'property_id')
    name = _proxy('base', 'name')
    schema = _proxy('base', 'schema')

    def delete(self):
        self.base.delete()


class MetadefPropertyFactory(object):
    def __init__(self, base, property_proxy_class=None,
                 property_proxy_kwargs=None):
        self.meta_object_helper = Helper(property_proxy_class,
                                         property_proxy_kwargs)
        self.base = base

    def new_namespace_property(self, **kwargs):
        t = self.base.new_namespace_property(**kwargs)
        return self.meta_object_helper.proxy(t)


# Metadef tag classes
class MetadefTagRepo(object):
    def __init__(self, base, tag_proxy_class=None,
                 tag_proxy_kwargs=None):
        self.base = base
        self.tag_proxy_helper = Helper(tag_proxy_class,
                                       tag_proxy_kwargs)

    def get(self, namespace, name):
        meta_tag = self.base.get(namespace, name)
        return self.tag_proxy_helper.proxy(meta_tag)

    def add(self, meta_tag):
        self.base.add(self.tag_proxy_helper.unproxy(meta_tag))

    def add_tags(self, meta_tags):
        tags_list = []
        for meta_tag in meta_tags:
            tags_list.append(self.tag_proxy_helper.unproxy(meta_tag))
        self.base.add_tags(tags_list)

    def list(self, *args, **kwargs):
        tags = self.base.list(*args, **kwargs)
        return [self.tag_proxy_helper.proxy(meta_tag)
                for meta_tag in tags]

    def remove(self, item):
        base_item = self.tag_proxy_helper.unproxy(item)
        result = self.base.remove(base_item)
        return self.tag_proxy_helper.proxy(result)

    def save(self, item):
        base_item = self.tag_proxy_helper.unproxy(item)
        result = self.base.save(base_item)
        return self.tag_proxy_helper.proxy(result)


class MetadefTag(object):
    def __init__(self, base):
        self.base = base

    namespace = _proxy('base', 'namespace')
    tag_id = _proxy('base', 'tag_id')
    name = _proxy('base', 'name')
    created_at = _proxy('base', 'created_at')
    updated_at = _proxy('base', 'updated_at')

    def delete(self):
        self.base.delete()


class MetadefTagFactory(object):
    def __init__(self, base, meta_tag_proxy_class=None,
                 meta_tag_proxy_kwargs=None):
        self.meta_tag_helper = Helper(meta_tag_proxy_class,
                                      meta_tag_proxy_kwargs)
        self.base = base

    def new_tag(self, **kwargs):
        t = self.base.new_tag(**kwargs)
        return self.meta_tag_helper.proxy(t)
glance-16.0.0/glance/domain/__init__.py
# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import datetime
import uuid

from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import excutils
from oslo_utils import importutils
import six

from glance.common import exception
from glance.common import timeutils
from glance.i18n import _, _LE, _LI, _LW

LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('task_executor', 'glance.common.config', group='task')

_delayed_delete_imported = False


def _import_delayed_delete():
    # glance_store (indirectly) imports glance.domain therefore we can't put
    # the CONF.import_opt outside - we have to do it in a convoluted/indirect
    # way!
    global _delayed_delete_imported
    if not _delayed_delete_imported:
        CONF.import_opt('delayed_delete', 'glance_store')
        _delayed_delete_imported = True


class ImageFactory(object):
    _readonly_properties = ['created_at', 'updated_at', 'status', 'checksum',
                            'size', 'virtual_size']
    _reserved_properties = ['owner', 'locations', 'deleted', 'deleted_at',
                            'direct_url', 'self', 'file', 'schema']

    def _check_readonly(self, kwargs):
        for key in self._readonly_properties:
            if key in kwargs:
                raise exception.ReadonlyProperty(property=key)

    def _check_unexpected(self, kwargs):
        if kwargs:
            msg = _('new_image() got unexpected keywords %s')
            raise TypeError(msg % kwargs.keys())

    def _check_reserved(self, properties):
        if properties is not None:
            for key in self._reserved_properties:
                if key in properties:
                    raise exception.ReservedProperty(property=key)

    def new_image(self, image_id=None, name=None, visibility='shared',
                  min_disk=0, min_ram=0, protected=False, owner=None,
                  disk_format=None, container_format=None,
                  extra_properties=None, tags=None, **other_args):
        extra_properties = extra_properties or {}
        self._check_readonly(other_args)
        self._check_unexpected(other_args)
        self._check_reserved(extra_properties)

        if image_id is None:
            image_id = str(uuid.uuid4())
        created_at = timeutils.utcnow()
        updated_at = created_at
        status = 'queued'

        return Image(image_id=image_id, name=name, status=status,
                     created_at=created_at, updated_at=updated_at,
                     visibility=visibility, min_disk=min_disk,
                     min_ram=min_ram, protected=protected,
                     owner=owner, disk_format=disk_format,
                     container_format=container_format,
                     extra_properties=extra_properties, tags=tags or [])


class Image(object):

    valid_state_targets = {
        # Each key denotes a "current" state for the image. Corresponding
        # values list the valid states to which we can jump from that "current"
        # state.
        # NOTE(flwang): In v2, we are deprecating the 'killed' status, so it's
        # allowed to restore image from 'saving' to 'queued' so that upload
        # can be retried.
        'queued': ('saving', 'uploading', 'importing', 'active', 'deleted'),
        'saving': ('active', 'killed', 'deleted', 'queued'),
        'uploading': ('importing', 'queued', 'deleted'),
        'importing': ('active', 'deleted', 'queued'),
        'active': ('pending_delete', 'deleted', 'deactivated'),
        'killed': ('deleted',),
        'pending_delete': ('deleted',),
        'deleted': (),
        'deactivated': ('active', 'deleted'),
    }

    def __init__(self, image_id, status, created_at, updated_at, **kwargs):
        self.image_id = image_id
        self.status = status
        self.created_at = created_at
        self.updated_at = updated_at
        self.name = kwargs.pop('name', None)
        self.visibility = kwargs.pop('visibility', 'shared')
        self.min_disk = kwargs.pop('min_disk', 0)
        self.min_ram = kwargs.pop('min_ram', 0)
        self.protected = kwargs.pop('protected', False)
        self.locations = kwargs.pop('locations', [])
        self.checksum = kwargs.pop('checksum', None)
        self.owner = kwargs.pop('owner', None)
        self._disk_format = kwargs.pop('disk_format', None)
        self._container_format = kwargs.pop('container_format', None)
        self.size = kwargs.pop('size', None)
        self.virtual_size = kwargs.pop('virtual_size', None)
        extra_properties = kwargs.pop('extra_properties', {})
        self.extra_properties = ExtraProperties(extra_properties)
        self.tags = kwargs.pop('tags', [])
        if kwargs:
            message = _("__init__() got unexpected keyword argument '%s'")
            raise TypeError(message % list(kwargs.keys())[0])

    @property
    def status(self):
        return self._status

    @status.setter
    def status(self, status):
        has_status = hasattr(self, '_status')
        if has_status:
            if status not in self.valid_state_targets[self._status]:
                kw = {'cur_status': self._status, 'new_status': status}
                e = exception.InvalidImageStatusTransition(**kw)
                LOG.debug(e)
                raise e

            if self._status in ('queued', 'uploading') and status in (
                    'saving', 'active', 'importing'):
                missing = [k for k in ['disk_format', 'container_format']
                           if not getattr(self, k)]
                if len(missing) > 0:
                    if len(missing) == 1:
                        msg = _('Property %s must be set prior to '
                                'saving data.')
                    else:
                        msg = _('Properties %s must be set prior to '
                                'saving data.')
                    raise ValueError(msg % ', '.join(missing))
        # NOTE(flwang): Image size should be cleared as long as the image
        # status is updated to 'queued'
        if status == 'queued':
            self.size = None
            self.virtual_size = None
        self._status = status

    @property
    def visibility(self):
        return self._visibility

    @visibility.setter
    def visibility(self, visibility):
        if visibility not in ('community', 'public', 'private', 'shared'):
            raise ValueError(_('Visibility must be one of "community", '
                               '"public", "private", or "shared"'))
        self._visibility = visibility

    @property
    def tags(self):
        return self._tags

    @tags.setter
    def tags(self, value):
        self._tags = set(value)

    @property
    def container_format(self):
        return self._container_format

    @container_format.setter
    def container_format(self, value):
        if hasattr(self, '_container_format') and self.status != 'queued':
            msg = _("Attribute container_format can be only replaced "
                    "for a queued image.")
            raise exception.Forbidden(message=msg)
        self._container_format = value

    @property
    def disk_format(self):
        return self._disk_format

    @disk_format.setter
    def disk_format(self, value):
        if hasattr(self, '_disk_format') and self.status != 'queued':
            msg = _("Attribute disk_format can be only replaced "
                    "for a queued image.")
            raise exception.Forbidden(message=msg)
        self._disk_format = value

    @property
    def min_disk(self):
        return self._min_disk

    @min_disk.setter
    def min_disk(self, value):
        if value and value < 0:
            extra_msg = _('Cannot be a negative value')
            raise exception.InvalidParameterValue(value=value,
                                                  param='min_disk',
                                                  extra_msg=extra_msg)
        self._min_disk = value

    @property
    def min_ram(self):
        return self._min_ram

    @min_ram.setter
    def min_ram(self, value):
        if value and value < 0:
            extra_msg = _('Cannot be a negative value')
            raise exception.InvalidParameterValue(value=value,
                                                  param='min_ram',
                                                  extra_msg=extra_msg)
        self._min_ram = value

    def delete(self):
        if self.protected:
            raise exception.ProtectedImageDelete(image_id=self.image_id)
        if CONF.delayed_delete and self.locations:
            self.status = 'pending_delete'
        else:
            self.status = 'deleted'

    def deactivate(self):
        if self.status == 'active':
            self.status = 'deactivated'
        elif self.status == 'deactivated':
            # Noop if already deactive
            pass
        else:
            LOG.debug("Not allowed to deactivate image in status '%s'",
                      self.status)
            msg = (_("Not allowed to deactivate image in status '%s'")
                   % self.status)
            raise exception.Forbidden(message=msg)

    def reactivate(self):
        if self.status == 'deactivated':
            self.status = 'active'
        elif self.status == 'active':
            # Noop if already active
            pass
        else:
            LOG.debug("Not allowed to reactivate image in status '%s'",
                      self.status)
            msg = (_("Not allowed to reactivate image in status '%s'")
                   % self.status)
            raise exception.Forbidden(message=msg)

    def get_data(self, *args, **kwargs):
        raise NotImplementedError()

    def set_data(self, data, size=None):
        raise NotImplementedError()


class ExtraProperties(collections.MutableMapping, dict):

    def __getitem__(self, key):
        return dict.__getitem__(self, key)

    def __setitem__(self, key, value):
        return dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        return dict.__delitem__(self, key)

    def __eq__(self, other):
        if isinstance(other, ExtraProperties):
            return dict(self).__eq__(dict(other))
        elif isinstance(other, dict):
            return dict(self).__eq__(other)
        else:
            return False

    def __ne__(self, other):
        return not self.__eq__(other)

    def __len__(self):
        return dict(self).__len__()

    def keys(self):
        return dict(self).keys()


class ImageMembership(object):

    def __init__(self, image_id, member_id, created_at, updated_at,
                 id=None, status=None):
        self.id = id
        self.image_id = image_id
        self.member_id = member_id
        self.created_at = created_at
        self.updated_at = updated_at
        self.status = status

    @property
    def status(self):
        return self._status

    @status.setter
    def status(self, status):
        if status not in ('pending', 'accepted', 'rejected'):
            msg = _('Status must be "pending", "accepted" or "rejected".')
            raise ValueError(msg)
        self._status = status


class ImageMemberFactory(object):

    def new_image_member(self, image, member_id):
        created_at = timeutils.utcnow()
        updated_at = created_at

        return ImageMembership(image_id=image.image_id, member_id=member_id,
                               created_at=created_at, updated_at=updated_at,
                               status='pending')


class Task(object):
    _supported_task_type = ('import', 'api_image_import')

    _supported_task_status = ('pending', 'processing', 'success', 'failure')

    def __init__(self, task_id, task_type, status, owner,
                 expires_at, created_at, updated_at,
                 task_input, result, message):

        if task_type not in self._supported_task_type:
            raise exception.InvalidTaskType(task_type)

        if status not in self._supported_task_status:
            raise exception.InvalidTaskStatus(status)

        self.task_id = task_id
        self._status = status
        self.type = task_type
        self.owner = owner
        self.expires_at = expires_at
        # NOTE(nikhil): We use '_time_to_live' to determine how long a
        # task should live from the time it succeeds or fails.
        task_time_to_live = CONF.task.task_time_to_live
        self._time_to_live = datetime.timedelta(hours=task_time_to_live)
        self.created_at = created_at
        self.updated_at = updated_at
        self.task_input = task_input
        self.result = result
        self.message = message

    @property
    def status(self):
        return self._status

    @property
    def message(self):
        return self._message

    @message.setter
    def message(self, message):
        if message:
            self._message = six.text_type(message)
        else:
            self._message = six.text_type('')

    def _validate_task_status_transition(self, cur_status, new_status):
        valid_transitions = {
            'pending': ['processing', 'failure'],
            'processing': ['success', 'failure'],
            'success': [],
            'failure': [],
        }

        if new_status in valid_transitions[cur_status]:
            return True
        else:
            return False

    def _set_task_status(self, new_status):
        if self._validate_task_status_transition(self.status, new_status):
            old_status = self.status
            self._status = new_status
            LOG.info(_LI("Task [%(task_id)s] status changing from "
                         "%(cur_status)s to %(new_status)s"),
                     {'task_id': self.task_id, 'cur_status': old_status,
                      'new_status': new_status})
        else:
            LOG.error(_LE("Task [%(task_id)s] status failed to change from "
                          "%(cur_status)s to %(new_status)s"),
                      {'task_id': self.task_id, 'cur_status': self.status,
                       'new_status': new_status})
            raise exception.InvalidTaskStatusTransition(
                cur_status=self.status,
                new_status=new_status
            )

    def begin_processing(self):
        new_status = 'processing'
        self._set_task_status(new_status)

    def succeed(self, result):
        new_status = 'success'
        self.result = result
        self._set_task_status(new_status)
        self.expires_at = timeutils.utcnow() + self._time_to_live

    def fail(self, message):
        new_status = 'failure'
        self.message = message
        self._set_task_status(new_status)
        self.expires_at = timeutils.utcnow() + self._time_to_live

    def run(self, executor):
        executor.begin_processing(self.task_id)


class TaskStub(object):

    def __init__(self, task_id, task_type, status, owner,
                 expires_at, created_at, updated_at):
        self.task_id = task_id
        self._status = status
        self.type = task_type
        self.owner = owner
        self.expires_at = expires_at
        self.created_at = created_at
        self.updated_at = updated_at

    @property
    def status(self):
        return self._status


class TaskFactory(object):

    def new_task(self, task_type, owner, task_input=None, **kwargs):
        task_id = str(uuid.uuid4())
        status = 'pending'
        # Note(nikhil): expires_at would be set on the task, only when it
        # succeeds or fails.
        expires_at = None
        created_at = timeutils.utcnow()
        updated_at = created_at
        return Task(
            task_id,
            task_type,
            status,
            owner,
            expires_at,
            created_at,
            updated_at,
            task_input,
            kwargs.get('result'),
            kwargs.get('message')
        )


class TaskExecutorFactory(object):
    eventlet_deprecation_warned = False

    def __init__(self, task_repo, image_repo, image_factory):
        self.task_repo = task_repo
        self.image_repo = image_repo
        self.image_factory = image_factory

    def new_task_executor(self, context):
        try:
            # NOTE(flaper87): Backwards compatibility layer.
            # It'll allow us to provide a deprecation path to
            # users that are currently consuming the `eventlet`
            # executor.
            task_executor = CONF.task.task_executor
            if task_executor == 'eventlet':
                # NOTE(jokke): Making sure we do not log the deprecation
                # warning 1000 times or anything crazy like that.
                if not TaskExecutorFactory.eventlet_deprecation_warned:
                    msg = _LW("The `eventlet` executor has been deprecated. "
                              "Use `taskflow` instead.")
                    LOG.warn(msg)
                    TaskExecutorFactory.eventlet_deprecation_warned = True
                task_executor = 'taskflow'

            executor_cls = ('glance.async.%s_executor.'
                            'TaskExecutor' % task_executor)
            LOG.debug("Loading %s executor", task_executor)
            executor = importutils.import_class(executor_cls)
            return executor(context,
                            self.task_repo,
                            self.image_repo,
                            self.image_factory)
        except ImportError:
            with excutils.save_and_reraise_exception():
                LOG.exception(_LE("Failed to load the %s executor provided "
                                  "in the config.") % CONF.task.task_executor)


class MetadefNamespace(object):

    def __init__(self, namespace_id, namespace, display_name, description,
                 owner, visibility, protected, created_at, updated_at):
        self.namespace_id = namespace_id
        self.namespace = namespace
        self.display_name = display_name
        self.description = description
        self.owner = owner
        self.visibility = visibility or "private"
        self.protected = protected or False
        self.created_at = created_at
        self.updated_at = updated_at

    def delete(self):
        if self.protected:
            raise exception.ProtectedMetadefNamespaceDelete(
                namespace=self.namespace)


class MetadefNamespaceFactory(object):

    def new_namespace(self, namespace, owner, **kwargs):
        namespace_id = str(uuid.uuid4())
        created_at = timeutils.utcnow()
        updated_at = created_at
        return MetadefNamespace(
            namespace_id,
            namespace,
            kwargs.get('display_name'),
            kwargs.get('description'),
            owner,
            kwargs.get('visibility'),
            kwargs.get('protected'),
            created_at,
            updated_at
        )


class MetadefObject(object):

    def __init__(self, namespace, object_id, name, created_at, updated_at,
                 required, description, properties):
        self.namespace = namespace
        self.object_id = object_id
        self.name = name
        self.created_at = created_at
        self.updated_at = updated_at
        self.required = required
        self.description = description
        self.properties = properties

    def delete(self):
        if self.namespace.protected:
            raise exception.ProtectedMetadefObjectDelete(object_name=self.name)


class MetadefObjectFactory(object):

    def new_object(self, namespace, name, **kwargs):
        object_id = str(uuid.uuid4())
        created_at = timeutils.utcnow()
        updated_at = created_at
        return MetadefObject(
            namespace,
            object_id,
            name,
            created_at,
            updated_at,
            kwargs.get('required'),
            kwargs.get('description'),
            kwargs.get('properties')
        )


class MetadefResourceType(object):

    def __init__(self, namespace, name, prefix, properties_target,
                 created_at, updated_at):
        self.namespace = namespace
        self.name = name
        self.prefix = prefix
        self.properties_target = properties_target
        self.created_at = created_at
        self.updated_at = updated_at

    def delete(self):
        if self.namespace.protected:
            raise exception.ProtectedMetadefResourceTypeAssociationDelete(
                resource_type=self.name)


class MetadefResourceTypeFactory(object):

    def new_resource_type(self, namespace, name, **kwargs):
        created_at = timeutils.utcnow()
        updated_at = created_at
        return MetadefResourceType(
            namespace,
            name,
            kwargs.get('prefix'),
            kwargs.get('properties_target'),
            created_at,
            updated_at
        )


class MetadefProperty(object):

    def __init__(self, namespace, property_id, name, schema):
        self.namespace = namespace
        self.property_id = property_id
        self.name = name
        self.schema = schema

    def delete(self):
        if self.namespace.protected:
            raise exception.ProtectedMetadefNamespacePropDelete(
                property_name=self.name)


class MetadefPropertyFactory(object):

    def new_namespace_property(self, namespace, name, schema, **kwargs):
        property_id = str(uuid.uuid4())
        return MetadefProperty(
            namespace,
            property_id,
            name,
            schema
        )


class MetadefTag(object):

    def __init__(self, namespace, tag_id, name, created_at, updated_at):
        self.namespace = namespace
        self.tag_id = tag_id
        self.name = name
        self.created_at = created_at
        self.updated_at = updated_at

    def delete(self):
        if self.namespace.protected:
            raise exception.ProtectedMetadefTagDelete(tag_name=self.name)


class MetadefTagFactory(object):

    def new_tag(self, namespace, name, **kwargs):
        tag_id = str(uuid.uuid4())
        created_at = timeutils.utcnow()
        updated_at = created_at
        return MetadefTag(
            namespace,
            tag_id,
            name,
            created_at,
            updated_at
        )
glance-16.0.0/glance/context.py
# Copyright 2011-2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_context import context

from glance.api import policy


class RequestContext(context.RequestContext):
    """Stores information about the security context.

    Stores how the user accesses the system, as well as additional request
    information.
    """

    def __init__(self, owner_is_tenant=True, service_catalog=None,
                 policy_enforcer=None, **kwargs):
        super(RequestContext, self).__init__(**kwargs)
        self.owner_is_tenant = owner_is_tenant
        self.service_catalog = service_catalog
        self.policy_enforcer = policy_enforcer or policy.Enforcer()
        if not self.is_admin:
            self.is_admin = self.policy_enforcer.check_is_admin(self)

    def to_dict(self):
        d = super(RequestContext, self).to_dict()
        d.update({
            'roles': self.roles,
            'service_catalog': self.service_catalog,
        })
        return d

    def to_policy_values(self):
        pdict = super(RequestContext, self).to_policy_values()
        pdict['user'] = self.user_id
        pdict['tenant'] = self.project_id
        return pdict

    @classmethod
    def from_dict(cls, values):
        return cls(**values)

    @property
    def owner(self):
        """Return the owner to correlate with an image."""
        return self.project_id if self.owner_is_tenant else self.user_id

    @property
    def can_see_deleted(self):
        """Admins can see deleted by default"""
        return self.show_deleted or self.is_admin


def get_admin_context(show_deleted=False):
    """Create an administrator context."""
    return RequestContext(auth_token=None,
                          project_id=None,
                          is_admin=True,
                          show_deleted=show_deleted,
                          overwrite=False)
glance-16.0.0/requirements.txt
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

pbr!=2.1.0,>=2.0.0 # Apache-2.0
defusedxml>=0.5.0 # PSF
# < 0.8.0/0.8 does not work, see https://bugs.launchpad.net/bugs/1153983
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT
PasteDeploy>=1.5.0 # MIT
Routes>=2.3.1 # MIT
WebOb>=1.7.1 # MIT
sqlalchemy-migrate>=0.11.0 # Apache-2.0
sqlparse>=0.2.2 # BSD
alembic>=0.8.10 # MIT
httplib2>=0.9.1 # MIT
oslo.config>=5.1.0 # Apache-2.0
oslo.concurrency>=3.25.0 # Apache-2.0
oslo.context>=2.19.2 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
stevedore>=1.20.0 # Apache-2.0
futurist>=1.2.0 # Apache-2.0
taskflow>=2.16.0 # Apache-2.0
keystoneauth1>=3.3.0 # Apache-2.0
keystonemiddleware>=4.17.0 # Apache-2.0
WSME>=0.8.0 # MIT
PrettyTable<0.8,>=0.7.1 # BSD
# For paste.util.template used in keystone.common.template
Paste>=2.0.2 # MIT
jsonschema<3.0.0,>=2.6.0 # MIT
python-keystoneclient>=3.8.0 # Apache-2.0
pyOpenSSL>=16.2.0 # Apache-2.0
# Required by openstack.common libraries
six>=1.10.0 # MIT
oslo.db>=4.27.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.middleware>=3.31.0 # Apache-2.0
oslo.policy>=1.30.0 # Apache-2.0
retrying!=1.3.0,>=1.2.3 # Apache-2.0
osprofiler>=1.4.0 # Apache-2.0
# Glance Store
glance-store>=0.22.0 # Apache-2.0
debtcollector>=1.2.0 # Apache-2.0
cryptography!=2.0,>=1.9 # BSD/Apache-2.0
cursive>=0.2.1 # Apache-2.0
# timeutils
iso8601>=0.1.11 # MIT
monotonic>=0.6 # Apache-2.0
glance-16.0.0/setup.py
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)
glance-16.0.0/.zuul.yaml
- project:
    check:
      jobs:
        - openstack-tox-functional
        - openstack-tox-functional-py35
    gate:
      jobs:
        - openstack-tox-functional
        - openstack-tox-functional-py35